Dataset columns: id | source | version | text | added | created | metadata
id: 10475120 | source: pes2o/s2orc | version: v3-fos-license
Medial Canthoplasty Combined with Conjunctivodacryocystorhinostomy for the Treatment of Delayed Medial Telecanthal Deformity
Background: Rupture of the medial canthal ligament can be caused by many events. It remains a challenge to rebuild the drainage system and restore its function. The aim of this study was to evaluate the clinical efficacy of medial canthoplasty combined with conjunctivodacryocystorhinostomy (CDCR) in patients with medial telecanthal deformities and lacrimal drainage system damage. Methods: Twenty-two patients (22 eyes) treated with medial canthoplasty and CDCR between June 2012 and June 2014 were included in this retrospective study. For all patients, a self-tapping, titanium, low-profile head microscrew was drilled into the solid bone on the posterior aspect of the anterior lacrimal crest at the attachment position of the medial canthal ligament. Medpor-coated tear drainage tubes were applied. The distance of lateral canthal displacement before and after the operation was recorded and compared. The complications of CDCR are described. Results: Before surgery, the canthal displacement was 4–6 mm; after surgery, the difference in canthal distance between the two eyes was 1 mm or less. Among patients with CDCR, four had proximal obstruction and two had distal obstruction. Five patients had tube malposition, for example, tube extrusion 1–3 months after surgery. Conclusions: Medial canthoplasty combined with CDCR is an effective surgical method for the treatment of patients with medial telecanthal deformity and lacrimal drainage system obstruction. The study indicates that medial canthoplasty combined with CDCR rebuilds the normal appearance of the eyelid and the contour of the medial canthus and successfully restores the function of the lacrimal drainage system.
The CDCR procedure rebuilds a new drainage pathway between the conjunctiva and the nasal cavity using a tear drainage tube, bypassing the upper lacrimal system. Although CDCR with tear drainage tube placement is a reliable method for patients with upper lacrimal system obstruction or damage, several problems remain postoperatively.
To our knowledge, assessments of medial canthoplasty combined with CDCR are rare in the clinical literature. In this study, we aimed to evaluate the clinical efficacy of medial canthoplasty combined with CDCR in patients with medial telecanthal deformities and lacrimal drainage system obstruction.
Patients
Twenty-two eyes of 22 patients who underwent medial canthoplasty combined with CDCR between June 2012 and June 2014 were enrolled in this retrospective comparative study. Patients with less than 3 months of follow-up or without regular follow-up visits were excluded. The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethical Committees of the Department of Ophthalmology, Beijing Tongren Hospital, Capital Medical University.
Microscrew fixation
Operations were performed under general anesthesia. The direction and ideal position of the medial canthus were marked on the skin [Figure 1a]. To achieve symmetry with the contralateral side, an overcorrection of 1 mm was adopted. A reverse Y-shaped incision was made along the medial canthus to the marked point. Scar tissue around the canthus under the canthal skin was dissected to reduce the resultant tension and avoid stretching of the tissue attachment.
Subperiosteal dissection was performed to expose the anterior lacrimal crest area. Next, a self-tapping titanium microscrew was driven into the solid bone on the posterior aspect of the anterior lacrimal crest at the attachment position of the medial canthal ligament [Figure 1b]. A low-profile head screw was used to avoid gliding on the bone surface.
A 3-0 wire suture was looped around the neck of the screw and passed twice through the stump of the medial canthal ligament. The wire was tightened and twisted until the canthus was firmly fixed. The excess wire above the skin surface was then cut and the end was tucked into the soft tissue.
Conjunctivodacryocystorhinostomy
Nasal decongestion was achieved by packing neurosurgical cottonoids soaked in a 50/50 mixture of 40% lidocaine and oxymetazoline solution into the middle meatus. Further hemostasis was achieved by direct infiltration of lidocaine with epinephrine into the site of the initial osteotomy using a 22-gauge spinal needle; approximately 2 ml of lidocaine was injected into the nasal mucosa. A 4 mm Kerrison rongeur was used to create the initial osteotomy [Figure 2a]. Care should be taken to avoid traumatizing the nasal septum or surrounding mucosa during all endonasal manipulations, as this might cause postoperative obstruction of the Medpor-coated tube (Porex Surgical, Inc., USA). Next, bone and mucosa were removed. The desired position of the Medpor-coated tube was marked on the conjunctiva [Figure 2b]; it corresponded to a site 2.5 mm posterior to the medial commissure, at the junction between the caruncle and the plica semilunaris. An 18-gauge needle was then used to create a tract from the conjunctival side into the nasal cavity, aiming toward the previously created osteotomy [Figure 2c]. The angle of entry into the nasal cavity was approximately 45°. With simultaneous endoscopic monitoring, the 18-gauge needle could be visualized through the osteotomy site. A 23-gauge stainless steel guidewire, taken from a standard silicone stent, was then placed into the lumen of the 18-gauge needle. Endoscopically, the guidewire could be seen in the lumen of the needle, which was positioned to avoid contact with the nasal septum. The portion of the 18-gauge needle outside the nose was then clamped with a hemostat to size the Medpor-coated tube. The 18-gauge needle was removed while the guidewire was left in place. The clamped length was measured with a caliper, and an appropriately sized Medpor-coated tube was selected. The proximal and distal parts of the Medpor coating were separated from the glass tube [Figure 2d]. The conjunctival tract was then enlarged with a 15-gauge needle passed over the guidewire before placement of the Medpor-coated tube. After a few minutes, the 15-gauge needle was removed and the previously selected Medpor-coated tube was passed over the guidewire [Figure 2e]. Endoscopic visualization confirmed the correct position of the Medpor-coated tube [Figure 2f]. Saline irrigation into the medial canthus showed excellent drainage through the tube.
Skin suture
Different sutures were used to close the soft tissue and the skin, respectively. The skin incision was closed using a 6-0 suture.
Results
In the present study, a total of 22 patients (22 eyes) were included. The mean age of the patients was 52.0 ± 14.3 years (range: 33-76 years). Fourteen patients were male (63.6%) and eight were female (36.4%). Thirteen patients had medial telecanthal deformities and lacrimal system obstruction in the right eye and nine in the left eye. All of the postoperative data reported here were documented at the patients' last follow-up visit.
Results of medial canthoplasty showed that, during follow-up, no common complications such as infection, hematoma, or temperature sensitivity were observed. The scars caused by the Y-shaped medial canthal incision were well masked and well tolerated in most patients. Before surgery, the canthal displacement in all patients was 4–6 mm; in postoperative measurements of lateral displacement, the canthal distance between the two eyes was corrected to 1 mm or less. A patient satisfaction survey of the appearance of the eyelid and the contour of the medial canthus revealed a high degree of satisfaction [Figure 3a-3d].
Results of CDCR indicated that, with respect to obstruction, proximal obstruction occurred in four cases due to conjunctival proliferation, and distal obstruction with adhesion to the septum occurred in two cases due to mucosal proliferation after the primary surgery. These patients received further surgical intervention, including conjunctival excision. As for tube malposition, five patients were affected, and tube extrusion was observed in two of them between 1 and 3 months after the primary surgery.
Discussion
Injury of the midface usually results in medial telecanthal deformity. Moreover, damage to the lacrimal drainage system is a common concurrent injury in the periorbital region. It is a great challenge to repair the medial canthus and restore the function of the lacrimal drainage system. In this study, medial canthoplasty and CDCR were combined to treat patients with medial telecanthal deformity and lacrimal drainage system obstruction.
CDCR combined with Jones tube placement is a classical technique to treat lacrimal drainage obstruction. The Jones tube is made of heat-resistant glass, which has poor flexibility and is prone to prolapse and dislocation. Chang et al. [3] reported 13-year follow-up results of CDCR with Jones tube placement; the most common cause of failure was medial migration of the Jones tube, apart from inappropriate tube insertion at the primary surgery and severe inflammation. In another report, [4] a new tube, named the Metaireau tube (M-tube), was used in CDCR. Although the M-tube is simple to reposition when dislocated postoperatively, it does not perform better than the Jones tube with respect to migration and extrusion in the early postoperative period. In this study, we adopted a Medpor-coated tear drain, which has been reported by others to show a lower rate of extrusion postoperatively. [5] However, the complication of tube obstruction was also observed during follow-up. [6]

Many surgical methods have been proposed for the management of medial telecanthal deformity. Some techniques, for instance drilling two holes and inserting a steel wire, are no longer applied; in addition to the difficulty of the procedure, they also cause damage to mucosal vessels and recurrent infection. [7] As for transnasal medial canthopexy, it is more applicable to bilateral than to unilateral medial canthopexy: the procedure requires not only exposing a larger surgical area to pass a wire through a bony fenestration, but also dissecting and protecting the contralateral orbit. [8,9] In this study, we chose the posterior aspect of the solid anterior lacrimal crest to attach the medial canthus, which can restore the naso-orbital valley. As shown in the results, all patients were satisfied with their appearance after surgery. The improved technique not only prevents complications that are common in other approaches but also provides an excellent method to repair the ipsilateral medial canthus in cases without complex naso-orbital fractures. The Y-shaped medial canthal incision described in this surgery is very small, yet it provides sufficient exposure for operation under direct vision. Moreover, the incision minimizes facial scarring and reduces operative time; the coronal approach has been reported to be complex and time-consuming for unilateral cases without craniomaxillofacial fractures. [10,11]

To treat upper nasolacrimal duct obstruction or absence, CDCR with tear drainage tube placement is an appropriate surgical method. Nevertheless, it has several complications, including tube malposition, extrusion, and proximal or distal obstruction, which are major problems that might influence the surgical outcome. Other minor problems, such as conjunctival irritation, corneal abrasion, infection, foreign body sensation in the eye, and lumen obstruction, might also affect patient comfort.
During the postoperative period, tube obstruction caused by conjunctival or mucosal proliferation is one of the most important reasons for failure of CDCR surgery. In previous studies of Pyrex drainage tube implantation in CDCR, tubes were obstructed by tissue proliferation at a rate of 7-12%. [12][13][14] Fan et al. [15] used Medpor-coated tear drainage tubes in their surgeries; the rate of obstruction was higher than in previous studies, but they did not offer a clear explanation. An obstruction rate of 27.3% (6/22) was observed in our study. In our view, the obstruction might be attributable to the Medpor coating. First, the Medpor coating vascularizes easily. Second, Medpor might irritate the mucous membranes around the tube and cause pyogenic granulomas.
Tube malposition or migration is a severe problem in CDCR that leads to surgical failure. Malposition or outward shift of the tube could damage the ocular surface, whereas inward shift may lead to pain, infection, obstruction, or mucosal damage. [16] During sniffing or coughing, the tube can migrate medially or laterally, which requires surgical revision. [17,18] Because the Medpor coating is prone to vascularization, which contributes to stability, malposition rarely happens when a Medpor-coated tube is used. In our series, however, tube malposition occurred in five patients. We questioned whether our surgical technique was inappropriate when making the tube bed. In the early cases, the tube bed was created by osteotomy, which might have overly enlarged it. We subsequently changed the technique: the bone at the tube bed was ground less, and a needle of the same size as the tube was pushed through the created tract from the conjunctival side to the nasal cavity, so that the tube sat in a tight position. Debris or mucus accumulation can also obstruct the lumen of the Medpor-coated tube after CDCR surgery; although revision might not be required, it definitely affects patient comfort. The incidence of lumen obstruction is widely considered to be lower for Pyrex tubes than for silicone and polyethylene tubes. [19] Tube extrusion is the most important complication after the CDCR procedure; it leads to surgical failure and usually happens before a fistula forms during the first 6 months after surgery. [20][21][22] Multiple factors can influence tube extrusion, for instance, the etiology of canalicular obstruction, the surgical method, or the shape and material of the tube. [23][24][25] In previous reports, the Pyrex glass tube was most commonly used for its satisfactory and ideal drainage, but its extrusion rate was as high as 18-51%, [16,17,20] so the Medpor-coated tear drain seemed much more stable. Fan et al. [15] reported no case of tube extrusion in Medpor-coated tear drainage tube implanted cases. This study revealed two tube extrusions in patients with Medpor-coated tubes; the reason might be the same as for tube malposition mentioned above, namely an inappropriate osteotomy causing an oversized tube bed. The results provide evidence that porous Medpor-coated tubes have good tissue compatibility, although efforts are still required to improve vascularization to prevent tube extrusion.
Many patients with medial telecanthal deformity also suffer from lacrimal drainage system damage, especially after trauma. To achieve optimal anatomic outcomes and functional recovery at the same time, medial canthoplasty and CDCR were combined to manage such patients. If medial canthoplasty were performed first, the titanium microscrew might be pulled away from the anterior lacrimal crest during the CDCR procedure, leading to recurrence of the medial telecanthal deformity. Conversely, if CDCR were performed first, the tear drainage tube might be shifted distally or proximally, or even broken, while driving the microscrew or fastening the wire. Therefore, combining medial canthoplasty and CDCR achieves better appearance and functional recovery in a single operation.
There are some limitations in this study. First, the duration of follow-up was short: only 3 months were used to observe effectiveness and complications. We will continue to investigate the long-term effects of the surgical technique. Second, this study was performed in a small group of patients; a larger cohort is needed to evaluate the efficacy and complications of the surgical procedure in a prolonged study.
In conclusion, according to the current study, the combination of medial canthoplasty and CDCR is a preferred surgical method for the treatment of medial telecanthal deformity and lacrimal drainage system obstruction. Further studies with prolonged follow-up and a larger number of cases are needed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
added: 2018-04-03T00:17:03.922Z | created: 2017-03-20T00:00:00.000
metadata:
{
"year": 2017,
"sha1": "40e4af41e88178c380dc0e6e649807996047bec3",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0366-6999.201594",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "40e4af41e88178c380dc0e6e649807996047bec3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
id: 257405083 | source: pes2o/s2orc | version: v3-fos-license
RADAM: Texture Recognition through Randomized Aggregated Encoding of Deep Activation Maps
Texture analysis is a classical yet challenging task in computer vision for which deep neural networks are actively being applied. Most approaches are based on building feature aggregation modules around a pre-trained backbone and then fine-tuning the new architecture on specific texture recognition tasks. Here we propose a new method named Random encoding of Aggregated Deep Activation Maps (RADAM) which extracts rich texture representations without ever changing the backbone. The technique consists of encoding the output at different depths of a pre-trained deep convolutional network using a Randomized Autoencoder (RAE). The RAE is trained locally to each image using a closed-form solution, and its decoder weights are used to compose a 1-dimensional texture representation that is fed into a linear SVM. This means that no fine-tuning or backpropagation is needed. We explore RADAM on several texture benchmarks and achieve state-of-the-art results with different computational budgets. Our results suggest that pre-trained backbones may not require additional fine-tuning for texture recognition if their learned representations are better encoded.
Introduction
For several decades, texture has been studied in Computer Vision as a fundamental visual cue for image recognition in several applications. Despite lacking a widely accepted theoretical definition, we all have developed an intuition for textures by analyzing the world around us from material surfaces in our daily life, through microscopic images, and even through macroscopic images from telescopes and remote sensing. In digital images, one abstract definition is that texture elements emerge from the local intensity constancy and/or variations of pixels producing spatial patterns roughly independently at different scales [39].
The classical approaches to texture recognition focus on the mathematical description of the textural patterns, considering properties such as statistics [10,15,24], frequency [1,12], complexity/fractality [2,38], and others [52]. Many such aspects of texture are challenging to model even in controlled imaging scenarios. Moreover, the wild nature of digital images also results in additional variability, making the task even more complex in real-world applications.
In this work, we propose a new module for texture feature extraction from pre-trained deep convolutional neural networks (DCNNs). The method, called Random encoding of Aggregated Deep Activation Maps (RADAM), goes in a different direction than recent literature on deep texture recognition. Instead of increasing the complexity of the backbone and then retraining everything, we propose a simple codification of the backbone features using a new randomized module. The method is based on aggregating deep activation maps from different depths of a pre-trained convolutional network, and then training Randomized Autoencoders (RAEs) in a pixel-wise fashion for each image, using a closed-form solution. This module outputs the decoder weights from the learned RAEs, which are used as a 1-dimensional feature representation of the input image. This approach is simple and does not require hyperparameter tuning or backpropagation training. Instead, we propose to attach a linear SVM at the top of our features, which can be simply used with standard parameters. Our code is open and is available in a public repository 1 . In summary, our main contributions are: (i) We propose the RADAM texture feature encoding technique applied over a pre-trained DCNN backbone and coupled with a simple linear SVM. The model achieves impressive classification performance without needing to fine-tune the backbone, in contrast to what has been proposed in previous works.
(ii) Bigger backbones and better pre-training improve the performance of RADAM considerably, suggesting that our approach scales well.
Background
We start by conducting a literature review on texture analysis with deep learning and randomized neural networks. The methods covered here are also considered for comparison in our experiments.
Texture Analysis with Deep Neural Networks
In this work, we focus on transfer-learning-based texture analysis by taking advantage of pre-trained deep neural networks. For a more comprehensive review of different approaches to texture analysis, the reader may consult [19].
There have been numerous studies involving deep learning for texture recognition, and here we review them according to two approaches: feature extraction or end-to-end fine-tuning. Some studies explore CNNs only for texture feature extraction and use a dedicated classifier apart from the model architecture. Cimpoi et al. [6] was one of the first works on the subject, where the authors compare the efficiency of two different CNN architectures for feature extraction: FC-CNN, which uses a fully connected (FC) layer, and FV-CNN, which uses a Fisher vector (FV) [5] as a pooling method. They demonstrated that, in general, FC features are not that efficient because their output is highly correlated with the spatial order of the pixels. Later on, Condori and Bruno [22] developed a model, called RankGP-3M-CNN, which performs multi-layer feature aggregation employing Global Average Pooling (GAP) to extract the feature vectors of activation maps at different depths of three combined CNNs (VGG-19, Inception-V3, and ResNet50). They propose a ranking technique to select the best activation maps given a training dataset, achieving promising results in some cases but at the cost of increased computational load, since three backbones are needed. Lyra et al. [21] also proposes feature aggregation from multiple convolutional layers, but pooling is performed using an FV-based approach.
Numerous studies propose end-to-end architectures that enable fine-tuning of the backbone for texture recognition. Zhang et al. [51] proposed an orderless encoding layer on top of a DCNN, called Deep Texture Encoding Network (Deep-TEN), which allows images of arbitrary size. Xue et al. [46] introduced a Deep Encoding Pooling Network (DEPNet), which combines features from the texture encoding layer of Deep-TEN and a global average pooling (GAP) to explore both the local appearance and global context of the images. These features are further processed by a bilinear pooling layer [18]. In another work, Xue et al. [47] also combined features from differential images with the features of DEPNet into a new architecture. Using a different approach, Zhai et al. [50] proposed the Multiple-Attribute-Perceived Network (MAP-Net), which incorporated visual texture attributes in a multi-branch architecture that aggregates features of different layers. Later on [49], they explored the spatial dependency among texture primitives for capturing structural information of the images by using a model called Deep Structure-Revealed Network (DSRNet). Chen et al. [4] introduced the Cross-Layer Aggregation of a Statistical Self-similarity Network (CLASSNet). This CNN feature aggregation module uses a differential box-counting pooling layer that characterizes the statistical self-similarity of texture images. More recently, Yang et al. [48] proposed DFAEN (Double-order Knowledge Fusion and Attentional Encoding Network), which takes advantage of attention mechanisms to aggregate first- and second-order information for encoding texture features. Fine-tuning is employed in these methods to adapt the backbone to the new architecture along with the new classification head.
As an alternative to CNNs, Vision Transformers (ViTs) [8] are emerging in the visual recognition literature. Some works have briefly explored their potential for texture analysis through the Describable Textures Dataset (DTD), achieving state-of-the-art results. ViTs achieve competitive results compared to CNNs, but the lack of the typical convolutional inductive bias usually results in the need for more training data. To overcome this issue, a promising alternative is to use attention mechanisms to learn directly from text descriptions about images, e.g. using Contrastive Language Image Pre-training (CLIP) [31]. Bigger datasets have also been proposed for the pre-training of ViTs, such as Bamboo [53], showing that these models scale well. Another approach is to optimize the construction of multitask large-scale ViTs, as proposed by Gesmundo [9] with the µ2Net+ method.
Randomized Neural Networks for Texture Analysis
A Randomized Neural Network [13,25,26,40], in its simplest form, is a single-hidden-layer feed-forward neural network whose input weights are random, while the weights of the output layer are learned by a closed-form solution, in contrast to gradient-descent-based learning. Recently, several works have investigated RNNs to learn texture features for image analysis. Sá Junior et al. [35] used small local regions of one image as inputs to an RNN, and the central pixel of the region as the target. The trained weights of the output layer for each image are then used as a texture representation. Ribas et al. [33] improved the previous approach with the incorporation of graph theory to model the texture image. Other works [16,32] have also extended these concepts to video texture analysis (dynamic texture).
The training of 1-layer RNNs as employed in previous works is a least-squares solution at the output layer. First, consider X ∈ R n×z as the input matrix with n training samples and z features, and g = φ(XW) as the forward pass of the hidden layer with a sigmoid nonlinearity, where W ∈ R z×q represents the random input weights for q neurons. Given the desired output labels Y, the output weights f are obtained as the least-squares solution of a system of linear equations: f = g^+ Y (1), where g^+ denotes the Moore-Penrose pseudoinverse [23,29] of matrix g.
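For concreteness, a minimal sketch of this closed-form fit in PyTorch (illustrative only; the function name and shapes are assumptions, not code from the paper):

```python
import torch

def fit_output_weights(X, Y, q=64, seed=0):
    """Closed-form training of a single-hidden-layer randomized network.
    X: (n, z) inputs, Y: (n, c) targets. Returns output weights f of shape (q, c)."""
    gen = torch.Generator().manual_seed(seed)
    W = torch.randn(X.shape[1], q, generator=gen)   # fixed random input weights
    g = torch.sigmoid(X @ W)                        # hidden-layer forward pass
    f = torch.linalg.pinv(g) @ Y                    # f = g^+ Y (Moore-Penrose pseudoinverse)
    return f
```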
An important aspect of RNNs is the generation of random weights for the first layer. Evidence suggests that this choice has little impact once the weights are fixed. In this sense, a common trend among previous works is the use of the Linear Congruential Generator (LCG), a simple pseudo-random number generator of the form x_{k+1} = (a·x_k + b) mod c.
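As an illustration (not the authors' implementation), such a generator can be written in a few lines; the default parameters below follow the classical configuration mentioned later in the text:

```python
def lcg(n, a=75, b=74, c=2**16 + 1, x0=0):
    """Linear Congruential Generator: x_{k+1} = (a*x_k + b) mod c.
    Returns the first n pseudo-random values. Sketch for illustration only."""
    values, x = [], x0
    for _ in range(n):
        x = (a * x + b) % c
        values.append(x)
    return values
```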
The RNN can be used as a randomized autoencoder (RAE) [17] by considering the input feature matrix X as the target output Y = X. In this sense, the model is composed of a random encoder and a least-squares-based decoder that can map the input data. Kasun et al. [17] also suggest the use of random orthogonal weights [37] for the initialization of the encoder. In this way, the weight matrix f will represent the transformation of the projected random space back into the input data X (output).
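A hedged sketch of an RAE fit under this description (random orthogonal encoder, sigmoid nonlinearity, least-squares decoder with target Y = X); the function name and the use of Gaussian weights before orthogonalization are assumptions:

```python
import torch

def rae_decoder_weights(X, q=1, seed=0):
    """Randomized Autoencoder: the decoder weights f map the random projection
    back to the input X and can serve as a compact representation of X."""
    gen = torch.Generator().manual_seed(seed)
    W = torch.randn(X.shape[1], q, generator=gen)
    W, _ = torch.linalg.qr(W)               # random orthogonal encoder weights
    g = torch.sigmoid(X @ W)                # random projection of the inputs
    f = torch.linalg.pinv(g) @ X            # closed-form least-squares decoder
    return f                                # shape (q, z)
```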
RADAM for Texture Feature Encoding
The main idea of the proposed RADAM method is to use multi-depth feature aggregation and randomized pixel-wise encoding to compose a single feature vector, given an input image processed by the backbone. First of all, consider an input image I ∈ R w0×h0×3 fed into a backbone B = (d 1 , ..., d n ), consisting of n blocks of convolutional layers. An activation map, i.e., the output of any convolutional block given I, is a 3-dimensional tensor (ignoring the batch dimension, for simplicity) X i ∈ R wi×hi×zi . The process of feature aggregation consists of combining the outputs of different activation maps at different depths. To that end, we divide the backbone into a fixed number of blocks according to different depths. This division is made to keep a fixed number of blocks for feature extraction, regardless of the total depth of the backbone architecture.
Pre-trained Deep Convolutional Networks: Backbone selection
Most previous works on texture analysis consider pre-trained ResNets [11] (18 or 50) as backbones. Here, we consider the output of five blocks of layers according to the ResNet architecture, meaning that five activation maps are considered for feature aggregation. Additionally, we consider the ConvNeXt architecture [20], a more recent method with promising results in image recognition. For this backbone, we consider the activation maps from the four blocks of layers according to the architecture described in the original work. More specifically, the following ConvNeXt configurations are used, with their corresponding number of channels (z i) of each block:
• ConvNeXt-nano 2 : z i = (80, 160, 320, 640).
• ConvNeXt-XL: z i = (256, 512, 1024, 2048).
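Since the experiments rely on the PyTorch Image Models (timm) library (mentioned in the Setup section), the per-block activation maps can be obtained with its features_only mode; a minimal sketch, with the model name chosen only for illustration:

```python
import timm
import torch

# Pre-trained backbone returning one activation map per block of layers
backbone = timm.create_model("convnext_tiny", pretrained=True, features_only=True)
backbone.eval()

with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))

# feats is a list of tensors, one per depth, shaped (1, z_i, spatial, spatial)
print([tuple(f.shape) for f in feats])
```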
Deep Activation Map Preparation
Given each deep activation map X i, we apply a depth-wise l p-normalization (p = 2, i.e., Euclidean norm): each 2-dimensional activation map X i (:, :, j), with spatial sizes (w i, h i), is divided by its Euclidean norm, where j ∈ z i indexes the channels.
For feature aggregation, we propose to concatenate the activation maps along the third dimension (z i). However, each map X i initially has a different spatial dimension w i and h i. To overcome this, we simply resize all activation maps with bilinear interpolation using the spatial dimensions of X n/2, (w n/2, h n/2), as the target sizes. In other words, we consider the spatial dimensions at the middle of the backbone as our anchor size, meaning that some activation maps will require upscaling (if i > n/2) and others downscaling (if i < n/2). Naturally, the information from activation maps at higher depths receives higher priority, considering that upscaling preserves more information than downscaling. These assumptions consider the most common structure of convolutional architectures, where the spatial size decreases with layer depth. Nonetheless, the idea is to keep all activation maps at a fixed spatial dimension. From now on, we will refer to the spatial dimensions of all X i as w = w n/2 and h = h n/2. For an input size of 224x224, this results in w = h = 28 for the backbones explored in this work. The concatenation of activation maps is then performed as X = [X 1 ; ...; X n ], where [.; .] denotes concatenation along the third dimension, and z = Σ i z i is the resulting number of channels after concatenation. Considering common convolutional architectures where z i < z i+1, activation maps from higher depths have a higher influence on the overall z features. Additionally, the 2-dimensional activation map at each channel z i is flattened, resulting in the reshaped 2D representation X with sizes wh-by-z, which we refer to as an aggregated activation map. These steps are illustrated in Fig. 1(a), which reports the overall structure of the proposed method.
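A compact sketch of this preparation step (normalization, resizing to the middle-block size, concatenation, and flattening); the exact calls and the helper name are assumptions consistent with the description above, not the released code:

```python
import torch
import torch.nn.functional as F

def aggregate_activation_maps(feats):
    """feats: list of (1, z_i, h_i, w_i) activation maps from the backbone blocks.
    Returns the aggregated activation map with shape (w*h, z), pixels as rows."""
    # depth-wise l2 normalization: each channel's 2D map gets unit Euclidean norm
    feats = [F.normalize(x.flatten(2), p=2, dim=-1).view_as(x) for x in feats]
    # anchor spatial size: the block at the middle of the backbone
    h, w = feats[len(feats) // 2].shape[-2:]
    feats = [F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
             for x in feats]
    X = torch.cat(feats, dim=1)                       # (1, z, h, w), z = sum(z_i)
    return X.flatten(2).transpose(1, 2).squeeze(0)    # (h*w, z)
```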
Pixel-wise Randomized Encoding
The aggregated activation map of a single image is used to train an RAE considering each spatial point, or pixel (row of X), as a sample and each channel (column of X) as a feature. In this sense, the method also works with arbitrary input sizes (if accepted by the backbone), since the spatial dimensions only affect the number of training samples for the RAE. Intuitively, larger input sizes would improve the RAE training, but would also increase the backbone cost significantly. Therefore, in this work, we consider only a constant input size of 224x224 (forced resizing), since this is the most common configuration of various backbones.

Moreover, considering that the spatial organization of the pixels is lost due to the flattening procedure of X, we add a positional encoding composed of sine and cosine functions of different frequencies with dimension z, as proposed in [42] and extended to 2 spatial dimensions as in [43], where x ∈ w and y ∈ h index the spatial positions. The positional encoding is then added to the aggregated activation map via element-wise sum.

After summing the positional encoding to X, the first step of the RAE is to project the inputs using a random fully-connected layer with weights W k ∈ R z×q, followed by a sigmoid nonlinearity. The weights are generated using the LCG for simplicity and better replicability, followed by standardization (zero centered, unity variance) and orthogonalization [37]. These configurations were chosen according to previous works [17,33]. As for the LCG parameters, we use a = 75, b = 74, and c = 2^16 + 1, starting with x = 0, which is a classical configuration according to the ZX81 computer from 1981. Here k works like a seed for random sampling, denoting a starting index inside the LCG space generated with the given configuration. More details on LCG weights are given in the Supplementary Material, such as an ablation on the impacts of different LCG configurations. The forward pass of the encoder g k ∈ R wh×q for all samples is then obtained as g k = φ(XW k), and the decoder weights f k ∈ R z×q are obtained as the least-squares solution described in Eq. (1), changing the target Y to X: f k = g k^+ X.

The main idea of employing an individual randomized neural network for each image is to use the output weights themselves as a representation. In the case of RAEs, the output layer has the same dimension as the input layer. Therefore, a single hidden neuron (q = 1) is considered to maintain the dimensionality. In this sense, the resulting decoder weights are represented by f k = (ν 1, ..., ν z), where ν i represents the connection weight between the single hidden neuron and the output i, corresponding to feature i ∈ z.

A single-neuron RAE may be limited in encoding enough information contained in the deep activation maps. Therefore, we propose an ensemble of models or, as recently introduced [45], a model "soup", which is achieved by combining the weights of m parallel models. Here, each model is an RAE with a different random encoder (using a different LCG seed), and the combination is performed by summing the decoder weights: ϕ m = Σ k f k. It is important to note that the encoders g k of each of the m RAEs have a different random weight initialization. This is achieved by creating an LCG sequence of size mz so that we have z weights for each of the m RAEs. The structure of the RAE is illustrated in Fig. 1(b), and following the whole RADAM pipeline shown in Fig. 1(a), a texture representation, or feature vector ϕ m, is obtained for the input image. The code for all these steps is available in the Supplementary Material and in our online repository 1 .
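Putting the pieces together, a hedged end-to-end sketch of the pixel-wise encoding; the 1D sinusoidal positional encoding and the Gaussian random weights below are deliberate simplifications of the 2D encoding and LCG-based weights described above, so this is an illustration of the technique rather than the paper's exact procedure:

```python
import math
import torch

def radam_features(Xagg, m=4, q=1, seed=0):
    """Xagg: (w*h, z) aggregated activation map. Returns a z-dimensional
    texture representation: the summed decoder weights of m single-neuron RAEs."""
    wh, z = Xagg.shape

    # simplified 1D sine/cosine positional encoding over the flattened pixel index
    pos = torch.arange(wh, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, z, 2, dtype=torch.float32) * (-math.log(10000.0) / z))
    pe = torch.zeros(wh, z)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div[: z // 2])
    X = Xagg + pe

    gen = torch.Generator().manual_seed(seed)
    phi = torch.zeros(z)
    for _ in range(m):                          # "soup" of m RAEs
        W = torch.randn(z, q, generator=gen)
        W = (W - W.mean()) / W.std()            # standardized random encoder weights
        g = torch.sigmoid(X @ W)                # (w*h, q) encoder output
        f = torch.linalg.pinv(g) @ X            # (q, z) closed-form decoder weights
        phi = phi + f.sum(dim=0)                # q = 1: the decoder row is the feature
    return phi
```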
The feature vectors ϕ m are then used to train a linear classifier for a given texture recognition task (more details on the classifier can be consulted in Sec. 4.2.1).
Setup
Our model is implemented using PyTorch [27] (except for the classification step), making it easier to couple RADAM with several methods implemented in this library. The classification step is performed using Scikit-learn [28]. We measure our results by the average classification accuracy and corresponding standard deviation, when applicable (depending on the dataset). For the backbones, we consider the PyTorch Image Models library [44] (version 0.6.7), which contains several pre-trained computer vision methods. In the Supplementary Material, we present the main code for RADAM, and the complete implementation including scripts for experimentation can be consulted in our GitHub repository 1 .
Seven texture datasets are used for evaluation purposes in this paper, of which the following two variants of the Outex dataset [14] were used for analyzing the RADAM method alone:
• Outex10: Composed of 4320 grayscale images in 24 different texture classes. This dataset focuses on rotation invariance;
• Outex13: This suite holds 1360 RGB images in 68 texture classes, and evaluates color texture recognition.
The following five datasets are used for comparisons with other methods:
• Describable Texture Dataset (DTD) [5]: Composed of 5640 images in 47 different texture classes, evaluated by the 10 provided splits for training, validation, and test;
• Flickr Material Dataset (FMD) [41]: Holds 1000 images representing 10 material categories, and validation is done through 10 repetitions of 10-fold cross-validation;
• KTH-TIPS2-b [3]: Contains 4752 images of 11 different materials. This dataset has a fixed set of 4 splits for 4-fold cross-validation;
• Ground Terrain in Outdoor Scenes (GTOS) [47]: This dataset represents 34105 images divided into 40 outdoor ground materials classes. There is also a fixed set of 5 train/test splits;
• GTOS-Mobile [46]: Consists of 100011 images captured from a mobile phone of 31 different outdoor ground materials, and contains a single train/test split.
Analysis of RADAM properties
Our first experimental evaluation concerns aspects of the proposed RADAM method. In the Supplementary Material, we show an additional analysis of the impacts caused by different random weights (LCG configurations) and concluded that they are minimal, corroborating previous works. In the following, we evaluate and discuss other aspects of RADAM.
Positional encoding and different classifiers
We evaluate two design choices for the RADAM pipeline: the use of positional encoding and the classifier. The method is compared with and without positional encoding under two different classifiers: Linear Discriminant Analysis (LDA) [34] and Support Vector Machines (SVM) [30]. For LDA, the least-squares solution with automatic shrinkage using the Ledoit-Wolf lemma is used, and a linear kernel with C = 1 is considered for SVM. Since the evaluation of positional encoding concerns the spatial properties of texture, we consider the Outex10 benchmark, which focuses on rotation invariance. As the results in Tab. 1 demonstrate, positional encoding improves or maintains performance in all cases, especially under the LDA classifier. On the other hand, SVM provides the best results in all cases while gaining less improvement from positional encoding. Nevertheless, we keep the positional encoding in our architecture since the additional cost is negligible compared to the potential gains. The SVM is also used as the classifier for all the following experiments.
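For reference, the two classifier settings described above map directly to scikit-learn; a sketch using the library options that match the stated configurations:

```python
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Linear SVM with C = 1, and LDA with least-squares solver and
# automatic (Ledoit-Wolf) shrinkage, as used in the comparison above.
svm_clf = SVC(kernel="linear", C=1)
lda_clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
```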
Soup size
The only free parameter of the proposed RADAM method is the number of RAEs to be combined, i.e., m. We evaluated m ranging from 1 to 32 and the results are shown in Fig. 2 for different backbones in the Outex13 dataset. We observe significant gains for m from 1 to 4, while for larger values performance tends to stabilize. These results indicate that 4 ≤ m ≤ 8 is a good approach for a balance between performance and cost since no significant gains are achieved above that. All the following experiments in this paper are performed using m = 4. The effects observed when increasing m are expected considering what is usually seen in model ensembles, or "model soups" [45], where the combination of models trained separately may be beneficial. On the other hand, our encoders are random, and each one has different weights. However, even if each encoder creates a different random projection of the input, the decoders learn to transform the projection back to the same feature space. In other words, the RAEs learn different encoding-decoding functions for the same input that, when combined, provide a better representation in our feature extraction use case.
Comparison with other pooling techniques
To show the gains of RADAM over the common pooling approach, we performed additional experiments using Global Average Pooling (GAP) coupled with SVM using the same configurations considered for RADAM. We use GAP applied over the output of the last layer of the backbone (the usual approach in most CNN architectures), and also GAP agg., which aggregates the GAP from each of the n feature blocks and returns an image representation with z features (as RADAM). The results are shown in Tab. 2, where it is possible to observe that RADAM overcomes these two approaches by a considerable margin, in all backbones and datasets. GAP agg. usually improves over the regular GAP of only the last convolutional layer, which is to be expected since more features are being added. However, the regular GAP sometimes performs better than this simple aggregation by concatenation. On the other hand, the gains when using RADAM for aggregation are far more expressive and result in state-of-the-art performance in all benchmarks we considered, as we discuss in the following section.
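The two pooling baselines can be sketched as follows (an illustrative helper, assuming channels-first activation maps as in the aggregation sketch above):

```python
import torch

def gap_baselines(feats):
    """GAP over the last block only, and GAP aggregated over all n blocks."""
    gap_last = feats[-1].mean(dim=(-2, -1)).squeeze(0)                     # (z_n,)
    gap_agg = torch.cat([x.mean(dim=(-2, -1)).squeeze(0) for x in feats])  # (z,)
    return gap_last, gap_agg
```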
Comparison with literature
Finally, we compare RADAM with several state-of-the-art methods on five challenging texture recognition datasets; all results are shown in Tab. 3. The table is organized into separate rows according to the different backbones in terms of computational budget. We indicate the pre-training dataset used for the backbone; ImageNet-1K [7] was used when not stated. Firstly, we show the results obtained using RADAM and MobileNet V2 (standard or with width multipliers of 1.4) as a lightweight alternative (costs are discussed in depth in Sec. 4.5). The first comparison section contains methods using ResNet18, and we also included RADAM with ConvNeXt-nano in this section considering that it has a similar computational budget. In this scenario, RADAM achieves competitive performance on KTH using ResNet18 but is less effective on other datasets. On the other hand, RADAM with MobileNetV2 1.4 achieves better results than ResNet18 in all cases except GTOS-Mobile and beats all the ResNet18 literature methods on DTD, FMD, and KTH, proving to be an excellent low-cost approach. Using ConvNeXt-nano, RADAM also achieves the best results on all datasets except GTOS and GTOS-Mobile.
Considering the results within the ResNet50 budget, RADAM performs much better compared to ResNet18. Competitive results are achieved on most datasets using ResNet50, and also the best results on KTH. Using ConvNeXt-T RADAM achieves state-of-the-art on FMD, and also overcomes the compared methods on KTH. Performance is improved even further considering ConvNeXt-T pre-trained on ImageNet-21K, achieving the best results on most of the datasets. These results show the potential gains of better pre-training of the employed backbone.
The results shown in the last block of rows in Tab. 3 concern methods with an increased cost compared to the previous ones. Here we compare RADAM using ConvNeXt-B, L, and XL against several works including very recent methods, such as ViTs. Our method achieves state-of-the-art results on all datasets in this case, especially with the ConvNeXt-L/XL backbones. It is possible to notice again that the better pre-training with ImageNet-21K results in significant performance increases.
Feature extraction cost versus performance
Additional analysis is performed to better understand the balance between the classification performance and computational budget of the compared texture recognition methods. We consider the inference costs in terms of GFLOPs and the number of parameters according to the backbone used by each method, since this is the most resource-demanding step of every pipeline. One important aspect here is that the input size greatly impacts the FLOP count of the methods (check input sizes in parentheses in Tab. 3). Most works consider 224x224 inputs (the same input size employed by RADAM), and we assume this same size when not stated by the authors. For this analysis, we also are not considering the preparation of the backbone, either in terms of pre-training cost or chosen dataset, nor the fine-tuning of the methods that do so. The results are shown in Fig. 3, first for the DTD dataset alone (Fig. 3(a)), since this is the most challenging task and the one with more methods to compare, and then as the average for the remaining datasets (Fig. 3(b)). It is possible to notice the superiority of RADAM at different budgets, especially when using MobileNet V2 1.4 and ConvNeXt-T. Moreover, results also scale with backbone complexity.

Table 3: Classification accuracy of different methods on texture benchmarks. The used backbones are separated into row blocks according to their computational budget, the input size is indicated in parentheses (224x224 when not stated), and the two best results in each block are highlighted in bold type. Results in blue show the previous state-of-the-art on each dataset, and red represents our results matching or above that.
It is also important to mention the backbone costs of other methods not present in this analysis due to the lack of available results. For instance, Multilayer-FV achieved the previous state-of-the-art on FMD using EfficientNet-B5 with an input size of 512 pixels, which yields an approximate inference cost of 12 GFLOPs. This is considerably higher than the cost of ConvNeXt-T (4.5 GFLOPs), with which RADAM achieves an improvement of 4.3% in absolute performance compared to Multilayer-FV. On KTH, RankGP-3M-CNN++ achieves 91.1% accuracy (previous state-of-the-art) using three backbones, with a total inference cost of around 30 GFLOPs, while RADAM with ConvNeXt-T achieves comparable results (−0.1%) and also reaches state-of-the-art results using ConvNeXt-B (91.8% with an inference cost of 15 GFLOPs) and ConvNeXt-XL (94.4% with an inference cost of 61 GFLOPs). Considering GTOS and GTOS-Mobile, competitive cost and performance are also achieved with RADAM using ConvNeXt-T, and state-of-the-art results with ConvNeXt-B, L, and XL.
To complement the cost analysis, Table 4 compares training and inference running times. Considering the results in the table, fine-tuning ResNet50 on GTOS-Mobile using only 10 epochs with a batch size of 1 would take around 40 hours on CPU or 9 hours on GPU. On the other hand, extracting features with RADAM followed by SVM inference on the whole GTOS-Mobile dataset takes around 1.3 hours on CPU or half an hour on GPU. The training of the SVM on the whole dataset (on CPU) takes an additional 15 minutes on average, without hyperparameter tuning. The results demonstrate that the RADAM module is considerably faster than the ResNet backbone, both in terms of inference speed and when comparing feature extraction followed by SVM with training/fine-tuning. These results extend to ConvNeXt-nano and ConvNeXt-T, considering that their cost is comparable to ResNet18 and ResNet50, respectively, corroborating our claims that RADAM provides both considerable savings in training time and SOTA results at a similar inference cost. Moreover, considering more costly backbones such as ConvNeXt-L and XL, the gains in training time can be even more expressive.
Conclusion
We presented RADAM, a new feature encoding module for texture analysis. The method consists of randomly encoding aggregated deep activation maps from a pre-trained DCNN using RAEs. These autoencoders learn to pool activation maps into a 1-dimensional representation by training on their z-dimensional pixels as sample points. A texture image is then encoded using the decoder weights learned from its activation maps. The procedure is orderless but takes into account the spatial information of the pixels by using a 2D positional encoding. Compared to previous works, our method does not require fine-tuning of the backbone, and the encoding module is rather simple. Linear classification of the descriptors is performed with an SVM, and we achieve state-of-the-art performance on several texture benchmarks. RADAM also achieves the best efficiency considering inference cost and performance using backbones with varying computational budgets. These results are impressive also considering that, compared to other methods, no fine-tuning of the backbone is needed for RADAM, resulting in a lower cost also at training time.
Our work corroborates a simpler approach to texture recognition where the fine-tuning of costly backbones may not be necessary to achieve high discriminatory power. For future works, one may explore different backbones or different formulations of our RAE, with multiple layers, more hidden neurons, and other possible improvements. On the other hand, if enough computing resources are available, another approach more similar to previous works would be to explore our module in an end-to-end manner. Since RADAM is deterministic and a closed-form solution, an alternative would be adding a linear layer instead of an SVM and optimizing it along with the backbone.

The parameters in the code are the ones used by our experiments in the main paper. Additionally, the einops library is used for efficient reorganization of the activation map, and we also use an additional class to perform L_p normalization in batches as a PyTorch layer.
The implementation of the Randomized Autoencoder (RAE) is shown in the following. It takes as arguments the dimensions of the model, previously passed to the RADAM class. The constructor generates the random encoder and the positional encoding (e.g., self.encoding = torch.reshape(self.encoding, (z, w*h)).to(device) and self._activation = torch.sigmoid), and the fit_AE method computes the decoder weights, i.e., the least-squares training. Finally, the main RADAM class can simply be applied to the output of net, for a given image or batch, to obtain texture representations:

texture_representation = RADAM(device, z, (w, h))(net(input_batch))

As detailed in the main paper, these representations are then used to train a linear SVM for texture classification. For that, we use the scikit-learn library to build the classifier as follows:

from sklearn import svm
SVM = svm.SVC(kernel='linear')
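Since the full class listing is not reproduced here, the following is an illustrative stand-in consistent with the interface described above (a constructor taking the model dimensions and a fit_AE method computing the closed-form decoder); it is not the released implementation:

```python
import torch

class SimpleRAE(torch.nn.Module):
    """Minimal stand-in for the RAE class described above (illustrative only)."""

    def __init__(self, z, q=1, seed=0):
        super().__init__()
        gen = torch.Generator().manual_seed(seed)
        W = torch.randn(z, q, generator=gen)
        self.register_buffer("W", (W - W.mean()) / W.std())  # fixed random encoder
        self._activation = torch.sigmoid

    def fit_AE(self, X):
        """X: (w*h, z) aggregated activation map; returns (q, z) decoder weights."""
        g = self._activation(X @ self.W)
        return torch.linalg.pinv(g) @ X
```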
Additional Experiments with RADAM
Here we present additional experiments not present in the main paper.
Variance caused by random initializer
Tab. 1 shows the RADAM performance variance caused by changing the LCG configuration (a, b, and c) for weight initialization of the RAE. We selected a set of 10 configurations from the table available in 1 . In general, we observe minimal performance variance concerning these parameters, especially with larger backbones, corroborating that the choice of the random weights is not critical. We adopted parameters a = 75, b = 74, and c = 2^16 + 1 starting with x = 1 (first line in the table), as it uses the smallest integers and is also a classical configuration according to the ZX81 computer from 1981. However, small variations may impact the comparison between RADAM and other methods. Nevertheless, it is important to notice that our choice of LCG parameters does not impact the new state-of-the-art results reported in the main paper for RADAM. In fact, carefully selecting specific configurations in Tab. 1 for each dataset and backbone yields even higher results, but we preferred a single consistent configuration to be used for all cases. Additionally, for applicability purposes, we provide files with the exact values computed by our code for all the LCG configurations (which will be available in our code repository).
added: 2023-03-09T06:42:45.805Z | created: 2023-03-08T00:00:00.000
metadata:
{
"year": 2023,
"sha1": "dc1bb8b56d1fecfa074aef9e34416e089a1444ff",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "dc1bb8b56d1fecfa074aef9e34416e089a1444ff",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
id: 258037835 | source: pes2o/s2orc | version: v3-fos-license
Prognosis of pulmonary arterial hypertension patients with pericardial effusion before and after initiation of parenteral prostacyclin therapy
Abstract Few studies have evaluated the effects of pulmonary arterial hypertension therapies on pericardial effusion. We evaluated hemodynamics, echocardiograms, and outcomes for 119 parenteral prostanoid‐treated patients. We discovered an increased frequency of pericardial effusions posttreatment, and that a moderate‐large pericardial effusion at initiation, but not at 1st follow‐up, was significantly associated with mortality.
INTRODUCTION
In patients with pulmonary arterial hypertension (PAH), prior studies have found that pericardial effusion at diagnosis is more common in patients with other markers of poor prognosis, particularly elevated right atrial (RA) pressure, and that the presence of pericardial effusion is associated with an increased risk of death. However, few studies have evaluated the effects of PAH therapies on the prevalence of pericardial effusion or evaluated the clinical significance of posttreatment pericardial effusions. We therefore sought to evaluate the frequency, size, and clinical implications of pericardial effusions at baseline and at 1st follow-up in a cohort of PAH patients initiating parenteral prostanoid therapies.
METHODS
This retrospective cohort study included patients with PAH initiated on IV epoprostenol or IV/SC treprostinil between 2007 and 2016 at the University of Texas Southwestern Medical Center. Institutional review board approval was obtained from the University of Texas Southwestern Medical Center Human Research Protection Program (#052015-041), including a waiver of informed consent. Echocardiography, right heart catheterization, functional class, 6-min walk distance (6MWD), and NT-proBNP assessments were performed as standard-of-care tests, with the timing determined by the treating clinician. First follow-up was defined as the time of the 1st echocardiogram performed after at least 90 days of IV/SC therapy. Survival outcomes by baseline and 1st follow-up hemodynamics and echocardiographic findings, as well as changes in hemodynamics and echocardiogram results between pre- and postparenteral prostanoid therapy, have been published previously. 1,2 Inclusion criteria were treatment with a parenteral prostanoid and available results for a pre-IV/SC therapy echocardiogram; this results in a slightly different sample size versus prior studies. For the current study, baseline and first follow-up characteristics for patients with and without pericardial effusion were compared, including demographics, functional class, NT-proBNP, 6MWD, echocardiography, and hemodynamic results. For transplant-free survival, outcomes were assessed both by presence of effusion and by effusion size (none, small, moderate-severe). Patients undergoing transplant were censored at the time of transplant. Analyses included Cox proportional hazards (survival), Student's t-test (pre/post comparison for continuous variables), analysis of variance (ANOVA) with Tukey-Kramer's all-pairs comparison (RA pressure by pericardial effusion size), and chi-square tests (echocardiography comparisons). Analyses were performed using NCSS 2022.
RESULTS
119 PAH patients initiated on IV/SC therapy were followed for a median of 7 years. Most patients were female (81%) and had idiopathic or CTD-PAH, 2 and the median age at IV therapy initiation was 47 years. Survival at 1, 2, and 3 years was 86%, 78%, and 67%, respectively. The median time from diagnosis to IV/SC initiation was 625 days (interquartile range [IQR]: 130-1233 days). Pericardial effusion was present on echocardiogram in 43 of 119 patients before initiation of IV/SC therapy (36%), with a similar distribution of PAH etiologies among those with and without an effusion. Patients with a pericardial effusion at the time of IV/SC therapy initiation had greater PAH severity on multiple measures of prognosis compared with patients without a pericardial effusion, and this was statistically significant for log NT-proBNP, RA pressure at catheterization, and TR severity and IVC characteristics on echocardiogram (Table 1).
A moderate or large pericardial effusion before IV/SC therapy initiation, present in only seven patients, was associated with increased mortality risk (hazard ratio [HR]: 2.57, 95% confidence interval [CI] of [1.36-64.87], p = 0.0037), while a small effusion was not associated with increased risk, with an unexpected possible inverse association (HR: 0.37, 95% CI: 0.21-0.64, p = 0.004). Presence of an effusion was particularly common among those dying in the 1st 90 days after IV/SC therapy initiation, with four of seven patients (57%) with a moderate or large effusion dying during this time period compared with early deaths in 2 of 36 (5.5%) with a small effusion and 2 of 76 (2.6%) with no pericardial effusion.
One hundred and four (84.7%) patients underwent a follow-up echocardiogram after at least 90 days of IV/SC therapy (median 168 [IQR: 120-241] days); reasons for missing data included death before 90 days (N = 8) and missing data for other reasons (N = 7). Compared with the pre-IV/SC therapy echocardiogram, the number with any pericardial effusion increased from 35 to 47 patients (34% vs. 45%, p < 0.01), including 21 patients who developed a new effusion during this time versus only nine patients whose effusion resolved. The presence of a pericardial effusion at 1st follow-up was significantly associated with only RA size on echocardiogram (Figure 1), though a trend toward greater TR severity and abnormal IVC characteristics was also seen. There was no significant association between 1st follow-up pericardial effusion of any size and mortality.

Finally, we also evaluated pericardial effusion size versus RA pressure in greater detail, as this is the hemodynamic measure reported as most strongly associated with the presence of a pericardial effusion. Before IV/SC therapy, RA pressure increased in a stepwise manner with pericardial effusion size, with mean RA pressures of 11 ± 7, 14 ± 7, and 19 ± 6 mmHg for no effusion, small, and moderate-large effusion, respectively (ANOVA p = 0.008; in pair-wise comparisons, RA pressure in the no-effusion group differed significantly from that in the moderate-large effusion group, p = 0.01). In contrast, much smaller differences were seen in RA pressures at first follow-up, with RA pressures of 7 ± 6, 8 ± 6, and 9 ± 8 mmHg for those with no effusion, small, and moderate-large effusion (ANOVA p = 0.74).
DISCUSSION
In this study of 119 patients with PAH, pericardial effusion was numerically more common in patients following IV/SC prostanoid therapy (43% at follow-up vs. 36% before initiation). Although most effusions at followup were small (88%), five patients had moderate or large effusions at first follow-up.
Prior studies exploring changes in pericardial effusion size after treatment with PAH therapies have had varied results. In the BREATHE-1 clinical trial of bosentan versus placebo, a possible beneficial effect of bosentan on pericardial effusion score was reported (echocardiographic substudy; N = 85; treatment effect p-value = 0.053). In contrast, in the pivotal randomized controlled trial of epoprostenol in idiopathic PAH, there was no change in the pericardial effusion score between baseline and 12 weeks in patients receiving epoprostenol (N = 41), and similar results were seen in patients in the control arm. [3][4][5]

Most pericardial effusions in PAH are thought to occur due to elevations in right-sided pressures leading to myocardial interstitial edema formation with increased epicardial transudation, and reduced lymphatic outflow related to elevated systemic venous pressures. RA pressure at catheterization is the hemodynamic value most strongly associated with the presence and size of pericardial effusions. 6,7 Thus, given the significant hemodynamic improvement seen in our cohort, including RA pressures, we anticipated that fewer pericardial effusions would be present amongst post-IV/SC treated patients, contrary to what was seen. This then raises the question of whether longstanding pulmonary hypertension may lead to other physiologic changes in the myocardium, pericardium, or lymphatic systems that could lead to pericardial effusion. For example, elevated central venous pressure can lead to thoracic duct remodeling, with increased thoracic duct diameter and wall thickness. 8 This remodeling could lead to lymphatic valve insufficiency, impairing unidirectional lymph transport and allowing lymph backflow even at lower central venous pressures. The association between pericardial effusion presence and RA size on echocardiogram, a marker of longstanding elevations in RA pressure, indirectly provides support for this hypothesis.
Other possibilities, such as inflammatory pericardial effusions associated with autoimmune conditions, seem unlikely as we did not see a higher rate of pericardial effusion in those with CTD-PAH in this cohort, though it is still possible that this may have contributed in individual patients. It is possible that alternative mechanisms could be involved. Although we lack a definitive explanation, one other observational study has described a high rate of new pericardial effusion in IV epoprostenol-treated patients, with 15 of 23 patients (65%) developing a new pericardial effusion during treatment, 9 and additional studies in both prostanoid and non-prostanoid treated patients would be helpful.
From a survival standpoint, we saw a significantly increased risk of death associated with a moderate-large effusion at IV/SC therapy initiation, versus a more modestly elevated HR at follow-up that was not statistically significant. We suspect that this latter result may partially relate to the sample size and the modest number of patients with moderate to large effusions. We also suspect that the inverse association between small effusion size at baseline and outcome may be spurious. Limitations of the study include the small sample size and the lack of a validation cohort; as such, these results should be considered hypothesis generating. As discussed above, there are numerous open questions about why pericardial effusions form in PAH patients, including how elevated central venous pressures contribute and how prostanoids affect this process. These findings should inform larger studies that evaluate these questions further.
In summary, in our cohort, we saw an increase in the presence of pericardial effusions among patients treated with IV/SC prostanoids. Most effusions were small and not hemodynamically significant themselves, but new moderate or large effusions did develop in some patients. Further study is warranted to explore both the pathophysiology and reproducibility of this finding.
AUTHOR CONTRIBUTIONS
All authors contributed equally to the development of this manuscript.
Synthesising the Existing Literature on the Market Acceptance of Autonomous Vehicles and the External Underlying Factors
In recent years, the level of acceptance of autonomous vehicles (AVs) has changed with the advent of new sensor technologies and the proportional increase in market perception of these vehicles. Our study provides an overview of the relevant existing studies in order to consolidate current knowledge and pave the way for future studies in this area. The paper first reviews studies investigating the market acceptance of AVs. We identify the nonbehavioural factors that account for the level of acceptance and examine these in detail by cross-referencing the results of relevant papers published between 2014 and 2021 to reach a consensus on the perceived benefits and concerns. The findings showed that previous studies have found legal liability, safety, privacy, security, traffic conditions, and cost to be key external factors influencing the acceptance or rejection of AVs, and that the upsides of adopting AVs in regard to improving traffic conditions and safety outweigh the risks identified in relation to these areas. This resulted in an overall weighted average of 65% market acceptance of AVs among the 11,057 people surveyed in this regard. However, the remaining respondents were not very favourably disposed towards adopting AVs because of unresolved issues related to data privacy, security breaches, and legal liability in the event of accidents. In addition, our evaluation showed that the worldwide market purchasing power for an AV, based on 2022 prices, is around $38k, which is significantly below the current anticipated price of $100k.
Introduction
As a key component of future intelligent transport systems [1], autonomous vehicles (AVs) are likely to change travel behaviour, as they will have a signifcant impact on the modes of travel used [2][3][4][5][6][7]. Lehtonen et al. [8] pointed out that autonomous driving has the advantage of making using these vehicles more attractive than manual driving. Various studies have identifed the benefts and risks of AVs with regard to safety, trafc congestion, the number and severity of accidents, and ofering a means of mobility to individuals who have previously been unable to drive, such as people with certain types of disabilities [9][10][11][12][13][14][15][16][17][18]. Li et al. [19] emphasised that safety is the most signifcant concern in relation to AVs. However, some studies, such as that by Nikitas et al. [20], have warned against having unrealistic expectations of AVs that cannot be fully understood until more extensive testing has been conducted to ensure their safe operation. In this regard, Wang and Li [21] discussed how AVs have already started to be tested in several US states and some European and Asian countries. A study by Lee and Hess [22] also showed that the US, Australia, and Germany had taken actions relating to the safety testing of AVs. It is also worth mentioning that the abbreviation AV, which is used throughout this paper, means a fully automated vehicle or level 5 AV as defned by the Society of Automotive Engineers (SAEs) (2016) and used by the National Highway Trafc Safety Association (NHTSA) [23].
From a business point of view, if AVs are to penetrate the transport market successfully, they must be widely accepted [24]. However, the vast majority of relevant studies published to date, some of which are referred previously, have mainly focused on one or more characteristics of the transportation system, such as safety, security, and trafc conditions. Considerably less attention has been paid to the extent to which people, or in a more general sense, the markets, accept these vehicles and what factors infuence their perceptions with regard to this matter. Tis is evidenced by the number of publications produced over the past few years. As shown in Figure 1, between 2014 and 2021, 4,214 papers published on the Web of Science investigated the performance of AVs in relation to road transport regarding one or more of the characteristics mentioned previously. However, less than 1% (17) of the published papers has explored the acceptance of AVs. It is worth noting that few papers published before 2014 have investigated the adoption of AVs from a transport point of view. Although many studies have investigated the adoption of AVs, few have quantifed it in terms of a market acceptance percentage.
Consequently, there is a significant gap in this area. As the AV industry and the science behind it are advancing rapidly, the market acceptance of AVs will need to adapt accordingly. Thus, there is a need to review the benefits of, concerns about, and level of acceptance of AVs over time.
In recent years, a number of studies have investigated the user acceptance of AVs from two perspectives. Some research has investigated social and behavioural factors, such as trust, attitudes, social norms, perceived value, risk, and usefulness, while other studies have explored nonbehavioural or external factors. For a comprehensive review of the various aspects related to social and behavioural theories that afect the acceptance of AVs, see, e.g., Fraedrich and Lenz [25] and Jing et al. [26]. Dichabeng et al. [27] conducted a focus group study investigating the various factors infuencing the acceptance of shared AVs. Tey concluded that security, trust, and the quality of shared space are the main factors involved in whether people are willing to accept AVs. Nastjuk et al. [24] also investigated some factors afecting the acceptance of AVs from a user perspective. Tey concluded that individual and social factors play a vital role in driving the widespread acceptance of AVs.
Using survey research focusing on social psychology and customer utility, Yuen et al. [28] studied the cognition process that leads individuals to accept or reject AVs. They found that the acceptance of AVs is affected by the trust that users have in these vehicles and their perceived value. Ekman et al. [29] pointed out that it is essential to consider providing as much information as possible about AVs, such as their driving performance and safety record, to improve user trust.
In general, the social and behavioural studies mentioned previously have investigated the factors and mechanisms that drive the acceptance of AVs and why consumers are inclined to accept or reject these vehicles. Nonetheless, they were less focused on the level of acceptance, i.e., how much individuals or the market in general are willing to pay for and use these cars. However, some studies have evaluated nonbehavioural factors such as safety, cost, travel time, and mobility (trafc), relating to the AV infrastructure and AV technology [30]. Tese studies have focused on the external factors that have an impact on people's decisions about whether to adopt AVs, and most of them have used surveys to conduct their investigations. Some of these survey studies such as those by Das [31]; Hussain et al. [32]; Kim et al. [33]; and Rezaei and Caulfeld [34] have investigated one or more characteristics of the infrastructure, vehicle, or transportation system, such as safety and security, in relation to the acceptance of AVs. It is imperative to mention that behavioural studies are also required to understand more about people's reasoning regarding whether to accept or reject AVs; however, that is beyond the scope of the current study. For a comprehensive review of the various survey studies investigating the acceptance of AVs, see Becker and Axhausen [35]. As mentioned earlier, the level of acceptance of AVs has increased with the advent of new sensor technologies and the knowledge that these vehicles have improved in terms of safety, security, costs, and driving performance in road trafc. In order to make the most up-to-date assessment of user acceptance of AVs, this paper frst reviews studies that have investigated the acceptance of AVs with regard to the various benefts and drawbacks of these vehicles. Tis is followed by a numerical evaluation of the level of acceptance in the form of a percentage. Te study extracted the key external factors impacting on the acceptance or rejection of AVs from the studies examined in order to determine the key drivers of the acceptance level, i.e., the main reasons why the study participants accepted or rejected the adoption of AVs. Subsequently, we analysed the acceptance criteria by reviewing 88 papers published between 2014 and 2021 to consolidate existing knowledge regarding the factors infuencing acceptance. To the best of the authors' knowledge, this has not been done in any previous studies. Te remainder of this paper is organised as follows. Section 2 reviews the relevant previous studies and identifes the key factors resulting in acceptance or rejection of AVs. Section 3 examines these key factors in greater depth to arrive at a consensus from the results. Section 4 analyses the market acceptance of and buying power with regard to AVs. Section 5 discusses the key observations made by this study and situates these within the literature, and Section 6 provides the key conclusions regarding the aforementioned overview. Finally, Section 7 outlines the limitations of this study and ofers recommendations to pave the way for future researchers to better utilise the results of this study and fll the research gaps within this area.
Overview of the Market Acceptance
As discussed in Section 1, some studies have evaluated the factors and mechanisms that infuence the acceptance of AVs but have not explicitly examined the market acceptance of these vehicles. Terefore, this paper targeted those studies that have evaluated the main reasons for the market acceptance or rejection of AVs and assessed the acceptance rate. For example, a recent survey by Rezaei and Caulfeld [15] of 475 Irish participants showed that only 20% were interested in adopting AVs and paying for these vehicles. Nonetheless, there was a general belief that AVs could potentially reduce the number of accidents, and that consequently people would feel more secure and safer driving an AV. In addition, reducing delays, queues, and trafc congestion was one of the most appealing aspects of adopting AVs and a signifcant reason for their acceptance by these participants [7]. However, 80% of the participants stated that they would not be happy to adopt AVs because of privacy issues, security breaches, and the high cost of the vehicles. Overall, Rezaei and Caulfeld [15] found a statistical correlation between the security and safety of AVs and the acceptance of these vehicles. It is also worth noting that the correlation between the cost of AVs and their acceptance was investigated by Howard and Dai [36]. Approximately 65% of the individuals who participated in Howard and Dai's [36] study believed that cost would be a substantial barrier to accepting AVs. Rezaei and Caulfeld [15] also proved this statistical correlation mentioned previously by applying a backward linear regression model. Data privacy and the recording of data by AVs have also been cited as one of the main reasons for their rejection or acceptance (e.g., [15,37]). Rezaei and Caulfeld [15] found a statistical correlation between data privacy and the overall level of interest in and acceptance of AVs; most participants in their survey were unwilling to accept AVs because of the data recorded by them and concerns about data privacy.
Legal liability is another significant concern and a key factor affecting the acceptance of AVs. About 66% of the study participants were concerned about legal liability, which made them reluctant to adopt AVs [15,36,38,39]. Table 1 summarises the complete list of survey studies that have investigated people's interest in and concerns about AVs and how they affect their overall opinion regarding the acceptance of these vehicles. The studies in Table 1 also calculated the percentage of participants willing to adopt AVs, thus representing the acceptance rate among the community studied.
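The acceptance rates drawn from these studies are later pooled into a sample-size-weighted average (reported as roughly 65% across 11,057 respondents). The sketch below shows that calculation; the study names, sample sizes, and acceptance rates used here are illustrative placeholders rather than the actual Table 1 entries.

```python
# Illustrative placeholder entries; the real values come from Table 1.
studies = [
    {"study": "Survey A", "n": 475,  "acceptance": 0.20},
    {"study": "Survey B", "n": 5000, "acceptance": 0.69},
    {"study": "Survey C", "n": 1533, "acceptance": 0.62},
    {"study": "Survey D", "n": 3255, "acceptance": 0.71},
]

total_n = sum(s["n"] for s in studies)
weighted_acceptance = sum(s["n"] * s["acceptance"] for s in studies) / total_n

print(f"Respondents pooled: {total_n}")
print(f"Weighted market acceptance: {weighted_acceptance:.0%}")
```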
Our review of the key benefts of and concerns about AVs, as outlined in Table 1, showed that legal liability, accidents, equipment failure, safety, trafc conditions, security, cost, and privacy were the factors most frequently mentioned in the participants' responses. Tese fndings validated the study by Lee et al. [30], which showed that concerns about safety and cost have a signifcant impact on the market acceptance of fully autonomous vehicles. Lee et al. [30] also concluded that ease of driving and driver education would positively infuence consumer acceptance of partially autonomous vehicles; however, these factors are beyond the scope of the current study (as outlined in Section 1), which focuses only on fully autonomous vehicles. On this basis, fve groups of factors were considered for further analysis in this paper, as follows: legal liability, safety, trafc conditions, privacy and security, and costs, each comprising a key theme that repeatedly occurred in the relevant studies. In this regard, "liability" refers to the terms of use of AVs on public roads, the group or agency responsible for accidents involving AVs, and other regulatory frameworks related to deploying these vehicles. Safety refers to equipment failures by AVs, their understanding of surrounding objects, driving decisions, errors that may result in accidents or, conversely, help drivers in an impaired condition, and other driving assistance that can help increase safety and reduce accidents. Trafc conditions refer to the features that help AVs make informed decisions while on the road, which may result in smoother trafc fow, fewer queues, and confict points at intersections and therefore less congestion overall. Te more efcient use of existing lanes, route choices and use of parking spaces, and the capacity to drive at near-constant velocities are key features in this context. Privacy and security refer to data recording, data sharing, data protection, data privacy, cybersecurity measures, security breaches, and cyber-attacks. Finally, cost refers to the price of AVs or technologies that can provide some (or fully) automated features in human-driven vehicles (HDVs).
Key External Factors Influencing the Adoption of AVs
3.1. Traffic.

Briscoe [44] and Fagnant and Kockelman [45] suggested that the implementation of autonomous technologies such as adaptive cruise control (ACC) and traffic surveillance can lead to a more streamlined flow of traffic through the use of automated braking and acceleration systems. This helps vehicles maintain a near-constant average speed, thereby making the calculation of travel time for AVs more accurate. Based on reinforcement learning, Zhu et al. [46] proposed a model for controlling velocity during car following that could be used to develop autonomous driving systems with improved safety and efficiency and more comfortable velocity control. (Car-following is a driving behaviour model; probably the most famous example is the Wiedemann car-following model, which has ten parameters or driving logics for emulating human driving behaviours and has been widely used by the traffic simulation software Vissim.) This model performed better than the MPC-based ACC algorithm and outperformed human drivers. A recent case study involving simulation modelling of AVs by Rezaei and Caulfield [16] suggested that AVs may substantially affect the quality of the traffic flow by reducing traffic queue length and the duration of delays. Furthermore, the simulation study conducted by Ye and Yamamoto [47] on the impact of AVs on road capacity suggested that road capacity would increase with a more significant number of AVs on the road. Fagnant and Kockelman's [45] study showed that AVs have the potential to anticipate the actions of other vehicles, such as sudden braking or decisions to accelerate. Because they have the ability to choose the best route, AVs can also make more efficient use of road lanes, allowing them to operate with smaller distances between them and other vehicles in a convoy. This ability enables vehicles to brake more smoothly and adjust their speed more efficiently when travelling in a platoon [45]. The study by Zhu and Ukkusuri [48] verified Fagnant and Kockelman's [45] findings by showing that the presence of AVs within the traffic network will improve the smoothness of the traffic flow.
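The car-following logic referred to above can be illustrated with a simple model. The sketch below implements the Intelligent Driver Model (IDM), a standard car-following formulation; it is not the Wiedemann model used in Vissim nor the reinforcement-learning controller of Zhu et al. [46], and the parameter values are illustrative defaults only.

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v0=30.0,    # desired speed (m/s)
                     T=1.5,      # desired time headway (s)
                     a_max=1.0,  # maximum acceleration (m/s^2)
                     b=1.5,      # comfortable deceleration (m/s^2)
                     s0=2.0):    # minimum standstill gap (m)
    """Acceleration of a following vehicle under the Intelligent Driver Model."""
    dv = v - v_lead  # closing speed on the leader
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Example: follower at 25 m/s, leader at 22 m/s, 40 m apart (yields a deceleration).
print(round(idm_acceleration(25.0, 22.0, 40.0), 3))
```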
Studies investigating parking areas and related concerns have demonstrated that AVs have the potential to lower parking costs and improve the utilisation of available parking spaces in urban areas [49].
Overall, the benefits of adopting AVs with regard to traffic conditions could potentially increase the market acceptance of these vehicles. Table 2 also outlines several other studies that have reviewed the traffic impacts of AVs that may encourage their market acceptance. However, there are some possible downsides to adopting AVs, such as the fact that they could disrupt the traffic flow. For example, an increase in the number of unnecessary trips and vehicle miles travelled (VMT) could increase traffic congestion. Table 2 presents the traffic-related outcomes associated with AVs that may increase or decrease the market acceptance of these vehicles.

Table 2: Traffic-related outcomes associated with AVs that may increase or decrease their market acceptance.
Increasing market acceptance:
Bansal et al. [11]; NHTSA [53]; Gerdes and Thornton [54]: implementing and managing speed limits more effectively than HDVs
Cui et al. [55]: reducing the gaps between cars and reducing traffic congestion with near-constant velocities
Li et al. [56]: recognising and balancing upstream and downstream traffic incidents using intelligent sensors
Schwarting et al. [57]: using combined decision-making, control, and perception approaches to make informed decisions
Levin [58]: using data from LIDAR and other vehicles and infrastructure facilities to make effective route choices
Igliński and Babiak [59]; Howard and Dai [36]: adhering to traffic regulations
Awal et al. [60]: using existing intersections and lanes more effectively with shorter headways and reducing lane-changing bottlenecks
Fagnant and Kockelman [45]: connecting and coordinating with other vehicles in platoons
Decreasing market acceptance:
Nikitas et al. [20]: possible safety issues associated with a mixture of HDVs and AVs on the roads during the first few years of AV adoption
Martínez and Viegas [61]: increasing vehicle miles travelled by using shared AVs
Fagnant and Kockelman [45]: increasing unnecessary congestion, traffic volume, vehicle miles travelled, and trips
Safety.

Statistics from the Organisation for Economic Co-operation and Development (OECD) have shown that more than 1.2 million people worldwide die in road accidents annually. Road accidents are the leading cause of death among young people aged 15-29 [62]. The OECD [62] data also demonstrate that the total motorised mobility in cities was 18 billion passenger kilometres (BPKs) in 2015; this is estimated to rise by 94% to 34.9 BPK by 2050. Such a substantial rise in mobility demand makes safety a global public health issue that requires special attention and consideration.
Fagnant and Kockelman [45]; Kyriakidis et al. [41]; and Howard and Dai [36] showed that human driver errors such as distraction, fatigue, alcohol, and drug taking are the leading cause of accidents. Favaro et al. [63] verifed this assertion with their fndings that 94% of car accidents occur due to human driver errors. Hussain et al. [32] highlighted AVs' capability to reduce human errors, and Wu et al. [64] suggested that AVs signifcantly reduce driving fatigue. Reducing driver errors by people under the infuence of alcohol, drugs, and medication was also recognised as a beneft of adopting AVs by 1,453 Chinese people, according to Qu et al. [65].
Papadoulis et al. [66] and Vander Laan and Sadabadi [67] found that AVs would be expected to have a quicker reaction time and safer driving operations than human drivers. In this regard, Combs et al. [68] and Noy et al. [69] also highlighted the intelligent sensor technologies associated with AVs that help them make informed decisions about unexpected road incidents, which has the efect of increasing road safety. Moreover, Li et al. [70] proposed a new decision-making algorithm that could be used by AVs to avoid collisions in various scenarios, focusing on diferent driving style preferences. Te method they developed was reliable enough to increase driver acceptance of AVs.
Katrakazas et al. [71] highlighted AVs' capability to identify surrounding objects more efectively than HDVs, thus reducing the number of accidents. A total of 185 professionals in the survey conducted by Rezaei and Caulfeld [34] also highlighted AVs' ability to reduce the number of accidents on public roads. Te capability to safely deliver freight and ofer a safe form of mobility for unlicenced drivers, people with certain disabilities and older people were also identifed as benefts of adopting AVs [72][73][74].
The studies reviewed in this section revealed that safety is one of the key external factors influencing the adoption of AVs, according to the views of potential users, many of which have been discussed above. Table 3 provides an overview of the main safety benefits of AVs and the concerns that may increase or decrease their market acceptance.
Privacy and Security.
Although efforts have been made to assess the different characteristics of AVs and their possible impacts on road transportation, many questions remain unanswered regarding the recording of data by AVs and the possibility of security breaches and hacking [7]. This concern becomes more critical in regard to connected and autonomous vehicles (CAVs), as the V2X communication system they use is likely to be a significant focus of cybersecurity attacks against AVs [33]. Rakotonirainy et al. [77] found evidence to suggest that a flaw in the security system used by AVs could result in serious crimes, such as engaging in the unauthorised surveillance of important individuals. The majority of the 5,000 people who participated in the survey by Kyriakidis et al. [41] were very concerned about the potential for hacking AVs and losing control of their vehicles. The survey by Rezaei and Caulfield [15] involving 475 Irish people also verified the observation made by Kyriakidis et al. [41], showing that members of the public, in general, worried about the secure operation of and safety issues associated with AVs. Pham and Xiong [80] showed that autonomous systems, especially those used in CAVs, are vulnerable to cyberattacks and may also affect many other vehicles of their generation on the network as part of the infrastructure because of their interconnectivity. Rizvi et al. [81] pointed out that designing a robust safety system for AVs requires a better understanding of the potential vulnerabilities and threats associated with them. In addition, Macher et al. [82] also highlighted certain vehicle-related cybersecurity issues, which helped identify proactive defence systems and countermeasures that could be used to address them. Cui et al. [51] developed an integrated simulation platform to evaluate the safety of CAV sensory systems and quantify the severity of potential crashes. Cui et al. [55] concluded that not all cyber-attacks result in crashes, and when they occur, the emergency braking system will probably prevent most of them. They also found that GPS jamming is another potential form of cyber-attack that could result in a collision, so this is an area that requires further investigation and development.
Regarding the privacy of AVs, the sensors installed on them are programmed to collect information about the vehicle and any incidents involving the vehicle's surroundings [77]. Several studies have pointed to the recording of data by AVs, the access to and use of data by third parties, and the tracking of individuals' locations. This could result in security breaches and the hacking of AVs [15,37]. However, Kim et al. [33] claimed that new artificial intelligence tools and technologies could identify these threats and protect AVs against cyber-attacks. Table 4 presents some of the actions that could help to increase the security and market acceptance of AVs. Also detailed in Table 4 are some concerns that may decrease the market acceptance of these vehicles.
Legal Liability.
Legal responsibility is a critical and widely discussed issue in regard to the integration of AVs. Bartolini et al. [87] divided the legal liability concerning AVs into civil, criminal, and administrative categories. Civil liability deals with the compensation for property damage to third parties, criminal liability involves the death or injury of an individual in an accident with an AV, and administrative liability concerns driving incidents that occur without proper authorisation [87]. Tese three forms of liability must be addressed and resolved before AVs can become widely adopted, as the allocation of tort liability by law will signifcantly infuence consumer acceptance of AVs. For example, the extent to which AVs are responsible in the case of an accident raises questions as the driver is no longer in control of the vehicle's operation [36].
Several studies have investigated the public's response to the issue of legal liability in relation to autonomous vehicles [15,36,39,76]. Tese research studies have found that potential users are uncertain about who would be held responsible in the event of an accident involving an autonomous vehicle. Legal liability is viewed as a major barrier to the adoption of AVs by the public. Te absence of an ofcial framework or policy regarding this issue is a common gap identifed by all the relevant studies to date, making it difcult to assess public concerns and manage the data and information that AVs collect [11,41,45,53,88,89]. Tis uncertainty over legal liability has raised security concerns, such as the possibility of hacking and unauthorised tracking of AVs, which could lead to severe collisions, disruptions to the trafc network, carjacking, and even the kidnapping of important individuals [45]. Te extent of legal responsibility for an AV accident has yet to be determined and may be assigned to the driver, the manufacturer, or other groups and agencies [53].
Several eforts have been made to establish frameworks for determining responsibility in incidents involving AVs [90]. Tere has been some progress in terms of legislation and testing of AVs, particularly regarding the development and deployment policies aimed at enhancing the practical use of AVs on public roads and evaluating their potential impact on trafc and other key elements of highway transport [91,92]. Several countries have already begun to create regulatory frameworks for the safe testing and use of AVs. For example, Japan has refned its legal framework for operating Level 3 AVs on public roads [93]. Lee and Hess [22] found that many countries have updated their laws regarding the administration, safety testing, and operation of AVs. AV testing has also got underway in the US, Europe, and Asia [21]. Table 5 outlines some of the concerns and advancements associated with investigations into AVs regarding liability.
Costs and Willingness to Pay.
Cost is a significant concern for road users with regard to the adoption of AVs [39]. Neiger [95] estimated that the price of an AV could be between $70k and $100k (US dollars). The cost of an AV will substantially affect people's interest in purchasing one. The study by Liu et al. [96] involving 1,355 Chinese participants showed that around 26% were not interested in AVs because they were not happy to pay extra for AV technologies. Rezaei and Caulfield [15] found that nearly half of the 475 Irish people who participated in their survey would not be willing to pay (WTP) more than $5,900 to add automation technologies to their vehicles. Table 6 summarises several other studies that surveyed individuals' opinions about the WTP for AVs.

Table 6: Willingness to pay for adding autonomous driving technologies, by study.
Rezaei and Caulfield [15]: Ireland, 475 participants, $5,900
Liu et al. [96]: China, 1,355 participants, $2,900
Bansal et al. [11]: US, 347 participants, $7,300
Kyriakidis et al. [41]: 109 countries, 5,000 participants, $10,500
Schoettle and Sivak [39]: UK, US, Australia, 1,533 participants, $4,400
Schoettle and Sivak [78]: Australia, UK, US, Japan, India, China, 3,255 participants, $2,400
Average WTP: $5,124
Market Analysis
In this study, we evaluated people's purchasing power and compared it with the observed WTP for AVs; the results are shown in Table 6. In order to do so, we collected information about the top 10 best-selling cars in 2022 worldwide, as shown in Table 7. For each car, the average price is provided in US dollars, and the average price of the top 10 cars was treated as the average price that an individual would pay to buy a car. This is representative of the average purchasing power for a car globally. It is worth mentioning that this type of analysis could have been conducted at the country level. However, as a country's wealth and economic status can affect its citizens' purchasing power, a global-level study was deemed more suitable for ascertaining the purchasing power of people from different economic backgrounds. According to Table 7, the average purchasing power for individuals worldwide is $33,088 (US dollars). From the reviewed studies listed in Table 6, it was ascertained that the average WTP for autonomous features to be added to an HDV is around $5,124. Adding this WTP to the average purchasing power, the total price that people would be willing to pay for an AV with fully autonomous driving features based on 2022 car prices was calculated as $38,212. This is significantly lower than the anticipated current cost of approximately $100k for an AV (INSIDER, 2022) [95], which indicates that this could be a significant concern for individuals regarding their future willingness to adopt AVs.
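As a minimal sketch of this purchasing-power calculation, the snippet below reproduces the arithmetic described above. The list of top-10 car prices is a hypothetical placeholder (the actual Table 7 values are not reproduced here); only the reported averages ($33,088 purchasing power, $5,124 WTP) and the anticipated $100k AV price come from the text.

```python
# Hypothetical placeholder prices for the top-10 best-selling cars (USD);
# the real Table 7 values are not reproduced here.
top10_prices = [25_000, 27_500, 29_000, 31_000, 32_000,
                33_500, 35_000, 36_500, 39_000, 42_380]

avg_purchasing_power = sum(top10_prices) / len(top10_prices)  # paper reports $33,088
avg_wtp_for_automation = 5_124                                # average WTP from Table 6
anticipated_av_price = 100_000                                # estimated current AV price

total_willing_to_pay = avg_purchasing_power + avg_wtp_for_automation
shortfall = anticipated_av_price - total_willing_to_pay

print(f"Average purchasing power: ${avg_purchasing_power:,.0f}")
print(f"Total willingness to pay for a fully autonomous vehicle: ${total_willing_to_pay:,.0f}")
print(f"Gap versus anticipated AV price: ${shortfall:,.0f}")
```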
Discussion
By evaluating the relevant papers published between 2014 and 2021, this study revealed a significant gap in terms of investigating the market acceptance of AVs, showing that less than 1% of the Web of Science publications were concerned with the market perception of these vehicles and people's WTP for them.
Reviewing the studies that investigated market acceptance of AVs and the factors that influence it revealed that five transportation system characteristics play major roles in this regard. Legal liability, safety, privacy and security, AV traffic-related outcomes, and the cost of AVs were frequently seen as crucial reasons for the market acceptance or rejection of AVs in previous survey studies. Some of these studies discussed the potential benefits, while others pointed out the potential drawbacks of AVs.
A further review of the 100 papers investigating the potential benefts and drawbacks of the key characteristics, as the main drivers of AV acceptance, revealed that AVs could have more potential to improve the trafc fow than disrupt it. Te studies showed that AVs might be able to signifcantly improve the smoothness of the overall trafc fow [44,51], as well as the signal timing at intersections [45,60], road capacity [16,47], and parking management [49,52]. However, there is a possibility that AVs could also increase congestion, trafc volume, VMT, and unnecessary trips [61,99], which could be controlled through the use of proper trafc management strategies; otherwise, these factors may diminish the benefts of AVs with regard to improving the trafc fow, as argued previously.
The studies showed that AVs could have a high potential to reduce the rate of accidents involving pedestrians and cyclists [31], in addition to eliminating human error [41,65], reducing the overall number of accidents [9,11,34,69,75] and 2020a [18], and increasing safety by making informed decisions [69,72]. These potential improvements would encourage more people to adopt AVs [15,36,39]. Nevertheless, significant concerns were also identified, indicating that the market remains dubious about the benefits of AVs in this respect. It is possible that AVs might not succeed in fulfilling such tasks [78]. For instance, some people were very concerned about the reaction speed and safe and secure operation of AVs [66,67,78] due to their potentially poor understanding of objects in their surrounding environment [34]. There was also some indication that AVs might not be as effective at reducing the severity of any accidents as they might be at reducing the overall number of casualties [34]. If these safety concerns are not addressed, current and potential users will be reluctant to adopt AVs for their day-to-day travel needs. Software failure [11,68], security breaches and hacking [15,20,83,85], and car hijacking and kidnapping [86], as well as the disruption of traffic networks and catastrophic collisions [45], were found to be the primary security concerns regarding the adoption of AVs. Aside from these, data recording by AVs remains a serious concern within the market. The type of data stored by AVs, the use of data by third parties, and the tracking of an individual's location were among the key concerns [100]. In this regard, Pham and Xiong [80] highlighted some advanced forms of cyber-attack that AVs may be unable to identify or respond to; at least there is no solid evidence available to confirm that AVs can currently do so. Privacy and cybersecurity, therefore, remain significant concerns that could hinder the adoption of AVs, as the drawbacks of AVs in this respect outweigh their benefits.
Another area in which AVs were found to have more drawbacks than benefts if adopted was in relation to legal liability. Tis was cited as a primary concern in several studies [15,36,39,41,76]. Te main reason for such concerns was the uncertainty about who the responsible group or agency for accidents involving AVs would be [36,39,76] and the lack of established regulatory frameworks in this respect [41,53,94]. However, a number of studies showed that advancements had been made in terms of designing regulatory frameworks for the safe testing and operation of AVs that may pave the way for defning a full regulatory framework in the future [21,22,91,93,101].
Te reviewed studies showed that the average amount people would be willing to pay to add AV technologies to their vehicles was $5,124. In order to evaluate the market purchasing power in greater depth, this study calculated the average price an individual would pay to buy a car to represent the average (car) purchasing power. Tis value was found to be $33,088. After adding the average purchasing power to the WTP for AVs, the total price that people would be willing to pay for an AV with fully autonomous driving features was calculated as $38,212. Tis is far below the estimated current price of $100k (INSIDER, 2022) [15,95,96]; hence, it remains a signifcant concern for the general market with regard to the adoption of AVs. People are much more likely to be interested in purchasing an AV if it is afordable [16]. Correspondingly, some studies have attempted to fnd ways to minimise the generalised costs. By combining a locally-optimal motion planner with a Markov decision process (MDP) model, Liu et al. [102] simulated vehicle trajectories. Te framework that they proposed reduced the trip costs of journeys made using AVs, including fuel and travel time costs, while also guaranteeing safety. However, young men, educated individuals, people earning a higher income and those interested in driving were found to be willing to pay more for AVs [96].
Conclusions
To conclude the research presented in this paper, the following key findings were identified, which add to the existing body of work within this field:
(i) Legal liability, safety, privacy, security, traffic conditions, and costs are key factors influencing the acceptance of AVs.
(ii) This study has shown that despite some speculation about the possible downsides of AVs concerning traffic and safety, AVs may offer more benefits in these areas. These benefits were sufficient to appeal to 65% of the participants in the reviewed studies, calculated as the weighted acceptance rate of AVs across the survey studies listed in Table 1, which together covered 11,057 individuals.
(iii) 35% of the participants were reluctant to adopt AVs because of unresolved issues related to data privacy, security breaches and hacking, and legal liability problems in the event of accidents.
(iv) The cost of AVs seems to be a significant barrier to their adoption by the market. When cost was not an issue, the market showed greater interest in adopting these vehicles.
(v) After examining the impact of vehicle automation and automation failures on driving performance, Strand et al. [79] claimed that driving performance decreases as the level of automation increases. Correspondingly, Tennant et al. [103] observed that people who enjoy driving are less enthusiastic about AVs.
(vi) The study showed that the price people are willing to pay for an AV is significantly below the estimated current price of an AV.
Limitations and Recommendations
We are mindful that evaluating the behavioural factors affecting users' decisions about whether to adopt AVs is as crucial as investigating the external factors relating to the infrastructure and manufacturing side and that not all external factors were examined in existing empirical studies. In this regard, it is recommended that future studies use both approaches and conduct behavioural and nonbehavioural survey studies on the same group of participants in the form of a Delphi method or other similar techniques [104]. We acknowledge that AV studies are advancing fast and that technological progress in the feld may signifcantly afect the market acceptance of these vehicles in the coming years. In light of this, the current study encourages future researchers to conduct similar analyses to expand current knowledge about their market acceptance. Tis could be done by conducting survey studies within the car manufacturing industry that would involve interviewing manufacturers to determine their preparedness and potential ongoing actions regarding the production of AVs at various levels of automation. Te insights gained from doing so would be of value in helping the entire AV market. Tey would be useful in terms of determining what to expect from AVs regarding their potential benefts and drawbacks, including those studied in this research, regarding the latest technological advancements. Future researchers could also attempt to identify the acceptance level of each of the infuencing factors from the manufacturers' point of view and thus suggest possible solutions that would increase the overall market acceptance of AVs.
Data Availability
The data supporting the conclusions of this article can only be made available for academic research. Requests to access the datasets should be directed to rezaeim@tcd.ie.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Hijama (wet cupping therapy) enhances oral and dental health by improving salivary secretion volume and pH in adult patients at King Abdul Aziz University Hospital (KAUH), Jeddah, KSA: A controlled trial study
Objective The aim of this study was to explore the potential effect of Hijama in promoting oral health by analyzing its effects in modulating saliva flow and pH. Method An open-label, non-randomized controlled trial design was conducted at the Hijama clinic of Y.A. Jameel Scientific Chair of Prophetic Medical Applications at King Abdul Aziz University Hospital (KAUH), Jeddah, KSA. Forty-one healthy volunteers were divided into two groups: Hijama (intervention, N = 21) and control (N = 20). Saliva volume and pH were measured in salivary samples collected in a standardized fashion, 1 h before admission to the Hijama room (pre-Hijama) and 30 min after the procedure (post-Hijama) in both groups. The Hijama group underwent an additional salivary collection 7 days after Hijama. Result Early post-Hijama assessment showed an increase in saliva volume by an average of 1 mL in the Hijama group, whereas that in the control group decreased by 0.6 mL (p < 0.001; large effect size, Cohen's d = 1.24). Saliva pH also increased in the Hijama group by an average 0.22 but decreased by 0.08 in controls (p < 0.001; large effect size, Cohen's d = 1.22). The multivariate model demonstrated that Hijama explained 48.8% of the variability of both pH and volume together (group × time effect, eta squared = 0.488, p < 0.001), whereas time and sex had no effect. At 7 days post-Hijama, both the volume and pH of saliva had increased in the Hijama group with respect to the early post-Hijama time point; however, only the volume increase was statistically significant. Conclusion Hijama enhanced salivary function and induced a significant increase in saliva volume and pH, which was maintained 7 days after the intervention. Further studies are warranted to identify other effects of Hijama on salivary glands and explore its long-term efficacy and clinical applications.
Introduction
Saliva, the principal component of oral fluid, plays a critical role in the preservation of oral health, and the maintenance of oral homeostasis and microbiome balance, beyond other functions in facilitating food chewing and swallowing. 1,2 Saliva is secreted by three pairs of major salivary glands: the parotid, submandibular, and sublingual glands. Moreover, it receives contributions from 300 to 400 minor salivary glands in the oral cavity. 3,4 Human saliva in the oral cavity functions in the maintenance of human health, and its complex composition is indicative of normal or abnormal human health. 5 Saliva flow and composition, as well as the percentage contribution of each gland, vary with physiological state, notably during mastication or food stimulation. 1,2 Quantitative or qualitative changes in saliva have causal or syndromic relationships with several conditions, primarily oral diseases such as tooth decay and caries, wherein a variety of physical and biochemical changes in saliva have been documented. 6e9 However, salivary dysfunction may also be associated with or influenced by extra-oral conditions, such as menopause, 10 aging, 11 or radiotherapy, 12 or by iatrogenic factors, such as treatment with isotretinoin, 13 which may affect oral health. By contrast, saliva composition is influenced by a broad range of other physiological and pathological systemic conditions, such as nutritional status, substance use, and emotional, hormonal, or immunological statuses, in addition to several oncological and infectious diseases. 1 Investigating saliva and its characteristics is an area of increasing interest among researchers and clinicians, because several saliva biomarkers have diagnostic and prognostic value and are easily accessible via noninvasive collection methods. 1,14,15 Interventions modulating salivary flow or composition may have value in oral health preventive and therapeutic applications. For instance, several clinical trials have demonstrated that stimulating saliva production by chewing sugar-free gum has a protective effect against the development of dental caries. 16 Likewise, the intake of tea, derived from Camellia sinensis dried leaves, has demonstrated a strong caries protective effect, owing to its antibacterial, amylase and acid production inhibitory, and fluoride supply properties. 17 Consequently, saliva stimulation has been proposed as a preventive tool for promoting oral health by maintaining an optimal pH in the oral cavity.
However, some pathological conditions may skew saliva homeostasis toward a pro-caries state. For example, diabetes mellitus is associated with a decreased salivary pH, which is associated with a significantly elevated risk of dental caries and periodontitis among people with diabetes. 18 The correlation of salivary flow or pH levels with dental caries development has been thoroughly demonstrated. An elevated pH, along with saliva buffering capacity and mineral content, is associated with decreased caries activity. 19 By contrast, data from a 2-year longitudinal study have indicated that a low resting saliva pH ( 6.0) and flow ( 0.6 mL/min) are associated with a 60% and 140% increase in the incidence of dental caries. 20 However, saliva properties change over time, thus potentially influencing the risk of active caries development in either direction. 21 Hijama, wet cupping therapy, is a traditional remedy with a long history of use in several cultures and civilizations. In the Islamic tradition, the Prophet Mohammed, peace be upon him, has encouraged its use on several occasions, promoting it as one of the best remedies. 22,23 During the past century, Hijama has regained popularity worldwide. Several clinical studies have been conducted to demonstrate its preventive and therapeutic effects in a variety of conditions and to adapt its technical aspects accordingly. 24e32 Consequently, the practice of Hijama is regulated in several countries, notably in those in which this practice has high popularity, such as KSA, which has developed national standards of safety and training. 22 Other studies have identified the mechanisms of action of Hijama, including enhancement of local blood circulation, tissue clearance of oxidative stress and inflammatory mediators, and immunomodulatory effects. 31,33e35 However, studies on the effects of Hijama on oral health and dental health are scant. In KSA, few studies have evaluated the effects of wet cupping on saliva. Therefore, this study fills this research gap by increasing knowledge on this topic. Moreover, if effective and sustained effects of Hijama in stimulating saliva are demonstrated, this treatment may provide a better preventive option that is cost-effective. Furthermore, this study may aid in exploring the potential effects of Hijama in promoting oral health and preventing dental caries by analyzing the modulation of salivary gland function. The aim of this study was to assess the effects of Hijama on saliva by measuring the changes in saliva flow and pH after a single Hijama session performed at two time intervals among adults attending the Prophetic Medicine Clinic of Y.A. Jameel Scientific Chair of Prophetic Medical Applications at King Abdul Aziz University Hospital (KAUH), Jeddah, KSA.
Design and setting
This is an open-label, non-randomized controlled trial design performed at the Prophetic Medicine Clinic of Y.A. Jameel Scientific Chair of Prophetic Medical Applications at KAUH, Jeddah, KSA, from March 31, 2019 to January 12, 2020. The KAUH Prophetic Medicine Clinic is part of the outpatient clinic department in KAUH. It is funded by Y.A. Jameel and accepts referrals from different specialties in the university hospital. Cupping therapy is performed as a complementary therapy for different conditions in conjunction with routine treatment. The benefit of cupping therapy is systematically assessed by comparison of the outcomes of the routine treatment method combined with cupping therapy. On average, 2000 patients are seen in the cupping therapy clinics every year. The clinic has three qualified physicians and three qualified nurses. The physicians and nurses are licensed by the Saudi Commission for Health Specialties and for cupping therapy by the National Saudi Organization of Integrative Medicine.
Participants
The study included apparently healthy adult patients who attended the Hijama Clinic for preventive and health promotive purposes. Individuals who had a clinically detectable oral condition, such as tooth decay, aphthous lesions, gingivitis, or labial herpes, or who had undergone dentistry or an oral procedure in the prior 3 months were excluded. Likewise, individuals with uncontrolled chronic diseases, such as hypertension, diabetes, dysthyroidism, end-stage disease, ongoing malignant disease, pregnancy, or mental disorders, were excluded.
Participants were divided into two groups: an intervention group (Hijama group) and control group. The group allocation was determined according to participant preference. Participants from each group received a full explanation of the study and signed a consent form for participation in the study as volunteers.
Intervention
Participants from the intervention group were seated on the examination bed at a 45° angle, with the head and neck resting against the back of the bed. Four cups were applied to each patient. The cups were located at the parotid and submandibular salivary gland areas bilaterally. Parotid cups were placed at the parotid area just anterior to the tragus of the ear bilaterally. Submandibular gland cups were placed just medial to the midpoint of the ramus of the mandible, also bilaterally. Standard sterilized, single-use commercial Hijama sets were used. Sets included cups equipped with a vacuum system and a suction pump. Practitioners wearing sterile gloves placed the vacuum cups, which were suctioned onto each identified point of the skin (dry cupping). Each cup was maintained for 30 s and was removed by de-suction. Afterwards, a sterile size-15 surgical scalpel was used to make small, light, superficial cuts of 1 mm depth and 1.5 mm length in the circular area to be covered by the cup. The cups were then repositioned, mild suction was exerted, and the cups were kept under suction for 2 min. This procedure was repeated two or three times. The cups were then removed, and their fluid content was disposed of in a biological waste container. Importantly, the cuts did not bleed except after suction was exerted; the released blood-like fluid was filtered out through the Hijama suction, because the cuts were too superficial to cause any bleeding. On the cut skin points, a simple sterile dressing was placed. The duration of the complete session was approximately 15–20 min.
Control
Participants were Hijama clinic attendees who agreed to provide two salivary samples for our study. The salivary samples were collected through the same technique described below for both the control and interventional groups.
Saliva collection and outcomes
The present study focused on two outcomes, saliva flow and pH, which were assessed in pre- and post-Hijama salivary samples. The changes in saliva volume and pH from baseline to post-intervention were measured twice. The immediate effect was measured from baseline (30 min before the start of the Hijama session) to 30 min to 1 h post-intervention. The delayed effect was then measured from baseline to 7 days post-intervention.
Salivary samples were collected with standard measuring cups used to collect biological fluid samples. Saliva was collected from all participants in an upright sitting position; participants were instructed to spit into the cup on demand for a 5-min duration, without any stimulation (unstimulated saliva secretion). 36 In the intervention group, pre-Hijama saliva collection was performed 30 min before the start of the Hijama session, whereas post-Hijama collection in another cup was performed through the same method, 30 min to 1 h after the cupping session. In the control group, saliva was collected twice, at a 1-h interval, with the same method, into two cups.
After documentation of the saliva volume in cc, the saliva pH was measured with a Pocket Pen Water pH Meter Digital Tester PH-009 (ASIN: B07MQL6X5T), according to the manufacturer's guidelines.
A second post-Hijama saliva collection for flow and pH measurement was performed for the intervention group 7 days after the Hijama session to measure delayed effects. The control group underwent a single outcome measurement, as specified previously. All saliva collections followed the same procedure described previously.
Statistical methods
Statistical analysis was performed with the Statistical Package for Social Sciences (SPSS) version 21.0 for Windows (SPSS Inc., Chicago, IL, USA). Categorical variables are presented as frequency and percentage, whereas continuous variables are presented as mean ± standard deviation (SD). Intergroup analysis compared the control and intervention groups regarding pre- and post-intervention assessments, as well as pre-to-post intervention changes, with both parametric (independent t-test) and nonparametric (Mann–Whitney U test) tests. In intragroup analysis, the Wilcoxon signed-rank test was used to compare pre- versus post-intervention saliva volume and pH within each group separately. The effect size of the intervention was estimated with Cohen's d coefficient. Repeated-measures (RM) ANOVA was used to analyze the effects of group, time, and time × group on outcomes (saliva pH and volume). Results are presented as Wilks' lambda or Pillai's trace statistics, as appropriate, and the calculated eta squared indicated the percentage of variability in the outcome accounted for by each factor. An eta squared ≥ 0.14 was assumed to indicate a large factor effect. A paired t-test was used to analyze the changes in pH and volume from pre- to post-intervention in each group separately. A p value < 0.05 was considered to reject the null hypothesis.
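The comparisons described above can be outlined in a few lines of Python with SciPy. This is an illustrative sketch only, not the analysis script used in the study; the array names and values below are hypothetical placeholders rather than study data.

```python
# Sketch of the intra- and intergroup comparisons (hypothetical data, not the study's records).
import numpy as np
from scipy import stats

# Saliva volume (mL) before/after for each participant -- illustrative values only.
hijama_pre = np.array([3.1, 3.6, 2.8, 4.0, 3.3])
hijama_post = np.array([4.2, 4.5, 3.9, 5.1, 4.0])
control_pre = np.array([3.4, 3.0, 3.8, 3.2, 3.5])
control_post = np.array([2.9, 2.6, 3.1, 2.8, 3.0])

# Intragroup pre- versus post-intervention comparison (Wilcoxon signed-rank test).
w_stat, p_intra = stats.wilcoxon(hijama_pre, hijama_post)

# Intergroup comparison of the pre-to-post change (independent t-test and Mann-Whitney U test).
change_hijama = hijama_post - hijama_pre
change_control = control_post - control_pre
t_stat, p_t = stats.ttest_ind(change_hijama, change_control)
u_stat, p_u = stats.mannwhitneyu(change_hijama, change_control)

# Effect size of the intervention (Cohen's d on the change scores, pooled SD).
pooled_sd = np.sqrt((change_hijama.var(ddof=1) + change_control.var(ddof=1)) / 2)
cohens_d = (change_hijama.mean() - change_control.mean()) / pooled_sd

print(p_intra, p_t, p_u, cohens_d)
```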
Baseline group characteristics
Forty-one participants were enrolled: 21 in the intervention group (all women) and 20 in the control group (16 women; p = 0.048). The mean (SD) baseline saliva volume and saliva pH were 3.35 (1.13) mL and 6.82 (0.49), respectively, and no statistically significant difference was observed between the intervention and control groups (p > 0.05; Table 1).
Short-term effect of Hijama on saliva flow
An intergroup comparison of the pre-to-post intervention change in saliva volume showed an increase in the Hijama group (mean change = 1.00 mL) but a decrease in the control group (mean change = −0.60 mL); the difference was statistically significant (p < 0.001; Table 2). Intragroup paired analysis with the Wilcoxon signed-rank test showed that both the increase in the Hijama group (p = 0.001) and the decrease in the control group (p < 0.001) in saliva volume were statistically significant (Figure 1). The effect size of the intervention was large (Cohen's d = 1.24).
Short-term effect of Hijama on saliva pH
An intergroup comparison of the pre-to-post intervention change in saliva pH showed an increase of 0.22 in the Hijama group versus a decrease of 0.08 in the control group; the difference was statistically significant (p < 0.001; Table 3). Intragroup paired analysis showed that both the increase in the Hijama group (p = 0.003) and the decrease in the control group (p = 0.015) in saliva pH were statistically significant (Figure 2). The effect size of the intervention was large (Cohen's d = 1.22).
Short-term effect of Hijama on saliva pH and volume: RM ANOVA
Hijama explained 44.8% of the variance in saliva volume (group × time effect, eta squared = 0.448, p < 0.001) and 29.6% of the variance in saliva pH (group × time effect, eta squared = 0.296, p < 0.001) from pre- to post-intervention (30 min after the intervention). The multivariate model demonstrated that Hijama explained 48.8% of the variability of both pH and volume together (group × time effect, eta squared = 0.488, p < 0.001). All three models showed no effect of time alone in explaining the variance in saliva volume or pH (p > 0.05; Table 4). The estimated marginal means of saliva volume and pH are depicted in Figure 3, showing diverging curves for both outcomes between the Hijama and the control group.
Sex-specific responses to Hijama
Another RM ANOVA multivariate model including sex as a cofactor showed no effect for sex alone.
Delayed effects of Hijama on saliva volume and pH
The 7-day post-intervention assessments of the Hijama group showed a mean saliva volume of 5.48 mL, a value significantly higher than those in the pre-intervention (p < 0.001) and early post-intervention assessments (p = 0.003). By contrast, although saliva pH further increased at 7 days post-intervention (mean = 7.29), the difference was significant with respect to only the pre-intervention assessment (p = 0.002) but not the early post-intervention assessment (p = 0.198; Figure 4).
Discussion
Our study was aimed at measuring the effects of wet cupping on saliva volume and pH. This study may be the first to investigate the effects of Hijama on dental and oral health. The findings demonstrated that cupping increased the saliva volume and pH. Moreover, the difference in sex distribution between the Hijama and control groups did not affect the observed increase in saliva volume and pH in the Hijama group compared with the control group. Additionally, the baseline characteristics showed comparable saliva volumes and pH between study groups, despite the significant difference in sex distribution. The potential effect of sex is discussed in the following section.
This study indicated that Hijama induced an early, large-effect increase in both saliva volume and pH. Investigation of the mechanisms underlying these effects on saliva was not within the scope of the present study and should be the objective of further studies. However, the literature has suggested several mechanisms of action for Hijama that may interfere with saliva stimulation, including the enhancement of local microcirculation with vasodilatory effects, thus facilitating drainage and immediate elimination of noxious materials and toxins from interstitial compartments. Additionally, Hijama has been demonstrated to increase blood flow, thus stimulating the autonomic nervous system. 37 A study in middle school students with multiple caries has indicated a high prevalence of autonomic dysfunction associated with hyperactivation of the sympathetic nervous system. 38 Another study in patients with type 1 diabetes has indicated an association between impaired saliva secretion and autonomic nervous system dysfunction. 39 Hijama has also been demonstrated to decrease oxidative stress by removing oxidative molecules such as myeloperoxidase. 40 Several studies have demonstrated a positive association between caries, or the risk of caries development, and the levels of oxidative markers in both the saliva and serum. 41,42 Another study has suggested a role of the clearance of microparticles, also called extracellular vesicles, which are released by aging erythrocytes, platelets, endothelial cells, or leukocytes, and have been associated with proinflammatory states and thrombophilia profiles. 43 However, as previously stated, further studies are warranted to explore the mechanisms underlying the observed effects of Hijama on saliva flow and pH, as well as other effects requiring further investigation.
The aim of the present trial was to explore the effect of Hijama in inducing positive changes in saliva flow and pH that might help prevent the development of caries. The effect of saliva stimulation on dental caries prevention was investigated and demonstrated several decades ago. 18-20 Stimulating saliva enhances its clearance, buffering power, and degree of saturation with inorganic components such as calcium and phosphate. Additionally, stimulated saliva has high concentrations of bicarbonate. The combination of these effects results in two principal pH-raising mechanisms: clearance of dietary carbohydrates from the oral cavity and buffering of plaque acidity, in addition to the enhancement of tooth remineralization. 20,44 These observations led to several clinical trials exploring the effects of saliva stimulation in decreasing the incidence of dental caries. A review including seven clinical trials has reported that chewing sorbitol-containing chewing gum after each meal is associated with a 6.4%–39% decrease in the 2- to 3-year risk of caries development. The preventive effect was significant only with strict use of chewing gum after each meal three times per day; otherwise, the beneficial effect was not significant. 45 This constraint may limit the clinical use of chewing gum. By contrast, our study showed that one session of Hijama may have promising preventive effects against dental caries by increasing saliva volume and pH.
Sex-specific differences in saliva characteristics are expected, on the basis of physiological differences, particularly the effects of sexual hormones such as estrogen on the salivary glands. However, the literature is inconsistent regarding the presence of sex differences in saliva flow or pH, or other biochemical components such as α-amylase. 46,47 A study by Pandey et al. estimated the sex-specific differences in saliva flow rate and pH among school-aged children. The authors divided the study population into two age groups of 7–10 years and 11–15 years. In the 7–10 year group, the mean (SD) saliva flow was 0.310 (0.10) versus 0.299 (0.12) mL/min (p = 0.787), and the mean (SD) pH was 7.17 (0.52) versus 7.15 (0.76; p = 0.934) in caries-free boys versus girls, respectively. In the age group of 11–15 years, the mean (SD) saliva flow was 0.302 (0.08) versus 0.278 (0.07) mL/min (p = 0.389), and the mean (SD) pH was 7.01 (0.68) versus 7.03 (0.58; p = 0.932) in caries-free boys versus girls, respectively. 46 By calculating the p values, we observed that none of the abovementioned differences were statistically significant, thus not supporting sex-specific differences in unstimulated saliva flow and pH. Nonetheless, some authors have reported lower saliva flow rates among women than men, in both unstimulated and stimulated saliva. 48,49 Other studies have shown sex-specific differences in other biochemical components of saliva, thus theoretically leading to baseline differences in pH. For example, Bel'skaya et al. have shown that saliva calcium and urea concentrations are higher in men than women; however, only the age groups of 40–49 and 50–59 years showed a significant difference for calcium and urea, respectively. Similar differences in uric acid concentration were observed in all age groups; however, none were statistically significant. 50
By focusing on post-stimulation changes, as in the present study, a study by Liu-Hui demonstrated that, although saliva volume and pH increased in both sexes after citric acid stimulation, the values remained significantly lower in women than men. 51 This finding further supports the conclusion that the increase in saliva volume and pH observed in the present study was unlikely to have been due to sex differences between the intervention and control groups but was instead attributable to Hijama.
Limitations
The generalizability of the present study findings is limited by the small sample size, the non-randomized enrollment of patients, and the high risk of a placebo effect owing to the impracticability of blinding and the strong subjective and emotional implications of Hijama in the target population's culture.
The sample size of male participants was small, because the group allocation was determined according to participant preference. No male participants preferred to be included in the intervention group.
Although observations from the 7th day post-Hijama support a sustained effect of Hijama in enhancing saliva flow, both quantitatively and qualitatively, this finding was not controlled, given the lack of 7-day assessment of the untreated group.
Conclusion
The results of this study showed that wet cupping resulted in greater saliva flow and pH shortly after and 7 days after a single Hijama session than those in the control group with no intervention. The effects of Hijama in increasing saliva volume and pH are promising for the prevention of dental caries and support potential clinical applications of Hijama in oral health promotion. Our study showed that Hijama enhanced salivary function and induced a significant increase in saliva volume and pH that was maintained 7 days after the intervention. However, further studies are warranted to identify other effects of Hijama on dental health and to explore the underlying mechanisms. Further controlled studies with larger sample sizes and longer follow-up times are warranted to demonstrate these effects and their influence on the incidence of dental caries, potentially providing a cost-effective preventive option.
Source of funding
This research did not receive any specific grant from funding agencies in the public, commercial, or profit sectors.
Conflict of interest
The authors have no conflict of interest to declare.
Ethical approval
Ethical approval was granted for the study by the IRB committee of the research department, Health Affairs of Makkah Region, Ministry of Health (IRB #: H-02-K-076-0419-114). Ethical approval date: 02/05/2019.
Authors contributions
FAB provided research materials, and participated in the literature review, methods, and discussion. AMO conceived and designed the study, conducted research, provided research materials, wrote the initial and final drafts of the article, provided logistic support, and participated in the methods and discussion. HMA collected, organized, analyzed, and interpreted data, and participated in the methods and discussion. EAO participated in the literature review and discussion, provided logistic support, organized data, and reviewed the final draft of the article. All authors have critically reviewed and approved the final draft and are responsible for the content and similarity index of the manuscript.
Estimation of Kramers-Moyal coefficients at low sampling rates
A new optimization procedure for the estimation of Kramers-Moyal coefficients from stationary, one-dimensional, Markovian time series data is presented. The method takes advantage of a recently reported approach that allows exact finite sampling interval effects to be calculated by solving the adjoint Fokker-Planck equation. It is therefore well suited for the analysis of sparsely sampled time series. The optimization can be performed either with a parametric ansatz for the drift and diffusion functions or parameter-free. We demonstrate the power of the method in several numerical examples with synthetic time series.
I. INTRODUCTION
The behavior of complex systems consisting of a large number of degrees of freedom can often be described by low dimensional macroscopic order parameter equations [1]. Thereby the influence of the microscopic degrees of freedom is treated via noise terms of Langevin type [2]. In case of a single order parameter q(t), its time evolution can be described by

$$\dot{q}(t) = h(q(t), t) + g(q(t), t)\,\Gamma(t), \qquad (1)$$

where Γ(t) is a Gaussian distributed white noise term satisfying ⟨Γ(t)⟩ = 0 and ⟨Γ(t)Γ(t′)⟩ = δ(t − t′). Here and in the following Ito's interpretation of stochastic integrals is used [2]. The same information is contained in the corresponding Fokker-Planck equation (FPE) for the probability density function f_q(x, t),

$$\frac{\partial f_q(x,t)}{\partial t} = \hat{L}(x,t)\, f_q(x,t). \qquad (2)$$

Here we have introduced the Fokker-Planck operator

$$\hat{L}(x,t) = -\frac{\partial}{\partial x} D^{(1)}(x,t) + \frac{\partial^2}{\partial x^2} D^{(2)}(x,t), \qquad (3)$$

which contains the Kramers-Moyal (KM) coefficients

$$D^{(n)}(x,t) = \lim_{\tau \to 0} \frac{1}{n!\,\tau} \left\langle \left[q(t+\tau) - x\right]^n \right\rangle \Big|_{q(t)=x}, \qquad (4)$$

also referred to as drift and diffusion for n = 1 and n = 2, respectively. The connection to the functions g and h in Eq. (1) is h(x, t) = D^{(1)}(x, t) and g(x, t) = \sqrt{2 D^{(2)}(x, t)}.
As was recently shown [3,4], it is possible to set up an equation of the form (1) by estimating the conditional averages in (4) from a data set of the variable q(t). This method was applied in various fields of science; see Ref. [5] for an overview. There are two major problems connected to the estimation of drift and diffusion coefficients from measured "real world" time series. The first problem consists in the occurrence of measurement noise. In Ref. [6] it was shown that measurement noise spoils the Markov property, the latter being a requirement for the KM analysis. A promising approach to handle Gaussian distributed, exponentially correlated measurement noise was recently proposed by Lehle [7].
The other problem in the Kramers-Moyal analysis is that one has to perform the limit τ → 0, while data sets are recorded at finite sampling intervals. Also, in real world processes the intrinsic noise is not strictly δ-correlated, which results in a finite Markov-Einstein time, i.e., a finite time interval τ_ME such that for time intervals τ < τ_ME the Markov property no longer holds. It is observed that in case of a finite Markov-Einstein time, the KM coefficients go to zero with decreasing time interval τ.
Ragwitz and Kantz [8] were the first to present a formula for estimating the KM coefficients that takes into account finite sampling interval effects at first order in the sampling interval. In a comment on this article, Friedrich et al. [9] presented correction terms in the form of an infinite series expansion in the sampling interval. Very recently Antenedo et al. presented exact analytical expressions for the finite time KM coefficients (see Eq. (5)) for processes with linear drift and quadratic diffusion [10] and later for other common processes [11].
A very elegant way to obtain finite time KM coefficients for arbitrary (but sufficiently smooth) drift and diffusion terms was recently presented by Lade [12]. He reinterpreted the series expansion presented in [9] in a way that finite time coefficients can be obtained by solving the adjoint Fokker-Planck equation. Since this can be done at least numerically, Lade's method opens up the possibility to deduce the true KM coefficients from measured finite time coefficients by an optimization approach. This is the topic of the present work.
Of course, finite time KM coefficients can also be obtained by simulating Langevin equations and measuring the conditional moments at a finite τ. This was done in the iterative method developed by Kleinhans et al. [13,14]. But since this is numerically very demanding, the method was only applied to situations where very few parameters had to be optimized. In this article we show that an optimization based on Lade's method can even be performed without a parametric ansatz for drift and diffusion coefficients, which should make it applicable to a larger class of diffusion processes.
In the next section we review the method of Lade [12] that allows for a calculation of exact finite time effects. The following section gives a description of our new optimization procedure. Section IV contains four numerical examples in which the functionality of our method is demonstrated.
II. EXACT FINITE SAMPLING INTERVAL EFFECTS
From now on we assume the Langevin process of interest to be stationary, i.e., drift and diffusion do not explicitly depend on time. We define the finite time coefficients as

$$D^{(n)}_{\tau}(x) = \frac{1}{n!\,\tau}\, M^{(n)}_{\tau}(x) \qquad (5)$$

with the conditional moments

$$M^{(n)}_{\tau}(x) = \int dx'\, (x' - x)^n\, p(x', t+\tau\,|\,x, t). \qquad (6)$$

The conditional probability density function p(x′, t + τ | x, t) is the solution of the corresponding FPE with the initial condition δ(x′ − x), so it can be expressed as

$$p(x', t+\tau\,|\,x, t) = e^{\hat{L}(x')\tau}\, \delta(x' - x). \qquad (7)$$

Inserting this in (6) results in

$$M^{(n)}_{\tau}(x) = \int dx'\, (x' - x)^n\, e^{\hat{L}(x')\tau}\, \delta(x' - x), \qquad (8)$$

where we use the notation $\hat{L}^{\dagger}(x) = D^{(1)}(x)\,\partial_x + D^{(2)}(x)\,\partial_x^2$ for the adjoint Fokker-Planck operator. The main point of Lade's article [12] is to interpret Eq. (8) as the solution to the partial differential equation

$$\frac{\partial W(x,t)}{\partial t} = \hat{L}^{\dagger}(x)\, W(x,t), \qquad W(x, 0) = (x - x_0)^n, \qquad (10)$$

evaluated at t = τ, x = x_0, i.e., $M^{(n)}_{\tau}(x_0) = W(x_0, \tau)$. For simple drift and diffusion coefficients, Eq. (10) can be solved analytically. E.g., for an Ornstein-Uhlenbeck process [2] with D^{(1)}(x) = −γx and D^{(2)}(x) = D, one obtains [12]

$$M^{(1)}_{\tau}(x) = x\left(e^{-\gamma\tau} - 1\right), \qquad M^{(2)}_{\tau}(x) = \frac{D}{\gamma}\left(1 - e^{-2\gamma\tau}\right) + x^2\left(e^{-\gamma\tau} - 1\right)^2. \qquad (11)$$

With Eq. (5) and (11) we get

$$D^{(1)}_{\tau}(x) = \frac{x\left(e^{-\gamma\tau} - 1\right)}{\tau}, \qquad D^{(2)}_{\tau}(x) = \frac{D\left(1 - e^{-2\gamma\tau}\right)}{2\gamma\tau} + \frac{x^2\left(e^{-\gamma\tau} - 1\right)^2}{2\tau}. \qquad (12)$$

A process with linear drift D^{(1)}(x) = −γx and quadratic diffusion D^{(2)}(x) = α + βx² gives the same finite time drift as for the Ornstein-Uhlenbeck process. For the diffusion one obtains an analogous, somewhat lengthier closed-form expression for M^{(2)}_{\tau}(x) and hence for D^{(2)}_{\tau}(x) [10]. If an analytical solution cannot be obtained, one has to solve Eq. (10) numerically up to t = τ for all x values of interest.
An alternative way to calculate finite time effects would be to solve the real FPE, instead of the adjoint FPE, which yields the whole transition pdf. But this would involve a Dirac δ-function as an initial condition which is expected to cause numerical problems. The adjoint FPE can be easily solved via a simple forward-time centered-space scheme. For the spatial derivatives on the left and right boundaries we use second order forward and backward finite differences, respectively.
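As an illustration of this numerical route, the following Python sketch integrates the adjoint FPE with a forward-time centered-space scheme to obtain a finite time conditional moment. The grid, the time step, and the use of np.gradient for the derivatives are choices made here for brevity and are not prescribed by Ref. [12].

```python
import numpy as np

def finite_time_moment(D1, D2, x_grid, x0_index, n, tau, dt=1e-4):
    """Solve the adjoint FPE dW/dt = D1(x) dW/dx + D2(x) d2W/dx2 with
    W(x, 0) = (x - x0)**n and return M^(n)_tau(x0) = W(x0, tau)."""
    dx = x_grid[1] - x_grid[0]
    W = (x_grid - x_grid[x0_index]) ** n
    for _ in range(int(round(tau / dt))):
        Wx = np.gradient(W, dx)    # first derivative (one-sided at the boundaries)
        Wxx = np.gradient(Wx, dx)  # second derivative by repeated differencing
        W = W + dt * (D1(x_grid) * Wx + D2(x_grid) * Wxx)
    return W[x0_index]

# Example: Ornstein-Uhlenbeck process with D1(x) = -x, D2(x) = 1 and tau = 0.5.
x = np.linspace(-5.0, 5.0, 201)
i0 = 120                                   # grid index of x0 = 1.0
M1 = finite_time_moment(lambda x: -x, lambda x: np.ones_like(x), x, i0, n=1, tau=0.5)
D1_tau = M1 / 0.5                          # Eq. (5) with n = 1; exact value is (exp(-0.5) - 1) / 0.5
```

The explicit scheme requires D2·dt/dx² ≲ 1/2 for stability, which the values above satisfy; an implicit scheme would remove this restriction at the cost of a linear solve per step.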
III. THE OPTIMIZATION PROCEDURE
The first step of the optimization is to estimate the conditional moments (6) for a set of τ values {τ_1, ..., τ_M}, τ_i < τ_{i+1}, and a set of x values {x_1, ..., x_N}, x_i < x_{i+1}. The latter should be the same values that are later used for the numerical integration of the adjoint FPE. In a histogram based regression the size of the bins located at x_i is limited by the available amount of data. Therefore a kernel based regression as described in [15] is favorable, which results in a smooth curve. We denote the estimated conditional moments by $\hat{M}^{(1,2)}_{\tau_i}(x_j)$. It is also important to calculate the statistical errors $\hat{\sigma}^{(1,2)}_{\tau_i}(x_j)$ of these estimates. The optimization can be performed with or without the use of parameterized drift and diffusion functions. In the former case one has to embed the drift and diffusion functions into a family of functions D^{(1)}(x, σ) and D^{(2)}(x, σ), respectively, with a set of parameters denoted by σ.
In the latter case one has to define a set of sampling points {x^s_1, ..., x^s_K}, K < N, and represent D^{(1)} and D^{(2)} as a spline interpolation through these sampling points. The set of parameters σ to be optimized then consists of the values of D^{(1)} and D^{(2)} at these sampling points. In both cases the finite time coefficients D^{(1)}_{τ_1} and D^{(2)}_{τ_1} can be used to construct an initial guess σ_ini.
For a specific set of parameters σ, the conditional moments (6) can be calculated as described in Sec. II, yielding M^{(1,2)}_{τ_i}(x_j, σ). Since these computations are to be performed for each x_j individually, it is very easy and efficient to parallelize this part for use on parallel computers.
The final step is to find the minimum of the least-squares potential (18), which sums, over all τ_i and x_j, the squared deviations between the computed moments M^{(1,2)}_{τ_i}(x_j, σ) and the estimated moments $\hat{M}^{(1,2)}_{\tau_i}(x_j)$, weighted by the corresponding statistical errors. Fig. 1 illustrates the idea of this procedure. For the optimization we use a trust region algorithm [16]. It turns out that for large sampling intervals τ_1, the best results are achieved when only that single τ_1 is used, i.e., M = 1 in Eq. (18). For smaller sampling intervals the accuracy can be improved by the use of more τ values.
After the optimization procedure has converged to a certain set of parameters σ_res, one can perform a self-consistency check by graphically comparing the finite time coefficients computed from σ_res with the estimated ones, as in Fig. 1.
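A minimal sketch of the parametric variant of this procedure is given below. It assumes the estimated moments M_hat and their errors M_err are already available as arrays (hypothetical names), reuses the finite_time_moment helper from the previous sketch, and employs scipy.optimize.least_squares, a trust-region implementation, in place of the specific algorithm of Ref. [16].

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(sigma, x_grid, tau_list, M_hat, M_err):
    """Weighted residuals between measured and computed conditional moments.
    sigma = (a, b, c) parameterizes D1(x) = -a*x and D2(x) = b + c*x**2.
    M_hat[n-1][i][j], M_err[n-1][i][j]: estimated moment/error of order n at tau_i, x_j."""
    a, b, c = sigma
    D1 = lambda x: -a * x
    D2 = lambda x: b + c * x ** 2
    res = []
    for i, tau in enumerate(tau_list):
        for j in range(len(x_grid)):
            for n in (1, 2):
                M_model = finite_time_moment(D1, D2, x_grid, j, n, tau)  # sketch from Sec. II
                res.append((M_model - M_hat[n - 1][i][j]) / M_err[n - 1][i][j])
    return np.asarray(res)

# sigma_ini is taken from the finite time coefficients at the smallest available tau.
# result = least_squares(residuals, x0=sigma_ini, args=(x_grid, tau_list, M_hat, M_err))
# a_res, b_res, c_res = result.x
```

The non-parametric variant only changes the meaning of sigma (values of the drift and diffusion at the spline sampling points) and the way D1 and D2 are built from it; the residual structure stays the same.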
IV. NUMERICAL EXAMPLES
A. Ornstein-Uhlenbeck process
As a first numerical example we consider an Ornstein-Uhlenbeck process with D^{(1)}(x) = −x and D^{(2)}(x) = 1. A synthetic time series with 10^7 data points is computed using a forward Euler scheme with a time step Δt = 10^{-3}, but only every 1000th time step is stored. So the minimal time increment that is available for the data analysis is τ_1 = 1. The symbols with the error bars in Fig. 2 show the estimated finite time coefficients D^{(1)}_{τ_1}(x) (top) and D^{(2)}_{τ_1}(x) (bottom). From this it seems reasonable to make the parametric ansatz D^{(1)}(x) = −ax and D^{(2)}(x) = b + cx². As an initial guess, we choose a_ini = 0.63, b_ini = 0.43 and c_ini = 0.2. The corresponding curves are shown in blue in Fig. 2. The resulting parameters from the optimization are a_res = 0.9966, b_res = 0.9995 and c_res = 0.00032. These values correspond to the red curves in Fig. 2. For comparison we also plot the black dots, which correspond to the true parameters a = 1, b = 1 and c = 0.
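The construction of such a synthetic data set and a simple estimate of the finite time coefficients can be sketched as follows. A plain binned estimator is used here instead of the kernel regression of Ref. [15], and the number of stored points is reduced from the 10^7 used above to keep the example fast.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama integration of dX = -X dt + sqrt(2) dW with time step 1e-3,
# storing only every 1000th value so the available sampling interval is tau_1 = 1.
dt, keep_every, n_store = 1e-3, 1000, 10_000   # 10_000 stored points for a quick demo
x = 0.0
data = np.empty(n_store)
for i in range(n_store):
    for _ in range(keep_every):
        x += -x * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
    data[i] = x

# Binned estimate of the finite time coefficients at tau = 1 (Eq. (5)).
tau = 1.0
increments = data[1:] - data[:-1]
centers = np.linspace(-2.5, 2.5, 21)
half = 0.125                                   # half bin width
D1_tau, D2_tau, kept_centers = [], [], []
for c in centers:
    sel = np.abs(data[:-1] - c) < half
    if sel.sum() > 20:                         # skip poorly populated bins
        kept_centers.append(c)
        D1_tau.append(increments[sel].mean() / tau)
        D2_tau.append((increments[sel] ** 2).mean() / (2 * tau))
```

With these arrays as input, the optimization sketch of the previous section recovers drift and diffusion parameters close to their true values, up to the statistical scatter caused by the reduced data set.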
B. Multiplicative noise
The next example is a system with multiplicative noise, i.e., the diffusion term depends on x. We choose D^{(1)}(x) = −x and D^{(2)}(x) = 1 + x². In the same manner as in the previous example, we construct a time series with 10^8 data points and a sampling interval τ_1 = 1.
C. Bistable system
The first example for a parameter free optimization is a bistable system with D^{(1)}(x) = x − x³ and D^{(2)}(x) = 1. The blue dots in Fig. 4 correspond to the finite time coefficients D^{(1,2)}_{τ_1}. They are used as the initial guess for the parameters σ to be optimized. The terms M^{(1,2)}_{τ_i}(x_j, σ) in Eq. (18) are now calculated from a spline interpolation between these sampling points; the corresponding curves are shown in blue in Fig. 4. The resulting parameters that minimize (18) are the red squares, from which the red spline curves are calculated. The latter represent the resulting drift and diffusion coefficients.
D. Phase dynamics
As a last example, we consider a phase variable φ that can also be a phase difference φ = φ_1 − φ_2 between two coupled nonlinear oscillators. The reconstruction of phase dynamics from data sets is an important theoretical problem that is relevant in many different fields of science. The problem was, among others, tackled by Kralemann et al. [17]. We suggest the KM approach as a less involved alternative. In the case of phase dynamics, the drift and diffusion coefficients must be 2π-periodic, i.e., D^{(n)}(x) = D^{(n)}(x + 2π). Therefore, it makes sense to define the KM coefficients such that this periodicity is respected (Eq. (19)). Phase dynamics are often governed by Langevin equations with a constant frequency ω, a 2π-periodic deterministic coupling term, and additive noise of strength D. We consider the case ω = 0.2, D = 0.5, so we have D^{(1)}(x) = 0.2 + cos(x) and D^{(2)}(x) = 0.5. Fig. 5 shows the result in the same representation as in Fig. 4.
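A sketch of how such a phase time series can be generated is given below. The drift used is the D^{(1)} quoted above, the trajectory is wrapped to [0, 2π) afterwards, and increments for the moment estimation should be taken from the unwrapped trajectory to avoid artificial jumps at the periodic boundary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama integration of dphi = (omega + cos(phi)) dt + sqrt(2 D) dW,
# with omega = 0.2 and D = 0.5 as in the example above.
omega, D, dt, n_steps = 0.2, 0.5, 1e-3, 2_000_000
phi = 0.0
trajectory = np.empty(n_steps)
for k in range(n_steps):
    phi += (omega + np.cos(phi)) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    trajectory[k] = phi

# Wrap to [0, 2*pi) so drift and diffusion can be estimated as 2*pi-periodic functions,
# but compute increments from the unwrapped trajectory.
phase = np.mod(trajectory, 2.0 * np.pi)
```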
V. SUMMARY AND OUTLOOK
We have presented a novel optimization procedure for the estimation of drift and diffusion coefficients for onedimensional Markovian time series that suffer from large sampling intervals. The optimization can be performed both in a parametric and non-parametric fashion. Therefore, it is applicable for a large class of diffusion processes. The usefulness of our method is demonstrated in four examples with synthetic time series. The method yields good results, even if the sampling interval is of the order of the typical time scales of the deterministic part of the dynamics.
Current and Potential Soil Suitability of Pearl Millet, Wheat and Mustard for Sustainable Production in Aravalli Foothills of Mewat Region of Haryana, India
The Buraka micro-watershed was delineated into seven land management units (LMUs) to evaluate the current and potential soil suitability of pearl millet, wheat and mustard crops for sustainable production. Current and potential soil suitability of pearl millet, wheat and mustard revealed that soils of LMU1 were found to be permanently not suitable (N2) for cultivation of these crops due to severe soil constraints, while soils of LMU5, owing to few or very slight constraints, were found to be highly suitable (S1) for pearl millet and mustard and moderately suitable (S2) for wheat. Current soil suitability revealed that pearl millet and mustard each occupied 17 percent of the area under the highly suitable class (S1), while no area qualified as S1 for wheat cultivation. However, potential soil suitability registered a significant increase in class S1 areas of pearl millet and mustard, to 38.1 and 45.0%, respectively. Unlike pearl millet and mustard, wheat recorded an improvement in area within class S2 to 69.7% due to removal of correctable limitations through scientific management and cultivation practices. Current and potential soil suitability evaluation of crops offers great choice to farmers for crop cultivation in areas where soils are suitable; besides, it also helps to suggest management options for improving soil related constraints. Thus, soil suitability evaluation helps in ensuring sustainable crop production and increased land use efficiency, and also enables policy planners to develop suitable strategies for cultivation of a particular crop in particular areas.
Introduction
Land is one of the scarce natural resources essential for survival and growth of any civilization due to its role in food security and economic development. However, in recent years land degradation is perceived to be a major environmental threat globally, and in India alone it affects about 121 m ha of land (Maji et al., 2010). Soil degradation (Panagos et al., 2015), waterlogging (Singh, 2016), salinization/alkalization (Velmurugan et al., 2016), and contamination (Sam et al., 2016; Chartzoulakis and Bertaki, 2015) are some of the serious environmental problems arising as a consequence of agricultural activities. Such degradation makes land unsuitable for agricultural production (Verheye, 2008), and
if such environmental deterioration continues to increase, it may pose a serious threat to food security (Gomiero, 2016; Abd-Elmabod, 2019). The increasing land degradation scenario calls for effective utilization of land resources, i.e., according to their suitability, so as to achieve sustainable agricultural production (FAO 1976; Elaalem et al., 2010). In this regard, land evaluation seems to be an effective tool both for sustainable agriculture and for sustainable land use planning (Shahbazi et al., 2009; Perveen et al., 2012). Its main objective is to improve and manage land resources in a sustainable way so as to increase their potential for human uses (Rossiter 1996; FAO, 2007), and it also helps in determining the potential of land for agricultural purposes (FAO 2007; Ananya-Romero et al., 2015). It also provides information about major constraints and opportunities for a specific purpose and guides decision-makers toward ensuring optimal land utilization.
Land suitability is a function of crop requirements and land characteristics, and a measure of how well the qualities of a land unit match the requirements of a particular land use (FAO, 1976). Its assessment requires knowledge about the nature of soils, their characteristics, extent of distribution, qualities, productivity potential and suitability for optimum utilization. Its status is based on intrinsic properties of soils, viz., parent materials, soil texture and depth, and characteristics that can be altered by human management, such as drainage, salinity, nutrient concentration and vegetation cover (FAO, 1985; FAO, 1993). Moreover, agricultural land suitability evaluation predicts the potentials and limitations of land for crop production (Pan and Pan, 2012). To predict these potentials and limitations, construction of matching tables or transfer functions is required to calculate the suitability class (FAO, 1976). However, in most parts of the world crops are grown irrespective of their suitability for a particular area; suitability therefore needs to be evaluated for every parcel of land so as to ensure sustainable crop production, agricultural development and future planning. During this exercise a large amount of spatial information is generated that needs to be managed realistically. In this context, the use of sophisticated technologies such as Geographic Information Systems (GIS) holds great promise to enable decision makers to manage and understand spatial data (Bagherzadeh and Mansouri Daneshvar, 2011).
Keeping land and soil related issues and crop production aspects at the centre, soil suitability has been evaluated for cultivation in the Buraka micro-watershed area, where cultivation is practiced without assessing soil suitability (based on soil/land qualities, characteristics and crop requirements) and where no scientific research has been carried out on using potential soil suitability to overcome these limitations. Pearl millet, wheat and mustard are the dominant crops in the micro-watershed and hold paramount importance for food and livelihood security in the study area, but they face many difficulties in meeting these targets owing to their cultivation on moderately and marginally suitable areas and under low to poor management conditions. Such constraints are responsible for the low productivity of these crops compared with the state as well as the national average. In the study area, pearl millet and wheat are important for food and fodder security, while mustard holds special significance as cooking oil and fuel in several rural households, besides being a cash crop. Soil related constraints, delineation of suitable areas for cultivation, and low productivity are the biggest issues and challenges before the scientific community. Therefore, it is imperative to evaluate the soil suitability of crops for sustainable crop production and to achieve improved input and land use efficiencies. Besides, soil suitability estimates enable decision makers and policy planners to formulate effective policies for holistic development of the area. The above facts indicate that soil suitability evaluation provides a sound basis for agricultural development and future planning, and thus holds prominence in managing natural resources, especially land and soils, for their sustainable utilization under a changing land use scenario.
Study locale
The study was carried out during 2017 and 2018 for sustainable agricultural land use planning in the economically backward Buraka micro-watershed area, located in the ecologically sensitive Aravalli foothills of the Mewat Region of Haryana, India (Fig. 1). The micro-watershed, with a total area of 542.4 ha, is situated between 28°10′00″ and 28°11′56″ N latitude and 76°57′15″ and 76°59′30″ E longitude. The climate is semi-arid, continental and monsoon type. Mean minimum and maximum temperatures are 18.7°C and 32.2°C, respectively, and mean annual rainfall is 574 mm. The micro-watershed is drained by a nalah (network of seasonal streams) originating from the adjoining Aravalli hill outcrops. Elevation ranges from 259 to 340 m above mean sea level, with slope direction from south-east towards north-west. The soils developed from local alluvium and colluvium parent materials, while some hilly soils in the eastern parts developed from weathered quartzite/sandstone. The area falls under agro-ecological sub-region 4.1, characterized by a hot semi-arid climate with a 90-120 day length of growing period (LGP). However, the micro-watershed has assured irrigation with good quality underground water due to its location adjacent to the foothills of the Aravalli ranges. Pearl millet-wheat and pearl millet-mustard cropping systems are the dominant systems, whereas other crops including vegetables are also cultivated in the micro-watershed.
Delineation of LMUs
LMUs were delineated based on dominant soil features (depth, slope, textural class, coarse fragments, rockiness/stoniness, erosion, drainage, soil pH, soil organic carbon and available water capacity), land use/land cover (LULC), and production systems. An LMU is a homogeneous area for effective management of natural resources, particularly soil and land resources, with a particular set of treatments/management under particular conditions.
Soil suitability evaluation
Soil suitability was evaluated considering soil and site characteristics along with climate, mainly rainfall and temperature. Climatic data were collected from Krishi Vigyan Kendra, Babal, Haryana, India, and the average of 5 years of data (2013-14 to 2017-18) was considered for suitability evaluation owing to the important role of climate in crop growth and development (Fig. 2). Suitability was assessed following the matching crop requirement criteria scheme proposed by Sys (1993) and modified by Naidu et al. (2006). Soil suitability groupings at the order level, i.e., S (suitable) and N (not suitable), reflecting the kind of suitability, followed the FAO framework for land evaluation (1976). Order S was divided into 3 classes, viz., S1 (highly suitable), S2 (moderately suitable) and S3 (marginally suitable), while order N was divided into 2 classes, viz., N1 (presently not suitable) and N2 (permanently not suitable), reflecting the degree of suitability within the order. Current soil suitability, also known as actual soil suitability, was evaluated to determine the actual area available in a particular class for cultivating a crop under the existing soil-site regime, while potential suitability estimates the potential land availability for crop production, assuming that the existing limitations are rectified through improved land use and management practices in due course of time. Current and potential soil suitability were evaluated following earlier approaches (Sys, 1991a; Sys, 1991b; Sabalia and Gundalia, 2010; Meena et al., 2017).
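The matching logic can be illustrated with a short sketch. The threshold values below are hypothetical and do not reproduce the actual rating tables of Sys (1993) or Naidu et al. (2006), and a most-limiting-factor rule is assumed for combining the individual parameter ratings.

```python
# Minimal sketch of limitation-based suitability matching (hypothetical thresholds,
# not the cited rating tables). Each soil-site parameter is rated individually and
# the most limiting rating defines the class of the land management unit.
CLASS_ORDER = ["S1", "S2", "S3", "N1", "N2"]

def rate_parameter(value, thresholds):
    """thresholds: list of (upper limit, class) pairs checked in order."""
    for limit, cls in thresholds:
        if value <= limit:
            return cls
    return "N2"

def suitability_class(ratings):
    """Overall class = most limiting individual rating (last in CLASS_ORDER)."""
    return max(ratings, key=CLASS_ORDER.index)

# Example for one hypothetical land management unit.
ph_thresholds = [(7.5, "S1"), (8.2, "S2"), (8.8, "S3"), (9.3, "N1")]
oc_thresholds = [(0.2, "S3"), (0.4, "S2"), (10.0, "S1")]   # organic carbon (%), low values limiting
ratings = [rate_parameter(7.8, ph_thresholds), rate_parameter(0.3, oc_thresholds), "S1"]
print(suitability_class(ratings))   # -> "S2"
```

Under this reading, potential suitability corresponds to re-rating the unit after correctable limitations (e.g., low organic carbon or mild erosion) are assumed to be removed, while non-correctable limitations (e.g., depth, stoniness) keep the unit in class N2.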
Thematic map generation
Thematic maps were generated in a Geographical Information System (GIS) environment using ArcGIS 10.3.1 to represent the area under a particular class in the entire micro-watershed; however, soil suitability was evaluated for each LMU.
Land management units (LMUs)
The Buraka micro-watershed area was delineated into seven LMUs for sustainable agricultural land use planning (Fig. 3). Results revealed that LMU2, LMU3, LMU4, LMU5 and LMU6 belong to the irrigated ecosystem and support agriculture and allied activities, while LMU1 and LMU7 fall under the rainfed ecosystem and are not suited for agricultural purposes. LMUs of the irrigated ecosystem have very deep, nearly level to gently sloping, loamy sand to sandy loam, somewhat excessively well drained to well drained soils with slight to moderate erosion, under double and triple crop land use/land cover. LMUs of the irrigated ecosystem supported agricultural, agri-horticultural and livestock based production systems, while LMUs of the rainfed ecosystem mainly supported silviculture and silvi-pastoral systems (Table 1). Differences in suitability of soils for various kinds of land uses are mainly ascribed to soil and land related constraints. LMU5, for example, was found to be suitable for agricultural and many other land uses including horticulture due to a low to very low level of soil related constraints, while the non-suitability of LMU1 and LMU7 for agricultural purposes is mainly attributed to severe soil and land related constraints. Soil related constraints in LMU1 include very shallow, gravelly sandy loam soils, moderate stoniness and rockiness with coarse fragments (43.5%), severely eroded soils, gently to strongly sloping terrain, and an excessively drained nature, which are mainly attributed to geology and parent material as well as variations in physiographic features, i.e., soils of LMU1 belong to the Aravalli hill tops. Constraints of LMU7 include its location near stream terraces, severe erosion and somewhat excessively drained soils. Our research findings are in close agreement with a land and soil suitability evaluation study carried out for land unit maps (LUM) considering similar factors (Girmay et al., 2018).
Parameters considered for soil suitability evaluation
Soil suitability evaluation involves soil-site characteristics and climatic parameters as important criteria to meet the requirements of crops. Thus, it is essential to consider soil physico-chemical properties such as soil texture, depth, pH, drainage, slope, organic matter content, salinity (EC) and sodicity (ESP), among other parameters, and climatic factors, particularly rainfall and temperature, while evaluating soil suitability for crops. Results revealed that soil texture ranges from gravelly sandy loam (LMU1) to sandy loam (LMU3, LMU5 and LMU6) to loamy sand (LMU2, LMU4 and LMU7). Soils of LMU1 had a slightly acidic pH (6.8), while soils of LMU2 (pH 7.8) and LMU7 (pH 7.8) showed an alkaline reaction. Soil fertility in terms of organic carbon was low in most of the LMUs, ranging from as low as 0.03% in LMU7 to as high as 0.41% in LMU1. Soil salinity (expressed as EC) was found to be within safe limits for most crops, while soil sodicity (ESP) was high in LMU2 (24.4%) and LMU7 (31.4%). Base saturation was highest in the soils of LMU4 (82.6%) and lowest in LMU5 (68.9%). Calcium carbonate was nil in LMU1 and LMU3, while LMU2 and LMU7 recorded higher values compared to LMU4, LMU5 and LMU6 (Table 2). Rainfall and temperature data (average of 5 years, i.e., 2013-14 to 2017-18) revealed that the minimum and maximum annual average temperatures were 16.97°C and 31.64°C, respectively, while the annual average rainfall was 637.15 mm. The gravelly sandy loam texture of the soils of LMU1 could be attributed to the hilly terrain of the Aravalli ranges. The low salinity of the soils and water was also ascribed to the location of the micro-watershed adjacent to the Aravalli foothills. The slightly acidic soil pH might be due to leaching of bases from the hill tops. The alkaline pH of soils of LMU2 and LMU7 could be due to the presence of CaCO3.
The low organic carbon content in LMU2 and LMU7 was attributed to fast decomposition and dispersal in coarse textured soils (loamy sand). The low CEC of soils of LMU2 and LMU7 was ascribed to coarser soil texture (low clay content), and this low CEC leads to low nutrient retention capacity (low soil fertility). However, the high CEC of soils of LMU5 could be due to a relatively finer textural class compared to other LMUs, and these soils have a higher nutrient retention capacity. Nutrient retention capacity is controlled by several factors, viz., organic matter and type of clay minerals (Sawhney et al., 1996; Gorai et al., 2013). Perveen et al. (2007) used soil texture, soil moisture, soil consistency, pH, organic matter content and soil drainage for agricultural land suitability analysis, while Zengin and Yilmaz (2008) used soil depth, erosion, slope, aspect, rainfall and temperature for soil suitability evaluation.
Soil suitability of pearl millet
Current and potential soil suitability evaluation revealed that soils of LMU5 were found to be S1 for pearl millet cultivation, while soils of LMU1 were evaluated as N2. Current soil suitability revealed that soils of LMU7 were N1, while potential suitability showed that these soils would be class S3 (Table 3). Further, the current soil suitability map revealed that class S3 occupied the highest area (171.2 ha), followed by class S2 (114.5 ha), with the least under class S1 (92.2 ha), which respectively constituted 31.6, 21.1 and 17.0% of the area of the entire micro-watershed (Fig. 4). Potential soil suitability evaluation revealed a significant increase in class S1 area to 206.7 ha, which constituted 38.1% of the area of the micro-watershed (Fig. 5). Current soil suitability evaluation revealed that areas under classes N1 and N2 also occupied a significant portion of the micro-watershed, while potential soil suitability evaluation indicated that class N1 areas could be upgraded to cultivable classes through rectification of correctable limitations. The increase in class S1 area may be attributed to the decline in class S3 areas and to bringing class N1 land into cultivation with improved management and scientific cultivation practices, i.e., by removing the correctable limitations. Girmay et al. (2018) also suggested management options and conservation measures to improve the suitability of class S3 and class N1 land units. Severe soil related constraints, viz., soil erosion, soil texture, stoniness and rockiness, are attributed to class N2, and relatively less serious limitations to class N1. The class S3 rating is attributed to relatively less serious problems, i.e., soil erosion, drainage and fertility status besides soil texture, compared to class N1. Contrary to this, a class S1 rating for crop production may be ascribed to favourable soil physico-chemical properties such as soil texture, drainage and erosion, soil pH, EC and ESP. Organic carbon was a major limitation for pearl millet in most of the LMUs except the soils of LMU1, LMU3 and LMU5. Major limitations in the soils of LMU1 were depth, texture, erosion and drainage as well as permeability, while organic carbon and erosion were limitations in the soils of LMU7. Therefore, appropriate interventions, viz., soil and water conservation, integrated soil fertility management, moisture conservation and water harvesting, and agronomic practices need to be adopted to enhance the current land suitability of the micro-watershed for sustainable crop production. Similar interventions were suggested in several studies from areas where this kind of limitation exists for crop production (Alemu et al., 2013; Girmay et al., 2018). In this study specific soil and climate requirements for pearl millet were determined based on Naidu et al. (2006), a modification of Sys et al. (1993). Current (actual) and potential soil suitability closely follow previous research studies (Sabalia and Gundalia, 2010; Meena et al., 2017; Girmay et al., 2018). The spatial information on suitable and not-suitable areas was depicted with suitable thematic maps in a GIS environment for easy understanding of the spatial information as well as to enable decision makers to plan effective policy and management for such areas. Bagherzadeh and Mansouri Daneshvar (2011) also used thematic maps in their study to depict spatial information for similar reasons.
Soil suitability of wheat
Results revealed that soils of LMU3, LMU5 and LMU6 were found to be class S2 for wheat cultivation, while soils of LMU1 belonged to class N2 when evaluated for both current and potential soil suitability. However, soils of LMU2 and LMU4 were rated as class S3 under current soil suitability evaluation and class S2 when evaluated in terms of potential soil suitability (Table 4). Further, the area under each class revealed that class S2 occupied the highest area (206.7 ha), followed by class S3 (171.2 ha), when evaluated in terms of current soil suitability, and these areas respectively constituted 38.1 and 31.6% of the area of the entire micro-watershed (Fig. 6). However, a significant increase was registered in class S2 area (377.9 ha) when evaluated for potential soil suitability as compared with the current acreage (206.7 ha), and this potential class S2 area constituted 69.7% of the micro-watershed (Fig. 7). Areas under classes N1 and N2 were estimated at 18.1 ha (3.3%) and 116.5 ha (21.5%), respectively, in terms of current soil suitability, while potential soil suitability indicated an improvement in class N1 areas due to rectification of existing correctable limitations with scientific management practices, as a result of which these areas could be brought under cultivation.
The increase in class S2 area may be attributed to the decline in class S3 areas besides the improved rating of class N1 areas to a suitable class for cultivation. Girmay et al. (2018) also suggested management options and conservation measures to improve the suitability of class S3 and class N1 land units. Severe soil related constraints, viz., soil erosion, soil texture, stoniness and rockiness, are attributed to class N2, and relatively less serious limitations to class N1. Contrary to this, a class S1 rating for crop production may be ascribed to favourable soil physico-chemical properties such as soil texture, drainage and erosion, soil pH, EC and ESP. Area under class S3 could be due to problems of soil erosion, drainage and fertility status besides soil texture. Organic carbon was a major limitation for wheat in the soils of all LMUs except the soils of LMU1. Major limitations in the soils of LMU1 were depth, texture, erosion and drainage as well as permeability, while organic carbon, slope and erosion were limitations in the soils of LMU7. Therefore, appropriate interventions, viz., soil and water conservation, integrated soil fertility management, moisture conservation and water harvesting, and agronomic practices need to be adopted to enhance the current land suitability of the micro-watershed for sustainable crop production. Similar interventions were suggested in several studies from areas where this kind of limitation exists for crop production (Alemu et al., 2013; Girmay et al., 2018). In this study specific soil and climate requirements for wheat were determined based on Naidu et al. (2006), a modification of Sys et al. (1993). Current (actual) and potential soil suitability closely follow previous research studies (Sabalia and Gundalia, 2010; Meena et al., 2017; Girmay et al., 2018). Thematic maps were prepared using GIS to depict the spatial information about suitable and not-suitable areas for easy understanding and decision making regarding policy planning and management of these areas. Our findings were in close agreement with Bagherzadeh and Mansouri Daneshvar (2011).

Table 1. Characteristic features of the land management units (LMUs)
LMU1: Wastelands on very shallow, gravelly sandy loam soils, moderately stony and rocky with coarse fragments (43.5%), severely eroded, gently to strongly sloping, excessively drained. Soils have weak fine granular and sub-angular blocky structure. Available water capacity of the soils is 48.3 mm. This LMU falls under the rainfed ecosystem.
LMU2: Irrigated, agriculture and livestock based production system on very deep, loamy sand soils, gently sloping, moderately eroded and somewhat excessively drained. Soils have single grain structure. Available water capacity of the soils is 22.7 mm. It comes under the double crop LULC category.
LMU3: Irrigated, agriculture and livestock based production system on very deep, sandy loam soils, nearly level to gently sloping, with slight to moderate erosion, and well drained. Soils have weak fine granular, weak fine to medium sub-angular blocky structure. Available water capacity of the soils is 58.3 mm. It comes under the double crop LULC category.
LMU4: Irrigated, agriculture and livestock based production system on very deep, loamy sand to sandy loam soils, very gently to gently sloping, with slight to moderate erosion, somewhat excessively drained. Soil structure is single grain and massive. Available water capacity of the soils is 65.6 mm. It comes under the double crop LULC category.
LMU5: Irrigated, agriculture, horticulture and livestock based production system on very deep, sandy loam soils, nearly level to very gently sloping, slightly eroded, well drained. Soils have weak fine to medium sub-angular blocky structure. Available water capacity of the soils is 77.2 mm. It comes under the triple crop LULC category.
LMU6: Irrigated, agriculture and livestock based production system on very deep, sandy loam soils, very gently to gently sloping, slightly to moderately eroded, well drained. Soil structure is weak to medium sub-angular blocky. Available water capacity of the soils is 116.6 mm. It comes under the double crop LULC category.
LMU7: Grazing lands/pasture lands and fallow lands on very deep, loamy sand, moderately sloping terraces, somewhat excessively drained, severely eroded soils. Soil structure is single grain. Available water capacity of the soils is 37.5 mm. This LMU falls under the rainfed ecosystem.

Table 3. Soil-site characteristic ratings, length of growing period (LGP) rating, and current and potential soil suitability of the LMUs for pearl millet
LMU1: soil-site ratings S1 S1 N N S3 S3 S2 N N S2 S1 S1 S1 S1; LGP S1; current suitability N2; potential suitability N2
LMU2: soil-site ratings S1 S1 S1 S2 S1 S2 S1 S2 S3 N1 S1 S1 S1 S3; LGP S1; current suitability S3; potential suitability S2
LMU3: soil-site ratings S1 S1 S1 S1 S1 S2 S1 S2 S1 S2 S1 S1 S1 S1; LGP S1; current suitability S2; potential suitability S1
LMU4: soil-site ratings S1 S1 S1 S2 S1 S2 S1 S2 S3 N1 S1 S1 S1 S1; LGP S1; current suitability S3; potential suitability S2
LMU5: soil-site ratings S1 S1 S1 S1 S1 S1 S1 S1 S1 S2 S1 S1 S1 S1; LGP S1; current suitability S1; potential suitability S1
LMU6: soil-site ratings S1 S1 S1 S1 S1 S2 S1 S2 S1 N1 S1 S1 S1 S1; LGP S1; current suitability S2; potential suitability S1
LMU7: soil-site ratings S1 S1 S1 S2 S1 S3 S1 N S3 N1 S1 S1 S1 S3; LGP S1; current suitability N1; potential suitability S3
Soil suitability of mustard
Current and potential soil suitability for the mustard crop revealed that soils of LMU5 were evaluated as class S1, followed by soils of LMU4 under class S2 and soils of LMU1 in class N2 (Table 5). Further, results of current soil suitability indicated the highest area (285.7 ha) under class S2, followed by class S1 (92.2 ha), with the least under class S3 (18.1 ha); these areas respectively constituted 52.7, 17.0 and 3.3% of the whole micro-watershed (Fig. 8). However, a significant increase was registered in class S1 area (244 ha) when evaluated for potential soil suitability as compared with the current acreage (92.2 ha), and this potential class S1 area constituted 45.0% of the entire micro-watershed (Fig. 9). Current and potential suitability evaluation indicated a positive sign for mustard cultivation in the micro-watershed, as no area belonged to class N1; at the same time it also depicted a gloomy picture due to the significant area (116.5 ha) under class N2, which amounts to around 21.5% of the total micro-watershed area. The significant increase in class S1 area under potential suitability could be attributed to improved management and scientific cultivation practices that helped in rectification of correctable limitations. Girmay et al. (2018) also suggested management options and conservation measures to improve the suitability of land units, particularly class S3 and class N1 lands; fortunately, no land was found to be N1 for the mustard crop, but these measures also prove effective in areas that are permanently not suitable. Organic carbon was a major limitation for mustard in the soils of LMU2, LMU4, LMU6 and LMU7. Major limitations in the soils of LMU1 were depth, texture, coarse fragments, drainage as well as permeability, while organic carbon and soil sodicity (ESP) were limitations in the soils of LMU2 and LMU7. Therefore, appropriate interventions, viz., soil and water conservation, integrated soil fertility management, moisture conservation and water harvesting, and agronomic practices need to be adopted to enhance the current land suitability of the micro-watershed for sustainable crop production. Similar interventions were suggested in several studies from areas where this kind of limitation exists for crop production (Alemu et al., 2013; Girmay et al., 2018). In this study specific soil and climate requirements for mustard were determined based on Naidu et al. (2006), a modification of Sys et al. (1993). Current (actual) and potential soil suitability closely follow previous research studies (Sabalia and Gundalia, 2010; Meena et al., 2017; Girmay et al., 2018). Thematic maps were prepared using GIS to depict the suitable and not-suitable areas and to suggest effective policy and management options in these areas. Bagherzadeh and Mansouri Daneshvar (2011) also suggested the use of GIS to depict spatial information for easy understanding.
It can be concluded that land, despite its multiple uses, plays a crucial role in achieving food security; in recent years, however, sustainable crop production has been seriously challenged by indiscriminate land use practices and the resultant land and soil related problems. Under such situations, it becomes imperative to employ land evaluation strategies such as soil suitability evaluation, which not only help in effectively addressing these problems but also improve land use efficiency. Current and potential soil suitability revealed that soils of LMU5 are highly suitable (S1) for pearl millet and mustard cultivation. Current soil suitability indicated that pearl millet occupied the maximum acreage (171.2 ha) under the marginally suitable class (S3), while wheat (206.7 ha) and mustard (285.7 ha) fell under the moderately suitable class (S2). Areas under the presently not suitable (N1) and permanently not suitable (N2) classes occupy a large part of the micro-watershed; land evaluation based scientific interventions therefore become even more relevant to bring these areas under suitable land uses such as silviculture, silvi-pasture, recreational and wildlife purposes, and, through conscious efforts, cultivation. Thus, soil suitability evaluation helps in developing alternate land use options for sustainable agricultural land use planning and enables policy planners to formulate and implement policies for effective planning at various levels, including the micro-watershed scale.
Identification and Application of Corrosion Inhibiting Long-Chain Primary Alkyl Amines in Water Treatment in the Power Industry
Gas chromatography with flame-ionization detection (FID) and gas chromatography-mass spectrometry (GC/MS) with electron impact ionization (EI) and chemical ionization (PCI and NCI) were successfully used for the separation and identification of commercially available long-chain primary alkyl amines. The investigated compounds were used as corrosion inhibiting and antifouling agents in the water-steam circuit of energy systems in the power industry. Solid-phase extraction (SPE) with octadecyl bonded silica (C18) sorbents followed by gas chromatography was used for quantification of the investigated Primene JM-T™ alkyl amines in boiler water, condensate and superheated steam samples from the power plant. Amine formulations from the Kotamina group favor the formation of protective layers on internal surfaces and keep them free from corrosion and scale. The alkyl amines contained in those formulations both render the environment alkaline and limit the corrosive impact of ionic and gaseous impurities by forming protective layers. Moreover, alkyl amines limit scaling on the heating surfaces of boilers and in turbines, ensuring failure-free operation. Application of alkyl amine formulations enhances heat exchange during boiling and condensation. Alkyl amines with a branched structure are more thermally stable than linear alkyl amines and exhibit better adsorption and surface-shielding effectiveness. As a result, application of thermostable long-chain branched alkyl amines increases the efficiency of anti-corrosive protection. Moreover, the ammonia concentration in water and in steam is considerably decreased.
Introduction
Heated boilers are an essential element of industrial processes. They need to be reliable and kept in good working order. With skyrocketing fuel and energy costs, maintaining the reliability and consistent performance of a boiler while minimizing energy consumption is challenging for any industrial works. Since boiler systems are constructed primarily of carbon steel and the medium for heat transfer is water, the potential threat of corrosion, scale formation and biofouling is great. The buildup of corrosion can result in a forced shutdown of the boiler and the whole industrial process [1].
Certain chemical and thermal conditions favor the occurrence of corrosion processes on the surfaces of heated boilers. This phenomenon is a result of chemical reactions taking place in a non-homogeneous steam-water environment; these reactions often have a local nature and, additionally, are difficult to determine in qualitative analysis of the working medium [2]. In high pressure boiler drums, where feed water is up to one hundred times more concentrated (even more in regions where steam bubbles are generated), it is difficult to avoid scaling and corrosion of the boiler's construction walls. Even a relatively slight decrease in boiler efficiency caused by scale leads to an increase in fuel consumption of up to hundreds of tons annually [2]. Finding ways to operate a large cooling water system economically while maximizing heat transfer is a complex and challenging task.
One of the effective ways to improve installation economics and reliability is protective water treatment with chemical formulations containing corrosion inhibitors [2]. In the past, the efficient protective treatments of carbon steels were based on inorganic inhibitors like ammonia, nitrites, phosphates or chromates. Today, their use is restricted due to their high toxicity and environmental impact. The most common protective water treatment methods are based on the use of neutralizing amines. These chemicals, such as morpholine, cyclohexylamine and N,N-diethylhydroxylamine (DEHA), neutralize the carbonic acid (formed in the reaction of carbon dioxide with water) and increase the pH of the condensate [3].
In the first half of the 20th century, the aliphatic monoamines [CH3(CH2)nNH2, n = 10-20] and alkyl polyamines [R-(NH-CH2-CH2-CH2-)nNH2, n = 1-5] drew researchers' attention due to their exceptional ability to form thin adhesive films, usually monomolecular, which strongly bond to the metal surface. In this layer, referred to as an amine film, the amine groups (-NH2) are bonded to the metal surface, while the long hydrophobic hydrocarbon chains are oriented in the opposite direction, which works as a barrier protecting the metal surface against gaseous molecules (O2, CO2) as well as ionic pollutants (e.g. Cl−, SO4²−) [2].
Despite the fact that use of organic chemical agents in anti-corrosion protection is not recommended by several international regulations, many film-forming amines (FFA) have been used in the power industry and in heat engineering [2][3][4]. Just like in classical treatment concepts, treatment with FFA must be carefully monitored in order to ensure successful treatment and a high degree of operational safety. Sampling points are the boiler feed-water, the boiler water and the condensate. The condensate should have a slightly alkaline pH and it should be possible to detect an excess of free FFA [4].
At the end of the 1980s, the Instytut Ciężkiej Syntezy Organicznej "Blachownia" (Institute of Heavy Organic Synthesis "Blachownia") (Kędzierzyn-Koźle, Poland), together with ZPBE ENERGOPOMIAR Ltd. (Gliwice, Poland) and P.U.B. "Ekochem" Ltd. (Gliwice, Poland), worked out a group of multifunctional amine formulations named Kotamina [5][6][7]. These formulations contain an appropriately composed mixture of alkyl amines (the main component) with different partition coefficients, which enables control of corrosion kinetics within the whole system through dosing of these formulations into the condensate, feed water or make-up water. The presence of alkalizing amines in the formulation helps keep the pH constant in the desired range, whereas the alkyl amines both render the water alkaline and limit the corrosive impact of ionic and gaseous impurities by forming monomolecular protective films. The alkyl amines were chosen on the basis of results obtained during investigation of their influence on adsorption on metal surfaces. The utility properties of these formulations, as well as experience from their application in the Polish power industry over more than twenty years, were presented in several papers [5][6][7].
Thermostable alkyl amines as a base for anti-corrosive and anti-scaling formulations
Alkyl amines used at present in correction formulations exhibit limited stability at higher temperatures; their decomposition starts at temperatures slightly above 300 °C. One disadvantage of field application of alkyl amine formulations was the high concentration of ammonia in the condensate and in the steam, caused by decomposition of the higher-molecular-weight amines. Utilization of thermostable alkyl amines, whose decomposition starts above 500 °C, in water treatment technology decreases ammonia levels and improves the anticorrosive protection of brass elements [2].
To achieve further improvement in the correction technology of steam-water systems, additional investigations on thermostable, branched long-chain alkyl amines as a base for correction formulations were carried out. The research on the application of thermostable alkyl amines in water treatment was performed within the framework of the international project EUREKA E!2426 BOILTREAT "New technology of boiler water treatment", with partners from research institutes, industry and academia in Poland, Lithuania, Romania, France and Germany [8]. Thus, a new anticorrosion agent called Kotamina Plus was formulated, which contains branched long-chain primary alkyl amines of the Primene JM-T™ type [9][10][11].
Chemicals and samples
Alkyl amines … For solid-phase extraction (SPE) of organic compounds from water samples, LiChrolut RP-18 E (octadecyl, endcapped, 500 mg/3 cm³) cartridges supplied by Merck were used.
Derivatization procedure
Amines are generally known to be very difficult to analyze by gas chromatography (GC) due to their basic character [19,20]. In addition to the basic character, the amino group introduces a large dipole into the molecule. This dipole is responsible for strong interaction with silanol groups and siloxane bridges in the structure of the stationary phase of the GC capillary column. This often results in nonlinear adsorption effects and can be seen as strongly tailing peaks in the chromatogram. The best way to prevent interaction of the strong dipole is to derivatize the amine. Derivatization reduces the polarity of the molecule, improving its behavior in chromatographic analysis. The conversion of the compounds enhances GC performance, as analyte volatility is increased and peak shape is improved because of reduced surface adsorption. Derivatized analytes offer a greater response to the chromatographic detection system than the parent compounds. The choice of a derivatizing reagent is based on the functional group requiring derivatization, the presence of other functional groups in the molecule, and the reason for performing the derivatization. We selected acylation with trifluoroacetic acid anhydride (TFAA) as the derivatization method for the alkyl amines. A benefit of acylation is the formation of fragmentation-directing derivatives for GC/MS analysis [19].
A compact ultrasonic bath Sonorex Super RK 31H from Bandelin electronic (Berlin, Germany) and 5 cm³ reaction vessels with solid caps and PTFE liners (Supelco), deactivated with 5% DMDCS in toluene (Sylon CT), were used for derivatization. Approximately 10 mg of the investigated alkyl amines were dissolved in 0.5 cm³ THF in a 5 cm³ glass micro-reaction vessel and 100 µL of TFAA were added. The sealed vessel was placed in the ultrasonic bath and agitated with heating at 60 °C for 15 min. Excess reagent, released trifluoroacetic acid and THF were evaporated with a gentle stream of nitrogen at room temperature or by using a vacuum pump. The resulting product was dissolved in 1 cm³ THF and analysed by GC or GC/MS.
Solid-phase extraction (SPE) of water samples from the power plant
Solid-phase extraction (SPE) is a form of digital (step-wise) chromatography designed to extract, partition, and/or adsorb one or more components from a liquid phase (sample) onto stationary phase (sorbent or resin). Over the last thirty years, SPE has become the most powerful technique available for rapid and selective sample preparation prior to analytical chromatography.
Water samples of boiler water, superheated steam and condensate from the power plant were stored at ambient temperature in 5 L PP bags (Bürkle, Lörrach, Germany) and extracted by solid-phase extraction from a 1 L PP volumetric flask (Kartell). For conditioning the SPE LiChrolut® RP-18 E cartridge packing, the tube was rinsed with 6 cm³ methanol followed by 6 cm³ deionised water. After the conditioning step, 1-2 L of the investigated water sample were percolated at a flow rate of 5 cm³ min−1 through the SPE tube. After washing the tube with 5 cm³ of deionised water, the adsorbed organic compounds were eluted with a mixture of 5 cm³ n-hexane/THF 95/5 (v/v) and collected in a deactivated micro-reaction vessel. After elution, the solvent was evaporated with a gentle stream of nitrogen at room temperature or by using a vacuum pump. The extract was then derivatized by acylation according to the procedure described in Section 3.2. The resulting product was dissolved in 1 cm³ of the internal standard solution, containing 10-12 mg L−1 of dicyclohexylamine in n-hexane, and analyzed by GC-FID.
Instrumentation
Gas chromatographic (GC) analyses were performed with an Autosystem or a Clarus 500 gas chromatograph from PerkinElmer Instruments (Norwalk, CT, U.S.A.), both equipped with a split/splitless injector at 290 °C and a flame-ionization detector (FID) operated at 320 °C.
Structure elucidation of long-chain primary alkyl amines of the Primene JM-T™ type
The general reaction for the acylation of the investigated long-chain primary alkyl amines with trifluoroacetic anhydride (TFAA) is shown in equation (1):

R-NH2 + (CF3CO)2O → R-NH-COCF3 + CF3COOH    (1)

Figure 1 shows a typical total ion current GC/MS chromatogram of the trifluoroacetylated (TFA) derivative of Primene JM-T™ dissolved in n-hexane. The electron impact (EI) mass spectra of the compounds separated from Primene JM-T™-TFA, as well as the mass spectra recorded in the negative chemical ionization (NCI) and positive chemical ionization (PCI) modes, are presented in our previous publications [20][21][22][23]. Table 1 (significant fragments and identification of trifluoroacetylated (TFA) derivatives of tert-octadecylamines in Primene JM-T™ [21,23]) summarizes the significant m/z fragments in the recorded mass spectra of the investigated TFA derivative of Primene JM-T™; the corresponding fragmentation is illustrated in Figure 2 [22,23]. The proposed chemical structures of the trifluoroacetylated tert-octadecylamines are summarized in Table 1.
Quantitative determination of Primene JM-T TM in water samples from the power plant
Gas chromatography with flame ionization detection (GC-FID) was used for the quantitative determination of Primene JM-T™ in boiler water, condensate and superheated steam samples from the power plant. The linearity of the FID for the quantitative determination of Primene JM-T™ was evaluated by consecutive injections of standard solutions (see 3.1). Each standard solution was injected in triplicate and the mean value of the total peak area ratio Primene JM-T™-TFA/I.S. was taken for construction of the calibration line. The total peak area of Primene JM-T™-TFA means the sum of the areas of all peaks of trifluoroacetylated tert-octadecylamines in the sample. The calibration graph in Figure 3 shows the relationship between the obtained total peak area ratio Primene JM-T™-TFA/I.S. and the concentration of Primene JM-T™-TFA in the standard solutions for the FID. The quantity of Primene JM-T™ in boiler water, condensate and superheated steam samples from the power plant was calculated from the results of the chromatographic analyses and the detector calibration using equation (2):

C_i,sample = (A′_sample − b) / (a · f · R)    (2)

where C_i,sample is the concentration (mg L−1) of Primene JM-T™ in the sample, A′_sample is the total peak area ratio Primene JM-T™-TFA/I.S. in the sample, a is the slope of the calibration line, b is the y-intercept of the calibration line, f is the pre-concentration factor (1000-2000), and R is the average SPE yield of Primene JM-T™ from the water sample.
The total peak area ratio Primene JM-T™-TFA/I.S. in the sample (A′_sample) was calculated from equation (3):

A′_sample = (A_i,sample / A_I.S.,sample) · (m_I.S.,sample / m_I.S.,cal)    (3)

where A_i,sample is the total peak area of all TFA derivatives of tert-octadecylamines in the sample, A_I.S.,sample is the peak area of the internal standard (I.S.) in the sample, and m_I.S.,sample and m_I.S.,cal are the masses of the internal standard in the sample and in the standard solution used for detector calibration, respectively.
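A minimal sketch of the quantification arithmetic in equations (2) and (3); all numeric inputs below are invented for illustration and do not correspond to the plant samples.

```python
# Quantification of Primene JM-T from GC-FID peak areas, following
# equations (2) and (3) above. Numeric inputs are invented examples.

def area_ratio(a_tfa_total, a_is_sample, m_is_sample, m_is_cal):
    """Equation (3): internal-standard-normalized peak area ratio A'."""
    return (a_tfa_total / a_is_sample) * (m_is_sample / m_is_cal)

def concentration(a_prime, slope, intercept, f, recovery):
    """Equation (2): concentration (mg/L) in the original water sample,
    correcting for the SPE pre-concentration factor f and yield R."""
    return (a_prime - intercept) / (slope * f * recovery)

# Example: summed TFA-derivative peak area 5.2e5, I.S. peak area 2.0e5,
# equal I.S. masses in sample and calibration, calibration A' = 0.85*C + 0.02,
# 2 L of water concentrated to 1 cm3 (f = 2000), average SPE yield R = 0.9.
a_prime = area_ratio(5.2e5, 2.0e5, m_is_sample=0.01, m_is_cal=0.01)
c_water = concentration(a_prime, slope=0.85, intercept=0.02, f=2000, recovery=0.9)
print(f"A' = {a_prime:.2f}, C = {c_water * 1e3:.2f} ug/L")  # ~2.60, ~1.69 ug/L
```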
Corrosion studies
Corrosion studies were performed at the Lithuanian Energy Institute in Kaunas [24]. The results obtained in experiments performed within a temperature range between 20 and 500 °C show that the anticorrosion protection increases as the temperature becomes higher [2]. In the presence of the new formulation Kotamina Plus, the corrosion rate was seven times lower than for non-inhibited water at 90 °C, and two and a half times lower than in water inhibited with sodium hexametaphosphate (NaPO3)6. It is important to note that Kotamina Plus appears to be over 30% more efficient than Kotamina. The corrosion rate at 500 °C for boiler steel equals 0.34 µm/year (see Figure 5) [24].
Industrial applications
Alkyl amines with a branched structure are more thermally stable than linear alkyl amines and exhibit better adsorption and effectiveness in surface shielding protection [2]. One of the basic features of high molecular mass alkyl amines as components of correction formulations is their influence on the improvement of heat exchange in the boiler and in turbine condensers. In order to determine this influence for Kotamina Plus, appropriate investigations were performed in the Engineering Department of the Polish Academy of Sciences (PAN). Results of these experiments show that application of Kotamina Plus increases the convective heat transfer coefficient during boiling by 55% and the heat flux during boiling by over 120%; heat flux during condensation is likewise increased by over 120% in comparison with demineralised water [2]. This improves the efficiency of the entire power unit and lowers the temperature of boiler pipes, helping to avoid overheating.
The treatment assures high purity of the circulating medium, and therefore meets the requirements related to boiler exploitation. Experiments performed on a WT 230 boiler's circulating system showed that substitution of Kotamina with Kotamina Plus results in a decrease of the iron content by 60% in boiler water, by 30% in saturated and superheated steam, by 25% in condensate and by 40% in make-up water (Figure 6). As a result of Kotamina Plus application in a WT 650 boiler, the total iron content in particular streams was decreased by over 10%, and a modification of the formulation (Kotamina Plus/P) caused a further decrease by ca. 30% in boiler water and from 15 to 25% in the other streams (Figure 7). The ammonia content was significantly decreased in all streams to one third of the non-normalized value; the widely accepted upper limit is 500 µg/dm³. This is important for protection against corrosion of brass elements, especially in turbine condensers. A low ammonia concentration positively influences stabilization of the pH value in the system and considerably influences the mating of boiler and sub-turbine heat exchangers.
Application of alkyl amine formulations instead of phosphate and hydrazine leads to a considerable conductivity drop in the boiler water. Introduction of Kotamina Plus, as a result of the decreased ammonia concentration, caused a further conductivity drop in boiler water as well as in feed water. This allows lowering the desalination of the boiler and saves make-up water as well as the energy needed for its heating. A lower ammonia concentration additionally stabilizes the pH value of particular streams. In WT 650 boilers of both blocks (B1 and B2) the pH of boiler water was raised, while the pH of live steam and condensate was lowered. This is advantageous because it counteracts wear of the construction material of the turbine condenser, which has brass piping (Figure 8).
Conclusions
Gas chromatography with flame-ionization detection (FID) and gas chromatography-mass spectrometry (GC/MS) with electron impact ionization (EI) and chemical ionization (PCI and NCI) were successfully used for the separation and identification of commercially available long-chain primary alkyl amines. The investigated compounds were used as corrosion inhibiting and antifouling agents in the water-steam circuit of energy systems in the power industry. Solid-phase extraction (SPE) with octadecyl bonded silica (C18) sorbents followed by gas chromatography was used for quantification of the investigated Primene JM-T™ alkyl amines in boiler water, condensate and superheated steam samples from the power plant.

Amine formulations from the Kotamina group favor the formation of protective layers on internal surfaces and keep them free from corrosion and scale. The alkyl amines contained in those formulations both render the environment alkaline and limit the corrosive impact of ionic and gaseous impurities by forming protective layers. Moreover, alkyl amines limit scaling on the heating surfaces of boilers and in turbines, ensuring failure-free operation. Application of alkyl amine formulations enhances heat exchange during boiling and condensation. Alkyl amines with a branched structure are more thermally stable than linear alkyl amines and exhibit better adsorption and surface-shielding effectiveness. As a result, application of thermostable long-chain branched alkyl amines increases the efficiency of anti-corrosive protection. Moreover, the ammonia concentration in water and in steam is considerably decreased.
New α-pyrones from an endophytic fungus, Hypoxylon investiens J2
Four new α-pyrones, hypotiens A–D (1–4), were isolated from a fungal endophyte, Hypoxylon investiens J2, harbored in the medicinal plant Blumea balsamifera. Their structures were determined through detailed HRMS and NMR spectroscopic data. Compounds 1–4 are new α-pyrone derivatives containing an unusual dimethyl substitution in the highly unsaturated side chain. Their plausible biosynthetic pathway was discussed. Biological assay indicated that compounds 1–4 showed no antimicrobial, quorum sensing inhibitory, and cytotoxic activities. The specific side chain in α-pyrone derivatives 1–4 might be responsible for the weak pharmacological activities.
Introduction
Fungal endophytes asymptomatically colonize living tissues of healthy plants. [1][2][3] They are now recognized as an invaluable source of structurally diverse and biologically active natural products. 4 More than one hundred endophytic fungi-derived secondary metabolites with new carbon skeletons, rare ring systems, or unusual structural units have been reported. 5 Exploration of these novel and bioactive secondary metabolites greatly facilitates the discovery of lead compounds.
From the endophytic fungus Chaetomium sp. IFB-E015, living in the leaves of Adenophora axilliora, an unprecedented alkaloid, chaetominine, containing an unusual alanine-derived δ-lactam ring, was isolated and structurally elucidated. 6 It exhibited more potent cytotoxicity toward the human colon cancer SW1116 and leukemia K562 cell lines than the positive drug 5-fluorouracil, and has received considerable attention from chemists and biologists in the field of total synthesis and biological investigations. [6][7][8] Papeo and co-workers reported a total synthesis of chaetominine based on a straightforward (nine steps) sequence and found that this compound exhibited negligible cytotoxic activities on several cancer cell lines. 9 Rhizoctonia solani, an endophyte isolated from the medicinal plant Cyperus rotundus, was discovered to biosynthesize a degraded and rearranged steroid, solanioic acid, with an unprecedented carbon skeleton. 10,11 It showed significant antibacterial activities against Gram-positive bacteria, especially the problematic human pathogen methicillin-resistant Staphylococcus aureus, with an MIC of 1 µg mL−1. 10 The healthy plant Paris polyphylla contained an endophytic fungus, Aspergillus versicolor. 12 Its chemical investigation resulted in the isolation and purification of a highly oxygenated cyclopiazonic acid-derived alkaloid, aspergilline E. 12 This compound has a new hexacyclic 6/5/6/5/5/5 scaffold and displayed significant biological activities, including anti-virus activity and cytotoxicity. 12 As part of an ongoing program aimed at finding biologically active natural products from endophytic fungi, 13,14 Hypoxylon investiens J2, a fungal endophyte, was isolated from the medicinal plant Blumea balsamifera. Chemical investigation of its rice cultures led to the isolation of four new α-pyrone derivatives, hypotiens A-D (1-4). Compounds 1-4 possess a highly unsaturated side chain containing an unusual dimethyl substitution, which is similar to that of the oxazolomycins with potent antibacterial, antiviral and cytotoxic activities. 15 Details of the isolation, structure elucidation, and biological activity, together with a proposed biosynthesis of compounds 1-4, are reported here.
… 15.5 Hz). The 13C NMR spectrum (Table 1) … The planar structure of compound 1 was further constructed through detailed analysis of the HMBC spectrum (Fig. 2 and S4†). The key HMBC correlations (Fig. 2) from H3-15 to C-13 and C-14, coupled with the characteristic chemical shifts of C-14 (δC 213.0) and H3-15 (δH 2.14, s), confirmed the connection from C-13 to C-15. Two singlet methyls (C-18 and C-19) were further located at C-13, which was confirmed by the HMBC correlations of H3-18 and H3-19 with C-13 and C-14. Based on the key HMBC correlations from H-12 to C-10, C-11, C-13, and C-14, from H-11 to C-9 and C-10, from H-9 to C-7 and C-8, and from H-7 and H-8 to C-6, a side chain from C-6 to C-15 was tentatively deduced. It contained three trans-disubstituted double bonds at C-7(8), C-9(10), and C-11(12), which was strongly supported by their chemical shifts and relatively large coupling constants.
Further analysis of the HMBC cross-peaks of H3-16/C-2, H3-16/C-3, H3-16/C-4, H3-17/C-4, H3-17/C-5, and H3-17/C-6 verified the connections from C-2 to C-6 (Fig. 2). A hydroxyl group was placed at C-4 based on its chemical shift (δC 167.5). The remaining one degree of unsaturation and the chemical shifts of C-2 (δC 167.4) and C-6 (δC 153.7) suggested that C-2 and C-6 in compound 1 were both linked to the same oxygen atom to form an α-pyrone ring, which was consistent with its molecular formula. In the NOESY spectrum of compound 1, a key correlation between olefinic H-7 and aliphatic CH3-17 was observed (Fig. S5†), indicating that these protons were close in space. Accordingly, the structure of compound 1 was established as depicted, and it was named hypotien A.
Compound 2 (Fig. 1) was also obtained as a yellow powder and named hypotien B. Based on the ESI-HRMS data, it was assigned the molecular formula C19H24O4, corresponding to one CH2 group more than 1. Analysis of its 1H, 13C, and HSQC NMR spectra (Table 1) … in 1 was linked to C-4, which was further supported by detailed analysis of the HMBC spectrum of compound 2 (Fig. 2). For compound 3 (Fig. 1), its molecular formula C17H20O4 was determined by the same strategy as above and corresponded to one CH2 group less than compound 1. The 1H NMR spectrum of 3 (Table 1 and Fig. S12†) was also close to that of 1, except for the presence of an olefinic proton (H-5, δH 6.13) in 3 and the absence of a methyl signal at C-5 in 1. Key HMBC correlations (Fig. 2) from H-5 to C-3, C-4, C-6, and C-7 indicated the location of H-5 and assigned the structure of compound 3 as shown. Compound 3 was named hypotien C.
Hypotien D (4, Fig. 1) was a yellow powder. The ESI-HRMS spectrum determined its molecular formula as C19H24O4. Detailed analysis of the 1H, 13C, and HSQC data of 4 (Table 1 and Fig. S17-S19†) suggested that compound 4 has similar structural characteristics to compound 1 and indicated an α-pyrone derivative. Comparing the 1D NMR data of 4 with those of 1, in addition to the absence of an olefinic proton signal in compound 4, one more methyl group (δH 2.05; δC 15.2) was observed in compound 4. This methyl group was located at the olefinic C-7 based on the HMBC correlations of H3-20 with C-6, C-7, and C-8 (Fig. 2). Further analysis of key HMBC correlations confirmed the structure of compound 4, which was in accordance with the requirement of its molecular formula.
α-Pyrone, a six-membered lactone, is frequently discovered in microorganisms, plants, and animals, and is often substituted with a side chain. 16 The diverse substitutions of the six-membered lactone, as well as the variations in length and substitution of the side chain, greatly contribute to the structural diversity and complexity of α-pyrone derivatives. [17][18][19][20][21] Compounds 1-4 are new α-pyrone derivatives containing an unusual dimethyl substitution in the highly unsaturated side chain (Fig. 1). Their plausible biosynthetic pathway was proposed through a polyketide synthase. 16 A linear polyketide chain was first constructed from an acetyl coenzyme A (CoA) and six malonyl-CoA units, followed by reduction, dehydration, methylation, oxidation, or cyclization to generate the α-pyrone derivatives.
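The carbon arithmetic behind the proposed assembly can be checked quickly. The sketch below is our illustration only; attributing the extra carbons to the methylation step is an assumption consistent with, but not established by, the text.

```python
# Back-of-the-envelope carbon count for the proposed PKS assembly:
# an acetyl-CoA starter contributes C2, and each of the six malonyl-CoA
# extender units adds C2 to the growing chain after decarboxylation.
starter_c = 2
extender_c = 2
n_extensions = 6
backbone_c = starter_c + extender_c * n_extensions
print(f"linear polyketide backbone: C{backbone_c}")  # C14

# Hypotiens span C17-C19 (e.g. hypotien B/D are C19H24O4, hypotien C is
# C17H20O4), so the carbons beyond the C14 backbone are consistent with
# the methylation step invoked above (presumably SAM-derived methyls;
# which carbons are methyl-derived is our assumption, not the paper's).
for name, n_carbons in [("hypotien C", 17), ("hypotien A", 18), ("hypotien B/D", 19)]:
    print(f"{name}: {n_carbons - backbone_c} carbons beyond the backbone")
```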
Natural products containing an α-pyrone have exhibited diverse biological activities, such as the most frequently reported antimicrobial efficacy, [17][18][19] quorum sensing (QS) inhibitory activity, 22 and cytotoxicity. 19,21 In this work, the antibacterial activities of the new α-pyrones 1-4 were evaluated against four bacteria, Staphylococcus aureus (ATCC 6538), Bacillus subtilis (ATCC 9372), Pseudomonas aeruginosa (ATCC 27853), and Escherichia coli (ATCC 25922), and their antifungal efficacies were tested against three agricultural pathogens, Colletotrichum musae (ACCC 31244), Colletotrichum coccodes (ACCC 36067), and Colletotrichum orbiculare (ACCC 36095). Furthermore, the QS inhibitory activity against Chromobacterium violaceum and the cytotoxicity against three human cancer cell lines, A549, CT-26, and MCF-7, were also assayed for compounds 1-4. Unfortunately, in contrast to the positive controls, none of them at the given concentrations (Experimental section) was effective against the tested microorganisms or cancer cells. The specific side chain in the new α-pyrones 1-4 might be responsible for the weak pharmacological activities.
Fungal material
The fungal strain Hypoxylon investiens J2 was isolated from the medicinal plant Blumea balsamifera collected in Danzhou, Hainan Province, People's Republic of China. It was identified based on its internal transcribed spacer sequence (GenBank no. MK757895). The fungus was deposited at the Tropical Crops Genetic Resources Institute, Chinese Academy of Tropical Agricultural Sciences (CATAS), Hainan, People's Republic of China, and was maintained at −80 °C. For the large-scale fermentation, the fungus H. investiens J2 was cultured on rice medium (20 flasks, each containing 80 g rice and 120 mL water) in an incubator at 28 ± 2 °C for one month.
Extraction and isolation
The fermented material was extracted with ethyl acetate three times. The organic solvent was evaporated to give a crude extract (15 g), which was then fractionated into six fractions (Fr.1-Fr.6) by column chromatography on silica gel. Fr. …

Hypotien A (1): yellow powder; UV λmax 222, 362 nm; 1H NMR (CD3OD, 500 MHz) and 13C NMR data, see Table 1.
Antimicrobial assay
The disk diffusion method was applied to evaluate the antibacterial and antifungal activities of compounds 1-4. 23 For bacteria, 200 µL of inoculum suspension was spread on the nutrient agar plates. For fungi, the mycelia were first macerated with a mortar and pestle to generate a homogeneous inoculum. In the antibacterial assay, sterile paper disks containing 40 µL of the compounds at different concentrations (10, 5, 2, 1, 0.1 mg mL−1 in MeOH) were air-dried and then placed on the inoculated plates. In the antifungal test, paper disks were impregnated with 50 µg of the samples. The plates were incubated at 37 °C for 24 h for bacteria or at 28 °C for 48 h for fungi. Streptomycin was used as the positive control for the antibacterial evaluation, while actidione was employed as the reference for antifungal efficacy.
QS inhibitory activity
The strain Chromobacterium violaceum CV026 was inoculated in 20 mL of LB broth medium overnight to afford the seed culture. 24 Then, 0.2 mL of the seed broth was mixed with 15 mL of molten LB agar medium. Kanamycin (0.72 mg) and N-hexanoyl-L-homoserine lactone (C6-HSL, 1.5 mg) were further added to the culture. The agar was then poured into a sterile Petri dish and punched with a sterile cork borer. Compound at 40 mg mL−1 in MeOH was pipetted into each well. The positive control was furanone C30 at 10 mg mL−1. Finally, the Petri dish was incubated overnight at 37 °C.
Cytotoxicity assay
The in vitro cytotoxic activities of compounds 1-4 were evaluated using the MTT method. 25 The cancer cells were seeded in 96-well culture plates and then treated with different concentrations of the compounds (40, 20, 10, 5, 2, 1 µM) for 24 h. After treatment, the cells were incubated with MTT for 4 h. The plates were read at 570 nm with a plate reader. Adriamycin was used as the positive control in the cytotoxicity assay.
Conclusions
In summary, we isolated and characterized four new α-pyrones, hypotiens A-D (1-4), from a fungal endophyte, Hypoxylon investiens J2, living in the medicinal plant Blumea balsamifera. Their structures were determined by extensive spectroscopic analyses. Compounds 1-4, as α-pyrone derivatives, possess an unusual dimethyl substitution in the highly unsaturated side chain. All compounds were evaluated for their antimicrobial, quorum sensing inhibitory, and cytotoxic activities but proved to be inactive. These results indicate that the specific side chain in compounds 1-4 might be responsible for the weak pharmacological activities.
Conflicts of interest
There are no conflicts to declare.
Handheld PET Probe for Pediatric Cancer Surgery
Simple Summary

Positron emission tomography (PET)/computed tomography (CT) scans are widely used as a form of full body imaging and allow for the early detection of small, asymptomatic tumors that may represent cancer metastasis or recurrence. Tissue diagnosis is critical in determining the choice of ongoing targeted therapy for pediatric patients with solid tumors. These small tumors may be difficult to localize in the operating room, especially in a re-operative or radiated area of the body. An adjunct such as a PET probe, used to guide intra-operative dissection, is the ideal tool to assist in cases where an occult tumor requires an excisional biopsy.

Abstract

18F-fluorodeoxyglucose (FDG) is a glucose analog that acts as a marker for glucose uptake and metabolism. FDG PET scans are used in monitoring pediatric cancers. Handheld PET probe localization of FDG-avid lesions is an emerging modality for radio-guided surgery (RGS). We sought to assess the utility of the PET probe in localizing occult FDG-avid tumors in pediatric patients. PET probe functionality was evaluated using a PET/CT scan calibration phantom. The PET probe was able to detect FDG photon emission from simulated tumors, with the expected decay of the radioisotope over time. Specificity for simulated tumor detection was lower in a model that included background FDG. In a clinical model, eight pediatric patients with FDG-avid primary, recurrent or metastatic cancer underwent tumor excision utilizing IV FDG and PET probe survey. Adequate tissue for diagnosis was present in 16 of 17 resected specimens, and pathology was positive for malignancy in 12 of the 17 FDG-avid lesions. PET probe gamma counts per second were higher in tumors compared with adjacent benign tissue in all operations. The median ex vivo tumor-to-background ratio (TBR) was 4.0 (range 0.9-12). The PET probe confirmed the excision of occult FDG-avid tumors in eight pediatric patients.
Introduction
Diagnostic whole-body 18F-fluorodeoxyglucose (FDG) positron emission tomography computed tomography (PET/CT) imaging is used to identify and assess metabolically active tissue. Rapidly dividing cancer cells have a high number of glucose transporters and high rates of glycolysis, increasing the uptake of FDG [1]. FDG is a radioisotope with a half-life of 110 min, and its decay involves the release of high-energy 511 keV directional photons [2]. PET/CT is highly sensitive to detecting a variety of cancers, including childhood cancers such as Hodgkin's lymphoma, neuroblastoma, and posttransplant lymphoproliferative disorder (PTLD) [3,4]. PET/CT may be used for initial staging, in monitoring for recurrence or metastasis, or to identify occult residual disease after tumor resection [3,4]. When performing an excisional biopsy of recurrent disease or metastasis, small tumors can be difficult to localize intra-operatively, particularly when in a post-operative or radiated field. Novel techniques to assist with tumor identification in the operating room have the potential to identify regions of occult disease and facilitate a safer and more complete excision of residual or recurrent disease.
Radio-guided surgery (RGS) utilizing technetium-99m and a handheld gamma probe was first described in 1981 and has become the standard of care for sentinel lymph node biopsy [5]. The use of FDG and a handheld gamma probe specifically designed for high-energy gamma ray detection (PET probe) for RGS was first described in 1999, but its intraoperative use has not become routine [5]. The reported evidence in adult cancers consists of heterogeneous case series [6][7][8][9][10]. PET probe performance has been shown to depend on a variety of factors, including anatomic location, time from injection to probe survey and physical distance between the probe and the tumor. The PET probe's major limitation is poor specificity, as its use is confounded by high gamma detection from adjacent metabolically active organs. To assess the specificity of the probe for the malignant tumors of interest, a tumor to background ratio (TBR) in situ and ex vivo is often calculated. TBR is a simple ratio of the gamma counts per second (cps) of a tumor divided by the gamma cps of non-lesional (background) tissue. If the tumor has a higher gamma cps than the background, the TBR will be >1. A higher TBR is associated with a higher specificity of the detection system.
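The TBR arithmetic above is simple enough to state as a short sketch. The cps readings below are invented, and the 1.5 cutoff is the localization threshold cited from prior work later in this paper, not a value validated here.

```python
def tbr(tumor_cps: float, background_cps: float) -> float:
    """Tumor-to-background ratio: gamma counts per second over the lesion
    divided by counts over adjacent non-lesional tissue."""
    if background_cps <= 0:
        raise ValueError("background cps must be positive")
    return tumor_cps / background_cps

# Invented example readings: 32 cps over a lesion, 11 cps over adjacent
# benign tissue (cf. the medians reported in the Results).
ratio = tbr(32, 11)
print(f"TBR = {ratio:.1f}")  # ~2.9
# 1.5 is the minimum localization threshold cited from prior studies [6,11,22].
print("localizable" if ratio >= 1.5 else "not distinguishable from background")
```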
The absolute threshold for an adequate TBR has yet to be established for operative use of the PET probe. In 2000, Yasuda et al. described that a minimum TBR of 5 is required to distinguish an FDG-avid tumor from background using the handheld gamma probe in vitro [11]. In 2001, Zervos et al. demonstrated the successful use of a PET probe in the detection of recurrent colorectal cancer. Ten patients found to have FDG-avid lesions on pre-op PET/CT underwent resection with PET probe use, and post-op pathology confirmed the resection of malignant lesions. In this study, the mean in situ TBR was 1.5 [6]. In 2007, Hall et al. described intra-operative use of a PET probe to confirm the complete resection of metastatic breast cancer in two patients [7]. In 2008, Cohn et al. demonstrated the use of a PET probe for the detection of recurrent epithelial ovarian cancer in three patients [8]. Povoski described use of the probe in the resection of three metastatic melanoma lesions in one patient in 2008 and in thirteen patients with lymphoma in 2015 [9,10]. TBR values were not reported in some studies.
Other options are available for intraoperative tumor localization. The literature demonstrates the successful localization of occult pulmonary lesions in pediatric patients using a variety of methods, including CT-guided coil or wire placement. CT-guided microcoil placement is useful to guide surgeons in finding small pulmonary nodules [12,13]. CT-guided wire localization is another useful adjunct for localizing pulmonary nodules but risks pneumothorax, increased time under anesthesia and inadvertent wire dislodgement [14]. The use of methylene blue dye has also been reported as an important adjunct for thoracoscopic tumor localization [13,14]. Magnetic tracer placement under ultrasound guidance with a magnetic probe for the detection of nonpalpable breast lesions is another method of occult tumor localization that has been described with some success [15]. Ultrasound-guided magnetic tracer localization avoids the risk of radiation exposure to patients and perioperative staff associated with the use of radiotracers and CT-guided methods. The methods mentioned above require a lesion that is clear on imaging, with placement of a localizing agent adjacent to it that allows for intraoperative detection of the lesion. PET-avid lesions can be small and, at times, may have FDG avidity as their only distinguishing characteristic on imaging. Additionally, some lesions are not accessible via percutaneous techniques. Thus, methods such as wire, microcoil and magnetic tracer localization would not be possible for many of the tumors in our study, particularly those in the abdomen and mediastinum.
The use of RGS in pediatric patients with a radiotracer was first described in 1997 by Heij et al., utilizing 123I-MIBG-directed surgery to aid the resection of neuroblastoma in five patients [16]. There have been a few subsequent reports demonstrating intraoperative gamma probe detection of neuroblastoma, but the largest such series reported that 123I-MIBG was not helpful in 35% of cases, with a specificity for malignant tissue of only 55% [17][18][19][20]. Given the limited specificity of the gamma probe and the variable results from prior studies, sentinel lymph node biopsy (SLNB) remains the only regular use of RGS in pediatric surgery. Despite the widespread usage of FDG for preoperative imaging, there are no prior studies examining FDG for RGS in pediatric cancer. We seek to define the baseline performance of the PET probe in a PET scan calibration model and in a clinical model of RGS. Ultimately, we aim to evaluate PET probe RGS as a modality to facilitate the early identification and treatment of recurrent or metastatic childhood cancers.
PET/CT Calibration Phantom Design
In a preclinical setting, a PET/CT calibration phantom model was used. This model was developed by the Clinical Trials Network as a method to validate PET/CT scanners for use in oncology clinical trials [21]. The phantom contains six spheres with diameters ranging from 0.7 to 2.0 cm that are filled with a concentrated FDG solution of 24.0 kBq/mL. The spheres are situated within a fluid-filled thoracic cavity containing dilute FDG of 6.0 kBq/mL. The ratio of FDG in the background solution relative to the spheres should produce a standardized uptake value (SUV) of 4 in the spheres on PET/CT 60 min after fill time. This protocol was recapitulated, and the PET scan was followed by a PET probe survey to simulate the planned operative procedures (Figure 1). For comparison, in a separate experiment, the spheres were filled with the same concentration of FDG while the background cavity was filled with saline only. Gamma counts per second (cps) were measured for each lesion, and background readings from the fluid were obtained to calculate a TBR.
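A small sketch of the decay arithmetic underlying the phantom protocol; the fill concentrations are taken from the text, while the evaluation time is illustrative.

```python
import math

F18_HALF_LIFE_MIN = 110.0  # 18F half-life quoted in the Introduction

def remaining_fraction(minutes: float) -> float:
    """Fraction of 18F activity remaining after the given decay time."""
    return math.exp(-math.log(2) * minutes / F18_HALF_LIFE_MIN)

def decayed_concentration(c0_kbq_per_ml: float, minutes: float) -> float:
    """Activity concentration (kBq/mL) after physical decay only."""
    return c0_kbq_per_ml * remaining_fraction(minutes)

# Phantom fill concentrations from the protocol above, evaluated at 110 min:
for label, c0 in [("sphere", 24.0), ("background", 6.0)]:
    print(label, round(decayed_concentration(c0, 110), 1), "kBq/mL")
# sphere 12.0, background 3.0: both compartments decay identically,
# so the 4:1 sphere-to-background ratio (SUV ~4) is preserved over time.
```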
Inclusion Criteria and Participants
A clinical prospective analysis included children aged 21 years and under with suspected recurrent, primary, or metastatic PET-avid lesions who underwent tumor excision or biopsy utilizing intravenous (IV) 18F-FDG and handheld PET probe at the Children's Hospital of Pittsburgh from 1 January 2018 to 1 March 2021. This study was approved by the Institutional Review Board (IRB) at the Children's Hospital of Pittsburgh. The study included 9 patients who met the inclusion criteria. All participating patients underwent a pre-operative PET/CT within one month of the operation, which identified one or more occult PET-avid lesions. PET avidity was defined as SUV > 4. Tumors were defined as occult by the operating surgeon if they were expected to be difficult to find based on size and/or location.
Prospective Study Design
Based on prior studies, all patients received a same-day, single-dose preoperative IV injection of 18F-FDG 0.2 mCi/kg via a peripheral IV line one hour prior to the operation and approximately two hours prior to intra-operative probe use for tumor localization [5,20]. Per standard PET scan protocol, patients rested in a quiet, dark room in the interval between injection and transfer to the operating suite. All patients fasted for a minimum of 6 h prior to receiving the 18F-FDG injection. Care was taken to ensure that IV fluids administered perioperatively and intraoperatively were non-dextrose containing. The Neoprobe High Energy F-18 Probe (Mammotome, Cincinnati, OH, USA) was utilized for intra-operative tumor detection ( Figure 2). This probe detects high-energy photons emitted during FDG decay and involves sophisticated internal shielding to enhance directionality. An external survey was performed to determine external values for several organs (body regions), including the kidneys (flanks), spleen (posterior left upper quadrant), liver (right upper quadrant under costal margin), bladder (suprapubic), and brain (top of scalp). External values of the distal extremity and room background were also measured for comparison. External surveys were performed during induction of anesthesia to prevent prolonging the operative procedure. During open cases, intra-operative probe guidance was attempted. A disposable sterile probe cover was utilized. The lesion of interest was excised, and the handheld probe was used to measure gamma counts in the lesion ex vivo. Adjacent non-lesional tissue (of similar size) was also evaluated to calculate a TBR. Final pathology reports of the 17 excised lesions were reviewed.
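To make the dosing and timing arithmetic concrete: the patient weight below is an invented example, and the decay estimate accounts for physical decay only, ignoring biodistribution and excretion.

```python
import math

F18_HALF_LIFE_MIN = 110.0  # 18F physical half-life

def fdg_dose_mci(weight_kg: float, mci_per_kg: float = 0.2) -> float:
    """Weight-based 18F-FDG dose used in the protocol above (0.2 mCi/kg)."""
    return weight_kg * mci_per_kg

def activity_at(dose_mci: float, minutes_post_injection: float) -> float:
    """Physical-decay-only estimate of remaining activity (mCi)."""
    return dose_mci * math.exp(-math.log(2) * minutes_post_injection / F18_HALF_LIFE_MIN)

dose = fdg_dose_mci(50.0)  # invented 50 kg patient -> 10.0 mCi injected
print(f"injected: {dose:.1f} mCi")
# Probe use began roughly two hours post-injection in this protocol:
print(f"at probe survey (~120 min): {activity_at(dose, 120):.1f} mCi")  # ~4.7 mCi
```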
Outcomes and Statistical Analysis
The primary outcome of interest to evaluate PET probe utility in tumor identification is TBR. TBR was calculated in preclinical and clinical models. Data including demographics, gamma cps measurements, and TBR were reported with median values and ranges or means and standard deviations. TBR values between malignant and benign groups are compared with a nonparametric Mann-Whitney U test given the small sample size in this study (n = 17). A p-value < 0.05 is used as a threshold for statistical significance.
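A minimal sketch of the comparison described above; the per-lesion TBR lists are invented placeholders, not the study's data.

```python
# Nonparametric comparison of ex vivo TBR between pathology-confirmed
# malignant and benign lesions. The TBR lists are invented placeholders
# matching only the group sizes (12 malignant, 5 benign of 17 lesions).
from scipy.stats import mannwhitneyu

tbr_malignant = [3.1, 4.5, 5.0, 6.2, 4.8, 3.9, 5.5, 4.1, 6.0, 4.4, 5.2, 3.7]
tbr_benign = [0.9, 1.8, 2.2, 2.5, 1.6]

stat, p = mannwhitneyu(tbr_malignant, tbr_benign, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
print("significant at 0.05" if p < 0.05 else "not significant")
```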
Phantom Model Performance
In the thoracic phantom model (Figure 1), the handheld PET probe localized all six FDG-containing spheres when background radioactivity was absent (saline alone), confirming the ability of the probe to detect the high-energy gamma photons produced by FDG decay. The cps associated with the simulated tumors was measured at several time points resulting in a decreasing pattern as expected for the decay of FDG ( Figure 3). Next, the phantom's background compartment was filled with dilute FDG, and an SUV of 4 in the lesions was confirmed on PET/CT prior to PET probe survey. Measurements of gamma counts for each lesion and the background were recorded at 110 min from experiment start time and used to calculate TBR (Table 1). Median TBR for the spheres was 1.07 (range 1.06-1.11) at 110 min in a dilute FDG background.
Patient Demographics
Nine patients were included in the study. One patient who received IV 123I-MIBG prior to surgery for localization of neuroblastoma was excluded from the analysis. Eight patients met the criteria for the data analysis. Within this patient population (n = 8), a total of 17 FDG-avid tumors were identified by pre-operative PET/CT scans. The median age was 16 years old (range 5-21 years old). Of the eight patients, six were male and two were female. The diagnoses included three patients with suspected PTLD, three with Hodgkin's lymphoma, one with Burkitt's lymphoma, and one with neuroblastoma. Patient demographics are illustrated in Table 2. Pre-op PET/CT was performed within one month of the operation, with a median of 10 days (range 2-29 days). On pre-op PET/CT, the median tumor SUV was 7 (range 3.8-16.7).
External Survey
A handheld gamma probe was used to take external readings over surrounding organs to assess background FDG uptake (Table 3). An external survey was performed in eight of the nine operations in the study, at a median of 65 min after injection of IV FDG (range 20-175 min). Areas overlying the liver, spleen, kidneys, brain, and the distal extremity were evaluated; the liver, spleen, kidney, and brain surveys yielded mean cps > 300, whereas the distal extremity and room background had low median cps of 50 and 2, respectively.
Intra-Operative Probe Performance
Probe survey of excised lesions was performed at a median of 101 min after IV FDG injection (range 65-210 min). PET probe data, including ex vivo cps and calculated TBR, were collected from the prospective evaluation of nine pediatric cancer operations in eight patients with excision of seventeen FDG-avid lesions. The eight patients underwent surgical exploration comprising five minimally invasive surgeries and four open operations for the removal of PET-positive lesions; one patient underwent two separate FDG-guided procedures to remove PET-positive lesions for PTLD. Seventeen lesions in total were excised across the nine operations included in this study (Table 4). Six lesions were excised from the abdomen, two from the mediastinum, five from the retroperitoneum (RP), four from the neck, and one from the lung. In open cases, the PET probe provided poor lesion specificity and could not be used for intraoperative guidance in identifying lesions in vivo. The probe is too large for use in laparoscopic and thoracoscopic cases; therefore, it was only used for ex vivo analysis. Lesion size ranged from 0.1 cm to 2.5 cm, with a median size of 2.0 cm. The final pathology was positive for the suspected malignancy in 12 of the 17 excised lesions. The median cps for the excised lesions was 32, and the mean cps for the background tissue was 11. The average ex vivo TBR for the lesions was 4.0 (range 0.9-12.0). Comparing malignant and benign lesions based on final pathology, the average ex vivo TBR values were 4.7 and 2.0, respectively, with a p-value of 0.0181 between the two groups.
Discussion
Gamma probes capable of detecting high-energy gamma emissions (PET probes) have been available for over 20 years [5]. Despite the high prevalence and clinical relevance of PET scanning, PET probes have not found a routine place in cancer surgery. This study sought to define the utility and limitations of PET probes and to describe their first use in pediatric patients. It also provides the first patient-level data on external measurements of FDG gamma emissions from various regions of the body (Table 3).
The preclinical thoracic phantom model confirmed the ability of the PET probe to detect high-energy gamma emission from FDG decay. This was an important step in validating the probe for gamma detection against the well-established half-life of the radioisotope prior to trialing clinical use of the probe. Of note, in Figure 3, sphere 1 emitted a higher gamma cps than the other simulated lesions. Sphere 1 was the second-largest sphere (1.5 cm) and was closest to the phantom's surface, which we suspect explains the high recorded values. When background levels of FDG were added to the phantom, the median TBR was only 1.07, indicating an inability of the PET probe to localize simulated lesions. It is important to note that prior studies have reported that successful localization requires a TBR of at least 1.5 [6,11,22]. The preclinical phantom model was critical for probe validation as our clinical study did not have a control arm.
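As a hedged illustration of the decay pattern referenced above, the sketch below computes the expected decline in cps from the physical decay of fluorine-18 (half-life ≈ 109.8 min); it assumes pure physical decay, ignoring probe geometry and biological clearance, and the initial count value is a placeholder.

```python
# Hedged sketch: expected cps decline from physical decay of fluorine-18.
# Assumes pure physical decay; real probe readings also depend on geometry
# and biological clearance. The initial count value is a placeholder.
F18_HALF_LIFE_MIN = 109.77  # published physical half-life of 18F, minutes

def expected_cps(initial_cps: float, minutes_elapsed: float) -> float:
    """Counts per second expected after decay from an initial reading."""
    return initial_cps * 0.5 ** (minutes_elapsed / F18_HALF_LIFE_MIN)

# A sphere read at 500 cps should fall to roughly half about 110 min later.
print(f"{expected_cps(500, 110):.1f} cps")  # ~249.7
```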
The patients in our series had small tumors with a minimum SUV of 4 and a median SUV of 7.0. Background FDG avidity was confirmed by the external survey and intraoperative probe use. On external survey, metabolically active organs (kidneys, spleen, liver, and brain) demonstrated mean gamma cps > 300. This correlated well with typical PET scan findings, which note high uptake in these metabolically active organs. This is, however, the first description of gamma activity recorded via an external survey with a handheld PET probe. Not surprisingly, the highest cps were noted at the bladder, a result of FDG excretion via the kidneys into the urine. The second-highest cps were recorded at the brain, given the high glucose requirement of neural tissue metabolism. These findings indicate a potential reason for the PET probe's lack of specificity, as FDG uptake is seen throughout the body, with very high accumulation in metabolically active organs. Values from these benign organs were at least 300% greater than the measurements from the small, malignant tumors. External survey data from our experiment clearly indicated that eliminating background signal would be important to allow routine intraoperative use of the PET probe for tumor localization. This knowledge is critical for guiding future studies that may aim to improve probe specificity and highlights tumor-specific markers as a potential future research direction.
Fluorescence-guided surgery is gaining enthusiasm as a method for tumor-specific surgical guidance. The most commonly used agent, indocyanine green (ICG), is actually a non-specific, water-soluble, near-infrared (NIR) dye that can accumulate in some lesions such as lung and liver tumors [23]. Targeted NIR agents have the advantage of allowing more specific detection with less background signal. The use of an ICG-like fluorescent molecule conjugated to an anti-CEA antibody has been described in a preclinical model [24]. OTL38, a folate analog conjugated to a NIR dye, has been successfully used to target tumors that express folate receptors, such as ovarian and lung cancers [25,26]. The anti-epidermal growth factor receptor (EGFR) antibody panitumumab conjugated to a NIR dye is under investigation and has been used in the resection of head and neck cancers that express EGFR [27]. The tumors in the current study, however, were identified solely on the basis of their FDG avidity on PET scan. Without a clear targeted agent to utilize, IV FDG was used to determine whether FDG could guide surgical resection via PET probe detection.
Consistent with the preclinical model, the in vivo probe performance was hampered by a lack of specificity, and the probe was unable to provide surgical guidance to tumors. The PET probe was, however, able to confirm FDG avidity ex vivo after suspected lesions were excised. The recorded median ex vivo cps of the small FDG-avid tumors (17 lesions) was 32, compared with a median ex vivo cps of 11 for adjacent tissue, yielding a median TBR of 4.0. Sixteen of the seventeen lesions were adequately sampled on final pathology. The final pathology was positive for the suspected malignancy in 12 of the 17 excised lesions, 4 of the 17 lesions were benign, and 1 lesion was inadequately sampled. Comparing the malignant and benign lesions, the average ex vivo TBR values were 4.7 and 2.1, with a p-value of 0.0181, a significant difference between the two groups. It is promising that, even without a tumor-specific marker, FDG avidity is more apparent in malignant tissue and is detected by the PET probe.
Safety and Feasibility
In this small subset of patients with pediatric cancer, PET probe RGS was safe and feasible. The standard PET scan protocol was used pre-operatively for the infusion of IV FDG. The protocol was carried out as planned, with no adverse reactions or events identified in the patients who participated in the study. A PET probe intra-operative survey may add a small amount of operative time and cost, but these were not quantified in the present study. Radiation exposure to patients and perioperative staff from radiotracer decay is a risk of the procedure. The literature has demonstrated that FDG PET/CT is associated with a very small radiation exposure risk to perioperative personnel, about 4 µSv per FDG administration, which is less than the exposure from a single plain chest X-ray film [28-30]. It carries a larger radiation exposure risk to the patient of 6-7 mSv, roughly the equivalent of two whole-body CT scans [28-30]. The radiation exposure from FDG-guided procedures is higher than that from procedures utilizing technetium, given the high-energy gamma rays emitted by FDG decay [31].
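To make the staff-exposure arithmetic above concrete, the sketch below scales the per-case figure from the cited literature to a yearly caseload; the caseload and the 20 mSv occupational limit are assumptions for illustration, not values from this study.

```python
# Hedged sketch of the staff-exposure arithmetic cited above. The 4 µSv
# per-case figure comes from the referenced literature [28-30]; the yearly
# caseload and the 20 mSv occupational limit are assumptions for illustration.
STAFF_DOSE_PER_CASE_USV = 4.0       # µSv per FDG administration
OCCUPATIONAL_LIMIT_USV = 20_000.0   # assumed annual limit (20 mSv)

cases_per_year = 50                 # assumed caseload
annual_dose = cases_per_year * STAFF_DOSE_PER_CASE_USV
print(f"{annual_dose:.0f} µSv/yr, "
      f"{100 * annual_dose / OCCUPATIONAL_LIMIT_USV:.1f}% of the limit")
# 50 cases/yr -> 200 µSv, about 1% of a 20 mSv occupational limit.
```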
Clinical Implications and Future Research
While the PET probe is inadequate in its ability to guide surgery, it was able to confirm significantly elevated extracorporeal cps in excised lesions and served as a confirmation of FDG-avid tumor excision. More specific agents or detection systems may need to be developed to provide highly specific intra-operative guidance. FDG is not tumor-specific, and simple handheld gamma detection has limited specificity for FDG in malignant tissue versus background tissue. Radiolabeling a tumor-specific agent such as an antibody, known as radioimmunoguided surgery (RIGS), may allow handheld gamma probes to provide intraoperative guidance for tumor resection in pediatric cancer surgery. In 2007, Sun et al. described the use of RIGS for the excision of occult lesions in metastatic colorectal cancer, utilizing a handheld probe in combination with a radiolabeled monoclonal antibody to a glycoprotein complex overexpressed in epithelial-derived cancers [32]. Targeted therapies are gaining enthusiasm in pediatric cancer treatment, and a similar approach using some of these agents may allow intraoperative guidance for the resection of pediatric tumors.
Strengths and Limitations
Our study is limited in its broad applicability, as the operations analyzed were conducted by a single pediatric surgeon experienced in pediatric oncology surgery at a large, academic children's hospital. We did not evaluate the effect of prior radiation or the number of prior operations on the success or duration of the excisions. PET probe performance can be affected by tumor histology, FDG avidity, and size, as well as by the timing of operative exploration after FDG injection and anatomic location. The PET probe demonstrated consistent performance over broadly dispersed tumor histologies and variable anatomic locations, including cervical, intra-abdominal, and intra-thoracic operations. We did not quantify the effect of these factors on outcomes; this could be examined in future studies. Due to the low number of patients in our series, our data are insufficient for drawing comprehensive conclusions about the specificity of handheld PET probes in detecting malignant tissue. Additionally, post-operative PET scans were not reviewed to confirm complete excision of residual disease. As with other investigations of rare diseases, cooperative trials amalgamating multicenter data on pediatric FDG-guided surgery will likely be necessary to draw conclusions about the efficacy of the PET probe. Future studies involving RIGS could potentially improve probe specificity for malignant tissue and allow intra-operative PET probe localization.
Conclusions
We demonstrated successful excision of occult FDG-avid tumors in eight pediatric patients utilizing PET probe RGS. To our knowledge, this is the first case series to describe FDG-guided surgery in a pediatric patient population and the first study to evaluate the PET probe using a PET scan calibration phantom. Tumor localization is limited by a lack of FDG specificity and by uptake in adjacent metabolically active organs in the thoracic and abdominal cavities. The median TBR of 4.0 is promising and demonstrates that, with improved tissue specificity from novel radiolocalizing agents, a PET probe could potentially be used for in vivo localization in the future. The handheld PET probe has the potential to facilitate successful tumor excision for diagnostic and therapeutic purposes in pediatric cancer surgery.
Informed Consent Statement:
Patient consent was waived due to IRB exemption.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to data involving Protected Health Information.
Conflicts of Interest:
The authors declare no conflict of interest.
Cognition and behaviour in frontotemporal dementia with and without amyotrophic lateral sclerosis
Objective The precise relationship between frontotemporal dementia (FTD) and amyotrophic lateral sclerosis (ALS) is incompletely understood. The association has been described as a continuum, yet data suggest that this may be an oversimplification. Direct comparisons between patients who have behavioural variant FTD (bvFTD) with and without ALS are rare. This prospective comparative study aimed to determine whether there are phenotypic differences in cognition and behaviour between patients with FTD-ALS and bvFTD alone. Methods Patients with bvFTD or FTD-ALS and healthy controls underwent neuropsychological testing, focusing on language, executive functions and social cognition. Behavioural change was measured through caregiver interview. Blood samples were screened for known FTD genes. Results 23 bvFTD, 20 FTD-ALS and 30 controls participated. On cognitive tests, highly significant differences were elicited between patients and controls, confirming the tests’ sensitivities to FTD. bvFTD and FTD-ALS groups performed similarly, although with slightly greater difficulty in patients with ALS-FTD on category fluency and a sentence-ordering task that assesses grammar production. Patients with bvFTD demonstrated more widespread behavioural change, with more frequent disinhibition, impulsivity, loss of empathy and repetitive behaviours. Behaviour in FTD-ALS was dominated by apathy. The C9ORF72 repeat expansion was associated with poorer performance on language-related tasks. Conclusions Differences were elicited in cognition and behaviour between bvFTD and FTD-ALS, and patients carrying the C9ORF72 repeat expansion. The findings, which raise the possibility of phenotypic variation between bvFTD and FTD-ALS, have clinical implications for early detection of FTD-ALS and theoretical implications for the nature of the relationship between FTD and ALS.
INTRODUCTION
An association between frontotemporal dementia (FTD) and amyotrophic lateral sclerosis (ALS) is well established on clinical, pathological and genetic grounds, yet the precise nature of the relationship remains controversial.
Up to 15% of people with ALS develop FTD 1 and a similar proportion of people with FTD develop ALS. 2 3 Similarities in profile of cognitive impairment have been identified in the two disorders, although more severe in FTD. 4 TDP-43 pathology occurs in both conditions. 5 So too do expansions in the C9ORF72 gene. [6][7][8][9] Such convergent evidence supports the notion of a spectrum or continuum of disease. 3 10 On the other hand, some authors 11 have reported distinct cognitive profiles in ALS and ALS-FTD, and explicitly argue against the notion of a continuum. Moreover, while FTD-ALS is pathologically homogenous, invariably being associated with TDP-43 pathology, 5 half of FTD cases without ALS have alternative pathologies: tau or fused-in-sarcoma. 12 Furthermore, of the three main genes implicated in FTD: C9ORF72, PGRN and MAPT, only C9ORF72 is associated with FTD-ALS. Therefore FTD-ALS is predictive of a pathological and genetic signature in a way that FTD alone is not. It would be reasonable to infer that not all patients with FTD are equally vulnerable to developing ALS.
An important question is whether it is possible to identify potentially vulnerable patients with FTD on clinical grounds. Specifically, are there phenotypic differences between patients who have FTD with and without accompanying ALS? The issue has clinical relevance for early detection of FTD-ALS and patient management as well as having theoretical implications for the relationship between FTD and ALS.
There is some limited evidence for phenotypic differences between FTD and FTD-ALS. FTD encompasses three canonical clinical syndromes: behavioural variant FTD (bvFTD), semantic dementia (SD), also known as semantic variant primary progressive aphasia and progressive nonfluent aphasia (PNFA)/non-fluent variant primary progressive aphasia. SD and PNFA, at least in their pure forms, are rarely associated with FTD-ALS, 2 13 raising the possibility of a more uniform clinical phenotype in FTD-ALS, associated with behaviour change, compared with FTD alone.
There have, however, been few direct comparisons of cognition and behaviour in bvFTD and FTD-ALS and the limited evidence is inconsistent. De Silva et al 14 reported greater behavioural change in bvFTD than FTD-ALS whereas Lillo et al 15 found no differences in frequency of behavioural symptoms, although identified higher rates of aphasia and psychosis in FTD-ALS. Our own retrospective study of bvFTD and FTD-ALS raised the possibility of more frequent agrammatism and impaired syntactic comprehension in FTD-ALS and greater social disinhibition and reduced empathy in bvFTD. 16 That study was limited by its retrospective nature, reliance on presence/absence of symptoms or deficits rather than quantitative measurement, and lack of control for motor deficits in FTD-ALS. In that and other studies there was no exploration of the potential genetic contribution to clinical phenotype.
The aim of the present study was to compare cognition and behaviour in bvFTD and FTD-ALS. The study incorporates assessment of language, executive functions and social cognition, inclusion of appropriate motor controls, behavioural and neuropsychiatric measures applicable to FTD, and analysis of genetic contributions to the cognitive and neuropsychiatric profiles. We anticipated greater behavioural change in bvFTD than FTD-ALS, with changes in FTD-ALS being dominated by apathy. We also predicted that deficits in language processing would occur more frequently in FTD-ALS. Given the known heterogeneity of FTD however, it was anticipated that there would be a degree of variation within and overlap between the groups, in part influenced by genetic contributions.
METHODS
This is a prospective cross-sectional comparative group study. It involved consecutive patients who agreed to participate and fulfilled the criteria for the study during the recruitment period.
Participants
The study included patients with a clinical diagnosis of bvFTD or FTD-ALS, and healthy volunteers. Patients were recruited between December 2014 and September 2017 from specialist cognitive or motor neuron disease clinics at Salford Royal NHS Foundation Trust (the Cerebral Function Unit), the Walton Centre NHS Foundation Trust, Lancashire Teaching Hospitals NHS Trust and Sheffield Teaching Hospitals. Clinical diagnoses were made by specialist neurologists and supported in most cases by detailed neuropsychological evaluation. Patients fell into the mild to moderate range of impairment as measured by the Clinical Dementia Rating (CDR) scale, modified for use with patients who have FTD. 17 All patients fulfilled contemporary diagnostic criteria for bvFTD. 18 Patients with FTD-ALS also met El Escorial criteria for ALS. 19 Patients with FTD-ALS were excluded if they fell into the 'very severe' range of disability (score <12), as measured by the revised ALS Functional Rating Scale, 20 or if they required mechanical respiratory support. Healthy controls were recruited through the Cerebral Function Unit's ethically approved research register (Salford Royal NHS Foundation Trust) or Join Dementia Research. Participants were excluded if there was evidence of significant cerebrovascular disease, history of head injury, alcohol or drug abuse, or other neurological or medical disorders that might affect cognition. Participants were required to have premorbid fluency in English, as several tasks were designed to assess language. Patients' caregivers were invited to complete behavioural interviews/questionnaires. Assessments were carried out in a hospital setting or in the patient's home according to personal preference.
Cognitive assessment
Assessment of participants focused on language, executive skills and social cognition, known to be impaired in FTD and ALS. Language tests included The Graded Naming test 21 of confrontation naming, the Object and Action naming test 22 allowing comparison of noun and verb naming, the Pyramids and Palm Trees test 23 of semantic association for words and pictures, the Psycholinguistic Assessment of Language Processing in Aphasia (PALPA) 24 test of spelling to dictation (subtest 40) and sentence comprehension (subtest 55) and a locally developed sentence ordering test. The latter requires patients to rearrange five randomly presented printed words to form a sentence (eg, they went to the beach) and was included because of its proven sensitivity to grammatical impairments in FTD. 25 Executive tests comprised letter and category fluency, and sorting tests from the Delis-Kaplan Executive Function System battery (DKEFS), 26 and the Hayling and Brixton tests 27 to assess response inhibition and rule abstraction and set shifting. Social cognition was assessed by a Judgement of Preference from Eye Gaze task 28 and emotion recognition using the Ekman and Friesen faces. 29 Assessment lasted 2-3 hours and was administered over separate sessions to patients to reduce fatigue. To accommodate patients' motor difficulties either oral or written responses were permitted. For Verbal Fluency, a Verbal Fluency Index, which represents the average 'thinking time' per word, was calculated, as previously described. 30
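As a hedged illustration of the Verbal Fluency Index mentioned above, the sketch below computes mean 'thinking time' per word by discounting the time needed merely to read/copy the generated words; the exact scoring follows the cited method [30], so this formula and the example values are a simplification.

```python
# Hedged sketch of a Verbal Fluency Index: mean "thinking time" per word
# after discounting motor output time. The exact scoring follows the cited
# method [30]; this formula and the example values are a simplification.
def verbal_fluency_index(task_seconds: float, words_generated: int,
                         copy_seconds: float) -> float:
    """Task time minus the time the same participant needs merely to
    read/copy the generated words, divided by the number of words."""
    if words_generated == 0:
        return float("inf")
    return (task_seconds - copy_seconds) / words_generated

# 60 s task, 12 words generated, 18 s to copy those words -> 3.5 s/word.
print(verbal_fluency_index(60, 12, 18))
```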
Behaviour assessment
Behaviour assessment, of patients only, was carried out through caregiver interview. It included the neuropsychiatric inventory (NPI), 31 which covers 12 behavioural dimensions, rated for both severity and frequency, and The Family Rating version of the Frontal Systems Behaviour Scale (FrSBe), 32 a 46-item rating scale, yielding three subscale scores: apathy, disinhibition and executive dysfunction. The presence or absence of behavioural features (disinhibition, apathy/inertia, social/emotional change, stereotypies, dietary change) from the international consensus criteria for bvFTD 18 was recorded through structured interview.
Genetic analysis
Patients were invited to provide a blood sample to be screened for known FTD genes. Genotyping was carried out using the Ion PGM System for next generation sequencing. Testing for the hexanucleotide repeat expansion in C9ORF72 was carried out using a repeat primed PCR method. 7 Where patients were not screened this was for logistical reasons.
Statistical analysis
Data were analysed using IBM SPSS Statistics V.25. Group comparisons were carried out using analysis of variance with post-hoc Gabriel tests or t-tests for demographic data, and Kruskal-Wallis and Mann-Whitney tests for cognitive and behavioural data, which were not normally distributed. Wilcoxon tests were used for related samples. χ² and Fisher's exact tests were used for categorical variables as appropriate. Significance values are shown in the tables uncorrected for multiple comparisons, to minimise the risk of masking potentially informative data: the relatively small sample size of the patient groups limits the power to detect significant differences between bvFTD and FTD-ALS. Corrected results are noted in the text.
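For illustration, the sketch below reproduces the shape of this analysis pipeline with SciPy in place of SPSS; the group scores are randomly generated placeholders, and the 2x2 table uses the probable-versus-possible bvFTD counts reported later in the Results.

```python
# Hedged sketch of the analysis pipeline above, using SciPy in place of
# SPSS. Group scores are randomly generated placeholders; the 2x2 table
# uses the probable-vs-possible bvFTD counts reported in the Results.
import numpy as np
from scipy.stats import kruskal, mannwhitneyu, fisher_exact

rng = np.random.default_rng(0)
bvftd = rng.normal(10, 3, 23)       # placeholder test scores
ftd_als = rng.normal(9, 3, 20)
controls = rng.normal(15, 2, 30)

h_stat, p_overall = kruskal(bvftd, ftd_als, controls)  # 3-group comparison
if p_overall < 0.05:                                   # post-hoc pairwise
    _, p_pair = mannwhitneyu(bvftd, ftd_als, alternative="two-sided")

# Categorical comparison: probable vs possible bvFTD in each patient group.
odds, p_cat = fisher_exact([[19, 4], [11, 9]])
```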
Study cohort
Seventy-one patients were approached and 46 agreed to participate (25 bvFTD and 21 FTD-ALS). Three patients (two bvFTD and one FTD-ALS) were later excluded due to diagnostic uncertainty. Forty healthy controls were initially recruited; however, to reduce the age disparity between groups the ten youngest were excluded. It was not possible to collect behavioural data from four caregivers. The final cohort consisted of 23 patients with bvFTD, 20 patients with FTD-ALS and 30 healthy controls, together with 39 caregivers. 19/23 (82%) bvFTD and 11/20 (55%) patients with FTD-ALS fulfilled criteria for probable, as opposed to possible, bvFTD 18 : they had evidence both of functional decline and of frontal and/or temporal atrophy on neuroimaging, in addition to their cognitive and behavioural disorder. The group difference reaches statistical significance (χ²=3.9, p=0.05). Scans in two patients with bvFTD and six patients with FTD-ALS were reported to show generalised atrophy, and one patient with bvFTD and one patient with FTD-ALS had a normal scan. Imaging was not available for one patient with bvFTD and two patients with FTD-ALS. Notably, two patients with bvFTD and two patients with FTD-ALS with generalised atrophy or a normal scan had a positive C9orf72 repeat expansion (see the Genetics section), providing confirmation of the FTD diagnosis. Of the FTD-ALS group, 13 had presented initially to an ALS clinic and 7 to a specialist dementia clinic. Eleven caregivers reported noticing cognitive symptoms first, six motor symptoms and three cognitive and motor symptoms simultaneously. Thirteen patients with FTD-ALS had some degree of bulbar involvement at the time of testing.
There were some demographic differences between the three groups (table 1). Post-hoc comparisons between group pairs showed that the control group included more female participants than both patient groups (p=0.03), controls were younger than the FTD-ALS group (p=0.03) and had more years of education than the bvFTD group (p=0.02). The bvFTD group had more years of illness than the FTD-ALS group. Other comparisons were non-significant.
Cognition
Kruskal-Wallis tests showed highly significant group differences on all cognitive tests, with p<0.001 for all measures apart from the PALPA sentence comprehension and Brixton tests, which elicited significance levels of p=0.002. Subsequent Mann-Whitney U tests revealed that these striking differences lay between patients and controls (table 2). Only subtle differences were elicited between bvFTD and FTD-ALS. Patients with FTD-ALS performed more poorly on category fluency and showed a trend towards greater difficulty on sentence ordering. Those differences between FTD and FTD-ALS do not survive correction for multiple comparisons.
Frequency of behaviour change
Behavioural changes were commonly reported in both bvFTD and FTD-ALS and encompassed symptoms within each of the five domains specified by current diagnostic criteria for bvFTD (table 3). 18 Nevertheless, there were notable differences. Whereas apathy was virtually ubiquitous in both groups, disinhibited behaviours, reduced sympathy and empathy, and repetitive behaviours, particularly simple motor mannerisms, were significantly more common in bvFTD. Changes in dietary habits were also numerically more frequent in bvFTD although differences did not reach statistical significance.
Overall, informants of patients with bvFTD described in the patient a higher number of altered behaviours than did informants of patients with FTD-ALS (figure 1).
Quantitative behavioural scales
The NPI revealed small significant differences between bvFTD and FTD-ALS that were in line with the frequency data: greater agitation and behavioural disinhibition in bvFTD (table 4). There was also a trend towards greater apathy, elation and irritability in bvFTD and depression in FTD-ALS.
The FrSBe elicited greater change in bvFTD in the disinhibition and executive but not apathy domains of behaviour (table 4). The level of apathy, disinhibition and executive dysfunction reported by informants before the onset of the patient's illness did not differ in the two groups, excluding the possibility that differences were influenced by premorbid factors.
Self-report versus informant-based report
The FrSBe data are based on informant reports of behavioural change. Self-reports were also obtained from a subset of patients (15 bvFTD and 8 FTD-ALS). Patients in both groups reported less change than their corresponding informant, although the disparity between self-report and informant report was greater, and reached statistical significance, only in bvFTD (table 5).
FTD-ALS relationships Cognitive versus motor onset
No significant cognitive or behavioural differences were identified in patients with FTD-ALS depending on whether cognitive or motor symptoms were noticed first.
Cognitive clinic versus Motor Neurone Disease (MND) clinic
Patients presenting to a cognitive clinic showed greater disinhibition than those presenting to an MND clinic (z=−2.3, p=0.02).
Other comparisons were non-significant.
Bulbar signs
Patients with FTD-ALS with bulbar signs exhibited greater cognitive impairment than those without, particularly on language tasks: object naming z=−2.
Genetics
Thirty-three patients (19 bvFTD and 14 FTD-ALS) were screened for the C9ORF72 hexanucleotide repeat expansion and 27 for other known FTD genes (17 bvFTD and 10 FTD-ALS). Six bvFTD (32%) and seven FTD-ALS (50%) patients were positive for the C9ORF72 expansion. Two patients with bvFTD had a mutation in the MAPT gene and two in the progranulin gene. Patients with the C9ORF72 expansion performed more poorly than those without on spelling, sentence comprehension and block sorting and there was a trend towards poorer sentence ordering, semantic association and category fluency (table 6). No significant differences were elicited on behavioural measures or severity of illness measured by duration of illness or CDR ratings.
DISCUSSION
This prospective study examined the hypothesis, arising from our earlier retrospective study, 16 that bvFTD is associated with greater behavioural change and FTD-ALS more marked language change. The current study, involving an independent cohort of patients, provided a more in-depth analysis of behaviour, language, executive and social cognition than previously available. It incorporated measurements of severity as well as presence/absence of abnormality, test procedures that control for motor deficits, the inclusion of a healthy control group and comparisons of behavioural change based on patients' and informants' report. It explored the relationship of cognitive/behavioural change to motor disability in FTD-ALS and the influence of genetic mutations. The study's prospective nature confers the advantage of more systematic and controlled administration of cognitive tests and behavioural interviews by a single examiner.
The bvFTD and FTD-ALS groups both showed striking impairments in cognitive performance compared with controls, confirming the sensitivity of the language, executive and social cognition measures to FTD. The two patient groups showed largely similar cognitive profiles. The data did, however, suggest subtle differences: poorer performance in FTD-ALS on category fluency and a trend towards poorer performance in ordering words to form a grammatical sentence. Those differences need to be interpreted with caution, because they do not survive correction for multiple comparisons. Nevertheless, the differences are not arbitrary. Poor verbal fluency has been documented as a prominent feature of ALS 30 and has been identified as poorer in FTD-ALS than bvFTD using different fluency measures. The greater difficulty cannot be ascribed to motor slowing in FTD-ALS because the fluency measures control for motor speed by calculating the time to generate items in relation to the time to read/copy those same items. The suggestion of greater problems with grammar in FTD-ALS than bvFTD is in keeping with findings from our previous retrospective study involving an independent cohort of patients. 16 They are consistent too with independent reports of syntactic impairments in both FTD-ALS and ALS. 33 34 Studies have also identified significant semantic impairments in FTD-ALS. 35

Behavioural changes were common in both patient groups (in keeping with the selection criteria that patients should fulfil criteria for the behavioural form of FTD on the basis of behaviour and executive changes). Nevertheless, whereas apathy predominated and was ubiquitous in FTD-ALS, patients with bvFTD showed more widespread behavioural changes, and typically endorsed more behavioural features from diagnostic criteria. Disinhibition, impulsivity, loss of empathy and repetitive behaviours were all significantly more common in bvFTD. These findings reinforce previous observations. 14 16

Severity of illness is unlikely to provide an adequate account of the observed behavioural differences. Despite differences in duration of symptoms, severity as measured by the FTLD-modified CDR 17 did not differ between groups. Moreover, the two groups were largely matched in their cognitive performance, and where differences occurred these were in the direction of poorer performance in FTD-ALS. Furthermore, if behavioural differences were an artefact of disease severity alone, more behavioural changes overall might be anticipated but not differential impairment in specific domains. Arguably, the physical limitations in people with FTD-ALS might reduce patients' capacity to exhibit certain behaviours, such as repetitive behaviours or disinhibition. It is also possible that caregivers might under-report behaviour changes. Their focus on practical management of patients' physical disability might reduce their attention to behaviour, or else they might attribute behavioural changes to a natural reaction to a life-changing diagnosis. Yet caregiver under-reporting would not account for the disproportionately high occurrence in FTD-ALS of apathy relative to other domains of behavioural change. The semistructured interview techniques, with provision of specific examples of behaviour and comparison of behaviours before and after illness onset, aimed to mitigate potential secondary effects of ALS. Patients with bvFTD significantly under-reported behaviour changes compared with their informant.
The disparity between informant and self-report of symptoms was, moreover, substantially greater than in the FTD-ALS group. This novel finding suggests a greater reduction in insight in bvFTD, in keeping with their more marked behavioural change.
Within the FTD-ALS cohort, there were no cognitive or behavioural differences as a function of nature of onset: cognitive/behavioural versus motor. While small numbers might arguably explain the lack of statistical difference, comparably small numbers did elicit systematic statistical differences on language tasks in patients with and without bulbar signs. The findings suggest that onset type is not a major determining factor and that the terms FTD-ALS and ALS-FTD may be used interchangeably. It is instructive that patients with FTD-ALS presented more commonly to an ALS clinic than to a specialist dementia clinic, yet more caregivers noted behavioural/cognitive before motor symptoms. Moreover, some caregivers reported simultaneous development of motor/cognitive symptoms or expressed uncertainty, suggesting that the evolution of symptoms may be blurred. Findings from other studies suggest that the designation FTD-ALS may be more appropriate than ALS-FTD. A large study of ALS 11 distinguished between motor-predominant and behavioural-predominant phenotypes. Although patients with a motor presentation developed alterations in cognition and behaviour over time, these were less severe and more circumscribed than in patients with a behavioural presentation and did not fulfil criteria for bvFTD. In ALS-FTD, motor symptoms rarely preceded the onset of behaviour change. Other authors have highlighted the lack of congruity between motor and cognitive/behavioural decline in ALS, 14 again suggesting that the term FTD-ALS might be a more appropriate designation for the behavioural disorder.
The potential influence of genetic factors is intriguing. Repeat expansions in the C9ORF72 gene were present in six bvFTD and seven patients with FTD-ALS: 32% and 50%, respectively of those who were tested. Tasks on which expansion carriers were more impaired than non-carriers, or showed a trend towards greater impairment, all make substantial linguistic demands: spelling, sentence comprehension, block sorting based on semantic/verbal rules, sentence ordering, semantic association and category fluency. Such a pattern suggests a specific association between language system dysfunction and C9ORF72 repeat expansions. This might feasibly drive the subtly greater language impairments in FTD-ALS than bvFTD observed in this study.
We did not observe neuropsychiatric differences in people with and without the C9ORF72 expansion contrary to previous reports. 8 9 36 This likely reflects the small numbers and, possibly, selection bias against psychotic symptoms in a study requiring voluntary participation.
The small number of patients with C9ORF72 expansions precluded meaningful sub-comparisons of cognition and behaviour in bvFTD and FTD-ALS as a function of C9ORF72. Nevertheless, the findings in the whole C9ORF72 positive group are sufficient to suggest that the repeat expansion exerts an influence on patients' cognitive profile.
C9ORF72 repeat expansions might also feasibly contribute to group differences in the proportion of patients showing frontotemporal atrophy on structural neuroimaging. A normal scan or generalised atrophy occurred in both groups in association with the presence of C9ORF72 expansions. This is in line with previous observations that atrophy in C9ORF72 patients may be less strikingly focal than in other forms of FTD. 8 9 Genetic screening was not available for one bvFTD and five patients with FTD-ALS in whom generalised atrophy was reported. The possibility of C9ORF72 positivity in those patients cannot be excluded. Clinical scans were carried out in different diagnostic centres so reporting differences can also not be ruled out.
The principal limitation of the study is the relatively small size of the bvFTD and FTD-ALS groups. Some participants could not complete all tasks, further diminishing group size. There was, in consequence, inherently limited statistical power to detect differences, particularly as the cohort of patients proved to be variable with regard to severity of symptoms, despite endeavours to select patients in the mild-to-moderate stages of disease. The data do, nevertheless, reinforce findings from our earlier retrospective study 16 involving an independent patient cohort and they serve as pointers to possible differences that require prospective investigation in larger-scale studies.
Within-group heterogeneity was particularly evident on cognitive testing, with some patients in both groups showing impairment on language tasks and others performing relatively well. The suggestion in this study that genetic factors may play a role highlights also the need to consider distinct genetic and sporadic variants in future large-scale comparative studies of bvFTD and FTD-ALS.
A related limitation of this study stems from the fact that a large battery of tests was administered. The rationale was to encompass the spectrum of cognitive and behavioural domains affected in bvFTD. The inevitable consequence is that the relatively subtle group differences do not survive correction for multiple comparisons. As noted above, however, identified differences were not isolated but rather constitute a coherent pattern, and are in line with predictions and previous findings. They are unlikely therefore to have occurred due to chance alone.
The possibility cannot be excluded that some patients in the bvFTD group will later develop ALS. Indeed, two patients initially recruited into the bvFTD group were later reclassified. However, misclassifications are likely to be rare. Of the 23 patients with bvFTD in the study, 21 were followed up for at least 1-year post-study and had not exhibited signs of ALS. Moreover, the mean duration of symptoms in the bvFTD group at the time of assessment was 5 years. Current evidence suggests that the risk of developing ALS declines with the duration of FTD symptoms and is unlikely after 5 years. 37 In any event, misclassifications would have the effect of masking rather than exaggerating differences between bvFTD and FTD-ALS, suggesting that identified differences are likely to be real.
There are potential clinical implications of the study. If prominent verbal fluency and other language difficulties, occurring in the context of prevailing apathy, prove to be predictors of FTD-ALS, then patients with bvFTD exhibiting those symptoms might be especially vulnerable to developing ALS and should be monitored closely.
There are potential theoretical implications too. The notion of a continuum of disease between FTD and ALS 3 10 is attractive, yet it presents challenges. Heterogeneity in underlying pathology and genetic mutations suggests that not all patients with FTD are vulnerable to developing FTD-ALS. Our findings indicate commonalities between bvFTD and FTD-ALS but also sufficient differences to raise the possibility of FTD-ALS as a distinct clinical phenotype. We speculate that FTD-ALS is not simply the summation of ALS and FTD, but rather a specific behavioural/cognitive entity, allied to bvFTD but with specific pathology, linked genes and clinical characteristics.
CONCLUSION
The data suggest subtle differences between bvFTD and FTD-ALS in both behavioural and language profiles, which are not simply a function of illness duration or overall severity of disease. Repeat expansions in the C9ORF72 gene may contribute to those differences. A task of future studies is to clarify the factors that contribute to phenotypic variation, both within and between these groups.
Estimating Canopy Nitrogen Content of Rice Using Hyperspectral Reflectance Combined with SG-FD-CARS-ELM in Cold Region
In this study, visible and near-infrared hyperspectral imaging was used to predict the canopy leaf nitrogen content (CLNC) of rice in a cold region. Canopy hyperspectral images of rice were acquired at the tillering, jointing and heading stages, respectively. Original spectra were extracted using ENVI 5.0 software, and leaf nitrogen content was obtained by chemical analysis. Five pre-processing methods, Savitzky-Golay smoothing (SG), multiplicative scatter correction (MSC), standard normal variate (SNV), first derivative (FD) and second derivative (SD), were used to eliminate unexpected noise. After comparing the performance of PLSR models based on full-wavelength spectra after pre-processing, SG combined with FD performed best at eliminating noise interference and improving model performance. To further simplify and enhance the models, three variable selection methods, successive projections algorithm (SPA), uninformative variable elimination (UVE) and competitive adaptive reweighted sampling (CARS), were used to select characteristic wavelengths, and partial least squares regression (PLSR) and extreme learning machine (ELM) were used to establish prediction models. After comparing the performance of the PLSR and ELM models, CARS effectively selected wavelengths that carried strong information and were insensitive to external disturbance factors, and the nonlinear ELM model was more suitable for predicting the CLNC of rice in a cold region: the specific values of R²C and R²P of the CARS-based ELM models were 0.906 and 0.888 at tillering stage, 0.903 and 0.892 at jointing stage, and 0.894 and 0.887 at heading stage, respectively. The results of this study provide a reference for quantitative analysis of the nitrogen content of rice using hyperspectral technology.
Introduction
Nitrogen is one of the essential nutrients for rice growth and the most active factor in soil fertility [1,2]. Within a certain range of nitrogen application, the nitrogen uptake, nitrogen utilization efficiency, traits and yield of rice improve as nitrogen application increases. However, excessive nitrogen application leads to reduced nitrogen utilization efficiency, soil degradation, decreased rice yield and inferior grain quality, and may even cause ecological pollution. Therefore, rapid diagnosis of the nitrogen status of rice is very meaningful for rational and accurate application of nitrogen fertilizer. Traditional nitrogen diagnostic methods are visual diagnosis, chemical diagnosis and chlorophyll meter diagnosis. Visual diagnosis is intuitive, but it easily causes confusion and misjudgment. Chemical diagnosis is more accurate, but it is labour-intensive and costly. Chlorophyll meter diagnosis can only quantitatively estimate the nutritional content of a specified leaf, which makes it difficult to reflect the nutritional status of large areas of cropland [3-5]. Therefore, traditional diagnosis methods have been unable to meet the actual demands of large-scale rice production in time and space.

Hyperspectral technology has the advantages of convenience, accuracy and environmental friendliness, and has become one of the most effective techniques for crop nutrition diagnosis. At present, there are many research results in this field, involving grain crops [6], fruit trees [7], vegetables and so on. Many results have also been achieved in the nitrogen nutrition diagnosis of rice. Tian found that the ratio vegetation index constructed from 553 and 537 nm contributed well to estimating the leaf nitrogen content of rice [8]. Chu used the ratio vegetation index constructed from 770 and 752 nm to predict the nitrogen accumulation of rice leaves [9]. Yu analyzed the relationships between nitrogen content and canopy spectra of rice under different nitrogen levels, and found that the optimum multiple narrow band reflectance (OMNBR) model established by the maximum R² improvement (MAXR) method could improve the accuracy of nitrogen content prediction [10]. Qin concluded that a linear model constructed from the ratio of the first derivatives at 738 and 522 nm, selected from a contour plot of determination coefficients, was the optimal model for predicting the nitrogen content of rice [11]. Similar to the above studies, scholars have mostly used sensitive wavelengths, or vegetation indices constructed from them, to establish simple linear or nonlinear models for predicting the nitrogen content of rice. The advantages of this approach are that it is simple, intuitive, computationally fast and easy to implement. The disadvantages are that the accuracy is generally low, the anti-interference ability is poor, and the models differ considerably between studies. At present, there are few studies that use various pre-processing methods to filter original spectra and multiple variable selection methods to extract characteristic wavelengths, and then establish higher-precision linear and nonlinear models. Raw spectra obtained by hyperspectral techniques usually contain obvious noise and a large amount of irrelevant information that weakens model performance, especially for canopy spectra collected in the field [12].
Therefore, it is necessary to eliminate uninformative variables and select key variables before using hyperspectral data to quantitatively analyze the nitrogen content of rice. Meanwhile, recent studies have shown that nonlinear models have a more pronounced advantage in the quantitative analysis of plant nutrition than linear models [13]. It is therefore very meaningful to establish nonlinear models to predict the nitrogen content of rice. In existing work, there are also few studies on hyperspectral monitoring of the nitrogen content of rice in the cold region of northeast China; in particular, the monitoring periods, indicators and models at the canopy scale still lack systematic research. Heilongjiang Province, located in northeastern China, is not only the largest rice-growing area in the cold region, but also the most important commodity grain base in China. Using hyperspectral technology to monitor the nitrogen content of rice, and providing rich and comprehensive research results, is of great significance for guaranteeing rice yield and quality. Therefore, rice cultivated in Heilongjiang Province was selected as the research object of this study. On the basis of experiments with different nitrogen levels, canopy spectral information of rice at the tillering, jointing and heading stages was obtained by visible and near-infrared hyperspectral imaging. The performance of various pre-processing methods, variable selection methods and modeling methods in predicting the canopy leaf nitrogen content (CLNC) of rice was systematically compared, yielding the optimal pre-processing method, wavelength selection method and modeling approach for quantitative analysis of the CLNC of rice in a cold region.
Experimental Design
The field experiments were carried out in 2016 in the area of Harbin, Heilongjiang Province, as shown in Figure 1.
The climate is medium-temperate continental monsoon, with very cold winters and warm summers. The annual average temperature and precipitation are 3-4°C and 500-800 mm, respectively. The climatic characteristics are suitable for many field crops (e.g. rice, soybeans, wheat and corn), which have only one harvest per year, and the most widely planted crop in this area is rice. In this study, the experimental rice variety was Daohuaxiang, the major cultivar grown in Heilongjiang Province. The experimental soil was meadow paddy soil, with organic matter content of 35.5 g·kg⁻¹, total nitrogen content of 1.44 g·kg⁻¹, effective phosphorus content of 51.8 g·kg⁻¹, available potassium content of 111 g·kg⁻¹ and pH of 6.30. The experimental field was designed with four replications and four nitrogen gradient treatments, N0 (0 kg·ha⁻¹), N1 (60 kg·ha⁻¹), N2 (120 kg·ha⁻¹) and N3 (180 kg·ha⁻¹), which were used to obtain a large range of nitrogen contents. The individual plot size was 4 by 4 m. Other field management practices, such as irrigation and pesticide application, followed local standard practices.
Canopy Images Acquisition
The canopy hyperspectral images of rice were captured with a SOC710VP® HS-Portable imager (Surface Optics Corp., CA, USA). Its spectral range was 372-1038 nm and its resolution was 4.68 nm. The canopy images were obtained between 10:00 and 14:00 local time under clear, cloudless conditions on June 25th, July 15th and August 20th, corresponding to the tillering, jointing and heading stages, respectively. Prior to image acquisition, calibration measurements were taken with a white reference panel. When the images were captured, the hyperspectral equipment was placed at a height of 1 m above the rice canopy, two positions were randomly selected for imaging in each plot, and two rice plants were contained in each image. HyperScanner software (Surface Optics Corp., CA, USA) was used for the acquisition and transfer of the hyperspectral images. Canopy hyperspectral images of rice at tillering stage under different nitrogen levels are shown in Figure 2.
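As a hedged sketch of the calibration step above, the snippet below converts raw counts to relative reflectance against the white-panel reference; the dark-current term is an assumption added for completeness, since the text mentions only the white panel.

```python
# Hedged sketch of the white-panel calibration step described above. The
# dark-current term is an assumption added for completeness; the text
# mentions only the white reference panel.
import numpy as np

def to_reflectance(raw, white, dark=None, eps=1e-9):
    """Convert raw hyperspectral counts to relative reflectance.
    raw, white, dark: float arrays of shape (rows, cols, bands)."""
    raw = np.asarray(raw, dtype=np.float64)
    white = np.asarray(white, dtype=np.float64)
    dark = np.zeros_like(raw) if dark is None else np.asarray(dark, float)
    return (raw - dark) / np.maximum(white - dark, eps)
```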
Nitrogen Content Measurements
After canopy image acquisition, the top 10 leaves of each rice plant in the captured images were cut off and put into numbered sealed bags. All leaves were rinsed with water, placed in an oven at 105°C for 30 min, dried at 80°C to constant weight, then crushed and digested by the Kjeldahl method. The total nitrogen content of the leaves was measured with an AA3 flow analyzer (SEAL Analytical Corp., Norderstedt, Germany) according to the indophenol blue method, and the average value of the 10 leaves was taken as the CLNC of the corresponding rice plant.
Reflectance Measurements
A total of 96 canopy images were captured at the 3 growth stages, and the reflectance measurements were performed with ENVI 5.0 software (Research Systems Inc., CO, USA). Five regions of interest (ROI) were selected for each rice plant, and the average reflectance was taken as the canopy reflectance of the corresponding plant. At tillering stage, 64 sets of data were obtained; after removing 2 sets of abnormal data, 42 sets were randomly selected as the calibration set and the remaining 20 formed the prediction set. In the same way, at jointing stage, 41 sets were randomly selected as the calibration set and 20 sets formed the prediction set; at heading stage, 42 sets were randomly selected as the calibration set and 20 sets formed the prediction set.
Spectral Pre-processing
In both spectroscopy and hyperspectral imaging, the spectra are often affected by various disturbances. For example, the path length of light transmission is usually affected by the thickness of the sample, and measured values are often affected by physical properties such as particle size and distribution [14]. Spectral pre-processing was performed using Unscrambler software (Version 9.7, CAMO, Oslo, Norway). The purpose of spectral pre-processing is typically to eliminate the influence of light scattering, background noise, baseline shift, and random error caused by uncontrolled external factors [15]. In this study, 5 spectral pre-processing methods, namely Savitzky-Golay smoothing (SG), multiplicative scatter correction (MSC), standard normal variate (SNV), first derivative (FD) and second derivative (SD), were applied in 11 strategies: SG, MSC, SNV, FD, SD, SG-FD, SG-SD, MSC-FD, MSC-SD, SNV-FD and SNV-SD. SG filters out high-frequency noise in spectral data, MSC is a transformation used to compensate for additive and multiplicative effects, and SNV is commonly applied to remove variability caused by light scattering [16]. FD and SD are often used to remove background noise and baseline drift and to enhance small spectral features [17]. To screen out the optimal pre-processing method from the above, PLSR was used to model and predict using the spectral data after each pre-processing strategy. The appropriate pre-processing method was then selected according to the determination coefficient (R²) and root mean square error (RMSE) of the calibration and prediction sets.
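A hedged sketch of these transforms is given below using SciPy/NumPy; the Savitzky-Golay window length and polynomial order are assumptions for illustration, not the settings used in this study.

```python
# Hedged sketch of the pre-processing transforms compared in this study.
# The Savitzky-Golay window length and polynomial order are assumptions,
# not the paper's settings. X has shape (samples, wavelengths).
import numpy as np
from scipy.signal import savgol_filter

def sg(X, window=11, poly=2, deriv=0):
    """SG smoothing (deriv=0), first (deriv=1) or second (deriv=2)
    derivative along the wavelength axis."""
    return savgol_filter(X, window_length=window, polyorder=poly,
                         deriv=deriv, axis=1)

def snv(X):
    """Standard normal variate: per-spectrum centering and scaling."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def msc(X, reference=None):
    """Multiplicative scatter correction against a reference spectrum."""
    ref = X.mean(axis=0) if reference is None else reference
    out = np.empty_like(X, dtype=np.float64)
    for i, spectrum in enumerate(X):
        slope, intercept = np.polyfit(ref, spectrum, deg=1)
        out[i] = (spectrum - intercept) / slope
    return out

# The strategy found best here, SG followed by FD:
# X_sgfd = sg(sg(X, deriv=0), deriv=1)
```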
Characteristic Wavelengths Selection
Hyperspectral image data are characterized by their three-dimensionality, with multicollinearity and redundancy among contiguous wavelengths, which makes data processing time-consuming and can weaken model performance [18]. Therefore, the most informative wavelengths should be selected from the whole spectral range of the samples to reduce or even eliminate redundancy, thus speeding up data processing and improving the efficiency of data analysis [19]. In this study, 3 variable selection methods, namely successive projections algorithm (SPA), uninformative variable elimination (UVE) and competitive adaptive reweighted sampling (CARS), were used for wavelength selection in 4 strategies: SPA, UVE, UVE-SPA and CARS. SPA, UVE and CARS are all typical variable selection methods for spectral analysis [20-22]. SPA selects variables with minimal redundancy to solve collinearity problems [23]. UVE selects informative variables according to their stability, calculated from PLSR regression analysis [24]. In the CARS calculation, wavelengths with larger absolute PLSR regression coefficients are considered good candidates and are selected according to the 'survival of the fittest' principle from Darwin's theory of evolution [25]. In addition, besides running SPA on the full wavelengths, SPA is also commonly carried out after UVE to select variables that are informative but not collinear.
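As a hedged sketch of the UVE stability criterion just described, the snippet below appends a block of tiny random 'uninformative' variables, refits PLSR under leave-one-out resampling, and keeps wavelengths whose coefficient stability exceeds the noise cutoff; the component count, noise amplitude and threshold rule are assumptions, not this study's settings.

```python
# Hedged sketch of the UVE stability criterion. A block of tiny random
# "uninformative" variables is appended, PLSR is refit under leave-one-out
# resampling, and wavelengths whose coefficient stability (mean/std across
# refits) exceeds the noise cutoff are kept. Component count, noise
# amplitude and threshold rule are assumptions, not the paper's settings.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def uve_select(X, y, n_components=8, seed=0):
    n, p = X.shape
    rng = np.random.default_rng(seed)
    Xa = np.hstack([X, rng.normal(scale=1e-10, size=(n, p))])
    coefs = np.empty((n, 2 * p))
    for i in range(n):                              # leave-one-out refits
        keep = np.arange(n) != i
        pls = PLSRegression(n_components=n_components)
        pls.fit(Xa[keep], y[keep])
        coefs[i] = pls.coef_.ravel()
    stability = coefs.mean(axis=0) / coefs.std(axis=0)
    cutoff = np.abs(stability[p:]).max()            # max over noise block
    return np.where(np.abs(stability[:p]) > cutoff)[0]
```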
Models Establishment and Evaluation
The application of chemometrics to modeling spectral data is widespread and is considered a standard procedure for establishing prediction models in the analysis of hyperspectral images [26]. In this study, partial least squares regression (PLSR) and extreme learning machine (ELM) were used to establish prediction models between the spectral data of samples and the corresponding CLNC. PLSR is a classic linear multivariate statistical analysis method that is widely used in chemometric modeling. Its principle is to perform factor analysis on the characteristic wavelength matrix X and the sample target matrix Y, decompose X and Y into multiple latent variables, and select the optimal number of latent variables by cross validation for regression. Cross validation also verifies the accuracy of the models and whether they are overfitted [27,28]. ELM is a simple supervised learning algorithm for single-hidden-layer feedforward neural networks (SLFN), which randomly generates the connection weights between the input layer and the hidden layer and the thresholds of the hidden-layer neurons. In the training process, a uniquely optimal solution is obtained simply by setting the number of neurons in the hidden layer. Compared with traditional computational intelligence techniques, ELM has proved to be a competitive alternative in terms of generalization performance, learning speed, and computational stability [29]. After the models were established, their performance was evaluated quantitatively. The determination coefficients of the calibration set (R²C) and prediction set (R²P) were the main criteria, and the root mean square errors of the calibration set (RMSEC) and prediction set (RMSEP) were the auxiliary criteria. The best model should have high R²C and R²P and low RMSEC and RMSEP. All prediction model development procedures were carried out in MATLAB R2014a (The MathWorks, Inc., Massachusetts, USA). The main steps of CLNC prediction of rice from sampling to modeling are shown in Figure 3.
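To make the ELM training step concrete, below is a hedged NumPy sketch of the closed-form solution described above (random hidden layer, output weights from the Moore-Penrose pseudoinverse), together with the R² and RMSE criteria; the hidden-layer size and sigmoid activation are assumptions, and inputs are assumed to be scaled.

```python
# Hedged NumPy sketch of an ELM regressor: random input weights, a sigmoid
# hidden layer, and output weights from the Moore-Penrose pseudoinverse.
# Hidden-layer size is an assumption; inputs are assumed already scaled.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid layer

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(X)) @ y  # closed-form solve
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

def r2_rmse(y_true, y_pred):
    """The paper's criteria: determination coefficient and RMSE."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot, np.sqrt(ss_res / len(y_true))
```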
Comparison of Spectral Pre-processing Methods
The canopy spectra of rice at each growth stage were treated with the 11 pre-processing methods, respectively. Prediction models were then established by PLSR, and their performance is shown in Table 2. The performance of PLSR models based on full-wavelength spectra differed among the pre-processing methods, because not all of the methods were able to reduce the noise effects and improve the robustness of the models. At the tillering stage, the PLSR models based on SG and SG-FD performed better than the model based on the original spectra; the R_C^2 and R_P^2 values were 0.831 and 0.827 for the SG model and 0.848 and 0.834 for the SG-FD model. The PLSR models based on SNV, FD and SG-SD also had higher R_C^2 values, but their R_P^2 values were not as good as that of the original spectra. In addition, the models based on MSC, SD, MSC-FD, MSC-SD, SNV-FD and SNV-SD were not better than the model based on the original spectra. At the jointing stage, the PLSR models based on SG and SG-FD were again better than the model based on the original spectra, with higher values of both R_C^2 and R_P^2. Meanwhile, the performance of the PLSR models based on SNV, FD, SD, SG-SD, MSC-FD, MSC-SD, SNV-FD and SNV-SD followed the same pattern as at the tillering stage. At the heading stage, the PLSR models based on SG and SG-FD still performed better than the model based on the original spectra, but the models based on the other pre-processing methods did not; among them, the PLSR models based on FD, SD and SG-SD performed slightly better than those based on MSC, SNV, MSC-FD, MSC-SD, SNV-FD and SNV-SD. From the tillering stage to the heading stage, comparing the performance of SG, MSC and SNV, the PLSR models based on SG were the best, indicating that the noise in the canopy spectra was mainly high-frequency noise. The PLSR models based on SNV had higher R_C^2 and lower R_P^2 values at the tillering and jointing stages and lower R_C^2 and R_P^2 values at the heading stage, which indicated that light scattering was indeed present in the original spectra but was not the main factor affecting model performance. The lower R_C^2 and R_P^2 values of the PLSR models based on MSC indicated that, although the additive and multiplicative effects were compensated, the high-frequency noise in the original spectra was also amplified. Comparing SG, MSC and SNV combined with FD and SD, the models based on SG-FD, MSC-FD and SNV-FD performed better than those based on SG-SD, MSC-SD and SNV-SD, indicating that FD was more suitable than SD for filtering the canopy spectra. Meanwhile, the models based on MSC-FD, MSC-SD, SNV-FD and SNV-SD were not better than the models based on a single pre-processing method, indicating that the small spectral features enhanced by FD and SD contained some noise, which was likewise amplified. In summary, at the tillering, jointing and heading stages, SG and SG-FD performed best at eliminating noise interference and improving model performance.
Meanwhile, the R_C^2 and R_P^2 values of the models based on SG-FD were higher than those of the models based on SG. Therefore, the models for predicting the CLNC of rice should be based on the SG-FD pre-processing method. The original spectra and the spectra after SG and SG-FD are shown in Figure 4.
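For reference, the pre-processing variants compared above can be reproduced with standard tools; the sketch below shows SG, SG-FD, SNV and MSC applied to a matrix of spectra (rows are samples, columns are wavelengths). It is an illustrative Python/SciPy version, not the study's code, and the window length and polynomial order of the Savitzky-Golay filter are assumed values rather than those used in this work.

```python
import numpy as np
from scipy.signal import savgol_filter

def sg(spectra, window=11, polyorder=2):
    """Savitzky-Golay smoothing along the wavelength axis."""
    return savgol_filter(spectra, window_length=window, polyorder=polyorder, axis=1)

def sg_fd(spectra, window=11, polyorder=2):
    """SG smoothing combined with the first derivative (SG-FD)."""
    return savgol_filter(spectra, window_length=window, polyorder=polyorder,
                         deriv=1, axis=1)

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def msc(spectra):
    """Multiplicative scatter correction against the mean spectrum."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)   # s ~ slope * ref + intercept
        corrected[i] = (s - intercept) / slope
    return corrected
```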
Characteristic Wavelengths Selection
After selecting the optimal spectral pre-processing method, SPA, UVE, UVE-SPA and CARS were used to select characteristic wavelengths in order to reduce the number of input variables, speed up computation, and improve the accuracy and robustness of the models. In the SPA calculation, after comparing the RMSEs of the candidate variable subsets obtained by a sequence of projection operations, the 30, 26 and 22 wavelengths with the lowest RMSEs were selected as the characteristic wavelengths at the tillering, jointing and heading stages, respectively; these accounted for 23.43%, 20.31% and 17.18% of the full wavelengths at the three stages. In the UVE calculation, the distribution of stability values of all wavelengths was obtained, and the wavelengths with stability values outside the threshold lines were defined as characteristic wavelengths; 56, 51 and 57 wavelengths were obtained at the tillering, jointing and heading stages, accounting for 43.47%, 39.84% and 44.53% of the full wavelengths, respectively. Besides running SPA on the full wavelengths, SPA is also commonly carried out after UVE to select variables that are informative but not collinear [30]. This strategy was also applied in this study, and 16, 16 and 14 wavelengths were obtained at the 3 growth stages, respectively. After the UVE-SPA calculation, the information in the original spectra was greatly compressed: the number of selected wavelengths was only 12.5% of the full wavelengths at the tillering and jointing stages and 10.93% at the heading stage. Finally, CARS was carried out to select characteristic wavelengths by identifying those with larger absolute regression coefficients in PLSR models; 11, 10 and 10 wavelengths were selected at the tillering, jointing and heading stages, respectively, only 8.59% of the full wavelengths at the tillering stage and 7.81% at the jointing and heading stages. The characteristic wavelengths selected by SPA, UVE, UVE-SPA and CARS at each growth stage are shown in Figure 5.
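The UVE stability criterion described above can be rendered compactly as follows. This is an illustrative Python sketch, not the study's implementation: the noise amplitude, the number of PLSR latent variables, and the use of the largest noise-variable stability as the cut-off are assumptions that follow the usual UVE convention.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def uve_select(X, y, n_components=8, noise_level=1e-10, seed=0):
    """Uninformative variable elimination (UVE), simplified.

    Returns indices of wavelengths whose stability (mean/std of the PLSR
    regression coefficient over leave-one-out models) exceeds the largest
    stability observed among appended random-noise variables.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    X_aug = np.hstack([X, noise_level * rng.random((n, p))])  # real + noise block

    coefs = []
    for train_idx, _ in LeaveOneOut().split(X_aug):
        pls = PLSRegression(n_components=n_components)
        pls.fit(X_aug[train_idx], y[train_idx])
        coefs.append(pls.coef_.ravel())
    coefs = np.array(coefs)

    stability = coefs.mean(axis=0) / coefs.std(axis=0)
    threshold = np.abs(stability[p:]).max()      # worst-case stability of noise
    return np.where(np.abs(stability[:p]) > threshold)[0]
```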
PLSR Models
The characteristic wavelengths selected by the above methods at each growth stage were used to establish new PLSR models for predicting the CLNC of rice; the results are shown in Table 3. At the tillering stage, the PLSR models based on UVE, UVE-SPA and CARS performed better than the model based on the full wavelengths; the R_C^2 and R_P^2 values were 0.895 and 0.879 for the UVE model, 0.863 and 0.860 for the UVE-SPA model, and 0.881 and 0.871 for the CARS model. However, the PLSR model based on SPA had lower R_C^2 and R_P^2 values than the full-wavelength model. At the jointing stage, the PLSR model based on CARS performed better than the model based on the full wavelengths, with R_C^2 and R_P^2 values of 0.886 and 0.851, whereas the models based on SPA, UVE and UVE-SPA did not; among them, the PLSR model based on UVE performed slightly better than those based on SPA and UVE-SPA. At the heading stage, the relative performance of the PLSR models based on SPA, UVE, UVE-SPA and CARS followed the same pattern as at the tillering stage.
From the tillering stage to the heading stage, the PLSR models based on SPA were not better than the models based on the full wavelengths, which indicated that even after SG-FD filtering some noise remained in the spectra that was not collinear with the important spectral information; this noise was present in the wavelengths selected by SPA and degraded the performance of the PLSR models. Compared with the full-wavelength models, the PLSR models based on UVE had higher R_C^2 and R_P^2 values at the tillering and heading stages but lower values at the jointing stage, perhaps because a small amount of useful information was removed along with the uninformative variables. Meanwhile, at the tillering and heading stages the PLSR models based on UVE performed the best, but the number of selected wavelengths was too large. After applying SPA to the wavelengths selected by UVE, the collinear variables were removed and the number of wavelengths was significantly reduced, but the R_C^2 and R_P^2 values of the PLSR models also decreased slightly. Compared with SPA, UVE and UVE-SPA, the PLSR models based on CARS consistently performed better. The prediction results for the prediction set at each growth stage based on CARS-PLSR are shown in Figure 6.
ELM Models
Next, the selected characteristic wavelengths of each growth stage were used to establish ELM models to evaluate the ability of non-linear models to predict the CLNC of rice; the results are shown in Table 4. When implementing the ELM algorithm, the number of neurons in the hidden layer ranged from 5 to 50 in increments of 5, and the number of neurons that achieved the best prediction results was chosen. After repeated training, the optimal number of neurons for the 3 growth stages was 15. At the tillering, jointing and heading stages, the relative performance of the ELM models based on SPA, UVE, UVE-SPA and CARS was consistent with that of the PLSR models. Comparing Table 3 and Table 4, the non-linear ELM models were superior to the linear PLSR models in predicting CLNC, and the ELM models based on UVE, UVE-SPA and CARS all performed better. This might be because, on the one hand, when the amount of nitrogen fertilizer changes, rice plants undergo complex chemical changes, so there may be a non-linear relationship between the spectral characteristics and CLNC; on the other hand, although the characteristic wavelengths were selected by SPA, UVE and CARS, which are variable selection methods based on linear analysis, non-linear information may still remain in the selected wavelengths.
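A minimal version of the ELM training and the 5-to-50 neuron search described above is sketched below in Python/NumPy. The sigmoid activation, the uniform initialization range and the random seed are assumptions for illustration and are not taken from the study.

```python
import numpy as np

def elm_train(X, y, n_hidden, seed=0):
    """Train a single-hidden-layer ELM: random input weights and biases,
    output weights solved in closed form by the pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # sigmoid hidden-layer output
    beta = np.linalg.pinv(H) @ y                 # unique least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

def elm_grid(X_cal, y_cal, X_pred, y_pred_true):
    """Search hidden-layer sizes 5, 10, ..., 50 and keep the size giving the
    lowest prediction-set RMSE (15 neurons in the study)."""
    best = None
    for n_hidden in range(5, 51, 5):
        W, b, beta = elm_train(X_cal, y_cal, n_hidden)
        err = np.sqrt(np.mean((elm_predict(X_pred, W, b, beta) - y_pred_true) ** 2))
        if best is None or err < best[1]:
            best = (n_hidden, err, (W, b, beta))
    return best
```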
Comparing the performance of the different wavelength selection methods in Table 3 and Table 4, when SPA was used to select characteristic wavelengths from the spectra directly, the models established on these wavelengths performed poorly owing to the influence of external disturbances. UVE could select wavelengths that carried strong information and were insensitive to external influencing factors, but the number of selected wavelengths was too large and the model performance was unstable. Combining UVE with SPA reduced the number of characteristic wavelengths to a minimum of 14, only 10.93% of the full wavelengths, but the accuracy also declined. Compared with SPA, UVE and UVE-SPA, CARS could also effectively select wavelengths with strong information that were insensitive to external influencing factors; the maximum number of selected wavelengths was only 8.59% of the full wavelengths, and the PLSR and ELM models built on these wavelengths, especially the ELM models, performed better. The comprehensive evaluation therefore showed that CARS-ELM can serve as an effective wavelength selection and modeling approach for predicting the CLNC of rice in cold regions. The prediction results for the prediction set at each growth stage based on CARS-ELM are shown in Figure 7.
Conclusions
This study evaluated the feasibility of using visible and near-infrared hyperspectral imaging technology, combined with multiple spectral pre-processing methods, different characteristic wavelength selection methods, and linear and non-linear models, for the rapid and non-destructive prediction of the CLNC of rice in cold regions. To eliminate noise and redundant information and then select the key variables for establishing high-precision linear and non-linear models, 5 pre-processing methods (SG, MSC, SNV, FD and SD), 3 variable selection methods (SPA, UVE and CARS) and 2 modeling methods (PLSR and ELM) were applied. The comprehensive comparison showed that SG-FD was the optimal pre-processing method for eliminating unexpected noise and enhancing model performance, that CARS could effectively select characteristic wavelengths carrying strong information and insensitive to external disturbance factors, and that the non-linear ELM model was more suitable for predicting the CLNC of rice in cold regions. These results provide a reference for the quantitative analysis of the nitrogen content of rice using hyperspectral technology, and technical support for guiding nitrogen fertilizer application during rice growth in cold regions. Future research is needed to test the performance of the models with more samples from different places and different varieties in Heilongjiang Province. Moreover, for the ELM model, the wavelengths selected by the methods used in this study may not be the optimal variables; developing non-linear variable selection methods will also be studied in the future.
Optimization of Multiperiod Mixed Train Schedule on High-Speed Railway
For providing passengers with periodic operation trains and making trains' time distribution better fit that of passengers, the multiperiod mixed train schedule is first proposed in this paper. Under this schedule, all trains of one type, i.e., trains with the same origin, destination, route, and stop stations, operate on a periodic basis, while different types of train may have different operation periods. A model for optimizing the multiperiod mixed train schedule is then built to minimize passengers' generalized travel costs under constraints including the periodic operation of trains of the same type and safe interval requirements on trains' departure and arrival times. A heuristic algorithm is designed to optimize the multiperiod mixed train schedule, beginning with an initial solution generated by scheduling all types of train type by type and then repeatedly improving their periodic schedules until the objective value can no longer be reduced or the iteration count reaches its maximum. Finally, example results illustrate that the proposed model and algorithm can effectively obtain a good multiperiod mixed train schedule, although its passengers' deferred and advanced times are somewhat higher than those of an aperiodic train schedule.
Introduction
The train schedule, which determines all trains' arrival times, departure times, and dwell times at stations, is the cornerstone of train organization and operation for a rail enterprise. Generally, it is formulated based on a predesigned train plan that stipulates all trains' origin and destination stations, routes, stop stations, and operation frequencies. In recent years a few studies, such as Michaelis and Schöbel [1], Kaspi and Raviv [2], and Zhou et al. [3], have tried to optimize the train plan and train schedule integrally. Obviously, a high-quality train schedule not only provides passengers with shorter in-vehicle times and waiting times at origins, but also brings the railway enterprise great convenience in train organization and operation, which can effectively improve the competitiveness of rail transit in the passenger public transportation market. Moreover, the train schedule is also the basis for designing the usage plan of railway Electric Multiple Units or locomotives and the crew schedule. A better train schedule can effectively reduce the number of Electric Multiple Units and crews required, which saves investment and operation costs for the rail enterprise.
According to train organization mode, train schedule can be divided into two types, namely, periodic train schedule and aperiodic train schedule.Periodic train schedule makes trains operate on a periodic basis, for example, 1 hour, and has the obvious advantage of regularity of train operation, which is convenient for passengers to be familiar with.Thus, it has been widely adopted in not only high-speed railway but also urban railway system in the world, especially in Japan and European countries.Regarding the optimizing approach of periodic train schedule, trains of peak hour in one day are generally scheduled firstly and then they are copied to other nonpeak hours, and some trains of nonpeak hours are deleted for fitting the decrease of passenger demand.Periodic train scheduling for railway is usually modeled by the Periodic Event Scheduling Problem (PESP) which was first proposed by Willem and Peeters [4].The main advantage of this model is easily to describe many requirements that practitioners impose on periodic train schedule.Moreover, Liebchen [5] further integrated symmetry into it, and Caimi et al. [6] extended it to propose the Flexible Periodic Event Scheduling Problem (FPESP), which can generate flexible time slots for the departure and arrival times instead of exact times.Besides the PESP model, Serafini and Ukovich [7] proposed a mathematical model for scheduling periodic events with particular time constraints and designed an algorithm of implicit enumeration type for it.Odijk [8] used a mathematical model consisting of periodic time window constraints to construct periodic train schedule.Lindner and Zimmermann [9] developed a mixed integer linear programming model of periodic train schedule with the aim of minimizing operational cost and then decomposed it for being solved by an algorithm integrating cutting plane and branch-and-bound method.For more studies about periodic train schedule, refer to Nachtigall [10], Liebchen [11], and Liebchen and Möhring [12].
Compared with periodic train schedule, aperiodic train schedule has not the periodic regularity of train operation and is optimized integrally based on the time-distance distribution of passenger demand in one day.As aperiodic train scheduling need not consider train periodic operation restriction, and it has more flexibility to arrange trains arrival and departure times.Thus, it can make trains' time distribution fit that of passenger demand better, which contributes not only to reducing passengers deferred times or advanced times at origin stations, but also improving rail enterprise operation efficiency.Since now, many studies have strived to optimize the aperiodic train schedule with different objectives such as minimizing train travel time and maximizing passenger travel cost using many approaches including mathematics programming method, simulation method, and artificial intelligence method.For example, Szpigel [13] first developed a linear programming model to optimize the aperiodic train schedule for minimizing trains total travel time.Higgins et al. [14] developed a branchand-bound solution framework to optimize aperiodic train schedule.And Zhou and Zhong [15] further applied a lagrangian-relaxation-based lower bound rule, an exact lower bound rule, and a tight upper bound rule into it to improve the optimizing quality and efficiency.Carey and Lockwood [16,17] developed an iterative decomposition approach which contains several node branches, variable fixing, and bounding strategies to solve the train scheduling and pathing problems.Medanic and Dorfman [18,19] proposed a local feedback based train travel advance strategy (TAS) by using a discreteevent model to simulate train advance along railway line.Li et al. [20] further proposed an algorithm based on the global information of the train to obtain an effective train travel advance strategy.Carey and Crawford [21] developed some heuristic algorithms to find and resolve the conflicts in draft train schedules.In addition, in some literatures, train scheduling problem is modeled as a blocking parallelmachine job shop scheduling problem solved by the alternative graph model.For example, Liu and Kozan [22] regarded the train scheduling problem as a blocking parallel-machine job shop scheduling problem and solved it by a feasibility satisfaction procedure algorithm.And Burdett and Kozan [23] proposed a novel hybrid job shop approach to scheduling trains; Törnquist and Persson [24] proposed an approach to reschedule railway traffic in an -tracked network when a disturbance has occurred with the aim of minimizing the consequences for multiple stakeholders.For more studies about periodic train schedule, refer to Li and Lo [25], Sahana et al. [26], and Dollevoet et al. [27].
It is hard, and not necessary, to decide which is better between periodic and aperiodic train schedules, as both have their own advantages and disadvantages. A periodic train schedule has the rhythmicity of periodic train operation and brings great convenience to passengers, whereas an aperiodic train schedule can better match the time distribution of trains' operation to that of passenger demand. In this paper, we attempt to formulate a train schedule that combines the advantages of both, that is, trains not only operate periodically but also better fit the time distribution of demand. To this end, we first propose a new type of train schedule called the multiperiod mixed train schedule, in which trains having the same origin, destination, route, and stop stations are regarded as one type, and trains of the same type operate on a periodic basis. Moreover, trains of different types can have different operation periods. For example, while the operation period of trains of the first type may be 1 hour, trains of the second type can operate with a period of 1 hour or with other periods such as 1.5 hours or 2 hours. For convenience of description, trains of one type are also called same-period trains. Compared with the general periodic train schedule, in which all trains operate with a single period, the multiperiod mixed train schedule differs in the following ways. Firstly, it is optimized integrally, like an aperiodic train schedule, so no trains need to be deleted to fit a decrease of passenger demand, which would disrupt the regularity of periodic train operation. Secondly, trains of different types not only can have different operation periods, but also can differ in their numbers of trains and operation time ranges. Thirdly, the start times, periods, and end times of operation of all types of train have to be coordinated to make the trains' time distribution better fit that of passenger demand.
The main contributions of this paper are as follows: (1) A new type of train schedule, the multiperiod mixed train schedule, is first proposed; it not only provides passengers with periodically operated trains, but also better fits the time distribution of demand.
(2) An optimization model of the multiperiod mixed train schedule is built to minimize passengers' generalized travel costs under the constraints of trains of the same type operating periodically, safe intervals between trains' departure times and arrival times, and so forth.
(3) A solving algorithm is designed for the proposed optimization model. It first schedules each type of train type by type and then repeatedly adjusts their schedules until the stop conditions are reached.
The remainder of this paper is organized as follows. In the next section, we present an optimization model of the multiperiod mixed train schedule. In Section 3, passenger travel costs are analyzed and their calculation method is proposed. In Section 4, an algorithm is designed for scheduling trains of one type based on a given partial train schedule, and an optimization algorithm of the multiperiod mixed train schedule is then given in Section 5. An example of the Wu-guang high-speed railway is used to illustrate the effectiveness of the proposed model and algorithm in Section 6. Finally, the conclusion and further study are given in Section 7.
Optimization Model of Multiperiod Mixed Train Schedule
A A given train plan of line is denoted by Ω which has specified trains origin and destination stations, travel routes, and stop stations.In this paper, all trains are assumed to be configured with a same type of Electric Multiple Unit; thus they have the same technical speed in each rail section and vehicle number.In reality, the Electric Multiple Units used on a same high-speed railway generally are the same type because this contributes to their management and maintenance, but the number of Electric Multiple Units among trains is usually set as 8 or 16, which will lead to a difference of passenger capacities among trains.Thus, we have to further consider the different passenger capacity restriction of trains when arranging passengers to trains if without this assumption.According to train's origin station, destination station, travel route, and stop stations, trains of Ω are classified into types.Trains of same type have same origin and destination stations, travel route, and stop stations and operate with a same period.
For trains of type = 1, 2, . . ., , their origin and destination stations are, respectively, denoted by and , their route is expressed with a sequence of stations denoted by = { 1 , 2 , . ..}, and their stop stations set is expressed by K ∈ whose element number is denoted by .All trains of this type will operate according to a start time and a fixed period.In other words, their first train departs at a start time, and, after a fixed period, their second train departs again; then their third train departs until all trains have departed.This periodic operation requirement of trains of one type can be clearly illustrated with Figure 1, in which one type of train originally departs at station 1, stops at station 2, and gets through station 3, finally arriving at station 4. As you can see, its first train departs at 8:00, and after a period of 2 hours, that is, at 10:00, its second train departs, and then its third, fourth, and fifth trains depart at 12:00, 14:00, and 16:00, respectively.
For description convenience, the th train of the th type is denoted by (, ), and its arrival time and departure time at station are expressed by and , respectively. As trains of the same type operate periodically, the difference of arrival and departure times between train (, ) and train (, 1) is −1 times of , which is the operation period of trains of the th type. Thus, the schedule of the th type of train, denoted by { , }, can be obtained once its first train's arrival and departure times 1 , 1 and its operation period are determined. Hence, the arrival and departure times 1 , 1 and the period are selected as the decision variables in this paper. Theoretically, the period can be any integer number that ensures this type of train departs within one day, but to make it easy for passengers to remember, it is suggested to be an integer multiple of 10 min, 15 min, or 30 min. All types of periodic trains' schedules constitute a multiperiod mixed train schedule, denoted by {, }, in which each type of periodic train has its own operation period. In fact, if all types have the same period, it becomes a general single-period train schedule. Figure 2 shows a simple example of a multiperiod mixed train schedule. As seen from it, there are 3 types of periodic train in total, which all have the same origin, destination, and route but different stop stations. The first type of periodic train, with a stop-by-stop pattern shown with a red solid line, has a period of 60 min and its earliest departure time is 8:00; the second type of periodic train, stopping only at station 2 and shown with a blue dotted line, has a period of 70 min and departs at 8:21 at the earliest; and the third type of periodic train, that is, through trains, shown with a green dotted line, operates with a period of 100 min and departs earliest at 8:42.
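The periodic-operation rule expressed by formulas (1) and (2) simply shifts the first train's station times by integer multiples of the period. The following sketch illustrates this expansion for the five-train example of Figure 1; it uses minutes-from-midnight timestamps and hypothetical intermediate station times, which are not taken from the paper.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class TrainType:
    period: int                               # operation period, in minutes
    n_trains: int                             # number of trains of this type
    first_times: Dict[str, Tuple[int, int]]   # station -> (arrival, departure) of the 1st train

def expand_periodic_schedule(t: TrainType) -> List[Dict[str, Tuple[int, int]]]:
    """Times of the j-th train are the first train's times shifted by (j-1)*period."""
    schedule = []
    for j in range(t.n_trains):
        shift = j * t.period
        schedule.append({s: (arr + shift, dep + shift)
                         for s, (arr, dep) in t.first_times.items()})
    return schedule

# The 5-train example of Figure 1: first departure at 8:00 (480 min), period 2 h.
# The intermediate station times at S2 and S3 below are made up for illustration.
example = TrainType(period=120, n_trains=5,
                    first_times={"S1": (480, 480), "S2": (505, 507),
                                 "S3": (525, 525), "S4": (545, 545)})
for j, times in enumerate(expand_periodic_schedule(example), start=1):
    print(j, times["S1"][1])   # departures 480, 600, 720, ... i.e. 8:00, 10:00, 12:00, ...
```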
The key of multiperiod mixed train schedule optimization is to coordinate all types of periodic trains' schedules and operation periods aiming to maximize passenger service level on the basis of satisfying all types of constraints such as operation time and safety interval requirements.
Besides periodic operation constraints among trains of the same type, that is, satisfying formula (1) and ( 2), another five type constraints below should be satisfied when optimizing a multiperiod mixed train schedule.
(1) Operation Time Constraints.All trains must operate during the operation time [ , ] of high-speed railway, and railway maintenance is usually performed during the nonoperation time.Hence, train's arrival and departure times and should satisfy (2) Constraints of Train Minimum Travel Times in Sections.Train's minimum travel time in a section is composed partly or completely of additional time for starting, pure travel time, and additional time for stopping, which depends on whether train stops at section's endpoints.Obviously, train travel time in section should be greater than this value.
(3) Constraints of Train Minimum Dwell Times at Stations.For making passengers have normal necessary time for getting on and off a train at stations, train's dwell time at each stop station should not be less than a normal necessary time; namely, where is the minimum dwell time of the th type of periodic train at stop station .
(4) Safe Interval Constraints of Train Departure Times.For ensuring that trains depart safe at stations, the interval of departure times between any two trains entering into a same section must be more than the safe interval.That is where is the safe departure time interval of trains departing from station to station .
(5) Safe Interval Constraints of Train Arrival Times.Similarly, for ensuring that trains arrive safe at stations, the arrival time interval among any two trains arriving from a same section must not be less than the safe arrival time interval.That is where is the safe arrival time interval of trains arriving at station from station .
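Constraints (4) and (5) can be checked mechanically for a candidate schedule by sorting, for every section, the departure times into the section and the arrival times out of it, and verifying the gaps between consecutive trains. The sketch below is an illustrative Python check under the simplifying assumption of a single headway value for departures and one for arrivals, with times in minutes; it is not part of the paper's algorithms.

```python
from typing import Dict, List, Tuple

def headways_ok(trains: List[Dict[str, Tuple[int, int]]],
                sections: List[Tuple[str, str]],
                dep_headway: int, arr_headway: int) -> bool:
    """Check constraints (4)/(5): consecutive departures into a section and
    consecutive arrivals out of it must be separated by the safe intervals.

    trains  : per-train dict  station -> (arrival, departure)  in minutes
    sections: ordered (from_station, to_station) pairs of the line
    """
    for u, v in sections:
        deps = sorted(t[u][1] for t in trains if u in t and v in t)
        arrs = sorted(t[v][0] for t in trains if u in t and v in t)
        if any(b - a < dep_headway for a, b in zip(deps, deps[1:])):
            return False
        if any(b - a < arr_headway for a, b in zip(arrs, arrs[1:])):
            return False
    return True
```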
Minimizing trains total travel time is mostly used as the objective of optimizing train schedule; for example, Higgins et al. [14], Zhou and Zhong [15], Carey and Crawford [21], and Zhou et al. [28] all took it as the optimization objective of train schedule.Besides, some studies optimized train schedule with other objectives such as maximizing railway profit (Brännlund et al. [29]) or passengers expected waiting time (Zhou and Zhong [15]) and maximizing trains adjustment ability (Ghoseiri et al. [30]).However, these objectives cannot roundly reflect passenger service level related to train schedule.In this paper, we strive to not only reduce passenger in-vehicle time, but also lower their deferred time or advanced time at origins.Thus, minimizing passenger generalized travel cost is chosen as the objective of multiperiod mixed train schedule optimization.That is where is the generalized travel cost of OD (, ), whose components and calculation method are given in detail in Section 3.
Analysis and Calculation of Passenger Generalized Travel Costs
High-speed railway demand varies with not only OD pair, but also time of one day.Passenger demand of OD (, ) at time is denoted by ().They usually have two travel strategies based on a multiperiod mixed train schedule.One strategy is arriving in origin station at time and then boarding a train departing after that time, which is called Later Travel, and other one is arriving at station in advance for getting on a train departing before time , which is called Earlier Travel.
In this paper, it is assumed that passengers only choose trains that will stop successively at their origin and destination stations and do not transfer between two trains, which is very common on rail network, because the stop-by-stop trains generally have to be operated on the high-speed railway for ensuring that passengers can travel with at least one type through train.The set of candidate trains of OD (, ) passengers at time is denoted by Ω ().It can be divided into two subsets denoted by Ω () and Ω (), respectively for Earlier Travel and Later Travel.
For Earlier Travel passengers, their generalized travel costs include price expense, in-vehicle time, and additional cost of advanced travel, while Later Travel passengers have to bear price expense, in-vehicle time, and additional cost of deferred travel.In fact, additional costs of advanced travel and deferred travel are just a penalty fee for making trains departure time distribution better fit demand time distribution, which contributes to satisfying more passengers' expectation of departing at their favorite time.
When passengers () choose train (, ) ∈ Ω () for Later Travel, their price expense (), in-vehicle time () and deferred time () can be given as follows: where is the price rate of the th type of periodic train, , is the mileage of section (, ), and is the section set from station to station .
For balancing price expense, in-vehicle time and deferred time, the parameter of penalty rate is introduced to describe passenger generalized travel cost as follows: where is the average time value of passengers.When passengers () choose train (, ) ∈ Ω () for Earlier Travel, their price expense () in-vehicle time () can also be calculated by formulas (10) and (11), and their advanced time () is given by Similarly, two parameters, that is, time value and penalty rate , are introduced for balancing price expense, in-vehicle time, and additional cost; namely, According to the generalized travel cost of passengers travelling with each candidate train, we can determine the finally chosen train of passengers () as ( * , * ) with minimum travel cost * ().Consider Considering that passenger demand of each OD is a continuous distribution of time, passenger travel period [ , ] in one day is divided into subperiods with a same length = ( − )/, for example, 1 min, which are denoted by 1 , 2 , . . ., , respectively.For subperiod , its start time and end time are + ( − 1) and + , respectively, and its total demand is given by () .
Thus, the total generalized travel cost of OD (, ) passengers in objective function ( 9) can be obtained by accumulating the travel costs of passengers at each subperiod in one day; namely, Obviously, we can get passenger minimum generalized travel cost by comparing their travel costs of all candidate trains, which is one most direct method but with less efficiency.However, a more efficient approach can be designed according to some characteristics of multiperiod mixed train schedule.
Property 1. Passengers 𝑞 𝑔
have the lowest generalized travel cost traveling with train departing latest comparing with other trains in set Ω ( ), or with train departing earliest comparing with other trains in set Ω ( ).
Property 1 is true because passengers have same price expense and in-vehicle time when they travel with the same type of periodic trains, and the closer the train departure time is to their expectant time, the less their deferred times or advanced times are.
Meanwhile, as passengers' deferred times and advanced times change monotonically with the departure time of their chosen train, another two properties can be drawn.
For OD (, ) passengers in period [ , ℎ ], their minimum generalized travel cost can be calculated based on Properties 1, 2, and 3. Firstly, the minimum travel costs of time passengers for Later Travel and Earlier Travel can be obtained as ( ) and ( ), respectively, by comparing their lowest costs among all types of periodic train according to Property 1. Then the minimum generalized travel costs of the other time passengers in [ , ℎ ] for Later Travel and Earlier Travel can be calculated as () = ( ) − ( − ) and () = ( ) + ( − ), respectively, based on Properties 2 and 3. Finally, the minimum generalized travel cost of passengers in [ , ℎ ] can be obtained according to the change relation of () and () with time shown in Figure 3. Obviously, the minimum travel cost of passengers at any time ∈ [ , ℎ ] is () in Figure 3(a), that is, () in Figure 3(b). But in Figure 3(c), passengers are divided into two parts: those in time ∈ [ , ] have the minimum travel cost of (), and the other part of passengers' minimum travel cost is ().
Based on the above analysis, a high-efficiency algorithm for calculating passengers' minimum travel costs is designed as Algorithm 1.
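For orientation, the brute-force version of the candidate-train comparison described in this section (not the paper's more efficient Algorithm 1) can be sketched as follows. The exact weighting of price, in-vehicle time and the penalised deferred/advanced time is elided in the extracted formulas, so the cost expression and the parameter names below are illustrative assumptions consistent with the components listed above.

```python
from typing import List, NamedTuple

class Candidate(NamedTuple):
    dep: float         # departure time at the passenger's origin (min)
    in_vehicle: float  # in-vehicle time from origin to destination (min)
    price: float       # price expense of this train type for the OD pair

def generalized_cost(c: Candidate, t_wish: float, time_value: float,
                     pen_late: float, pen_early: float) -> float:
    """Price + time value * (in-vehicle time + penalised deferred/advanced time);
    an assumed form, consistent with the components described in the text."""
    if c.dep >= t_wish:                        # Later Travel: deferred time
        shift = pen_late * (c.dep - t_wish)
    else:                                      # Earlier Travel: advanced time
        shift = pen_early * (t_wish - c.dep)
    return c.price + time_value * (c.in_vehicle + shift)

def choose_train(cands: List[Candidate], t_wish: float, time_value: float,
                 pen_late: float, pen_early: float) -> Candidate:
    """Pick the candidate train with the minimum generalized travel cost."""
    return min(cands, key=lambda c: generalized_cost(c, t_wish, time_value,
                                                     pen_late, pen_early))
```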
Scheduling One Type of Trains Based on a Given Partial Train Schedule
This section focuses to schedule a new type of periodic train based on a given multiperiod mixed train schedule (, ) in which partial type of periodic trains have been scheduled.Suppose the new train type being scheduled is the th one.For this type of periodic train, their service OD pairs are denoted as .After scheduling them, OD (, ) ∈ passengers travel with the th type of periodic train instead of other type of periodic train if the former has less travel costs than the latter, which will lead to a decrease of their travel costs.For OD (, ) ∈ passengers , their travel cost is denoted as ( ) before scheduling the th type of periodic train, and their travel cost turns to ( ) after that.Obviously, if ( ) < ( ), passengers give up the former train and rechoose the th type of periodic train for travel, and they have the following decrease of travel cost: Else if ( ) ≥ ( ), passengers still choose their former train for travel and their decrease of travel cost is regarded as Δ ( ) = 0. Thus, passengers' total decrease of travel cost of OD (, ) can be calculated by Similarly, we can determine the travel cost decrease of other OD passengers in set .Based on these, the total decrease of passenger travel costs caused by scheduling the th type of periodic train can be given by And minimizing it is chosen as the objective of scheduling the th type of periodic train; namely, Meanwhile, all trains' departure and arrival times have to satisfy the constraints of formula (1) through formula (8) when scheduling ( , ).
Before scheduling the th type of periodic train, if there are other types of scheduled periodic trains traversing section ( 1 , 2 ), the operation period [ , ] can be divided into subperiods denoted as [ 1 , 1 ], [ 2 , 2 ], . . ., [ , ], respectively, by their departure times at station 1 ; otherwise, we express period [ , ] also by [ 1 , 1 ] for uniform description.A train of the th type can depart in subperiod [ ℎ , ℎ ] only when the next condition shown in Figure 4 is satisfied: where is the travel time of the th type of periodic train in section ( 1 , 2 ), and , are, respectively, these of trains departing at times ℎ and ℎ in section ( 1 , 2 ).
Accordingly, the feasible departure period [ ℎ , ℎ ] of the th type of periodic train in subperiod [ ℎ , ℎ ] can be determined by All feasible departure periods of one day form the candidate set of departure time denoted by ( 1 , 2 ) of the th type of periodic train in section ( 1 , 2 ).
The beginning of scheduling the th type of periodic train is to choose time numbers constituting an arithmetic progression with the difference of from ( 1 , 2 ) as their departure times at origins.As the number of their combination solution is enormous, two strategies are applied to reduce the search scope: (1) Take time t at which a train departing can make largest decrease of passenger travel cost as one necessary departure time for the th type of periodic train.
(2) Make the operation period of the th type of periodic train only be an integer time of ( can be 10 min, 15 min, or 30 min) which contributes to remembering trains operation regularity for passengers.
Based on above two strategies, we only have to determine which train is departing at time t and how many integer times of are being the operation period.Denote the train departing at time t as the * th train, that is, train (, ), and * as the integer times.Then the departure and arrival times in section ( 1 , 2 ) of the th type of periodic train can be obtained as follows: Obviously, the value scope of integer * is from 1 to , and the value of integer * must satisfy That is However, departure times calculated by formula (25) do not always belong to ( 1 , 2 ), we have to ignore these solutions not belonging to ( 1 , 2 ).As the combination of * and * is very limited, we can search their all possible combinations and determine their best one according to the decrease Δ of passenger travel cost.
Given a feasible solution ( * , * ), the departure and arrival times of the th type of periodic train in their first traverse section ( 1 , 2 ) can be easily determined by formulas ( 25) and ( 26).However, we still have to arrange their departure and arrival times in other traverse sections.In section ( 2 , 3 ), their earliest departure times can be given firstly by And similarly, a candidate set of departure times of the th type of periodic train in section ( 2 , 3 ) can be determined as ( 2 , 3 ).Then a minimum value of Δ ≥ 0 is determined for satisfying Then the departure and arrival times of the th type of periodic train in section ( 2 , 3 ) can be given by And their departure and arrival times in other left traverse sections can be determined similarly.Now a whole periodic schedule of the th type of periodic train is got according to ( * , * ).Based on it, passenger travel cost decrease Δ can be calculated according to formulas (19), (20), and (21).
It should be pointed out that if there are no feasible solutions when taking the time with the maximum decrease of passenger travel cost as one necessary departure time t of the th type of periodic train, another time making that has secondary maximum decrease can be chosen as its necessary departure time t .
Optimization Algorithm of Multiperiod Mixed Train Schedule
This section gives a general optimization algorithm of multiperiod mixed train schedule based on the scheduling algorithm of one type of periodic train proposed in Section 4. Its solving frame is to circularly optimize each type of periodic train.Firstly, all types of periodic trains are scheduled type by type according to a given initializing order using Algorithm 2. The initializing order of each type of periodic train is determined based on trains' travel mileage and their number of stop stations.The more travel mileage and less number of stop stations one type of train has, the earlier scheduled it is.Secondly, we calculate the total numbers of passengers on each type of periodic train, and, based on these, determine their adjustment orders, and then reschedule each type of periodic train according to this adjustment order with Algorithm 2. The rescheduling process of all types of trains' is repeated until one of the given termination conditions of the algorithm is satisfied.Before this algorithm starting, all OD passengers cannot choose any trains for travelling because train schedule (, ) is empty.Thus, their travel cost is set as a very big number = .Then one type of periodic train, for example, the th type, is selected according to the initializing order, and scheduled using Algorithm 2. After that, passengers' total travel cost declines from = to = − Δ , and train schedule is updated as (, ) = (, ) ∪ ( , ).When all types of periodic trains are scheduled, an initial multiperiod mixed train schedule is obtained.
As passengers' traveling trains have changed with the scheduling of all types of periodic trains one by one; trains passengers numbers and service levels also have changed.Thus, it is necessary to repeatedly reschedule all types of periodic trains for improving trains service level after generating the initial train schedule.Thus, we sort all types of periodic trains by the descent order of their passenger numbers and denote as the order position of the th type of periodic train.When rescheduling the th type of periodic train, they are deleted from (, ) firstly, which results in that passengers Input is partial train schedule (, ), and the th type of periodic train; Output is multiperiod mixed train schedule (, ) ∪ ( , ); Start Determine OD set , and set of train candidate departure time ( 1 , 2 ) in its first traverse section ( 1 , 2 ); Calculate passenger travel cost decrease Δ () when an th type of periodic train departing at time ∈ ( 1 , 2 ); Set t as the time with maximum value of Δ (); Set Δ = 0, * = 0, and * = 0; For = 1, 2, . . ., , do Set = 1; While satisfies formula (28) travelling originally with these types of trains have to choose other type of trains, and their total travel costs increase by Δ − .Then we reschedule the th type of periodic train based on the current train schedule (, ), which also leads to that some passengers choose this type trains again and have their travel cost decrease by Δ + .Thus, passengers' travel cost changes from to + Δ − − Δ + after rescheduling the th type of periodic train.
The algorithm terminates when the passengers' total travel cost changes within a small range for more than Υ consecutive rescheduling rounds, or when the number of rescheduling rounds reaches its maximum allowed value.
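The overall loop of the optimization algorithm, as described in this section, can be summarised by the following skeleton. The scheduling and cost routines are stand-ins for Algorithm 2 and the cost calculation of Section 3, and the tolerance, stall count and iteration cap are illustrative values in the spirit of the example settings reported later.

```python
def optimize_multiperiod_schedule(train_types, schedule_one_type, total_cost,
                                  passengers_per_type, max_rounds=16,
                                  stall_rounds=6, tol=0.05):
    """Skeleton of the heuristic: initial type-by-type scheduling followed by
    repeated rescheduling until the objective stabilises or the cap is hit.

    schedule_one_type(schedule, t) -> schedule with type t (re)scheduled
    total_cost(schedule)           -> passengers' total generalized travel cost
    passengers_per_type(schedule)  -> dict: type -> passenger count
    """
    schedule = {}
    for t in train_types:                      # initializing order (mileage, stops)
        schedule = schedule_one_type(schedule, t)

    cost, stalled = total_cost(schedule), 0
    for _ in range(max_rounds):
        # Reschedule types in descending order of their current passenger numbers.
        order = sorted(train_types, key=lambda t: -passengers_per_type(schedule)[t])
        for t in order:
            schedule = schedule_one_type(schedule, t)   # delete + reschedule type t
        new_cost = total_cost(schedule)
        stalled = stalled + 1 if abs(new_cost - cost) / cost <= tol else 0
        cost = new_cost
        if stalled >= stall_rounds:
            break
    return schedule, cost
```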
Example Analysis in Wu-Guang High-Speed Railway
In this section, an example on the Wu-guang high-speed railway is given to analyze the convergence and effectiveness of the proposed model and algorithm. The trains of the example are classified into the types listed in Table 2, operating on the down direction of the Wu-guang high-speed railway. As seen from there, the first type of train has the fewest stop stations, that is, Changsha and Shaoguang stations, and mainly serves passengers between big stations. The last type of train stops at all traversed stations and mainly serves passengers whose origins and destinations lie between two big stations. The other types of train have on average 6 stop stations and serve passengers between big stations and other stations. All types of train have a maximum technical speed of 300 km/h, additional times of 1 min for starting and stopping in all railway sections, and a minimum dwell time of 1 min at all stop stations. Algorithm 3 is implemented in the C# language on the Microsoft Visual Studio .NET platform. C#, developed by the team of Anders Hejlsberg and released by Microsoft in 2000, is an object-oriented language designed to be simple, modern and general; it is derived from C and C++ and, being based on the .NET Framework Class Library, supports rapid application development similarly to Visual Basic. All instances were run on a computer with Microsoft Windows XP (Home Edition), a Pentium(R) Dual-Core CPU E5800 at 3.19 GHz, and 2.96 GB of RAM. The values of the parameters in the above model and its solving algorithm are given in Table 3.
Based on the above inputs and parameter values, a multiperiod mixed train schedule is optimized as shown in Figure 5, in which each type of line represents one type of train, when all types of train operate with periods that are multiples of 10 min. As can be seen, each type of train has its own operation period, start time and end time, as listed in Table 4. For example, the first type of train operates with a start time of 6:52 and an end time of 16:52, and its operation period is 5 h, while the second type of train has an operation period of 2 h 50 min and operates from 8:43 to 17:13. Obviously, the operation periods of all types of train are multiples of 10 min. Although this multiperiod mixed train schedule provides 11 types of periodic trains for passengers, in practice the passengers of each OD need not know the operation regularity of all types of train, but only those of their candidate trains. Hence, the information that passengers should be familiar with is less than that of all types of train. For example, most passengers from Hengshan station to Guangzhou station would choose the 3rd, 6th, and 9th types of train for travelling, because the 11th type of train has too many stop stations for them and the other types of train do not stop at both their origins and destinations. Thus, they only have to remember the operation regularities of these three types of train.
Next, we analyze the convergence of Algorithm 3. The operation periods of all types of train are set as the multiples of 10 min, 15 min (one quarter), and 30 min (half an hour), respectively, and then optimize their corresponding multiperiod mixed train schedules for 20 times, each case based on the above inputs and parameter values.The computational results show that some instances of three cases cannot obtain a feasible solution because a small part of types of trains are not scheduled successfully as the restriction of its operation period and other scheduled trains.The numbers of invalid instances in three cases are 1, 3, and 7, respectively.Obviously, it becomes larger with the period multiple increasing from 10 min to 15 min and then to 30 min.However, we can avoid them by adjusting train scheduling order and then rescheduling them.
The change relation of objective with the iteration number is given as shown in Figure 6, and the change relationship of computing time with the iteration number is shown in Figure 7.As seen from Figure 6, when trains' operation periods are set as the multiples of 10 min, the algorithm terminates as the objective value changes in a little range for more than Υ = 6 times, and it can be better converge to a satisfactory solution.However, when they are set as these of 15 min and 30 min, although the objective value also descents quickly at the first 8 iterations, it waves among a small range later.Finally, the algorithm stops as it reaches the maximum iteration count, that is, 16 times.Based on Figure 7, we can find that the computing time of the case with the multiples of 10 min being trains' operation periods is far smaller than these of the other two cases.The computing time of the case with = 10 min is only 6 min while these of the cases with = 15 min and = 30 min are 15 min and 32 min, respectively.Moreover, the average computing times per iteration also have a lot of differences among these three cases.The average computing time per iteration in the case with = 10 min is 0.6 min, while these of the other two cases are 1.0 min and 2.3 min, respectively.The most reasonable explanation is that with the value of increasing from 10 min to 15 min and then to 30 min, more and more impossible train schedules may appear when scheduling each type of train because of the operation period restriction, which not only leads to a lower solution quality, but also resulted in more computing times.
In order to compare with an aperiodic train schedule, each train is first regarded as one type and scheduled to create an aperiodic train schedule. The multiperiod mixed train schedules are then optimized with operation periods that are multiples of 10 min, 15 min, and 30 min, respectively. Passenger service level indexes, including the average deferred time and average advanced time, of the aperiodic train schedule and the three multiperiod mixed train schedules are given in Table 5. As seen from there, the three multiperiod mixed train schedules have larger deferred times and advanced times than the aperiodic train schedule. While the average deferred time of the aperiodic train schedule is 35 min, those of the three multiperiod mixed train schedules are 45 min, 48 min, and 54 min, respectively, which are larger than the former by 28.5%, 37.1%, and 54.3%. This is because the optimization of the multiperiod mixed train schedule has more restrictions than that of the aperiodic train schedule, since trains of the same type must operate periodically. Hence, it is suggested to prefer multiples of 10 min or 15 min as trains' operation periods.
Conclusion and Further Study
In this paper, a new type of train schedule called the multiperiod mixed train schedule is first proposed to make trains operate with multiple periodic bases. An optimization model is then built to minimize passengers' generalized travel costs, including price expense, in-vehicle time, and the penalty cost for deferred or advanced travel, subject to many constraints covering the periodic operation requirement of trains of the same type, the high-speed railway operation time, and the safe headway requirements on train departure and arrival times. A heuristic algorithm, in which each type of periodic train is rescheduled circularly, is designed to solve this model.
Example results illustrate that a satisfactory multiperiod mixed train schedule can be obtained using the proposed model and algorithm; however, it has larger average deferred and advanced times than the aperiodic train schedule. This paper only considers the optimization of the multiperiod mixed train schedule on a single high-speed rail line. It is necessary to extend the optimization to a rail network in further research, because trains on different rail lines interact with one another. Moreover, as passenger demand for high-speed railway largely depends on its service level in a competitive environment with air transportation and highways, another direction for further research is to take this effect into consideration when optimizing the multiperiod mixed train schedule.
Figure 2: A simple example of multiperiod mixed train schedule.
Figure 5: The multiperiod mixed train diagram with periods that are multiples of 10 min.
Figure 6: The change relation of the objective value with the iteration number.
Figure 7: The change relation of computing time with the iteration number.
Table 1: Stations and mileages of Wu-guang high-speed railway.
The Wu-guang high-speed railway between Wuhan city and Guangzhou city has operated since 2009 in China and is a busy passenger railway line with 57 trains each day, and more during festivals and holidays such as the Spring Festival. It consists of 16 stations and has a total length of 1069 km, as detailed in Table 1. Wuhan station and Guangzhou station are its endpoint stations, and Changsha station and Shaoguang station are among its larger intermediate stations. Trains on this railway have different stop stations, and the numbers of trains sharing some particular stop patterns are very small.
Algorithm 3: Optimizing the multiperiod mixed train schedule of high-speed railway.
Table 2: All types of trains and their numbers.
Table 3: Parameter values of the model and algorithm.
Table 4: Trains' operation times and periods.
Table 5: The service level comparison between aperiodic and multiperiod mixed train schedules.
Enhancement of Cerenkov Luminescence Imaging by Dual Excitation of Er3+, Yb3+-Doped Rare-Earth Microparticles
Cerenkov luminescence imaging (CLI) has been successfully utilized in various fields of preclinical studies; however, CLI is challenging due to its weak luminescent intensity and insufficient penetration capability. Here, we report the design and synthesis of a type of rare-earth microparticles (REMPs) that can be dually excited by the Cerenkov luminescence (CL) resulting from the decay of radionuclides, enhancing CLI in terms of intensity and penetration. Methods: Yb3+- and Er3+-codoped hexagonal NaYF4 hollow microtubes were synthesized via a hydrothermal route. The phase, morphology, and emission spectrum of these REMPs were confirmed by powder X-ray diffraction (XRD), scanning electron microscopy (SEM), and spectrophotometry, respectively. A commercial CCD camera equipped with a series of optical filters was employed to quantify the intensity and spectrum of CLI from radionuclides. The enhancement of penetration was investigated by imaging studies of nylon phantoms and nude mouse pseudotumor models. Results: The REMPs could be dually excited by CL at the wavelengths of 520 and 980 nm, and the emission peaks overlaid at 660 nm. This strategy approximately doubled the overall detectable intensity of CLI and extended its maximum penetration in nylon phantoms from 5 to 15 mm. The penetration study in living animals yielded similar results. Conclusions: This study demonstrated that CL can dually excite REMPs and that the overlaid emissions around 660 nm can significantly enhance the penetration and intensity of CL. The proposed enhanced CLI strategy may have promising applications in the future.
Introduction
Nuclear imaging allows for sensitive and noninvasive measurement of radionuclide-labeled probes in living animals and humans [1]. However, wider application of nuclear imaging is limited by the necessity for long acquisition times and expensive instruments [2]. Although optical imaging is of much lower expense and higher throughput, it is still limited by the paucity of available imaging agents for clinical use, with only three non-specific agents approved by the US Food and Drug Administration (FDA): indocyanine green (ICG), methylene blue, and fluorescein [3]. Cerenkov luminescence (CL) is an intrinsic optical signal generated when a charged particle travels through a medium faster than the velocity of light in that medium. Because many radionuclides (e.g. 131 I, 18 F) approved by the FDA for clinical use emit charged particles, they are capable of producing CL that can be detected by low-cost charge-coupled device (CCD) cameras [4,5]. The concept of Cerenkov luminescence imaging (CLI) provides a potential method to achieve multimodality molecular imaging by combining radionuclide-labeled probes and optical imaging [6]. Since 2009, CLI has been successfully utilized in various fields of preclinical study, including in vivo tumor imaging [5,7,8], therapy monitoring [9], intra-operative guidance [10], lymphography [11], endoscopy [3,12] and in vivo 3-dimensional reconstruction [13][14][15].
Despite these notable advancements in CLI, the use of CLI is highly restricted by its relatively weak luminescent intensity and insufficient tissue penetration capability [2,7,16]. This can mainly be attributed to the spectral characteristics of CL. For example, the CL spectrum is continuous, and the most intensive CL lies in the short-wavelength (ultraviolet/blue) region below 650 nm, which is easily scattered and absorbed by biological tissues [2,17]. The region of the CL spectrum from 650 nm to 900 nm is very weak, although it has better penetration ability in biological tissues. Additionally, the regions of the CL spectrum below 500 nm and over 900 nm are outside the maximally effective detection range of common CCD cameras, according to the user's manuals of commercial CCD cameras. For these reasons, the natural distribution of the CL spectrum is not perfectly suitable for in vivo imaging, especially for the detection of deeply seated targets. To summarize, the preferred emission band for Cerenkov luminescence, taking both penetration and intensity into consideration, is within the narrow range from 650 to 900 nm.
Previous studies have indicated that coupling CL with other fluorophores (e.g., small molecules, quantum dots (QDs)) is able to transform some of the blue-weighted CL spectrum into red-shifted emissions [18][19][20]. In this scenario, CL serves as the energy donor, while the fluorophores represent the energy acceptors. Such a strategy may be of great significance for the development of CLI technologies with enhanced intensity and penetration. Considering that the intensity of CL is inversely proportional to the square of the wavelength, while longer-wavelength light has better penetration and lower absorption by tissue, a material with a large Stokes shift should be well suited to converting the shorter-wavelength CL into longer-wavelength luminescence. Rare-earth nanoparticles (RENPs), with advantageous properties such as high photostability, absence of blinking, large Stokes shifts, long lifetimes and low cytotoxicity, are promising for such a CL conversion strategy [21].
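To make the wavelength argument concrete, the following sketch compares the relative Cerenkov photon yield in a blue-weighted band with that in the 650-900 nm imaging window, using the approximate 1/λ² dependence mentioned above; the 400 nm lower bound is an assumed cut-off, not a value taken from this paper.

```python
# Relative Cerenkov photon yield per wavelength band, using dN/dlambda ∝ 1/lambda^2
# (refractive index and particle-energy dependence are folded into the constant).

def relative_yield(lo_nm, hi_nm):
    # integral of 1/lambda^2 from lo to hi equals 1/lo - 1/hi
    return 1.0 / lo_nm - 1.0 / hi_nm

blue = relative_yield(400, 650)   # blue-weighted part of the spectrum (assumed 400 nm cut-off)
nir = relative_yield(650, 900)    # preferred 650-900 nm imaging window
print(f"blue/NIR photon-yield ratio ≈ {blue / nir:.1f}")  # roughly 2.2
```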
RENPs are reported to simultaneously possess both down-conversion and up-conversion effects [22]. Down-conversion is a process through which higher-energy photons are absorbed while lower-energy photons are emitted [23]. In contrast, the up-conversion effect involves the emission of higher-energy photons through sequential absorption of lower-energy photons [24]. Therefore, RENPs possessing these two effects are capable of being dually excited by either the ultraviolet/blue spectrum or the near-infrared (NIR) spectrum. Moreover, the emissions resulting from the two excitation sources can be adjusted to overlay in a desirable wavelength range for in vivo imaging by adjusting the dopant proportions within the RENPs [25]. Such dual excitation characteristics of RENPs indicate possible enhanced emission resulting from excitation by both the blue-weighted band and the NIR band of CL.
In this study, we aimed to design and synthesize a novel type of REMPs, NaYF 4 :Er 3+ , Yb 3+ hollow microtubes, which could be simultaneously dually excited by the CL spectrum below 650 nm (520 nm) and over 700 nm (980 nm). The emission bands from the dual excitation were adjusted to be overlaid in the desirable range of 650-900 nm for in vivo imaging. The feasibility of this dual-excitation-based enhancement strategy was then evaluated on phantoms and pseudotumor models. Our findings offer an alternative route for exploring the possible further applications of CLI with enhanced emission intensity and tissue penetration.
Materials
The radionuclides 18 F and 131 I were obtained in the form of 2-[18F]fluoro-2-deoxy-D-glucose ( 18 F-FDG) and Na 131 I. 18 F-FDG was produced by a cyclotron (GE Industries Inc., USA) and an FDG reagent kit (ABX, Germany). Na 131 I was purchased from Chengdu Gaotong Isotope Co., Ltd. (China). For the synthesis of REMPs, NaF was purchased from Tianjin Yong Sheng Fine Chemical Co., Ltd. Y(NO 3 ) 3 , Yb(NO 3 ) 3 and Er(NO 3 ) 3 were purchased from Alfa Aesar (UK). All chemicals were analytical-grade reagents and were used without further purification.
Synthesis and characterization of REMPs
NaYF 4 microparticles codoped with Yb 3+ and Er 3+ ions were synthesized by a hydrothermal method. In a typical procedure, Y(NO 3 ) 3 , Yb(NO 3 ) 3 , and Er(NO 3 ) 3 solutions with a molar ratio of 80:18:2 were added into beaker and mixed by stirring. Then, 50 mmol sodium fluoride dissolved in 10 mL ultrapure water was added to the above solution with a Ln 3+ (Y 3+ , Yb 3+ , Er 3+ ):NaF molar ratio of 1:16. Subsequently, the solution was stirred for 10 min. The pH value of the solution was then adjusted to around 3.0 by using dilute HNO 3 and NH 3 ·H 2 O solutions. The mixture was then transferred into a 50-mL Teflon vessel. The vessel was tightly sealed in an autoclave, heated at 180°C for 14 h, and then naturally cooled down to room temperature. The products were washed and centrifuged for 3 times using ethanol and deionized water. After drying in a vacuum oven for 12 h, the final white powders were collected for further use. Scanning electron microscopy (SEM; FEI Quanta 200, Philips, Netherlands) was utilized to identify the size and morphology of REMPs. The crystal phase of REMPs was characterized by an X-ray diffractometer (XRD, Bruker D8 Discover, USA). The absorption spectrum of REMPs with a concentration of 0.025 mg/mL was recorded with a UV-VIS-NIR absorption spectrophotometer (Cary 500, Varian, USA). To determine the emission spectrum of REMPs excited by external sources with wavelength of 520 and 980 nm, the emission spectrum of the REMPs were measured respectively by using a fluorescence spectrophotometer (Edinburgh Instruments, Britain). 520 nm excitation source was generated by the optical grating of the spectrophotometer, and the 980 nm one was from a laser diode. All spectral measurements were performed at room temperature.
Measurement of emission spectrum excited by CL
Radioactive sources containing 3.7 MBq of Na 131 I or 18 F-FDG with or without 2 mg/mL REMPs dissolved in dimethylsulfoxide (DMSO) with a final volume of 200 μL were prepared in 96-well black plates (Nunc, USA). The emission spectrum of REMPs excited by CL was measured by IVIS System (Caliper Life Science, USA) equipped with 18 filters ranging from 500 nm to 840 nm, with a 20-nm interval in full width at half maximum. The concentration of the added REMPs was based on the tested favorable ratio of radionuclide and microparticles [26]. Optical images were collected for 20 sec and were further analyzed using the region of interest (ROI) method. The same procedure was repeated for 3 times.
Assay of the enhancement of CLI intensity
CL excites a secondary source of emission from REMPs, which may improve the intensity of CLI. To investigate the enhanced CLI intensity, samples containing Na 131 I, REMPs, or both Na 131 I and REMPs were added to 3 wells of a 96-well black plate with a final volume of 200 μL as follows: 2 mg/mL REMPs, 3.7 MBq Na 131 I, or 2 mg/mL REMPs with 3.7 MBq Na 131 I. 18 F-FDG as the parallel group, and 99m Tc as the negative control group, were similarly prepared as above. Optical images were detected using IVIS system for 60 sec. The same procedure was repeated for 6 times. Total CLI intensity was measured by drawing ROIs along the wall of each well.
Assay of the enhancement of CLI penetration
To demonstrate the enhancement of CL penetration capacity, we performed a cubic nylon phantom experiment. The optical properties of the homogeneous nylon phantom were the same as those of mouse lungs. Two holes, each with a diameter of 2 mm and a small distance from its center axis to the top surface of the phantom, were drilled into the phantom to embed rubber capillary tubes with the same scattering effect as the phantom. A series of depths from the top surface of the phantom to the center axis of the hole (0, 2.5, 5.0, 7.5, 10, and 15 mm) was tested. 3.7 MBq 18 F-FDG in a volume of 20 μL was injected at the bottom of one rubber capillary tube, and 3.7 MBq 18 F-FDG with 2 mg/mL REMPs in a final volume of 20 μL was injected into another tube. The two tubes were painted black except for the side facing the camera, to avoid interaction of the CL from each other. The two tubes were then placed into the holes of the phantoms. Optical images were acquired using the IVIS system for 60 sec. The same procedure was repeated six times.
Pseudotumor Study
To demonstrate the comprehensive enhancement capacity of REMPs on the real biological organism, we followed a pseudotumor study on living animals as described in previous studies [18,19,27]. Animal care and protocols were approved by the Fourth Military Medical University Animal Studies Committee (Protocol 20090260). All animal procedures were performed under anesthesia by inhalation of a 1%-2% isoflurane-oxygen mixture. 50 μL Matrigel (BD Biosciences, USA) was mixed with either 50 μL REMPs dissolved in DMSO or 50 μL pure DMSO in a microfuge tube. Then, 3.7 MBq 18 F-FDG was separately added in the microfuge tubes to achieve a total volume of 150 μL and a final REMPs concentration of 2 mg/mL. Anesthetized nude mice were injected subcutaneously with 100 μL of the Matrigel+REMPs+ 18 F-FDG mixture in the right flank and 100 μL of the Matrigel+DMSO+ 18 F-FDG mixture in the left flank. Mice were kept in warm for 5 min until the Matrigel solidified. The mice were then imaged in a microPET/CT (Mediso Ltd., Hungary) to assure the radioactive sources of each pseudotumor were roughly the same. Optical imaging was performed on IVIS system for 60 sec after finishing PET/CT scans. The total optical signal value of each pseudotumor was calculated by ROI method. The radioactivity in each pseudotumor was calculated based on PET/CT imaging by using the 3D-ROI method. The optical signal value was normalized by the radioactivity in the same pseudotumor.
Statistical analysis
Data were reported as the mean ± SEM. Pairs were compared by Student's t-tests, and p-values of less than 0.05 were considered significant.
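A minimal sketch of the comparison described above, assuming SciPy is available; the arrays are illustrative placeholders rather than measured data.

```python
import numpy as np
from scipy import stats

with_remps = np.array([2.1, 2.0, 1.9, 2.2, 2.1, 2.0])      # hypothetical ROI signal values
without_remps = np.array([1.0, 1.1, 0.9, 1.0, 1.1, 1.0])   # hypothetical ROI signal values

for name, x in [("REMPs + nuclide", with_remps), ("nuclide only", without_remps)]:
    print(name, "mean =", x.mean(), "SEM =", stats.sem(x))  # mean ± SEM

t, p = stats.ttest_ind(with_remps, without_remps)           # Student's t-test
print("significant" if p < 0.05 else "not significant", p)
```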
Characterization of REMPs
A representative SEM image of NaYF 4 :Yb 3+ ,Er 3+ microparticles is shown in Figure 1A, which illustrates that the synthesized microcrystals were hollow tubes. We observed that the microtubes had an average size of 133±28 nm × 467±72 nm (diameter × length). XRD patterns of the as-prepared microparticles are shown in Figure 1B. As can be seen, these microparticles mainly had the hexagonal structure of NaYF 4 , which agreed well with the standard pattern (JCPDS 16-0334). The UV-NIR spectrum of the NaYF 4 :Yb 3+ ,Er 3+ microparticles ( Figure 1C) showed strong absorption, with characteristic absorption peaks at 380, 520, 640, and 980 nm. This makes them potential energy acceptors for the 520 and 980 nm components of CL. The emission spectrum of the NaYF 4 :Yb 3+ ,Er 3+ microparticles is shown in Figure 1D. Under excitation at 980 nm, the up-conversion emission spectrum was distributed between 500 and 800 nm, with peaks at 520, 540, and 660 nm. Under excitation at 520 nm, the down-conversion emission spectrum of the NaYF 4 :Yb 3+ ,Er 3+ microparticles was between 550 and 800 nm, with a peak at 660 nm. The inset in Figure 1D shows that the synthesized microparticles emitted strong green fluorescent light under excitation by the 980 nm laser at a power of 250 mW.
Measurement of emission spectrum excited by CL
For the CLI tested, the imaging of the luminescence at different wavelengths using narrow band filters is shown in Figure 2A, and the spectral distribution is shown in Figure 2B. Both Na 131 I and 18 F-FDG shared a similar distribution of CL. When mixed with REMPs, both of these two nuclides shared another similar spectral distribution, and significantly increased intensity peaks at 540 and 660 nm were identified.
Assay of the enhancement of CLI intensity
As shown in Figure 3A and 3B, the detected luminescence intensity of the 18 F-FDG or Na 131 I samples containing REMPs was almost twice that of the samples of pure 18 F-FDG or Na 131 I (p < 0.001). No emission other than background noise was observed in samples of pure REMPs, indicating that the REMPs themselves could not be excited by γ rays. Since 99m Tc emits γ photons with energies lower than the threshold for producing CL [7], the wells containing 99m Tc failed to generate CL and thus failed to excite REMPs.
Assay of the enhancement of CLI penetration
MicroPET/CT scans revealed that the radioactivity of the two radiation sources was roughly the same ( Figure 4A). The results of optical imaging showed that the emission intensity of both 18 F-FDG with REMPs and 18 F-FDG alone decreased with increasing depth from the top surface of the phantoms to the center axis of the holes. The maximum penetration of 18 F-FDG with REMPs reached up to 15 mm, while 18 F-FDG alone had a maximum penetration of around 5 mm ( Figure 4B and 4C).
Pseudotumor Study
The radioactivity of the pseudotumors detected by microPET/CT was demonstrated to be similar in both pseudotumors with or without REMPs ( Figure 5A). Optical images showed an enhanced intensity on the right flanks of the mice ( Figure 5B). Using the nuclear signal as a reference, we observed that the relative intensity of the REMPs mixed with 18 F-FDG injected pseudotumors was significantly higher than that of the 18 F-FDG injected pseudotumors (244.7 ± 23.5 vs. 159.9 ± 11.6; n = 6, p < 0.001; Figure 5C).
Discussion
In this study, we found that the Er 3+ ,Yb 3+ -doped REMPs have two main absorption peaks at 520 nm and 980 nm and a main emission peak at 660 nm, which is within the desirable range for CCD detection and for penetrating biological tissues. When mixed with 18 F-FDG or Na 131 I, the photon emission peak at 660 nm was also observed, as we expected. This suggests that the REMPs were dually excited by the portions of the CL around 520 nm and 980 nm, because the spectrum of CL covers the absorption spectra of the REMPs. However, both radionuclides tested in this work emit high-energy charged particles, and we are not sure whether the high-energy charged particles have any direct effect on the REMPs. We hypothesize that the excitation mechanism in our experimental case may also involve a non-radiative resonance energy transfer process. During this process, the nuclides ( 131 I and 18 F) act as energy donors while the REMPs act as energy acceptors. Thus, when the nuclides and REMPs are close to each other (e.g., in a mixture), non-radiative energy transfer may occur and generate specific emission. This will be further studied in our future work. In any case, this strategy approximately doubled the overall detectable intensity of CLI and extended its maximum penetration in nylon phantoms about 3-fold. The penetration study in living animals indicated a similar result.
RENPs doped with rare-earth activator ions and rare-earth sensitizer ions possess a unique optical property known as photon up-conversion. Such a unique property gives them several advantages in bioimaging, including remarkable penetration depth into tissues upon NIR excitation [28], significantly decreased autofluorescence [29], no photobleaching or photoblinking, and high spatial resolution during bioimaging [30,31]. In addition to photon up-conversion, these nanoparticles or microparticles simultaneously possess the capability of photon down-conversion [32], giving them dual excitation characteristics under an excitation source containing both UV and NIR light. In fact, the concept of combining up-conversion and down-conversion to achieve dual excitation effects has already been proposed and tested [25,33]. Apart from their superior optical properties, RENPs also exhibit low cytotoxicity to a broad range of cell lines [28,[34][35][36].
In previous studies, enhanced CLI could be achieved by utilizing QDs as energy acceptors [18][19][20]27]. The principle of this enhancement was based on the down-conversion effect of QDs, transferring the excitation of a short wavelength from CL into the emission of a longer wavelength. Liu et al., Gelovani et al., and Carpenter et al. clarified such QD-based enhancement strategies and intuitively improved the penetration of CLI by using pseudotumor mouse models [18,19,27]. In their studies, the penetration of CL could be enhanced by transferring the excitation in the blue-weighted spectrum to an emission in the range of 629-705 nm. Using a similar mechanism, we demonstrated, in this proof-of-concept study, that Er 3+ ,Yb 3+ -doped REMPs, with both down-conversion and up-conversion effects, could not only be excited by the blue-weighted spectrum of CL, as QDs can, but could also convert the undetectable, longer-wavelength (over 900 nm) portion of the CL spectrum into a peak at 660 nm, thus extending the excitation source and adding additional intensity to CLI.
CLI has recently been recognized as a potential optical imaging modality [16]. However, the limited intensity and penetration capacity of CLI prohibits its in vivo application, especially in the clinical setting. Although the enhancement effect of the dual-excitation-based strategy proposed in the current study cannot be used for all in vivo applications, it shows potential for use in the following fields. Firstly, in preclinical in vivo imaging, this strategy would make it possible to achieve whole-body imaging or even Cerenkov luminescence tomography (CLT) on mice. According to the research on RENPs-based up-conversion luminescence imaging conducted by Chen et al., whole-body imaging of mice could be achieved with a penetration depth of 3.2 cm [37], indicating a reasonable potential of our dual-excitation-REMPsbased strategy to achieve whole-body imaging or even CLT in mice. Secondly, this strategy is also helpful in current clinical CLI technologies, including endoscopy [3,14] and thyroid imaging [38]. The mean thickness of the human gastrointestinal tract is 3-4 mm, and the thickness can increase to several centimeters under pathological conditions [39]. The mean thickness of the human nuchal skinfold is 5.2 mm [40], and the location of the thyroid gland can be much deeper depending on the thickness of subcutaneous tissue and the capsula glandular thyroidea. As a result, even if the enhancement of CL penetration is modest, from several millimeters to 1-2 centimeters, the sensitivity of CLI in the above clinical applications may be significantly improved while reducing the required radioactive dose. Moreover, the high magnetic moment of certain rare-earth ions, like Gd 3+ , renders REMPs potent contrast agents for magnetic resonance imaging (MRI). Thus, the radionuclides combined with REMPs will be a potential contrast agent for CLI/MRI multimodality imaging.
Although we clarified the feasibility of REMPs-based enhancement of CLI, some limitations remain unresolved. Firstly, in this proof-of-concept study, the images were taken upon physically mixing REMPs and radioactive sources in DMSO. The interactions between the REMPs and radioactive sources were unclear. The emission intensity may be affected by the distance and conjugation behavior between the radionuclide and REMPs. Secondly, the CL energy varies with different radionuclides [7]; thus, other radionuclides should be explored to determine whether better REMPs-assisted CLI could be achieved. Thirdly, although the intensity and penetration of CLI were enhanced about 2-fold, the extent of this enhancement by the addition of REMPs, especially for the intensity of CLI, was not as high as that achieved by QDs in previous studies [18,19]. There is still room for extensive improvement in the energy transfer efficiency of the REMPs to further enhance the intensity and penetration of CL. In addition, according to the research on REMPs-based up-conversion luminescence imaging conducted by Liu et al., the REMPs could be modified, and it is possible to deliver them to the tissue of interest by coating them with PEG and labeling them with radionuclides (for excitation). Therefore, further investigations are still needed to address the above limitations.
Conclusion
This study demonstrated that Er 3+ , Yb 3+ -doped REMPs can be dually excited by Cerenkov luminescence, producing overlaid emissions at 660 nm with improved intensity and penetration. The proposed strategy can significantly enhance the penetration and intensity of Cerenkov luminescence imaging, indicating its potential applications in the clinical settings.
|
2018-04-03T05:38:48.727Z
|
2013-10-25T00:00:00.000
|
{
"year": 2013,
"sha1": "c901347c84b37aefe8815492d79acc06c6f58cf3",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0077926&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c901347c84b37aefe8815492d79acc06c6f58cf3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
}
|
117457587
|
pes2o/s2orc
|
v3-fos-license
|
Pressure fluctuation analysis and extreme pressure prediction in the transient processes of pumped storage power station
The analysis of the hydraulic transient process is very important for the design and operation of a pumped storage power (PSP) station. A one-dimensional mathematical model is often used in the transient process calculation, by which the maximum value at the spiral case inlet and the minimum value at the draft tube inlet can be obtained. However, the one-dimensional mathematical model only provides the cross-section pressure trends at the spiral case inlet and the draft tube inlet, and the pressure fluctuation during load rejection cannot be revealed. Yet the pressure fluctuation amplitude of a pump turbine in the load rejection process has been shown to be large by on-site tests. In a load rejection test, the measured pressure signals are obtained from only one or two taps, are affected by the tap locations, and are polluted by noise. Consequently, there are large differences between the measured extreme values and the numerical extreme values. To improve the accuracy of the theoretical prediction, it is necessary to analyse the measured data and extract the pressure fluctuation to be superposed on the computed results. This paper analyses the load rejection test results in generating mode and pumping mode of a PSP plant using the empirical mode decomposition (EMD) method. The measured results at the spiral case inlet and the draft tube inlet are successfully separated into trend terms and pulsation terms. A comparison between the measured trends and the theoretically calculated results is also performed. The pressure fluctuation value to be superposed on the calculated extreme pressure in the transient process is recommended, which is of great significance for the safe operation of the PSP station.
Introduction
The amplitude of the pressure fluctuation of a pump turbine is much larger than that of a conventional turbine, due to the effects of rotor-stator interference (RSI), rotating stall, vortices and other factors [1] [2]; especially in the load rejection process, the pressure fluctuation amplitude may reach up to 100 meters [3]. However, there are few studies on pressure fluctuation in the transient process. During engineering design, the pressure trends at the spiral case inlet section and the draft tube inlet section can be calculated by the one-dimensional mathematical method [4]. Based on the calculated results, the extreme pressure of the PSP station in the transient process can be forecasted by adding a certain proportion of pressure fluctuation and a calculated deviation taken from empirical values. The values of pressure fluctuation and calculated deviation are generally selected according to the following principles: for a PSP station with a maximum head over 200 m, 5% to 7% of the net head before load rejection is usually taken as the pressure fluctuation value at the spiral case inlet, and a calculated deviation of 10% of the pressure increase can be selected; the pressure fluctuation value at the draft tube inlet is usually taken as 2% to 3.5% of the net head before load rejection, and the calculated deviation can be selected as 7% to 10% of the pressure drop. However, the measured value in load rejection is much larger than the above-mentioned empirical values, because the acquired data are influenced by the measurement method, the measuring point location, the length of the measuring pipeline, the dynamic response characteristics of the sensor, etc. Hence, in this paper we analysed the load rejection test results of a pump turbine at a case power plant using the empirical mode decomposition (EMD) method, and the measured pressures were successfully separated into trend terms and pulsation terms. The peak-to-peak (p-p) values of the pressure pulsation were obtained by analysing the pulsation terms with the 95% confidence interval method, and the calculated deviations were obtained by comparing the trend terms with the numerical simulation results.
EMD method
The EMD method is an adaptive decomposition method proposed by N.E. Huang et al., which can be used to decompose nonlinear and non-stationary signals into a series of frequency-modulated and amplitude-modulated signals. With EMD, the signals are decomposed into a series of Intrinsic Mode Functions (IMFs). An IMF is a function that satisfies two conditions: (1) the number of extrema and the number of zero crossings in the whole data set must either be equal or differ at most by one; (2) at any point, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is zero [5].
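The two IMF conditions can be checked numerically. The sketch below is a simplified test (no special end-point treatment for the spline envelopes) and assumes NumPy and SciPy, which the paper does not specify.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def is_imf(x, tol=0.05):
    """Rough IMF test: extrema/zero-crossing counts differ by at most one,
    and the mean envelope is small relative to the signal amplitude."""
    x = np.asarray(x, dtype=float)
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    zero_crossings = int(np.sum(np.diff(np.signbit(x).astype(int)) != 0))
    if abs(len(maxima) + len(minima) - zero_crossings) > 1:
        return False
    if len(maxima) < 4 or len(minima) < 4:
        return False                                   # too few extrema for spline envelopes
    t = np.arange(len(x))
    upper = CubicSpline(maxima, x[maxima])(t)          # upper envelope (extrapolated at ends)
    lower = CubicSpline(minima, x[minima])(t)          # lower envelope
    mean_env = (upper + lower) / 2.0
    return float(np.max(np.abs(mean_env))) < tol * float(np.max(np.abs(x)))
```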
A signal x(t) can be decomposed with EMD as follows [6]:
(1) Calculate all the extreme points of the signal x(t).
(2) Interpolate all the maxima with a cubic spline function to obtain the upper envelope, defined as u(t); then repeat the procedure for all the minima to produce the lower envelope, defined as l(t).
(3) The mean value of u(t) and l(t) is designated as m1(t), that is

m1(t) = [u(t) + l(t)] / 2    (2.1)

The difference between x(t) and m1(t) is the first component h1(t), that is

h1(t) = x(t) - m1(t)    (2.2)

If h1(t) is not an IMF, the above procedure is repeated k times until h1k(t) is an IMF, which is then designated as

c1(t) = h1k(t)    (2.3)

in which c1(t) is the first IMF component of the data. The sifting process can be stopped by the following criterion [5]:

SD = Σ_t |h1(k-1)(t) - h1k(t)|^2 / h1(k-1)^2(t)    (2.4)

The above steps are called the sifting process of the intrinsic mode function, and the first IMF component c1(t) is obtained by this process.
c1(t) can be separated from the rest of the data by equation (2.5):

r1(t) = x(t) - c1(t)    (2.5)

The residue r1(t) is treated as the new data and subjected to the same sifting process as described above. This procedure can be repeated on all the subsequent rj(t), and the result is

x(t) = Σ_{i=1..n} ci(t) + rn(t)    (2.6)

in which ci(t) is the i-th order IMF. According to the literature [7], the lower-order IMFs represent the higher-frequency components, and the higher-order IMFs represent the lower-frequency components; that is, the frequency content represented by the IMFs decreases gradually as the order increases. The residual signal, which reflects the signal trend, can also be treated as the highest-order IMF.
By summing up the higher-order IMFs (orders m to n) and the residue, we can obtain the trend term of the pressure, that is

p_trend(t) = Σ_{i=m..n} ci(t) + rn(t)    (2.7)

By summing up the first m-1 IMFs, we obtain the high-frequency component of the pressure signal, which is the difference between the original signal and the trend term and is defined as the fluctuation term, that is

p_fluct(t) = Σ_{i=1..m-1} ci(t) = p(t) - p_trend(t)    (2.8)
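As a sketch of the split in equations (2.7) and (2.8), the code below leans on the third-party PyEMD package for the sifting itself; this is an assumption, since the paper does not name its EMD implementation, and n_trend (the number of low-frequency IMFs folded into the trend) is a user choice.

```python
import numpy as np
from PyEMD import EMD   # assumption: the PyEMD (EMD-signal) package is installed

def split_trend_fluctuation(pressure, n_trend=3):
    """Return (trend, fluctuation) of a measured pressure series."""
    pressure = np.asarray(pressure, dtype=float)
    imfs = EMD().emd(pressure)                      # rows: IMFs, highest frequency first
    residue = pressure - imfs.sum(axis=0)           # whatever the sifting left over
    trend = imfs[-n_trend:].sum(axis=0) + residue   # low-frequency IMFs + residue, eq. (2.7)
    fluctuation = pressure - trend                  # eq. (2.8)
    return trend, fluctuation
```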
Analysis method of the pressure fluctuation
The fluctuation terms are analysed with the 95% confidence interval, a statistical concept described as follows. To estimate the overall average by interval estimation, three elements are required: the point estimator (the sample mean x̄), the average sampling limit deviation Δx, and the confidence degree F(t). The confidence interval is

x̄ - Δx ≤ μ ≤ x̄ + Δx    (2.10)

in which Δx = tσ, where t is the probability coefficient and σ is the mean square deviation of the sample, that is

σ = sqrt( (1/n) Σ_{k=1..n} (x_k - x̄)^2 )    (2.11)

When F(t) is 0.95, t is 1.96 according to the normal distribution probability table.
The process of analysing the pressure fluctuation is as follows. If the sampling frequency of the measured pressure is n, we treat 1 second as a statistical interval; then the mean pressure fluctuation is obtained by the moving average method according to equation (2.12):

p̄_k = (1/n) Σ_{j=k..k+n-1} p_j    (2.12)

in which p_j is the fluctuation pressure at time point j. Replacing x̄ and x_k in equation (2.11) with p̄_k and p_j, we obtain the mean square deviation of the pressure pulsation, that is

σ_p = sqrt( (1/n) Σ_{j=k..k+n-1} (p_j - p̄_k)^2 )    (2.13)

When the confidence level is 95%, the confidence interval of the pressure pulsations is obtained according to equation (2.10); then all the maxima within the confidence interval are connected as the upper envelope, and all the minima within the confidence interval are connected as the lower envelope.
If the fluctuation value at time k is greater than the corresponding point of the upper envelope, its value is set equal to that point of the upper envelope; if the fluctuation value at time k is less than the corresponding point of the lower envelope, its value is set equal to that point of the lower envelope.
The peak-to-peak value of the pressure fluctuation within the specified confidence interval is then obtained.
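A compact sketch of the clipping procedure just described, assuming non-overlapping 1 s windows and Δx = 1.96·σ within each window (the exact form of Δx is left slightly ambiguous in the text); fs is the sampling frequency and fluct is the fluctuation term obtained from the EMD split.

```python
import numpy as np

def peak_to_peak_95(fluct, fs):
    """Per-second peak-to-peak values of the clipped fluctuation signal."""
    fluct = np.asarray(fluct, dtype=float)
    n = int(fs)                                    # one-second statistical interval
    p2p = []
    for k in range(0, len(fluct) - n + 1, n):
        w = fluct[k:k + n]
        half_width = 1.96 * w.std(ddof=1)          # Δx = t·σ with t = 1.96 (95%)
        upper, lower = w.mean() + half_width, w.mean() - half_width
        clipped = np.clip(w, lower, upper)         # clip to the confidence envelope
        p2p.append(clipped.max() - clipped.min())  # peak-to-peak within the window
    return np.array(p2p)
```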
Decomposition of the measured pressures
Taking a PSP station as an example, the measured pressures at the spiral case inlet and at the draft tube inlet in load rejection were analysed.
The measured pressures at spiral case inlet were decomposed into the trend terms and pulsation terms by EMD method. Comparison between the measured trend terms and the numerical calculated results was also performed, which is illustrated in figure 1.
Fluctuation terms were analysed with the 95% confidence interval, and the peak-to-peak amplitude of the fluctuation within the confidence interval was obtained, as illustrated in figure 2.
By summing up the pressure fluctuations in the confidence interval and the trend terms, the pressures in the confidence interval were obtained, which is illustrated in figure 3.
The measured pressures at the draft tube inlet in load rejection were analysed as the same process, which are illustrated in figure 4 to figure 6.
Results analysis of a PSP power station
In this section, the measured pressures at the spiral case inlet and the draft tube inlet of a PSP station were decomposed into trend terms and fluctuation terms with the EMD method. The numerical simulation model of this station was established with LTS-SJD-Model, a professional software package for the calculation of hydraulic-mechanical transient processes in hydropower stations. The calculated deviations were obtained by comparing the trend terms with the calculations, and the peak-to-peak values of the fluctuation pressures in the transient process were obtained by analysing the fluctuation terms with the 95% confidence interval method. The extreme pressures of this station in the transient process were then predicted.
Basic parameters and numerical simulation model of the PSP station
The numerical simulation model of the PSP station is shown in figure 7, and the basic parameters of this station are shown in table 1. The cases listed in table 2 and table 3 were simulated numerically; they are the same as the tested cases of load rejection.
Analysis results
The calculated extreme pressures and the tested trend terms are listed in table 4 and table 5, and the peak-to-peak (p-p) values of the pressure fluctuation are listed in table 6 and table 7. In table 4 and table 5, the difference is equal to the extreme pressure of the tested trend term minus the calculated value, which is also called the calculated deviation. The percentages in table 4 to table 7 are relative to the net head before load rejection. The calculated deviations at the spiral case inlet and the draft tube inlet in turbine mode and pump mode are illustrated in figure 10 to figure 13, and the peak-to-peak (p-p) values of the pressure fluctuations at the spiral case and the draft tube in turbine mode and pump mode are illustrated in figure 14 to figure 17. From table 4 to table 5 and figure 9 to figure 11, it can be seen that the calculated deviations in pump mode are smaller than those in turbine mode; the calculated deviations at the spiral case vary from -6.27% to 10.91% in turbine mode and from -1.81% to 3.58% in pump mode, and the calculated deviations at the draft tube vary from -4.09% to 2.01% in turbine mode and from -0.71% to 2.43% in pump mode. The initial calculated deviations at the draft tube are less than 1% and the initial calculated deviations at the spiral case are less than 2%, except for the case in which U2 rejects load in turbine mode. From table 6 to table 7 and figure 12 to figure 15, we can find that the p-p values of pressure fluctuation in pump mode are smaller than those in turbine mode. The p-p values of pressure fluctuation at the draft tube vary from 0.09% to 7.87% in turbine mode and from 0.18% to 5.87% in pump mode, and the p-p values of pressure fluctuation at the spiral case vary from 1.62% to 14.14% in pump mode and from 14.21% to 36.87% in turbine mode, except for the case of U3 and U4 rejecting 100% load simultaneously.
It should be noted that, under the same condition, the pressure fluctuation of U4 is larger than that of U3. The reason for this phenomenon is that, for U4, the water in the measuring pipeline resonates with the pipeline itself, which results in a larger pressure fluctuation.
Therefore, after analysing the measured pressures comprehensively, it is appropriate to choose the mean values of the pressure fluctuations and calculated deviations of each unit as the correction values. For this PSP station, the correction values of pressure fluctuation and calculated deviation listed in table 8 are suggested.
Extreme Pressure Prediction
As shown in table 9, numerical simulations of 3 cases were performed, and the calculated and predicted results are listed in table 10, in which the predicted value equals the calculated value plus or minus the correction value according to table 8. The predicted pressure at the spiral case inlet is 752.34 m, which is less than the design pressure of 784 m, and the predicted pressure at the draft tube inlet is -2.33 m, which is above the allowable -8 m. Therefore, the safety of this station in the transient process can be guaranteed.
|
2019-04-16T13:28:40.276Z
|
2018-07-30T00:00:00.000
|
{
"year": 2018,
"sha1": "de44c78d83c7cb4d6cb322544977232502a317d9",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/163/1/012097/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "7eddd09f12e124292f8cc109722576bbd1ffda05",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
267310015
|
pes2o/s2orc
|
v3-fos-license
|
COVID-19 pandemic and its impact on medical interns’ mental health of public and private hospitals in Guadalajara
ABSTRACT Introduction Burnout syndrome is a global burden characterized by exhaustion, work detachment, and a sense of ineffectiveness. It affects millions of individuals worldwide, with a particularly high prevalence among medical students. Factors such as demanding education, exposure to suffering, and the COVID-19 pandemic have contributed to elevated stress levels. Addressing this issue is crucial due to its impact on well-being and health-care quality. Materials and methods This cross-sectional survey study assessed fear of COVID-19 and burnout levels among medical student interns in hospitals in Guadalajara, Jalisco. The study used validated scales and collected data from September 2021 to September 2022. A snowball sampling method was employed and a minimum sample size of 198 participants was calculated. Results This study included 311 medical students (62.1% female and 37.9% male with a mean age of 23.51 ± 2.21 years). The majority were in their second semester of internship (60.5%) and from public hospitals (89.1%). Most students believed that the COVID-19 pandemic affected the quality of their internship (82.6%). Female students had higher personal burnout scores, while male students had higher work-related burnout scores. The mean score for fear of COVID-19 was 13.71 ± 6.28, with higher scores among women (p = 0.004) and those from public hospitals (p = 0.009). A positive weak correlation was found between COVID-19 scores and burnout subscales. Conclusion Our study emphasizes the significant impact of various factors on burnout levels among medical students and health-care professionals during the COVID-19 pandemic. Prolonged exposure to COVID-19 patients, reduced staffing, and increased workload contributed to burnout, affecting well-being and quality of care. Targeted interventions and resilience-building strategies are needed to mitigate burnout and promote well-being in health-care settings.
Introduction
Burnout syndrome refers to a prolonged psychological response to chronic occupational stressors characterized by overwhelming exhaustion, job disengagement, a sense of ineffectiveness, and lack of professional achievement [1]. It is a global burden for millions around the globe, with frequencies reported as high as 4.2% in the general population [2] and as high as 12.1% in university students from high- and middle-income countries [3]. Of the latter, medical students are especially susceptible [4].
One of the primary responsibilities of medical schools is to educate future physicians, such as helping them to develop clinical competence in patient diagnosis and treatment through intensive theoretical and clinical instruction.Taken together with academic aspirations, learning environment, personal life events, exposure to human suffering and educational debt, this extensive training contributes to heightened levels of stress, overwhelm, and poor mental health [5].Some occupations are more vulnerable than others to the effects of burnout, such as health-care workers and students [6].Forty-nine percent and 28%-61% of US and Australian medical students have reported burnout, respectively [7].In Mexico, medical students in clinical rotations experience great amounts of emotional distress leading to burnout [4].
This susceptibility to burnout has been increasing in recent years since the beginning of the COVID-19 pandemic [8,9].Pandemic-related reasons for burnout, such as the withdrawal of medical students from clinical rotations during the initial peak phase as well as fear of infection [10] contributed to an increase in fear and anxiety in a population that was already susceptible [11].As a result of the pandemic, medical students suffered significant disruptions to their education and the majority experienced heightened burnout and stress [12].
In a global event such as the COVID-19 pandemic, stress may increase when facing a new disease where the mechanism of transmission, treatment, or complications in the short or long term are unknown.In addition to being in mandatory quarantine, this situation has profoundly serious psychological consequences [13][14][15][16].Among hospital personnel, the absence of protective equipment, long working hours, isolation from families and loved ones, and the constant fear of contagion increased the frequency of depressive episodes, suicidal ideation, anxiety, insomnia, substance abuse, and burnout syndrome, which all decreased emotional well-being [17][18][19][20][21].
In the medical field, chronic stress can manifest as disengagement from patients, decreased dedication, disregard for patient emotions, and a lack of tact when delivering care.These behavioral changes frequently coincide with feelings of inadequacy and decreased motivation to maintain high-quality service delivery, which can fundamentally alter physicians' baseline disposition.These stress manifestations can occur at various stages of a medical practitioner's career and tend to worsen as practitioners approach retirement.The issue of chronic stress among healthcare professionals has been addressed inadequately; hence, underestimating its significance is a significant concern in the field, despite its high prevalence.
A recent study on the psychological experience of health-care personnel who interact with patients with COVID-19 discovered that the prevalence of fear and anxiety significantly increased during the early stages of the outbreak, primarily due to the high intensity of work and concern for patients and their families [22].A metaanalysis of 44 studies revealed that hospital staff exhibited greater fear than the general population [10].Concerning the mental health of medical students on average, an increase in anxiety, fear, and a poor mental state have been discovered [11].Locally, an analysis of hospital personnel in the metropolitan area of Guadalajara in Jalisco, Mexico, revealed that medical students experienced higher levels of fear than professional medical personnel [17].
Aims
The objective of this study was to characterize the association between burnout syndrome and fear of COVID-19 on the quality of interns among a sample of undergraduate medical interns who worked in Guadalajara hospital units as part of their studies.In addition, we aimed to identify whether the internship semester and student gender or vaccination status had any association with the presence of burnout syndrome, levels of fear toward COVID-19, and the interns' perception of the quality of their internship.
Study design
We used a cross-sectional analytical survey design in this study, where the validated Spanish versions of the Fear of COVID-19 Scale [23] and the Copenhagen Burnout Inventory (CBI) scale [24], were applied to medical students performing their internship year in hospital units in Guadalajara, Jalisco, from September 2021 to September 2022.The survey included questions about their age, gender, the internship semester they were in, vaccination status, and if they thought the pandemic had affected the quality of their internship.
To create a snowball sampling effect, we encouraged the participants to distribute the survey to other undergraduate and graduate students.We included all medical students who were attending their undergraduate internship in hospital units in Jalisco.We excluded those students who did not answer the survey completely, those who were still studying before completing their undergraduate internship semester and those who had previously completed their undergraduate internship.Participants who returned an incomplete survey or mentioned wanting to leave the study were also excluded.
Sample size
The sample size was calculated using a formula for estimating a frequency in finite populations, considering that the estimated population of medical interns in the hospitals to be studied was 1000. Miranda-Ackerman et al. [4] observed that 20% of undergraduate medical interns reported burnout, while Maske et al. [2] found that 4.2% of the general population experienced burnout syndrome. Taken together, these data suggest that a minimum of 198 interns needed to respond to this survey for the study results to be valid.
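For reference, a small sketch of a finite-population sample-size calculation that reproduces the minimum of 198 respondents; N = 1000 and p = 0.20 come from the text, while the 5% precision and 95% confidence level are assumed inputs.

```python
import math

def sample_size_finite(N, p, d=0.05, z=1.96):
    """Minimum sample size for estimating a proportion p in a finite population N
    with absolute precision d at the confidence level implied by z."""
    num = N * z**2 * p * (1 - p)
    den = d**2 * (N - 1) + z**2 * p * (1 - p)
    return math.ceil(num / den)

print(sample_size_finite(N=1000, p=0.20))   # -> 198
```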
Copenhagen burnout inventory
The CBI is a 19-item questionnaire that measures the prevalence of burnout. The CBI scale was created as an alternative to the Maslach Burnout Inventory scale [25]; it emphasizes different burnout categories rather than focusing on the syndrome construct. The CBI scale comprises six items for personal burnout, six items for work-related burnout, and seven items for patient-related burnout. Participants must choose which options represent their perceptions of the statements presented in each section. The options and scores were: 'never' (0 points), 'only sometimes' (25 points), 'sometimes' (50 points), 'many times' (75 points), and 'always' (100 points). We found high reliability in the personal burnout subscale (i.e., 0.880), the work-related burnout subscale (i.e., 0.891), and the patient-related burnout subscale (i.e., 0.865). The proposed cutoff scores for assessing burnout levels were as follows: a mean score below 50 indicated no/low burnout, a mean score of 50-74 indicated moderate burnout, a mean score of 75-99 indicated high burnout, while a score of 100 indicated severe burnout [26].
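A short sketch of how a CBI subscale is scored under the mapping and cutoffs quoted above; the example responses are invented for illustration.

```python
SCORES = {"never": 0, "only sometimes": 25, "sometimes": 50,
          "many times": 75, "always": 100}

def cbi_subscale(responses):
    """Mean subscale score (0-100) and its severity category."""
    values = [SCORES[r] for r in responses]
    mean = sum(values) / len(values)
    if mean < 50:
        severity = "no/low"
    elif mean < 75:
        severity = "moderate"
    elif mean < 100:
        severity = "high"
    else:
        severity = "severe"
    return mean, severity

# Hypothetical six-item personal-burnout subscale:
print(cbi_subscale(["always", "many times", "sometimes",
                    "many times", "always", "sometimes"]))   # (75.0, 'high')
```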
Data analysis
The descriptive analyses included proportions, means, and standard deviations. We performed an inferential analysis using the chi-squared test, analysis of variance (ANOVA), and Student's t-test. In addition, we performed a post hoc analysis using Tukey's honestly significant difference and Bonferroni tests. A probability level of p < 0.05 was considered significant. All variables were assessed using Levene's test for equality of variances, assuming that all variables had a parametric distribution. We used Pearson's correlation to determine whether there was a significant relationship between CBI and FCV-19 scores. We conducted the data analysis using SPSS Statistics software.
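A minimal sketch of the inferential tests listed above, assuming SciPy rather than SPSS; all arrays and the contingency table are illustrative placeholders, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
women = rng.normal(77, 15, 193)            # placeholder personal-burnout scores
men = rng.normal(72, 16, 118)
cbi = np.concatenate([women, men])
fcv = 0.3 * cbi + rng.normal(0, 10, 311)   # placeholder FCV-19 scores

print(stats.levene(women, men))            # Levene's test for equality of variances
print(stats.ttest_ind(women, men))         # Student's t-test (p < 0.05 significant)
print(stats.f_oneway(cbi[:100], cbi[100:200], cbi[200:]))   # one-way ANOVA
table = np.array([[150, 43], [80, 38]])    # placeholder 2x2 contingency table
print(stats.chi2_contingency(table))       # chi-squared test
print(stats.pearsonr(cbi, fcv))            # Pearson correlation, CBI vs FCV-19
```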
Ethical considerations
We obtained written authorization from each participant.The surveys were anonymous to guarantee the confidentiality of each participant.This study follows national committees' ethical standards for human experimentation and the 2013 Declaration of Helsinki.We submitted the study protocol to ClinicalTrials.govand it was registered (NCT04420416).The National Ethics Committee and the National Scientific Research Committee authorized the study protocol (R-2021-1301-188).
Results
We included 311 students, of whom 193 (62.1%) were women and 118 (37.9%) were men, with a mean age of 23.51 ± 2.21 years. Most students (188, 60.5%) were in their second semester of internship, while 123 (39.5%) were in their first internship semester. Most students (277, 89.1%) were from public hospitals, while 34 (10.9%) were from private hospitals. Two hundred fifty-seven students (82.6%) believed that the COVID-19 pandemic affected the quality of their internship, while 54 (17.4%) believed that the pandemic did not impact the quality of their internship. Regarding COVID-19 vaccinations, most students (270, 86.8%) had received two immunization shots. Table 1 presents the students' demographic characteristics.
Considering CBI subscale scores, the mean personal burnout score was 75.01 ± 15.68 and the mean work-related burnout score was 61.02 ± 19.81, while the mean patient-related burnout score was 37.13 ± 21.39.
When comparing genders and CBI subscales, female students reported higher personal and workrelated burnout scores, but only the personal burnout score was statistically significant (p = 0.001).Personal and work-related burnout scores were higher among first-semester students, but the difference was not statistically significant.When comparing hospital types, students from public hospitals had higher scores on all CBI subscales, but this was also not statistically significant.When comparing CBI subscale scores and whether students believed the COVID-19 pandemic affected the quality of their internship, students who believed COVID-19 affected internship quality showed higher scores on all subscales.Only the difference in patient-related burnout scores was statistically significant (p = 0.001).When comparing the CBI subscale scores and hospital attendance, those students who attended during their service and stayed on-call had overall higher scores in all subscales.The difference between personal and work-related burnout scores was statistically significant (p = 0.002).Finally, when comparing the CBI subscale mean scores with the year of their medical internship, students who began their internship in 2022 reported higher scores in all subscales, but only patient-related burnout subscale scores were statistically significant (p = 0.001).Overall, 291 (93.56%) students reported moderate to severe burnout on the personal burnout subscale, 238 (76.52%) on the work-related burnout subscale, and 82 (26.36%) on the patient-related burnout subscale.When comparing gender and personal burnout severity, we found a greater proportion of high and severe burnout among female students that was statistically significant (p = 0.004).Similarly, a significant difference was found among firstsemester students (p = 0.028) and student interns in public hospitals (p = 0.018), who showed high and severe burnout, respectively.Regarding work-related burnout, male students had a greater proportion of high and severe burnout when compared with female students; this difference was statistically significant (p = 0.006).In addition, the students who attended their hospital internship and stayed on-call had a greater proportion of high and severe burnout when compared with those who only attended calls (p = 0.002).Finally, when comparing the severity of patient-related burnout among the students, most students had no or low burnout.Only those who started their hospital internship in 2022 had statistically significant higher proportions of moderate, high, or severe burnout when compared to those who started their internship in 2021 (p = 0.020).Table 3 shows the complete comparison of burnout severity.
The mean FCV-19 score was 13.71 ± 6.28. When comparing gender and FCV-19 scale mean scores, female students had significantly higher mean scores (p = 0.004). In addition, when comparing the type of hospital, students in public hospitals had higher mean scores; this difference was statistically significant (p = 0.009). However, there were no significant differences when comparing semesters, whether COVID-19 affected the quality of their internship, hospital attendance, and year of medical internship. Table 4 compares the FCV-19 mean scores.
Discussion
This investigation aimed to ascertain the correlation between burnout syndrome and fear of COVID-19 on the quality of undergraduate interns, in addition to determining whether their demographic characteristics were linked to these variables.Within the categories of personal and work-and patient-related burnout out of a sample of 311 individuals, a total of 20 (6.4%), 73 (23.5%), and 229 (73.6%) individuals, respectively, exhibited either no or low levels of burnout.Substantial discrepancies were observed among the burnout subscales across different demographic groups.Personal burnout was found to be more prevalent among women, first-semester college students, and public hospital employees.The highest prevalence of high and severe occupational burnout was observed among men and those undertaking 24-hour shifts.Although the occurrence of patient-related burnout was relatively low within our sample, the prevalence of moderate, high, and severe burnout was significantly greater among students who began their rotation in 2022 compared to those who began in 2021.These findings align with prior research conducted in the United Kingdom [28] and Malaysia [29], which identified a significant surge in burnout among health-care personnel, particularly frontline workers, in the context of the COVID-19 pandemic.
Considering the COVID-19 fear scale, there was a notable increase in scores among women and public hospital employees.A positive, albeit weak, correlation was observed between the FCV-19 scale and CBI.This finding is consistent with earlier research conducted by García [30], which highlights health-care personnel as a vulnerable group in the pandemic response, with 20% of COVID-19 cases attributed to frontline health-care workers.
Our study also aligns with the literature, such as Sheehan et al. [31], Ferro et al. [32], and Caillet et al. [33], who reported a higher prevalence of characteristics related to burnout symptoms among female undergraduate interns.Variations in specific staffing characteristics were noted, such as those based on residential care workers, emergency department staff, or intensive care unit staff.When examining the CBI subscales in relation to demographic variables, we observed that secondsemester students working in public hospitals had higher personal and work-related burnout scores.This finding contrasts with a study conducted by Rosas-Paez et al. [34], which did not identify a significant association between these characteristics when comparing them with items from the Utrecht Work Engagement Scale and the Maslach Burnout Inventory.
The COVID-19 pandemic subjected many medical students to elevated levels of stress and uncertainty, along with disruptions to their education and clinical training.Concerns for their personal safety have further compounded these challenges, which have likely contributed to an increased susceptibility to burnout among medical students.
Stress is an adaptation mechanism to the environment resulting from cognitive and organic processes.When stress is experienced continuously or repeatedly, it can develop into a chronic condition.This phenomenon is known to result in a variety of maladaptive mechanisms, which can eventually lead to neurological, metabolic, hormonal, and even cardiovascular alterations [35].
As a form of chronic stress, work-related stress is of the utmost importance to Mexico.In 2019, the World Health Organization reported that up to 75% of the Mexican population experiences high levels of work-related stress.This can lead to the development of burnout syndrome [36], which has been associated with high-stress occupations, caregiving occupations, and jobs with long working hours [37].
Isolation has led to a rise in stress and anxiety among the general population and hospitalized patients in particular [18,19,38,39].Both institutional and individualized interventions for student physicians within medical institutions to address burnout could reduce the incidence of this syndrome and early strategies for treatment and prevention could be planned.
Implementing targeted interventions to reduce burnout and promote well-being is essential for maintaining the resilience and effectiveness of healthcare providers in delivering high-quality care to patients, while also nurturing a supportive learning environment for medical students.Future research should focus on identifying effective strategies to mitigate burnout and enhance resilience in healthcare settings, not only during pandemics but also ordinary circumstances.
It is important to acknowledge the limitations of our study.The findings may not be generalizable to other populations or settings due to the specific sample of medical students from a particular region.The reliance on self-report measures and the crosssectional design restricts our ability to establish causality or temporal relationships.The use of convenience sampling and the potential for response bias may have influenced the results.Despite these limitations, our study provides valuable insights for future research and interventions addressing burnout among medical students.
Table 2 .
Comparison between demographic variables and CBI mean scores.
Table 3 .
Comparison between demographic variables and burnout severity.
Table 4 .
Comparison between demographic variables and FCV-19 scale mean scores.
|
2024-01-30T06:17:42.763Z
|
2024-01-28T00:00:00.000
|
{
"year": 2024,
"sha1": "e8de5f29e3779146d5cc38fb6dbc3ab9f5fb4687",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "2adb74a38118d49749091c11e2dfc8c97c1416ca",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
252622157
|
pes2o/s2orc
|
v3-fos-license
|
Study protocol: primary healthcare transformation through patient-centred medical homes—improving access, relational care and outcomes in an urban Aboriginal and Torres Strait Islander population, a mixed methods prospective cohort study
Introduction For over 40 years, Aboriginal and Torres Strait Islander Community-Controlled Health Services (ACCHS) in Australia have led strategic responses to address the specific needs of Aboriginal and Torres Strait Islander populations. Globally, there has been rapid growth in urban Indigenous populations requiring an adaptive primary healthcare response. Patient-centred medical homes (PCMH) are an evidence-based model of primary healthcare suited to this challenge, underpinned by principles aligned with the ACCHS sector: relational care responsive to patient-identified healthcare priorities. Evidence is lacking on the implementation and effectiveness of the PCMH model of care governed by, and delivered for, Aboriginal and Torres Strait Islander populations in large urban settings.
Method and analysis Our multiphased mixed-methods prospective cohort study will compare standard care provided by a network of ACCHS to an adapted PCMH model of care. Phase 1, using qualitative interviews with staff and patients and quantitative analysis of routine primary care health record data, will examine the implementation, feasibility and acceptability of the PCMH. Phase 2, using linked survey, primary care and hospitalisation data, will examine the impact of our adapted PCMH on access to care, relational and quality of care, health and wellbeing outcomes and economic costs. Phase 3 will synthesise evidence on mechanisms for change and discuss their implications for the sustainability and transferability of PCMHs to the broader primary healthcare system.
Ethics and dissemination This study has received approval from the University of Queensland Human Research Ethics Committee (2021/HE00529). This research represents an Aboriginal led and governed partnership in response to identified community priorities. The findings will contribute new knowledge on how key mechanisms underpinning the success and implementation of the model can be introduced into policy and practice. Study findings will be disseminated to service providers, researchers, policymakers and, most importantly, the communities themselves.
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ The prospective cohort design supports data collection from intervention and standard care sites to determine the impact of the patient-centred medical home (PCMH) on access, quality of care and health outcomes.
⇒ Triangulation of quantitative and qualitative data will enable examination of implementation, feasibility and acceptability of the PCMH from the perspective of health providers and patients.
⇒ Participatory action research, which privileges Aboriginal and Torres Strait Islander worldviews, knowledge, realities and terms of reference, will guide the conduct of the study.
⇒ Randomisation was not feasible in this real-world primary healthcare context, where the priority for implementation of a significant system reform was site readiness, and randomisation may also be negatively perceived by the community as restricting access to the new model of care.
⇒ Specific measures of patients' self-reported experience were developed for the study, as validated culturally modified measures of self-reported patient experiences are limited.

INTRODUCTION
Health-promoting and resilience factors, such as connection to culture, country and community, and agency, 1-3 are fundamental for Aboriginal and Torres Strait Islander health and wellbeing. However, these protective factors have been undermined by ongoing colonisation and resultant intergenerational trauma. 4 Consequently, Aboriginal and Torres Strait Islander peoples experience high levels of both non-communicable and communicable diseases. 5 Added to this, improvements in healthcare over the last 50 years have resulted in an increase in the number of Aboriginal and Torres Strait Islander people reaching older age as well as a booming younger population, 6 with trends projected to increase significantly over the coming decades. 7 Furthermore, recent global trends towards urbanisation of Indigenous peoples are also reflected in the Australian context, with rapid population growth most evident in urban settings. 6 This population growth, together with changes in the age distribution and overall health levels, has required an adaptive approach to healthcare, particularly primary healthcare (PHC). Aboriginal and Torres Strait Islander Community-Controlled Health Services (ACCHS) are holistic PHC services, delivered and governed by Indigenous peoples for Indigenous peoples. 8 9 Established in the 1970s, the formation of ACCHS across Australia was a political and strategic response to the health and social inequities experienced by Aboriginal and Torres Strait Islander peoples. In 2009, in response to significant growth and geographic dispersal of Aboriginal and Torres Strait Islander peoples in the South-East region of Queensland, the Institute for Urban Indigenous Health (IUIH) was established to drive innovation in delivery of health and family wellbeing services. 9 The region is one of the most populous (home to more than 11% of Aboriginal and Torres Strait Islander peoples) and fastest-growing population areas in Australia. 6 Over a 10-year period, IUIH and its member services (the 'IUIH network') have increased service coverage of the Aboriginal and Torres Strait Islander population in the region from 16% to 45%, with the number of regular patients now just under 40 000. 9 Through this, and the consequent improved relational care delivered, 10 substantial gains in health outcomes have been observed. 11 12 However, for the IUIH to continue to respond to identified community needs, and build on these health gains, further redesign of the current system of PHC was necessary.
The patient-centred medical home (PCMH) is a model of healthcare delivery that has been implemented to address the challenge of growing urban populations with complex care needs, internationally and in Australia. [13][14][15] Defining features of PCMH models include multidisciplinary team-based care, voluntary enrolment of patients with a team of providers, patient education and self-management, the use of technology to support patient care (including data-driven improvements in care) and service planning and coordination. 16 17 Conceptually, PCMHs operationalise the core functions of PHC (universal access, comprehensive care provided within people's community, coordination, relational continuity of care and intersectoral collaboration) 18 19 with an explicit focus on-and responsiveness to-patient needs.
International evidence has established that PCMH models contribute to reductions in hospital admissions and improved clinical outcomes in diabetes, asthma and preventative care and patient satisfaction. 14 20 21 Similar findings were observed for the Southcentral Foundation's model of PCMH in Alaska, the only published example of a PCMH implemented for, and by, Indigenous peoples. 22 There are few published Australian studies examining the implementation of the PCMH model, 23 24 and none examining the effectiveness of the model for improving quality of care or health outcomes, including in urban Indigenous communities.
Leveraging findings from a pilot study within the IUIH network, this study will undertake an evaluation of a PCMH adapted by, and for, the South-East Queensland Aboriginal and Torres Strait Islander community (IUIH PCMH System of Care - ISoC2). Conducted over 5 years, our study will extend the pilot study and expand the research programme to a second, larger health hub. This Indigenous led and governed study will generate evidence on implementing a PCMH for a large urban Aboriginal and Torres Strait Islander population in Australia. Furthermore, it will contribute new knowledge on: the effectiveness of such a model for improving access, relational care and health outcomes; the impact on economic costs; and the transferability and scalability of the model for the broader PHC sector.
Objectives
The overall aim of this study is to undertake a process and outcome evaluation of an adapted model of a PCMH (ISoC2) at two ACCHS in South-East Queensland. Specifically, this study aims to:
1. Examine the process of implementing ISoC2, including how model elements are operationalised and the extent to which the model is delivered as planned (fidelity).
2. Identify barriers and enablers to implementation and delivery (feasibility) of ISoC2 and explore its acceptability to staff and patients.
3. Evaluate the effectiveness and economic impact of ISoC2 by quantifying changes in access, quality of care (with a specific focus on relational qualities of that care) and health outcomes, following implementation compared with baseline and standard care.
METHODS AND ANALYSIS
Study setting
The IUIH network is the largest provider of PHC to the South-East Queensland Aboriginal and Torres Strait Islander population. The standard model of care offered at each of the IUIH network's 20 clinics supports universal primary care, with a blended payment model and provided at no cost to the patient. A range of comprehensive services and programmes are made available as a one-stop shop for patients. 9 The study will evaluate two IUIH ACCHS located in the greater Brisbane region of Queensland, Australia. Collectively, the clinics provide services to almost 4000 Aboriginal and Torres Strait Islander patients. These services were the first sites to have their premises redesigned and workforce reconfigured to support the ISoC2 model, the first beginning in 2019 and the second in early 2020. The study evaluation began in June 2020 and will conclude in June 2025 (6 and 5 years post-implementation of ISoC2 at sites one and two, respectively).
Intervention
Adapted from an Alaskan Native community-controlled health service, 22 25 ISoC2 builds on the strengths of the existing IUIH model of care through adaptations intended to: strengthen access, relationship-based care, patient engagement and agency; improve health outcomes; increase efficiency by directing resources within the service to deliver greatest impact and to scale the service model to cater for growing demand. Figure 1 summarises the key changes in the care pathways that will result from implementation of ISoC2. In the ISoC2 model, team-based care comprises an Aboriginal or Torres Strait Islander health worker, administrative coordinator, registered nurse and general practitioner (GP, Australia's primary care physicians) (operationally referred to as a 'Pod') working collectively to lead and coordinate care based on the patient's identified health and wellbeing priorities. All staff in these roles were assembled into Pods, with approximately 3-4 Pods per intervention site. During implementation of ISoC2, all clients attending the service were assigned to their preferred Pod, with new clients similarly assigned throughout the evaluation study.
Study design
The study is a mixed-methods prospective cohort study, using a hybrid implementation and clinical effectiveness design (type 1) 26 where the effect of an intervention on outcomes is tested while gathering information on implementation. The study will evaluate the model of care over three sequential phases, from June 2020 to June 2025 (figure 2). Phase 1 will examine the implementation of ISoC2 (how it is operationalised, and its feasibility and acceptability from the perspective of patients and health staff). Phase 2 will examine the effectiveness of ISoC2 on access to care, quality of care, economic costs and health and wellbeing outcomes. Phase 3 will bring the findings together to synthesise evidence on the process of implementing ISoC2 and mechanisms for change, sustainability and the translation of PCMHs and their key elements into the broader PHC system. Randomisation was not feasible in this real-world context, where the priority for implementation of a significant system reform was site readiness. Further, randomisation may also be negatively perceived by the community as restricting access to the new model of care.
Participatory action research was used to codesign a programme logic to guide the research process, ensuring that the IUIH Cultural Integrity Framework 2 underpins the evaluation and Aboriginal and Torres Strait Islander worldviews, knowledge, realities and terms of reference are privileged throughout the research process. [27][28][29] Participatory action research will also be used to feed back research findings to health and management staff in real time to inform a continuous process of service refinement as well as the conduct and interpretation of the research itself. A steering committee, representing potential knowledge brokers and knowledge users, including Aboriginal and Torres Strait Islander and non-Indigenous researchers, clinicians, managers and community liaison officers, will oversee the project. A clinical reference group will also meet to provide advice on long-term outcomes, including those related to linked emergency department admissions and potentially preventable hospitalisations. Both the steering committee and clinical reference group will provide oversight with respect to data sovereignty, ensuring that what is measured is meaningful, culturally and clinically.
Study participants
Study participants include staff and regular patients (defined as at least three visits in the preceding 24 months). Patients are eligible to participate if they are registered with the intervention clinic, identify as Aboriginal and/or Torres Strait Islander, and, for qualitative and survey data, are at least 18 years of age. Eligible patients can choose to consent to participate in qualitative interviews or complete a survey. Those completing the survey can also consent to having their data linked to deidentified electronic health records (at intervention clinics and hospital administrative data). Eligible staff are those currently employed at intervention clinics, or who were employed at the clinics during implementation of ISoC2 and consent to participate.
In addition, for quantitative analysis using routinely collected electronic clinic health record data, study participants will also include regular clients at standard care clinics located within the IUIH network and matched for clinic characteristics.
Data collection
Routinely collected health data will be extracted from electronic clinic health records for client attendance numbers, service access relative to the estimated resident population and specific health outcomes for all eligible participants. Data will be extracted retrospectively for a period of 3 years prior to implementation of ISoC2 at intervention sites, with updated extracts approximately every 12 months until the study end date. All data will be deidentified; individual participants will be given a unique identifier to enable follow-up, multilevel modelling (random effects) and linking of survey and hospital administrative data (secondary care data). Information relating to sociodemographics, long-term health conditions, medications, clinical measures (eg, blood pressure, weight and other investigation results), consultations and Medicare Benefits Schedule service item claims (for medical services funded through Medicare, Australia's universal health insurance scheme) will be collected from routinely collected health record data.
Survey data will be collected using an adapted survey questionnaire currently used in a national study of Aboriginal and Torres Strait Islander Wellbeing (the Mayi Kuwayu (MK) study), which was developed in consultation with communities across Australia. 30 Survey questions will inquire about clients' sense of connection to service and care providers, relational continuity of care, adapted from standard instruments specifically for this study, 31 and health and wellbeing outcomes from the original survey instrument (see online supplemental file for further details). Health and wellbeing outcomes are related to cultural practice and expression, health and wellbeing, health behaviours and family support and connection, which are not captured through routinely collected health record data. A random sample based on the age and sex distribution of intervention clinics will be invited to participate in the survey until final sample sizes are achieved. Two waves of survey data collection will occur-baseline survey (completed during implementation at each intervention site) and follow-up survey (completed 3 years post the baseline survey at each intervention site) (figure 2).
For controls, routinely collected clinic health data will be extracted from comparable standard care clinics based on location, clinic size and composition, history of clinic establishment and demographics, with at least two randomly selected controls for every intervention participant. For outcomes derived from survey data, intervention participants will be matched to at least two randomly selected controls within the MK study cohort (as these survey data are unavailable from standard care sites). Subject to MK data custodian approval, matching will be performed by the data custodians and be minimally based on age, gender, remoteness/geography of residence and other relevant sociodemographic and health characteristics.

Figure 2 Timeline for research programme across intervention sites. EHR, electronic health records; FU, follow-up. Collection of interview and baseline survey data from intervention site 1 was completed by end of June 2020, as a pilot study with separate ethics approval. Collection of interview and baseline survey data from intervention site 2 and EHR data from all sites (intervention and standard care) was planned to begin June 2020. However, given the subsequent disruption to services and research activities due to the COVID-19 pandemic, actual data collection was deferred until mid-2021, with further delays due to later COVID-19 infection waves. EHR extraction in 2022 under the ISoC2 study from all sites covers the period from 1 January 2016 to 31 December 2022, up to 2 years prior to implementation at site 1, accounting for disruption of services in 2018 due to a fire on the clinic premises in December 2017. Subsequent EHR updates are planned at approximately 12-month intervals. Survey participants will be invited to complete a follow-up survey approximately 3 years post the baseline survey. Linked hospital and emergency department data will be received in two files: first in 2023 and then a subsequent update in 2025.
Qualitative data will be collected from patients and healthcare providers using individual interviews. Semistructured individual interviews with staff will explore their experiences and perceptions of ISoC2 related to coherence, strengths and limitations of the model, barriers and enablers to its implementation and the nature and extent to which providers collaborate to implement the model and embed it into everyday practice for routine delivery. Interview data will be analysed thematically to identify, characterise and explain mechanisms that promote and inhibit the implementation and embedding of ISoC2 in everyday work for routine delivery. Yarning interviews, culturally respectful conversation that is relaxed, narrative-based and emphasises the value of storytelling, 32 will be undertaken with patients from each intervention site to explore their experiences and perceptions of ISoC2. Interviews will be conducted using a yarning guide, developed by the lead for IUIH's Cultural Integrity Investment Framework (RB), comprising domains used in patient satisfaction surveys at IUIH: Community and Belonging, Country and Culture, Health and Wellbeing, Connection, Your Clinic and One Time. Yarns will be analysed using thematic analysis, privileging interpretations by Aboriginal and Torres Strait Islander researchers and healthcare providers. Interviews with staff and clients are expected to take between 30 min and 60 min and will be audio recorded and stored in MP3 or MP4 format.
Outcomes measures
Primary and secondary outcomes were selected on the basis that they aligned with the study objectives, were considered a priority from the services' perspective and reflected current guidelines. Measures will be calculated according to standard methods where available. Due to the lack of a validated culturally modified measure of self-reported patient experiences and outcomes for our population of interest, standard instruments have been adapted for this study (see online supplemental file for further details).
Effectiveness: access, quality of care and health outcomes
Primary outcomes
1. Proportion of clinic catchment population that will be active patients.
2. Proportion of regular patients with a continuity of care score of ≥75% by care team 33 (a computation sketch follows the outcome lists below).
3. Proportion of patients with type 2 diabetes with glycosylated haemoglobin A1C (HbA1C) <7% (or, if >7%, decreased by at least 2% from baseline). 34
4. Proportion of patients at high absolute cardiovascular disease risk. 35
5. Rates of potentially preventable hospitalisations and emergency department presentations. 36

Secondary outcomes
1. Regularity score 37 for clients with asthma or diabetes.
2. Self-reported relational continuity of care score (collected in patient surveys).
3. Self-reported shared decision-making and reciprocity in care planning (collected in patient surveys).
4. Proportion of regular patients who have participated in a health assessment.
5. Ratio of care plan reviews to chronic disease management plans/team care arrangements.
6. Proportion of those at high absolute risk of cardiovascular disease on guideline-recommended medications (lipid-lowering and blood pressure-lowering medication). 35
7. Self-reported agency regarding healthcare access and engagement (collected in patient surveys).
8. Self-reported community cohesion (collected in patient surveys). 30

Process outcomes related to the implementation of ISoC2 and its core components
1. Staff perceptions and experiences of barriers and enablers (feasibility) to delivering ISoC2 (qualitative data).
2. Staff and patient perceptions and experiences of the acceptability of ISoC2 (qualitative data).
3. Patient enrolment: per cent of total visits with assigned pod team.
4. Distribution of care between providers: per cent of total visits with each pod team member (quantitative data).
5. Accommodation/modalities of care (quantitative data):
- Proportions of patient consultations delivered by modality.
- Third next available appointment by pod team and by GP (number of days).
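To make the care-team continuity outcome concrete, the sketch below computes a simple continuity score as the share of a patient's visits attended by their assigned Pod, then the proportion of patients at or above the 75% threshold. This is an illustrative reading of the measure, not necessarily the exact index of reference 33; the data frame and column names (`patient_id`, `assigned_pod`, `visit_pod`) are hypothetical.

```python
import pandas as pd

# Hypothetical visit-level extract: one row per consultation.
visits = pd.DataFrame({
    "patient_id":   [1, 1, 1, 1, 2, 2, 2],
    "assigned_pod": ["A", "A", "A", "A", "B", "B", "B"],
    "visit_pod":    ["A", "A", "B", "A", "B", "C", "C"],
})

# Continuity score per patient: share of visits seen by the assigned Pod.
visits["with_assigned"] = visits["visit_pod"] == visits["assigned_pod"]
continuity = visits.groupby("patient_id")["with_assigned"].mean()

# Primary outcome 2: proportion of regular patients with continuity >= 75%.
prop_high_continuity = (continuity >= 0.75).mean()
print(continuity.to_dict())                     # {1: 0.75, 2: 0.333...}
print(f"Proportion >=75%: {prop_high_continuity:.2f}")
```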
Power and sample size
This study has been powered to detect changes in clinical outcomes of patients accessing care at both intervention sites pre-implementation and post-implementation of ISoC2. To be able to detect a minimum difference of 5% in the proportion of people attending the clinic from baseline (38.5%) to after implementation (43.5%), with 80% power and at a 5% level of significance using an independent χ2 test, requires at least 1520 people in the catchment area (table 2). The remaining sample size calculations are derived assuming zero correlation, thus these are conservative sample size estimates. For example, a sample size of about 250 clients with diabetes in each of the pre- and post-implementation periods will have 80% power to detect a minimum difference of 0.25 SD in continuous HbA1c (eg, from 7.4 at baseline to 6.9 at post-implementation, at a 5% level of significance) using an independent samples t-test. The sample size required to detect the same difference will be smaller with a paired samples t-test; for example, assuming a pre-post correlation coefficient of 0.2, the sample size goes down to 200 clients per time point. The sample size for qualitative interviews with patients will be determined through the course of data collection. Recruitment will cease when sufficient numbers of participants have been interviewed to reach data saturation.
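The sample size figures above can be cross-checked with the standard closed-form formulas for two proportions and two means. This is a minimal sketch using scipy, not the protocol's own calculation code.

```python
from scipy.stats import norm

z_a = norm.ppf(1 - 0.05 / 2)   # two-sided alpha = 0.05
z_b = norm.ppf(0.80)           # power = 80%

# Two independent proportions: 38.5% -> 43.5% clinic attendance.
p1, p2 = 0.385, 0.435
p_bar = (p1 + p2) / 2
n_prop = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
          / (p1 - p2) ** 2)
print(round(n_prop))   # ~1518 per group, consistent with "at least 1520"

# Independent samples t-test: detect a 0.25 SD shift in HbA1c.
d = 0.25
n_mean = 2 * (z_a + z_b) ** 2 / d ** 2
print(round(n_mean))   # ~251 per period, consistent with "about 250"
```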
Data linkage
For linkage of survey data to clinic health record data, deidentified survey data with a unique code and linkage key will be sent as a secure encrypted file to IUIH data custodians. Linkage of clinic health record data to secondary care data (hospital admission and emergency department data) and death registration data will be performed by the Statistical Services Branch, Queensland Health, subject to data custodian approval (in progress). 38 Linkage is performed by deterministic and probabilistic methods and/or the Master Linkage File, as appropriate to the in-scope cohort.
The data sets which will be linked for this study include the following: Queensland hospital-admitted patient data collection: data for all admitted patients from public and private hospitals, and day surgery units within the state, including their date of admission and separation, primary diagnosis/other diagnoses (International Classification of Diseases [ICD] 10 codes), procedures, discharge destination, facility type as well as basic demographic and geographical information.
Emergency department minimum data collection: data for all patients presenting to emergency departments in public hospitals in Queensland, including their presentation/triage/discharge date and time, triage category, arrival transport mode, visit type, principal diagnosis/other diagnoses (ICD 10 codes) and other basic demographic and geographical information.
Death Registration Data (for censoring only): includes all death registrations in Queensland.
Patient and public involvement statement
This research represents an Aboriginal led and governed partnership between community, service providers and researchers, seeking to respond to identified community priorities. The project has been initiated by and is embedded in IUIH, a community-controlled health service. Community governance and ownership of IUIH have practical expression through a board of directors that combines community-elected and independent skill-based directors, underpinned by a community accountability framework, centred on the principle that decision-making should occur at the closest level possible to clients, families and communities.
Statistical analysis
Health record and survey data will be analysed at the individual patient level. Primary, secondary and quantitative process outcomes derived from electronic health record data and linked hospital data will be summarised using descriptive statistics at 3 years prior to implementation (baseline), following the implementation phase, then at 12-monthly intervals for the follow-up period. Bivariate and regression analyses will be used to examine differences between cohorts for baseline characteristics and to quantify changes in primary and secondary outcomes from pre-implementation to post-implementation of ISoC2, between intervention and standard care sites. Data allowing, multilevel (random-effects) or interrupted time series analysis will be used to quantify changes in outcomes over time (at minimum baseline and post-implementation) and to compare outcomes between intervention and randomly matched control (standard care) participants. Models adjusted for covariates (minimally age and sex, with additional sociodemographic and health characteristics) will be used to determine ORs and CIs for the association of ISoC2 with outcomes. Similar analyses will be conducted for self-reported data collected through surveys, comparing differences between intervention and matched MK survey controls, and changes between baseline and follow-up responses. Qualitative interviews with staff will be analysed using the Framework Analysis, a method of qualitative data analysis that begins deductively from predefined objectives and is explicit and informed by a priori reasoning. 39 Interviews with patients will be analysed using Interpretative Phenomenological Analysis, a method that describes how a person experiences their world. 40
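For the quantitative arm, a segmented (interrupted time series) regression is one way to operationalise the analysis described above. The sketch below, using statsmodels, fits a level-and-slope change model on a hypothetical monthly outcome series; the variable names and simulated data are illustrative only, not the study's analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical monthly series: 36 months pre- and 36 months post-implementation.
n_pre, n_post = 36, 36
t = np.arange(n_pre + n_post)
post = (t >= n_pre).astype(int)
time_since = np.where(post == 1, t - n_pre, 0)

# Simulated outcome: baseline trend plus a post-implementation level/slope change.
y = 50 + 0.1 * t + 4.0 * post + 0.2 * time_since + rng.normal(0, 1.5, t.size)
df = pd.DataFrame({"y": y, "t": t, "post": post, "time_since": time_since})

# Segmented regression: 'post' captures the level change at implementation,
# 'time_since' the change in slope afterwards.
model = smf.ols("y ~ t + post + time_since", data=df).fit()
print(model.params)
```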
Economic analysis
The economic component of the study will include two types of economic analysis: cost-consequence analysis and cost-effectiveness analysis (CEA). Both analyses will take the perspective of the Australian Government Department of Health, and the time horizon used in this study will be 2 years to capture the changes in chronic conditions. All the direct costs related to the management of a patient's condition will be included: consultations with GPs or other health workers, diagnostic tests, pharmaceuticals, hospital inpatient admissions and emergency department admissions. The consultation costs will be estimated using the Medicare Benefits Schedule, and the Pharmaceutical Benefits Scheme (for medications funded under Medicare) will be used to estimate the pharmaceutical costs. All the costs for hospital admissions will be estimated using Australian Refined Diagnosis Related Groups. Costs will be measured in 2025 Australian dollars and a 3.5% discounting rate will be applied. Clinical outcomes in the CEA will be outcome measures that demonstrate a clinically significant improvement in the intervention group. The cost difference per patient and per achieved outcome will be calculated with a 95% CI between the standard care and intervention (ISoC2) groups. The incremental cost-effectiveness ratio will be estimated with a 95% CI using non-parametric bootstrap (1000 replications) methods and the simulation results will be graphed on a cost-effectiveness plane. The cost-effectiveness acceptability curve will be drawn to summarise the impact of uncertainty on the results.
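The bootstrap ICER and acceptability-curve procedure described above can be sketched as follows. This is a generic implementation of the standard method, with simulated patient-level costs and effects standing in for the study's linked data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated patient-level data (placeholders for the study data):
# costs in AUD over the 2-year horizon, effects as achieved outcomes (0/1).
cost_ctrl = rng.gamma(2.0, 1500, 300)
cost_int  = rng.gamma(2.0, 1700, 300)
eff_ctrl  = rng.binomial(1, 0.45, 300)
eff_int   = rng.binomial(1, 0.60, 300)

B = 1000  # bootstrap replications, as specified in the protocol
d_cost, d_eff = np.empty(B), np.empty(B)
for b in range(B):
    ic = rng.integers(0, 300, 300)  # resample intervention arm with replacement
    cc = rng.integers(0, 300, 300)  # resample control arm with replacement
    d_cost[b] = cost_int[ic].mean() - cost_ctrl[cc].mean()
    d_eff[b]  = eff_int[ic].mean() - eff_ctrl[cc].mean()

icer = d_cost.mean() / d_eff.mean()
ci = np.percentile(d_cost / d_eff, [2.5, 97.5])  # percentile 95% CI
print(f"ICER: {icer:.0f} AUD per outcome, 95% CI {ci[0]:.0f} to {ci[1]:.0f}")

# Cost-effectiveness acceptability curve: probability the intervention is
# cost-effective at each willingness-to-pay (WTP) threshold.
for wtp in (0, 2000, 5000, 10000):
    p = np.mean(wtp * d_eff - d_cost > 0)  # net monetary benefit > 0
    print(f"WTP {wtp:>6}: P(cost-effective) = {p:.2f}")
```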
ETHICS AND DISSEMINATION
Consent
Prior, free and informed written consent will be enacted throughout the study. 41 All materials for the conduct of the study (staff interview and yarning guides, surveys, client information and consent forms) have been codesigned with the steering committee providing cultural and clinical oversight of the study. Participants for survey (clients) and interviews (clients and staff) will be provided with a plain-language information sheet about the study along with a consent form. Participants may choose to withdraw at any time during the study. Survey participants have specifically consented for linkage of survey data to routinely collected health data and for health and wellbeing research (subject to approvals by the MK Aboriginal and Torres Strait Islander governance committee). A waiver of the requirement for consent has been obtained for secondary use of routinely collected deidentified health data (electronic health record and linked hospitalisation data) for intervention and standard care sites. Ethical approval for the study was obtained from the University of Queensland Human Research Ethics Committee (2021/HE000529).
Dissemination
Research findings will be disseminated using IUIH's existing communication strategies and those developed specifically for this study. This includes: to patients and the broader community through social media, brief yarns with existing patient groups and infographics in the form of posters and flyers; to staff through internal websites in the format of articles, short presentations and webpage for project updates; formal dissemination through conference presentations and publications in peer-reviewed journals; and seminars and roundtables with policymakers and peak bodies to share findings relevant for the broader PHC policy and practice context.
DISCUSSION AND IMPLICATIONS
This study reflects the aspirations and obligation of ACCHS in South-East Queensland to build the evidence base for high-quality PHC able to meet the needs of rapidly growing urban Aboriginal and Torres Strait Islander populations. Building from the foundations of a pilot study, this research incorporates a process, outcome and economic evaluation of a model of PCMH in an ACCHS setting in Australia derived from international best evidence and adapted to local context to optimise its acceptability, feasibility and effectiveness. The study has been powered to detect a difference in clinical outcomes shown to improve with successful implementation of a PCMH.
This study is anticipated to be of direct benefit to Aboriginal and Torres Strait Islander people living in South-East Queensland through strengthened relational care and improved access to high-quality, comprehensive and culturally responsive PHC. The knowledge, learnings and evidence from this study are likely to be of public benefit through contributing new knowledge to inform policy and service delivery in the broader Australian PHC sector. If ISoC2 can be successfully implemented and demonstrates a good return on investment, this will represent an Indigenous designed and implemented, culturally safe and cost-effective model of PCMH transferable for trialling in settings in the broader context-within Australia and globally.
Carbon Stock Estimates for Acacia mangium Forests in Malaysia and Indonesia: Potential for Implementation of Afforestation and Reforestation CDM Projects
The Conference of the Parties 9 in Milano, Italy (COP 9, 2003) approved modalities and procedures for afforestation and reforestation (A/R) project activities under the Clean Development Mechanism (CDM) of the Kyoto Protocol. According to the conclusions of COP 9, several approaches are available to monitor temporary carbon sequestration in A/R-CDM projects. Developing baselines for such monitoring is difficult because of a lack of basic growth and management data. In this paper, we present guidelines for preparing a project design document (PDD), in which growth and yield prediction in plantation forests plays an important role, and present a methodology for modeling and estimating carbon stocks using inventory data from Acacia mangium plantations in Malaysia and Indonesia.
Introduction
Indonesia and Malaysia have the opportunity to serve as host countries for Clean Development Mechanism (CDM) projects through afforestation and reforestation (A/R) efforts. The Conference of the Parties 9 in Milano, Italy (COP 9, 2003) sought to clarify modalities and procedures of A/R-CDM project implementation under the Kyoto Protocol. One of the most difficult requirements for the formulation of A/R-CDM projects is meeting the requirements of an "additionality scheme." Project participants are asked to describe the project scenario in the form of a project design document (PDD). The PDD must clearly define the "additionality scheme," or how the project will augment carbon sequestration with respect to the identified baseline scenario.
In this paper, we present brief guidelines for designing a PDD and discuss our approach to developing suitable carbon stock estimates for Acacia mangium plantation forests in Malaysia and Indonesia. These countries have a history of using Acacia mangium plantations for land rehabilitation under short rotation-high yield management schemes. Plantation forestry plays an important role in climate change mitigation, especially under the CDM scenario. Acacia mangium shows promise as tropical plantation tree species that provides significant benefits to investors and indigenous populations involved in A/R-CDM projects.
Modalities and Procedures for A/R-CDM Projects
Project participants are asked to submit the completed version of a CDM-PDD, together with any attachments, to an accredited designated operational entity for validation (Figures 1 and 2).
CDM Project Design Documents
A CDM-PDD presents information on the essential technical and organizational aspects of the project activity and is a key input into the validation, registration, and verification of the project, as required under the Kyoto Protocol. The CDM-PDD contains information on the project activity, the approved baseline methodology applied to the project activity, and the approved monitoring methodology applied to the project. It discusses and justifies the choice of baseline methodology and the applied monitoring concept, including monitoring data and calculation methods.
The following is a general overview of PDD design in a CDM project (UNFCCC). In PDD preparation, the yield table is quite important. However, due to a lack of long-term observed plantation data, it can be difficult to construct a precise table for the derivation of local yield. In this paper, we tentatively construct a yield table derived from various plots in Acacia mangium plantations and propose a procedure for estimating carbon stocks.
Data Collection
Acacia mangium is a fast-growing tree species that is well suited to reforestation efforts in degraded landscapes. In Malaysia, it has been widely planted since the beginning of the Compensatory Forest Plantation Program (CFPP) in 1981. Its initial survival rate is relatively high, but disease (frequently heart rot) sometimes appears in later years. Figure 3 (below) shows the location of field sites where data were collected for the yield table we have constructed (Matsumura and Ismail, 1996; Matsumura, 2004). Sites were distributed throughout the Malaysian Peninsula.
Methods
According to the conclusions of COP 9, project participants must account for all changes in the following five carbon pools:
1) Above-ground biomass (leaf, branch, and trunk)
2) Below-ground biomass (root)
3) Dead wood
4) Litter
5) Soil organic carbon
Mean carbon stock (MC) is usually estimated by the biomass expansion factor method (IPCC 2003), which includes the above-ground and below-ground biomass carbon pools; here MC denotes the mean carbon stock (tC/ha). Merchantable volume is generally estimated by the allometric relationship between diameter and height. Basic allometric equations were used to calculate mean values in Acacia mangium stands (Matsumura and Ismail, 1996; Matsumura, 2004), with coefficients a = f(trees per ha), b = 1.5474, and c = 0.8093. A flow chart of the growth model used to construct the yield table is shown in Figure 4. The yield table we derived for each study site was compared with the yield tables from other study sites in West Java and in Sabah (Inose, 1991).
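The paper's own equations did not survive extraction here; for reference, the biomass expansion factor method cited above (IPCC 2003) is conventionally written in the following form. The symbols follow the IPCC Good Practice Guidance rather than the authors' notation.

```latex
% Standard IPCC (2003) biomass expansion factor form of the mean carbon stock:
%   V   merchantable volume (m^3/ha)
%   D   basic wood density (t d.m./m^3)
%   BEF biomass expansion factor (above-ground biomass / merchantable biomass)
%   R   root-to-shoot ratio (adds the below-ground pool)
%   CF  carbon fraction of dry matter (approximately 0.5 tC/t d.m.)
\[
  MC = V \cdot D \cdot BEF \cdot (1 + R) \cdot CF \quad [\mathrm{tC/ha}]
\]
```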
Results and Discussion
The comparison among study sites at age 10 showed that estimated carbon stock depended on the total number of trees per ha. Figure 5 (below) shows estimated carbon stocks in the West Java and Malaysia study sites. To ensure a smooth implementation of A/R-CDM projects, it is also important to analyze the growth and carbon stock differences at local project sites from the investors' and communities' points of view, respectively. The design of A/R-CDM projects and PDD layout should become increasingly easy as general yield tables are developed for promising tree species in the coming years.
Radial Movement Optimization Based Optimal Operating Parameters of a Capacitive Deionization Desalination System
The productivity of the capacitive deionization (CDI) system is enhanced by determining the optimum operational and structural parameters using the radial movement optimization (RMO) algorithm. Six different parameters, i.e., pool water concentration, freshwater recovery, salt ion adsorption, lowest concentration point, volumetric (based on the volume of deionized water), and gravimetric (based on salt removed) energy consumptions, are used to evaluate the performance of the CDI process. During the optimization process, the decision variables are represented by the applied voltage, capacitance, flow rate, spacer volume, and cell volume. Two different optimization techniques are considered: single-objective and multi-objective functions. The results obtained by the RMO optimizer are compared with those obtained using a genetic algorithm (GA). The results demonstrated that the RMO optimization technique is useful in exploring all possibilities and finding the optimum conditions for operating the CDI unit in a faster and more accurate manner.
Introduction
Despite the fact that water covers more than two-thirds of the earth, less than 1% is suitable for industrial and domestic usage [1]. Saline water represents 97% of the total water sources [2]. Therefore, water desalination is considered the best choice to secure the needs of humanity [3,4]. State-of-the-art water desalination techniques incur high energy costs as well as high environmental impacts, which are the main challenges [5,6]. Different strategies have been used, such as harnessing renewable energy sources [7][8][9], increasing the efficiency of existing methods [10,11], and devising new methods [12]. Capacitive deionization (CDI) is a newly developed water desalination technology for purification of river/brackish water based on electrochemical phenomena [13][14][15][16]. CDI has several advantages over other desalination techniques owing to its simple operation and low specific energy consumption. The salt is removed from brackish water using low voltage (~1 V). In our previous work, a genetic algorithm (GA) was applied to single-objective optimization of single cost functions and multi-objective optimization of multiple cost functions, both subjected to multiple constrained variables [43]. Through this GA optimization, the performance of the CDI desalination process was improved from 15% to 92% with single-objective optimization and from 1% to 75% with multi-objective optimization. The objective (cost) functions consisted of lowest concentration, pool water concentration, salt ion adsorption, freshwater recovery, energy consumption per liter, and energy consumption per gram. However, some limitations were experienced with GA; for example, in all the cases of multi-objective optimization, the optimized solution had to be manually selected from the Pareto optimal front. This was a major shortcoming in terms of goal-based optimization to find a feasible solution according to our required goals. To overcome this drawback, RMO is utilized in this manuscript to obtain the optimal solution set automatically. The same mathematical model and cost functions as used in our previous paper [43] are adopted and optimized through the RMO method. The results obtained are compared with the results of the GA optimization in the previous paper.
Mathematical Model of Capacitive Deionization
The CDI cell operation is a dynamic phenomenon: the effluent concentration changes from maximum to minimum and repeats itself in a cyclic manner. There are two developed modes of operation, i.e., Constant Voltage (CV) [44] and Constant Current (CC) [45], and a novel hybrid CV-CC mode [46]. Biesheuvel et al. [47] demonstrated good agreement between the Gouy-Chapman-Stern (GCS) model for CDI, which describes the charge and ion adsorption capacity, and the experimentally measured current and effluent ion concentrations.
The CV process was developed first, because it is the basic mode of capacitor operation, and the CC process was developed later. Afterwards, based on the CV and CC processes, the hybrid CV-CC process was developed according to water purification requirements. Therefore, in this paper and in our previous paper, the CV process is considered the base process for performance evaluation and optimization. In a complete cycle of CDI, there are two processes: adsorption and desorption [44].
The adsorption process provides a deionized water stream at the outlet by adsorbing the salt ions from the inlet water stream and storing them in electrically polarized porous electrodes. The effluent purified water stream concentration with respect to the cycle time during the adsorption stage of the CV process is given by Equation (1) in [44],
and all other parameters used are reported in Table 1. The effluent concentration (C_ad) given by this equation is time dependent and changes from maximum to minimum until no more ions can be adsorbed. The model works by subtracting the adsorbed salt from the influent concentration. Furthermore, as the equation indicates, the number of ions adsorbed, and thus the effluent concentration, varies with the capacitance (C) of the cell. After voltage is applied across the cell and charges begin to be stored in the porous electrodes, they attract the ions from the feed solution. When no more ions can be stored, the effluent concentration starts increasing and ultimately becomes equal to the influent concentration. The electrodes are then said to be saturated. Therefore, to further purify the inlet water, the electrodes are regenerated. The regeneration of electrodes is known as the desorption process. During this process, the applied voltage is reversed or zero, due to which the adsorbed ions are desorbed. The effluent concentration stream with respect to the cycle time during the desorption process of CDI for the CV process is given by Equation (2) in [44], where $\mu_d = 1 - e^{-V_s/(\phi R C)}$, $\rho_1 = e^{\beta t_{ad}}$, $\rho_2 = e^{\alpha t_{ad}}$, and $V_{cel}$ is the CDI cell voltage at the completion of the adsorption process, $V_{cel} = V_{ad}\left(1 - e^{-t_{ad}/(RC)}\right)$; all other parameters used are reported in Table 1. As the electrodes are full with charge and a reversed or zero potential is applied, the stored charges steadily move out of the electrodes. Since there are no longer opposite charges to hold the ions in the electrodes, the ions start flowing with the regenerative inlet stream, and eventually the electrodes are regenerated. During regeneration, the effluent concentration during desorption (C_des) varies with time and has values greater than the inlet concentration (C_in). The regeneration process is also a function of capacitance, along with the other parameters mentioned in Table 1.
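As a quick numerical illustration of the reconstructed charging relation $V_{cel} = V_{ad}(1 - e^{-t_{ad}/(RC)})$, the sketch below computes the cell voltage reached at the end of adsorption for representative values; the parameter values are illustrative and are not taken from Table 1.

```python
import math

# Illustrative CV-charging parameters (not the paper's Table 1 values).
V_ad = 1.0    # applied adsorption voltage (V), ~1 V as noted in the introduction
R = 5.0       # equivalent series resistance (ohm)
C = 100.0     # cell capacitance (F)
t_ad = 600.0  # adsorption time (s)

# Cell voltage at completion of adsorption: V_cel = V_ad * (1 - exp(-t_ad/(R*C)))
V_cel = V_ad * (1 - math.exp(-t_ad / (R * C)))
print(f"V_cel after {t_ad:.0f} s: {V_cel:.3f} V")  # -> 0.699 V with these values
```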
Performance Criterion
The output of the CDI system can be measured with reference to the purified water concentration, water recovery, salt ion adsorption, and specific energy consumption.
For the purified water concentration, the salt concentration in the effluent water for the complete adsorption process is accumulated in a separate tank. It is measured over a specific duration of time to get the average accumulated concentration of desalinated water, which is known as the pool water concentration [43]. Equation (3) represents the pool water concentration. It is time dependent, and the other operational parameters governing the pool water concentration (from Equation (3)) are the inlet feed concentration, spacer volume, capacitance, applied voltage, and flow rate.
Water recovery is another criterion used to evaluate the performance of the desalination system. Water recovery is the ratio of water purified during the adsorption process to the total amount of water supplied for the complete cycle (adsorption process + desorption process). However, in this manuscript, the term freshwater recovered is used, which is the product of the inlet flow rate and the adsorption time of the CDI cycle, given in Equation (4). This formulation of freshwater recovery is specifically adopted in this manuscript for performance evaluation and optimization. This performance metric is important for comparing the same system while varying the input parameters. Further, this equation provides an opportunity for optimization, and results will be compared before and after optimization for the same system. However, in the general desalination process, water recovery is the evaluation criterion.
Fresh Water Recovery = φ · t_ad (4)

The working principle of CDI is based on the adsorption of the salt ions into the porous electrodes to produce deionized water as a product. Therefore, salt ion adsorption is also a performance evaluation criterion for the CDI process. The following mathematical equation, Equation (5), is derived in reference [43] to measure the salt ion adsorption in the porous electrodes,
where M_w is the molecular weight of NaCl. Moreover, the salt adsorption is a function of the flow rate (φ) and the summation of the effluent concentration within the time interval from the initial time (t_in) to the end of adsorption (t_ad). Energy consumption is a basic criterion that is generally used to check the performance of electrically or mechanically powered systems. In this manuscript, the specific energy consumption of the CDI system is evaluated and optimized. Specific energy consumption in terms of a liter of freshwater recovered and a gram of salt ions adsorbed is measured with Equations (6) and (7), respectively.
The operational dynamics and constraints of these equations are taken from [43,46]. Energy consumed per liter gives the energy for a liter of water desalinated, not per liter of water supplied to the cell. Similarly, energy consumed per gram gives the energy required to adsorb a gram of salt out of the water stream.
Since the same mathematical model was used in our previous papers and is well verified against experimental data [43,46], it is not experimentally assessed again; in this manuscript, our focus is performance evaluation and optimization.
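Since Equations (5)-(7) did not survive extraction here, the sketch below implements the performance metrics from their verbal definitions: recovery as φ·t_ad, salt mass as a discrete sum of influent-effluent concentration differences over the adsorption interval, and the two specific energy consumptions as ratios of a given cycle energy. Treat it as an interpretation of the stated definitions, not the authors' exact formulas; all numerical values are illustrative.

```python
import numpy as np

def cdi_metrics(phi, t, c_in, c_ad, energy_j, mw_nacl=58.44):
    """Performance metrics per the verbal definitions in the text.

    phi      : inlet flow rate (L/s)
    t        : sample times over the adsorption stage (s)
    c_in     : influent concentration (mol/L)
    c_ad     : effluent concentration samples during adsorption (mol/L)
    energy_j : electrical energy consumed over the cycle (J)
    """
    t_ad = t[-1] - t[0]
    freshwater_l = phi * t_ad                       # Eq. (4): phi * t_ad
    # Salt adsorbed: discrete sum of (C_in - C_ad) * phi * dt over the
    # adsorption interval, converted to grams via the molecular weight.
    removed_mol = np.sum((c_in - c_ad)[:-1] * np.diff(t)) * phi
    salt_g = removed_mol * mw_nacl
    return {
        "freshwater_L": freshwater_l,
        "salt_adsorbed_g": salt_g,
        "E_per_L": energy_j / freshwater_l,         # volumetric, Eq. (6)-style
        "E_per_g": energy_j / salt_g,               # gravimetric, Eq. (7)-style
    }

# Example with a synthetic effluent profile that recovers toward the influent.
t = np.linspace(0, 600, 121)
c_ad = 0.017 - 0.005 * np.exp(-t / 200)             # mol/L, below the influent
print(cdi_metrics(phi=0.002, t=t, c_in=0.017, c_ad=c_ad, energy_j=1500.0))
```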
Performance Evaluation
From Equations (2) to (6), one may see that the output of the CDI system depends on different operational and structural parameters such as spacer volume, cell volume, applied voltage, capacitance, and flow rate. Figures 1 and 2 evaluate the CDI performance for different operating parameters. Figure 1 shows the impact of electrode capacitance and spacer volume on the performance of the CDI cell, with all other operating parameters held constant and the same as mentioned in references [43,47]. As evident in the figure, increasing the electrode capacitance generally resulted in increasing the cell performance in terms of decreased specific energy consumption (per liter or per gram of salt), increased freshwater productivity, salt ion removal, and the minimum exit water concentration. However, increasing the spacer volume has a negative effect on all of these performance metrics.

Figure 2 shows the effect of applied voltage and flow rate on the performance of the CDI cell. As is obvious from the figure, increasing the applied voltage has a positive effect on the CDI performance in terms of decreasing the exit water concentration during the adsorption process, increasing the freshwater productivity, and increasing salt ion removal. At the same time, increasing the applied voltage negatively affects the CDI's performance through the increase in the specific energy consumption. On the other hand, increasing the feed flow rate has a positive effect in terms of decreasing the specific energy consumption and increasing both freshwater production and salt ion removal, while it has an adverse effect in terms of lowest concentration and pool water concentration. This complies with the fact that, with the increase of flow rate, the residence time of the salt ions in the CDI cell is reduced, which allows ions to pass through the cell without being adsorbed and produces a high-concentration effluent. This also increases the overall concentration of the accumulated produced water.
Performance Optimization
Variation of the performance with the change of input operating variables, as shown in Figures 1 and 2, motivates the need for optimization to determine the best operating conditions at which improved CDI performance can be achieved. Although we performed the optimization of such variables in our previous study using GA [43], the following shortcomings were encountered.
1. The optimization solution was extracted manually from the Pareto optimal front in the case of multi-objective optimization.
2. A goal-based solution could not be determined directly. Instead, the solution was sorted manually, based on our requirements, from the optimal solution set obtained through GA optimization.
Therefore, radial movement optimization (RMO) is utilized in this manuscript to rectify the deficiencies reported above. RMO belongs to the same class of metaheuristic algorithms as GA, but it uses the concept of memory, which is important to the algorithm. In RMO, the update of the next particle's position is based on knowledge of past particle positions, which differs from GA, which uses genetic operators such as crossover and mutation for updating the position. RMO avoids becoming trapped in local optima and robustly searches for the global optimum. The mathematical representation of RMO is explained in detail in [48][49][50].
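For readers unfamiliar with RMO, the sketch below follows the commonly published formulation: particles are scattered radially around a moving centre, and the centre is updated from the per-generation radial best and the global best. The coefficients C1 and C2 and the shrinking scale W are assumptions chosen for illustration; consult [48][49][50] for the exact update rules.

```python
import numpy as np

def rmo(f, lo, hi, n_particles=30, n_gens=200, c1=0.7, c2=0.8, seed=0):
    """Minimal radial movement optimization sketch (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    centre = (lo + hi) / 2                    # initial centre of the search
    gbest, gbest_val = centre.copy(), f(centre)
    for gen in range(n_gens):
        w = 1.0 - gen / n_gens                # inertia-like shrinking scale
        # Scatter particles radially around the current centre.
        vel = rng.uniform(-1, 1, (n_particles, lo.size)) * w * (hi - lo) / 2
        pts = np.clip(centre + vel, lo, hi)
        vals = np.apply_along_axis(f, 1, pts)
        rbest = pts[vals.argmin()]            # radial (per-generation) best
        if vals.min() < gbest_val:            # memory: keep the global best
            gbest, gbest_val = rbest.copy(), vals.min()
        # Move the centre toward both the global and radial bests.
        centre = np.clip(centre + c1 * (gbest - centre) + c2 * (rbest - centre),
                         lo, hi)
    return gbest, gbest_val

# Example: minimize a 5-D sphere standing in for a CDI cost function.
x, fx = rmo(lambda v: float(np.sum(v**2)), lo=[-5]*5, hi=[5]*5)
print(x, fx)
```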
The governing relations of the CDI performance are utilized as the objective functions for the optimization and are shown in Table 2. The performance functions utilized here are based on the following:

Lowest concentration point: The lowest concentration point is specific to the CV process of CDI and defines the maximum adsorption of salt ions at that particular time.

Pool water concentration: A performance indicator that can be used for desalination where the effluent purified concentration is not constant and varies with time, as in the CDI desalination system.

Salt ion adsorption: A general performance measure used for CDI to indicate how many salt ions are adsorbed in a specific cycle.

Specific energy consumption: Specific energy consumption in terms of grams is utilized for CDI to calculate the energy consumption for one gram of salt ion removal. This energy consumption indicator is a very important criterion for evaluating and optimizing the performance of the CDI system. Similarly, energy consumption per liter is a type of specific energy consumption indicator which can be used to compare different desalination technologies (such as RO, CDI, FO, and MD) in terms of the energy required to obtain one liter of freshwater.

Freshwater production: This criterion is somewhat similar to the water recovery performance indicator. It is specifically used here to check the CDI performance before and after optimization, to indicate how much freshwater production in terms of liters will be increased after the optimization.
RMO is used to optimize the CDI performance under five different constrained operating parameters, i.e., spacer volume, capacitance, applied voltage, flow rate, and cell volume, as shown in Table 3. The constraints are based on system limitations and are reported in ref. [43]. As stated earlier, this is an extension of previously published work [43]; therefore, the same input parameters are used as in the previous paper to make a comparison between RMO and GA optimization. Two types of optimization are adopted in this manuscript to improve the performance of the CDI process: single objective and multi-objective. In single-objective optimization, each objective mentioned in Table 2 was optimized individually using the five operating parameters as decision variables. Hence, a specific performance was optimized regardless of the other performance criteria. Furthermore, simultaneous optimization of all the performance functions was obtained using multi-objective optimization, where an acceptable solution that satisfies all the objective functions was generated. According to the number of constraints applied, there are single-constrained multi-objective function (SCMOF) optimization and multi-constrained multi-objective function (MCMOF) optimization. In SCMOF, only a single decision variable is used for optimizing all the objective functions, and all other variables are kept constant, while in MCMOF optimization, several decision variables are used for optimizing the objective functions. In general, SCMOF is used when only one operating parameter is allowed to change for enhancing the system performance, while MCMOF is used when more than one operating parameter is allowed to change for optimizing the performance of the system. In the current study, single-objective optimization and multi-objective optimization techniques, both using multiple constraints, are applied for optimizing the CDI performance. The mathematical expressions of the single-objective and multi-objective optimization are as follows. It can be observed in both mathematical expressions that Y5 and Y6 have only three decision variables as compared with the other objective functions. This is because the CDI cell performance functions corresponding to the objective functions Y5 and Y6 do not depend on the spacer and cell volume.
Single- and multi-objective optimization with multiple constraints. In single-objective optimization, each of the following objectives is optimized individually:
Y1 = lowest concentration point, Y2 = pool water concentration, Y3 = salt ion adsorption, Y4 = energy consumption per gram, Y5 = energy consumption per liter, Y6 = freshwater recovery.
In multi-objective optimization, all objectives Y1-Y6 are optimized simultaneously subject to the constraints of Table 3.
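To make the optimization setup concrete, the sketch below runs a bounded single-objective search and a simple weighted-sum scalarization of two objectives. The paper uses RMO; SciPy's differential evolution is substituted here purely for illustration, and the objective functions and bounds are placeholders rather than the fuzzy performance models and the Table 3 limits used in this work.

```python
# Minimal sketch of single- vs multi-objective optimization with box
# constraints on the decision variables. RMO itself is not implemented here;
# differential evolution stands in for it, and the objectives are placeholders.
import numpy as np
from scipy.optimize import differential_evolution

# Decision variables: spacer volume, capacitance, applied voltage, flow rate,
# cell volume. Bounds are placeholders, not the actual Table 3 limits.
bounds = [(1.0, 10.0), (50.0, 500.0), (0.8, 1.6), (1.0, 10.0), (5.0, 50.0)]

def y4_energy_per_gram(x):
    # Placeholder surrogate for the fitted performance model of Y4 (minimize).
    sv, c, av, fr, cv = x
    return (av ** 2) * cv / (c * fr) + 0.1 * sv

def y3_salt_adsorption(x):
    # Placeholder surrogate for Y3 (to be maximized).
    sv, c, av, fr, cv = x
    return c * av * fr / (sv + cv)

# Single-objective optimization: minimize Y4 alone over the bounded variables.
res_single = differential_evolution(y4_energy_per_gram, bounds, seed=0)

# Multi-objective optimization: one simple scalarization is a weighted sum of
# the objectives (maximization terms enter with a negative sign).
def weighted_sum(x, w=(0.5, 0.5)):
    return w[0] * y4_energy_per_gram(x) - w[1] * y3_salt_adsorption(x)

res_multi = differential_evolution(weighted_sum, bounds, seed=0)
print("single-objective optimum:", res_single.x)
print("weighted multi-objective optimum:", res_multi.x)
```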
Results and Discussion
The parameters of the RMO algorithm and their values are tabulated in Table 4. The results demonstrated a decrease in the specific energy consumption during the optimization process (Figures 3a and 4), which can be related to the corresponding decision variables: a decrease of the spacer volume (SV) in Figure 3b, an increase of the capacitance (C) in Figure 3c, a decrease in the applied voltage (AV) in Figure 3d, an increase in the feed flow rate (FR) in Figure 3e, and a decrease of the cell volume (CV) in Figure 3f. Additionally, the optimization process revealed an increase in freshwater productivity (Figure 5) due to the increase of the capacitance and the flow rate with a higher applied voltage. Figures 6 and 7 show a decrease in the lowest concentration point and the pool water concentration due to the increase of capacitance and applied voltage together with the decrease of the spacer volume, cell volume, and flow rate. Figure 8 shows an increase in salt ion removal due to the increase of the capacitance and flow rate along with the decrease of spacer volume, cell volume, and applied voltage. (In Figures 3-9, C is the capacitance, CP is the center point, FR is the flow rate, SV is the spacer volume, AV is the applied voltage, and CV is the cell volume.)
Table 5 summarizes the results of the single-objective optimization for multiple constrained operating parameters. In this table, the optimized performance functions obtained through RMO are compared with the optimized results obtained through the GA in our previous paper [43] and with the performance values without optimization. It can be observed from the table that RMO yields improved results compared with those obtained from the GA of Matlab. For instance, compared to the results obtained using GA [43], the specific energy consumption (J/g) decreased by 5.6%, freshwater productivity (L) increased by 25%, salt ion adsorption increased by 32%, pool water concentration (ppm) decreased by 5.6%, and the lowest concentration point (mM) decreased by 65%.
The simultaneous optimization of all of the objectives is shown in Figure 9. The optimization results, in general, improve the process in terms of an increase of the capacitance, spacer volume, and flow rate along with a decrease of the applied voltage and cell volume. Table 6 compares the optimized performance of CDI obtained from RMO with the results derived from the Pareto optimal solution set of the GA for four different cases; the GA results were taken from Table 6 of our previously published paper [43]. In general, the results obtained by RMO are better than the best results obtained by the GA (case No. 2). Even the energy consumption per liter (Y5) and the freshwater recovered (Y6) were optimized through multi-objective optimization in RMO, which was not possible in the GA multi-objective optimization because comparatively fewer decision variables are utilized in Y5 and Y6. Moreover, another advantage of RMO is that the optimal parameters are generated automatically, with no need for the manual sorting that was required in the GA multi-objective optimization [43]. Therefore, it can be stated that RMO is better suited to obtaining a goal-based single optimal solution.
The performance metrics used in this study are specific to the CV process of CDI. They therefore differ slightly from those of Hawks et al. [51], who defined generalized performance metrics (productivity, volume-averaged salt removal, volumetric energy consumption, and water recovery ratio) for CDI systems to allow comparison with other desalination technologies.
Although fuzzy modeling is an effective and accurate method that has proved successful in several applications, its main limitation is that its accuracy depends on the accuracy and number of the experimental trials performed; therefore, some variation can be found from one study to another. Nevertheless, fuzzy modeling is an acceptable and reliable modeling tool for most researchers, as is clear from the exponential growth in its application in different fields.
Conclusions
Improving the performance of the capacitive deionization (CDI) desalination system was the essential objective of this research. Using the experimental data, a mathematical model was proposed and applied for the CDI system. RMO was used to optimize the performance with respect to different operating and structural parameters of the CDI system, namely spacer volume, capacitance, applied voltage, flow rate, and cell volume. The performance of the CDI system was evaluated through six different indicators: lowest concentration point, pool water concentration, energy consumption per liter, energy consumption per gram, salt ion adsorption, and freshwater recovery. Two optimization approaches were considered: single-objective and multi-objective. The results obtained by the RMO optimizer were compared with those obtained by the genetic algorithm (GA). The results showed performance improvements ranging from 5.6% to 65% in the case of single-objective optimization. Similarly, in the case of multi-objective optimization, RMO showed overall improved results compared with GA. Furthermore, RMO also overcame the deficiencies that surfaced during the GA optimization. These findings demonstrate the effectiveness of optimization techniques in exploring all possible operating conditions and determining the best one for the operation of the CDI system.
|
2020-08-13T10:05:24.868Z
|
2020-08-10T00:00:00.000
|
{
"year": 2020,
"sha1": "af34c4c53d38703136d6aa7f6450c1d950ef50d0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9717/8/8/964/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "dc54dbc5791b289a0d39a8e5988715d91627551a",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
251461014
|
pes2o/s2orc
|
v3-fos-license
|
Adaptively Center-Shape Sensitive Sample Selection for Ship Detection in SAR Images
With the wide application of synthetic aperture radar in maritime surveillance, a ship detection method has been rapidly developed. However, there is still a key problem common in most methods, i.e., how to select positive and negative samples. The mainstream MaxIoUAssign has inherent problems, such as a fixed threshold and rough classification, resulting in the low quality of the positive samples. To solve these problems, we propose a new sample selection method called adaptively center-shape sensitive sample selection. The proposed method introduces shape similarity between proposal boxes and ground truth as one of the evaluation criteria and collaborates with intersection of union (IoU) to measure the quality of the proposal boxes. Meanwhile, the center distance between proposal boxes and ground truth is used to control the influence degree of IoU and shape similarity. In this way, the quality score of the proposal boxes can be determined through IoU, shape similarity, and center position, making sample selection more comprehensive. Additionally, to avoid a fixed threshold, the standard deviation of the quality score is used as a variable to form the adaptive threshold. Finally, we conducted extensive experiments on the benchmark SAR ship detection dataset (SSDD) and high-resolution SAR images datasets (HRSID) datasets. The experimental results demonstrated the superiority of our method.
I. INTRODUCTION
SYNTHETIC aperture radar (SAR) is a high-resolution image radar. As an active microwave imaging sensor, its microwave imaging process has a certain penetration effect on ground targets and is less affected by the environment. Thus, it can effectively detect various hidden targets. At the same time, its all-weather advantages enable it to complete exploration missions in all extreme conditions. Because of these characteristics, SAR has been widely used in ship detection [1], [2], [3], [4], [5], [6].
Traditional SAR image ship detection methods mainly infer the ship's location and classification by observing the difference between the hull and background. There are three methods based on: 1) statistical features; 2) threshold; 3) transformation. For example, Iervolino and Guida [7] considered the marine clutter and signal backscattering in SAR images and proposed a generalized likelihood ratio test detector. Lang et al. [8] proposed a spatial enhanced pixel descriptor to realize the spatial structure information of the ship target and improve the separability between the ship target and ocean clutter. Leng et al. [9] defined the area ratio invariant feature group to modify the traditional detector. Among them, the constant false alarm rate [10], [11], [12] detection method and its improved version are the most widely studied. However, the traditional SAR ship detection method is not very reliable, and it is difficult to achieve accurate detection based on the difference between the hull and background.
Recently, convolutional neural networks (CNNs) have also been developed in object detection owing to the enhancement of deep learning and graphic processing unit (GPU) computing capability. Meanwhile, the detection performance of the SAR ship based on deep CNNs has been significantly improved. In particular, an accurate location is of great significance to SAR ship detection.
Currently, the precise location work mainly focuses on improving the network model, such as proposing a better network architecture or better strategy to extract reliable local features to obtain more accurate boundary regression. Specifically, these works are reflected in the category of the object detection algorithm. The first work divides the algorithm into anchor-based and anchor-free algorithms, which improve detection performance by constantly improving the design of the framework. The second work divides the algorithm into one-stage and two-stage by adjusting the training strategy.
The difference between the anchor-based and anchor-free algorithms lies in the generation method of the proposal boxes. The former generates some proposal boxes based on the anchor. The anchor needs to be manually designed according to the statistical characteristics of the datasets. Current mainstream anchor-based object detection algorithms include Faster R-CNN [13], RetinaNet [14], and you only look once (YOLO) [15], which search proposal boxes through the anchor and finally determine the target position. Then, the latter generates proposal boxes based on key or central points, which tries to eliminate an artificial anchor setting to reduce artificial interference. Current mainstream anchor-free algorithms include fully convolutional one-stage object detector (FCOS) [16], CornerNet [17], and CenterNet [18]. In addition to the above differences, their training strategies are pretty much the same, i.e., the proposal boxes will be divided into positive and negative samples using the sample selection method. Finally, positive and negative samples are used for the regression of ground truth.
Fig. 1. Proposal boxes around ship targets. The red and blue rectangles are large and small ship targets, respectively. The orange and green rectangles represent proposal boxes and ground truth, respectively. Small targets correspond to fewer proposal boxes, which will be difficult to detect.
The above process demonstrates that if the selected positive samples are very close to ground truth in center distance and shape, boundary regression will converge faster and the prediction accuracy will be higher. For example, the anchor-free FCOS algorithm distributes anchor points evenly in the image according to CNN's downsample rate, and each anchor point predicts ground truth within a certain range. Once the target center point is close enough to an anchor point, the anchor point generates proposal boxes. The advantage of the FCOS algorithm is that the proposal box is closer to ground truth in center distance, but its shape is not accurate. Additionally, the anchor-based RetinaNet algorithm uses an artificial anchor as a proposal box to obtain positive samples with good position and shape. However, the anchor does not necessarily cover targets well, and many small targets correspond to fewer proposal boxes, which will be difficult to detect, as shown in Fig. 1.
The main difference between one-stage and two-stage algorithms is whether the proposal boxes undergo a second round of processing. In the one-stage algorithm, the proposal boxes are not preliminarily screened but are directly used for sample selection, leading to positive samples of low quality in location and shape. Current mainstream one-stage object detection algorithms include the single shot multibox detector (SSD) [19], RetinaNet, and YOLO. By contrast, the two-stage algorithm first filters out proposal boxes without appropriate positions and shapes and then uses the remaining proposal boxes for sample selection. Current mainstream two-stage object detection algorithms include Cascade RCNN [20], Libra RCNN [21], and region-based fully convolutional networks (R-FCN) [22]. One-stage algorithms train relatively quickly, whereas two-stage algorithms are slower but achieve relatively high detection accuracy. This is because, after the second stage, the positive samples are closer to ground truth in position and shape, compensating for the low quality of the initial positive samples. However, the influence of different sample selection methods on detection performance is rarely discussed for either one-stage or two-stage detection algorithms. Our experimental results showed that different sample selection methods influence the model's ability to select the best-quality positive samples.
In this article, we analyze anchor-based versus anchor-free algorithms and one-stage versus two-stage algorithms. We conclude that each line of work ultimately focuses on how to acquire high-quality proposal boxes. Thus, if every algorithm could obtain high-quality proposal boxes, the performance gap between different algorithms would shrink. However, although improving the network model can improve the quality of the proposal boxes, it also brings problems, such as network architectures that are difficult to unify and increased model complexity. Additionally, more precise predictions generally require more model parameters and training time, so this route is not the most economical.
A widely overlooked improvement is how to effectively select the positive and negative samples from the mixed proposal box. As long as a remarkable selection strategy is used to select high-quality proposals, it is not necessary to make laborious changes to the network structure. Currently, the mainstream sample selection method is MaxIoUAssign, but this method can only roughly evaluate the quality of the proposal boxes. Max-IoUAssign is not fully competent because of the fixed threshold value and complex proposal box distribution. In view of these situations, Zhang et al. [23] proposed an adaptive training sample selection (ATSS) method to investigate the differences between anchor-based and anchor-free algorithms. It adaptively adjusts the threshold according to the statistical characteristics of the proposal box intersection of union (IoU). Additionally, Zhu et al. [24] proposed the auto-assign that adopts a confidence weighting module to modify the positive and negative confidences of the locations in the spatial and scale dimensions. Zhang et al. [25] proposed a free anchor that adopts a learning-to-match approach and selects positive and negative samples through network training, thus eliminating manual design. Kim and Lee [26] proposed probabilistic anchor assignment that fits a Gaussian mixed distribution according to the training state of the model and uses the distribution to adaptively separate proposal boxes into positive and negative samples. However, these methods do not consider the validity of the IoU-based evaluation criteria, which is the problem identified in this article.
We found that using IoU alone to evaluate proposal boxes is very rough: IoU does not uniquely describe the importance of a proposal box and does not effectively capture some situations that often occur in sample selection, as shown in Fig. 2(a). Intuitively, because B is more similar to the ground truth, we should choose B over A. However, their IoU values are nearly equal, which implies that they are of the same quality. Fig. 2(b) shows a proposal box A completely covered by ground truth. Because A contains only a part of the object, it is difficult to predict the entire target from it. Proposal box B, by contrast, consists of part of the ground truth and background; although it is also difficult to predict the whole object from part of the ground truth, the background information can help the network model predict the coordinates accurately. Therefore, A should be abandoned, and B should be selected as a positive sample, yet A has a larger IoU than B. These situations result in suboptimal model performance. To select high-quality positive samples from the proposal boxes, we propose a novel sample selection strategy called adaptively center-shape sensitive sample selection (AC4S). Compared with ATSS, AutoAssign, and other methods, our method not only relieves the disadvantages of the conventional MaxIoUAssign method but also adds no new hyperparameters and does not require modifying the network structure. First, it uses the shape similarity and IoU between the proposal boxes and the ground truth as the evaluation criteria of sample quality. Compared with the MaxIoUAssign method, our method refines the evaluation of sample quality, thus improving the quality of positive samples. Second, to balance the influence of shape similarity and IoU, we introduce the center distance between the proposal box and the ground truth as the weight factor. Additionally, because small targets have few positive samples, we adopt an adaptive threshold to increase the number of positive samples for small targets and reduce the number for large targets. Furthermore, we conducted extensive experiments on the benchmark SAR ship detection dataset (SSDD) and the high-resolution SAR images dataset (HRSID). The experimental results verified the effectiveness of the proposed method.
The main contributions of our work can be summarized as follows.
1) By observing the experimental phenomena of the current mainstream sample selection methods, we conducted a detailed analysis and found that the IoU-based evaluation criteria in sample selection are rough and the samples corresponding to different sizes of targets are unbalanced. 2) To solve the common problems in the current mainstream positive and negative sample selections, we propose the AC4S method. By using basic data from datasets, such as center location and shape similarity, the proposed method can select high-quality positive samples from a large number of proposal boxes without increasing model parameters. At the same time, fixed thresholds were replaced with adaptive ones to balance the samples of different targets.
3) We conducted extensive experiments on the benchmark SSDD and HRSID datasets to prove the effectiveness of the proposed method. The experimental results confirmed that the proposed method is effective. The rest of this article is organized as follows. Section II illustrates the proposed method in detail. Next, the experimental results on several dataset and the corresponding analysis are provided in Section III. Finally, Section IV concludes this article.
II. METHODOLOGY
This section introduces the proposed AC4S method, which is divided into three components: 1) center-distance evaluation criteria; 2) shape-similarity evaluation criteria; 3) adaptive threshold. First, we introduce the current mainstream MaxIoUAssign method and the proposed method. Second, we analyze IoU. Next, we introduce the construction of the center-distance evaluation criteria. Then, we present the shape-similarity evaluation criteria. Finally, we introduce the structure of the adaptive threshold.
A. MaxIoUAssign
The MaxIoUAssign method is one of the most widely used positive and negative sample selection methods. It is based on a fixed threshold, i.e., an IoU threshold between proposal boxes and ground truth. First, the IoU between each proposal box and each ground truth is calculated, and the ground truth corresponding to the maximum IoU is taken as the target matched to that proposal box. If the maximum IoU is greater than the fixed IoU threshold, the proposal box is regarded as a positive sample of that target; otherwise, it is a negative sample.
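A minimal sketch of this assignment rule is given below; it follows the description above rather than any particular library's implementation, and the 0.5 threshold is only an example value.

```python
# Sketch of MaxIoUAssign: match each proposal to the ground truth with the
# highest IoU, then threshold that best IoU to decide positive vs. negative.
import numpy as np

def pairwise_iou(boxes, gts):
    """boxes: (N, 4), gts: (M, 4), both in (x1, y1, x2, y2) format."""
    x1 = np.maximum(boxes[:, None, 0], gts[None, :, 0])
    y1 = np.maximum(boxes[:, None, 1], gts[None, :, 1])
    x2 = np.minimum(boxes[:, None, 2], gts[None, :, 2])
    y2 = np.minimum(boxes[:, None, 3], gts[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    area_g = (gts[:, 2] - gts[:, 0]) * (gts[:, 3] - gts[:, 1])
    return inter / (area_b[:, None] + area_g[None, :] - inter)

def max_iou_assign(boxes, gts, pos_thr=0.5):
    iou = pairwise_iou(boxes, gts)                        # (N, M)
    best_gt = iou.argmax(axis=1)                          # matched GT per proposal
    best_iou = iou.max(axis=1)
    labels = np.where(best_iou >= pos_thr, best_gt, -1)   # -1 marks negative samples
    return labels, best_iou
```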
This scheme is generally applicable to most detectors, including Faster RCNN, YOLO, and RetinaNet. However, it has some inherent shortcomings. First, the quality of a proposal box is not determined solely by IoU; even if several proposal boxes have the same IoU, they do not necessarily have the same quality. Therefore, the quality of a proposal box should be considered from several aspects.
Additionally, fewer proposal boxes correspond to a small target than to a large target, so the IoU of those proposal boxes is inevitably low. A fixed IoU threshold is therefore unfriendly to small targets, which may even end up with no positive samples at all. In this case, some small targets cannot participate in training and will not be detected, directly making the algorithm insensitive to small targets. Therefore, an appropriate sampling method should be adopted to compensate for the imbalance between large and small targets.
In this article, we used Faster RCNN as the baseline method to verify the effectiveness of the proposed method; its structure is shown in Fig. 3. As a two-stage target detection method, it first uses MaxIoUAssign in the RPN to extract RoIs and obtain more accurate proposal boxes in the first stage. Next, in the second stage, MaxIoUAssign is used again on the RoIs to extract positive and negative samples for calculating the loss function. It is worth noting that the targets of MaxIoUAssign in the first stage are anchors, which are set manually. Because anchors at different positions share the same aspect ratios, shape similarity cannot play a role at this stage. For this reason, we do not modify MaxIoUAssign in the first stage but focus on the second stage. After the adjustment in the first stage, the RoIs have irregular positions and shapes, so the proposed method can be used to replace MaxIoUAssign in the second stage.
B. Analysis of IoU
To illustrate the problems with MaxIoUAssign, we first explain how IoU works and why it matters. IoU is calculated from the location information of the proposal box (x1, y1, x2, y2) and the ground truth (x1', y1', x2', y2'):

IoU = I / (A + A' - I),  I = max(0, min(x2, x2') - max(x1, x1')) * max(0, min(y2, y2') - max(y1, y1'))   (1)

where A = (x2 - x1)(y2 - y1) and A' = (x2' - x1')(y2' - y1') are the box areas, and min(.) and max(.) return the minimum and maximum values, respectively. To explore how the IoU function is affected by center distance and shape, the corner coordinates are transformed into the center coordinates and half-sizes of a box, i.e., x = (x1 + x2)/2, y = (y1 + y2)/2, w = (x2 - x1)/2, and h = (y2 - y1)/2 (and likewise x', y', w', h' for the ground truth), so that the horizontal overlap in formula (1) becomes min(x + w, x' + w') - max(x - w, x' - w'). There are two possible values for min(x + w, x' + w') and two for max(x - w, x' - w'), and enumerating the resulting cases shows how the center distance and shape similarity affect this formula. In the cases of formulas (3) and (6), the central x, y coordinates of the proposal box and ground truth do not participate in calculating IoU; in these cases, the center points of the two boxes have no obvious positional relationship, and IoU can no longer meet the needs of positive and negative sample selection. Therefore, the proposed method adds an evaluation criterion to fill this gap. It is obvious that the smaller the center distance L_cen is, the faster the boundary regression will converge.
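To make the two parameterizations concrete, the short sketch below (not taken from the paper) computes IoU from corner coordinates and from center/half-size coordinates and checks that they agree; the containment cases in which the center coordinates cancel out of the overlap are the ones the text associates with formulas (3) and (6).

```python
# IoU in the two equivalent parameterizations discussed above: corner form
# (x1, y1, x2, y2) and center/half-size form (x, y, w, h), where the box
# spans [x - w, x + w] x [y - h, y + h].
def iou_corners(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def to_center_half(box):
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2, (y1 + y2) / 2, (x2 - x1) / 2, (y2 - y1) / 2

def iou_center(a, b):
    xa, ya, wa, ha = to_center_half(a)
    xb, yb, wb, hb = to_center_half(b)
    ix = max(0.0, min(xa + wa, xb + wb) - max(xa - wa, xb - wb))
    iy = max(0.0, min(ya + ha, yb + hb) - max(ya - ha, yb - hb))
    inter = ix * iy
    return inter / (4 * wa * ha + 4 * wb * hb - inter)

# The two forms agree; when one interval fully contains the other, the center
# coordinates cancel out of the overlap term (the containment cases).
a, b = (10, 10, 50, 40), (20, 15, 60, 45)
assert abs(iou_corners(a, b) - iou_center(a, b)) < 1e-9
```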
We continue by studying the influence of shape on IoU. For convenience, assume that L_cen has reached its optimum, i.e., (x - x') = (y - y') = 0. Formula (1) then reduces to formula (7):

IoU = [min(w, w') * min(h, h')] / [w*h + w'*h' - min(w, w') * min(h, h')]   (7)

Formula (7) shows that, with coincident centers, IoU is determined by the ratios w/w' and h/h', namely the shape similarity L_shape, indicating that L_shape is an important criterion for evaluating the quality of proposal boxes. Therefore, to obtain higher-quality positive samples, L_shape is taken as an important evaluation criterion.
We comprehensively evaluate the quality of a proposal box from three aspects: L_cen, L_shape, and IoU.
For large targets, there are many proposal boxes that meet the requirements of the IoU threshold, and their center point and shapes are rich. In general, the larger the L cen is, the more attention is paid to L shape . On the contrary, when L cen is small, comprehensive consideration should be given to the IoU of the proposal boxes because shape similarity will be less important. For small targets, owing to the small number of its proposal boxes, the method of using a fixed threshold will lead to the imbalance of samples corresponding to small targets, affecting the training of small targets. Therefore, a method to balance the size of the target sample should be considered.
To solve the above problems, we propose the AC4S method, whose procedure is shown in Algorithm 1. It inherits and extends the MaxIoUAssign method. We also investigated the influence of center distance and shape similarity on the experimental results.
C. Center Distance
Center distance is a measure of the difference between the positions of two boxes. Considering the boundary regression task of target detection, the closer the center point of the proposal boxes is to the center point of ground truth, the closer the predicted value is to 0, making it easier for the boundary regression to converge to the label. Therefore, when selecting positive and negative samples, center distance is an important criterion for evaluating positive and negative samples. In particular, the proposal boxes around ground truth must be considered. We designed an evaluation function as a criterion to calculate the distance between two center points. Its form is shown in formula (8). Intuitive understanding is shown in Fig. 4.
Here, L cen represents center distance. We use the L cen evaluation criteria to select proposal boxes. Fig. 5 shows that the selected proposal boxes are concentrated near the label.
The value of the evaluation function is always greater than or equal to 0 and less than 1, which meets our basic properties for an evaluation function. Fig. 5 shows that the L cen is closer to 1 when the center point is closer to the label. It is worth noting that when the center point of the proposal box is not in ground truth, L cen is set to 0. We make up for the situation in formulas (3) and (6) by setting the center-distance evaluation criteria, which play an important role in selecting high-quality positive and negative samples.
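Since formula (8) is not reproduced in this text, the sketch below is only a plausible center-distance score with the stated properties: a value in [0, 1] that grows as the proposal center approaches the ground-truth center and is zero when that center lies outside the ground truth. The normalization by the ground-truth half-diagonal is an assumption, not the paper's exact expression.

```python
# Illustrative center-distance score L_cen (assumed form, see lead-in above).
import math

def l_cen(box, gt):
    """box, gt: (x1, y1, x2, y2) corner coordinates."""
    bx, by = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    # Zero score if the proposal center falls outside the ground truth.
    if not (gt[0] <= bx <= gt[2] and gt[1] <= by <= gt[3]):
        return 0.0
    half_diag = math.hypot(gt[2] - gt[0], gt[3] - gt[1]) / 2
    dist = math.hypot(bx - gx, by - gy)
    return max(0.0, 1.0 - dist / half_diag)
```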
D. Shape Similarity
To determine the shape distribution of the proposal boxes, we selected three images with different target characteristics from the SSDD dataset to observe their distribution, including small, large, and dense targets. Their statistical characteristics regarding L shape are shown in Fig. 6. Faster RCNN collected a total of 600 RoIs, and it can be seen from Fig. 6 that different targets vary greatly. In A, RoI's L shape is mainly concentrated in 0.2-0.5. In B, RoI's L shape is mainly concentrated in 0-0.3, and in C, RoI's L shape is mainly concentrated in 0.6-1. These results showed that the shape of different targets has different influences on sample selection. Therefore, it is necessary to take L shape as a separate evaluation criterion for the sample selection strategy.
An anchor can roughly cover all targets in the image by setting different positions and aspect ratios, and each target can usually find the anchor with a close distance. Therefore, even if the center point is very close to the center of ground truth, it cannot be directly regarded as a positive sample. We also need to pay attention to another important factor, shape similarity, which refers to height and width ratios between the proposal box and ground truth.
Shape similarity is also important for selecting positive and negative samples. From the boundary regression loss function (9), the model tries to predict the height and width ratios between the proposal boxes and ground truth. Because the predictions Δw and Δh are usually zero-initialized, it can be seen from formula (9) that if log(w'/w) and log(h'/h) are small, the loss converges very quickly and is more stable after convergence:

Loss^wh_reg = smooth_l1(Δw, log(w'/w)) + smooth_l1(Δh, log(h'/h))   (9)

It should be noted that, to make the loss function converge faster, L_shape must be consistent with Loss^wh_reg. Therefore, referring to the structure of the boundary regression loss function, we designed the evaluation function shown in formula (10). We use a square root to slow down the drastic changes caused by the product, and the resulting L_shape is limited to the range 0 to 1. As shown in formula (10), when the shapes of the proposal box and ground truth are similar, L_shape approaches 1; conversely, it approaches 0. Therefore, the shape similarity of a proposal box can be evaluated using this criterion.
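As formula (10) itself is likewise not reproduced here, the following sketch is only one plausible shape-similarity score consistent with the description: built from the width and height ratios, damped by a square root, and bounded in [0, 1] with 1 for identical shapes.

```python
# Illustrative shape-similarity score L_shape (assumed form, see lead-in above).
import math

def l_shape(box, gt):
    """box, gt: (x1, y1, x2, y2) corner coordinates with positive sizes."""
    w, h = box[2] - box[0], box[3] - box[1]
    wg, hg = gt[2] - gt[0], gt[3] - gt[1]
    ratio_w = min(w, wg) / max(w, wg)     # width ratio in (0, 1]
    ratio_h = min(h, hg) / max(h, hg)     # height ratio in (0, 1]
    return math.sqrt(ratio_w * ratio_h)   # sqrt damps the product
```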
E. Quality Score
We study the behavior of L_shape and L_cen on different targets. We conducted experiments on small, large, and dense targets in the SSDD dataset, and the results are shown in Fig. 7. A total of 600 RoIs were collected using Faster RCNN, and the L_shape and L_cen of each target are used as the coordinate axes. The masked area marks the region usually adopted for positive samples, because algorithms typically take proposals with large L_shape and L_cen as positive samples. However, the RoIs of different targets in this region vary greatly, which is not conducive to balanced training samples. Therefore, it is not enough to select samples based only on L_shape and L_cen; the influence of IoU, L_shape, and L_cen must be weighed together. Specifically, when the center point of the proposal box is close to the ground truth, a large weight is given to L_shape; when the center distance between the two boxes is large, a large weight is given to IoU, so that the position and shape of the proposal box are considered comprehensively. Therefore, we directly take L_cen as the weight, and our quality score (QS) evaluation function is given in formula (11). To further explore the difference between the proposed method and MaxIoUAssign, we conducted an experiment on the SSDD dataset, and the results are shown in Fig. 8. This figure shows the results obtained using QS and IoU for the same L_shape and L_cen, respectively. When L_shape and L_cen are small, the difference between IoU and QS is not large, because when L_cen is 0, QS degenerates into IoU. As L_shape and L_cen increase, the difference between IoU and QS gradually becomes larger. Replacing IoU with QS makes the boundary between high-quality and low-quality RoIs clearer, facilitating the separation of positive and negative samples.
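Formula (11) is not shown in this text either; the sketch below uses a hypothetical convex-combination form that reproduces the stated behavior (L_cen acts as the weight, and QS reduces to IoU when L_cen = 0), rather than the paper's exact expression.

```python
# Illustrative quality score QS (assumed form, see lead-in above).
def quality_score(iou, l_cen, l_shape):
    """All three inputs are scores in [0, 1] for one proposal/ground-truth pair."""
    return l_cen * l_shape + (1.0 - l_cen) * iou

# Example: a proposal whose center lies outside the ground truth (l_cen = 0)
# is scored purely by its IoU.
assert quality_score(iou=0.6, l_cen=0.0, l_shape=0.9) == 0.6
```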
F. Adaptive Threshold
To eliminate artificial interference and balance the positive and negative samples of targets of different sizes, we adopt an adaptive threshold instead of a fixed one. Fig. 9 shows that the median QS generally lies within the range 0 to 0.4. The score distribution of large targets is scattered and the standard deviation is large, so there are more proposal boxes with a high score; the threshold should therefore be increased appropriately to obtain fewer positive samples. Small targets have small QS values, a concentrated distribution, and a small standard deviation, so there are fewer proposal boxes with a high score, and the threshold should be reduced appropriately to obtain more positive samples. Therefore, we use the standard deviation of QS as the adaptive factor of the threshold, as shown in formula (12). The threshold is adjusted adaptively according to the differences among the proposal boxes of each target so that small targets can also select high-quality positive samples:

thre = α + std(score)   (12)

Here, α is a hyperparameter, and std(.) is the standard deviation.
Fig. 9. Distribution of QS on small, large, and dense targets in the image. Black dots represent RoI data. The black line in the box represents the median line of QS. 25%-75% represents the RoI range corresponding to scores in the 25%-75% range.
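Putting the pieces together, the sketch below shows the adaptive assignment step built on formula (12). Whether the standard deviation is taken over each target's own candidate scores or over all scores is not fully specified here; the per-target choice below is an assumption that matches the discussion of small versus large targets.

```python
# Minimal sketch of the adaptive assignment: thre = alpha + std(score),
# computed per ground truth (assumed), so small targets get a lower threshold
# and large targets a higher one.
import numpy as np

def ac4s_assign(score_matrix, alpha=0.55):
    """score_matrix: (num_proposals, num_gts) array of quality scores QS."""
    num_proposals, num_gts = score_matrix.shape
    labels = np.full(num_proposals, -1, dtype=int)   # -1 marks negative samples
    best = np.zeros(num_proposals)
    for j in range(num_gts):
        scores = score_matrix[:, j]
        thre = alpha + scores.std()                  # adaptive threshold, formula (12)
        pos = (scores >= thre) & (scores > best)     # keep each proposal's best target
        labels[pos] = j
        best[pos] = scores[pos]
    return labels
```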
III. EXPERIMENT
In this section, to verify the validity of the proposed method, we conducted extensive experiments on SSDD and HRSID datasets. First, we introduced the dataset, evaluation criteria, and experimental environment. Then, to compare the differences between MaxIoUAssign and our method, we conducted experiments on the SSDD dataset and analyzed their differences. Next, we performed ablation experiments to explore the setting of the hyperparameters in the evaluation criteria. Finally, the proposed method was compared with several state-of-the-art methods on the SSDD and HRSID datasets.
A. Dataset
To prove the superiority of this method, we conducted extensive experiments on the SSDD and HRSID datasets.
SSDD is the first SAR ship dataset established in 2017. It has been widely used by many researchers since its publication and has become the baseline dataset for SAR ship detection. The SSDD dataset contains many scenarios and ships and involves various sensors, resolutions, polarization modes, and working modes. Additionally, the label file settings of this dataset are the same as those of the mainstream PASCAL visual object classes (VOC) dataset, so training of the algorithms is convenient.
In using the SSDD dataset, researchers used to randomly divide training, validation, and test datasets. These inconsistent divisions often result in the absence of common evaluation criteria. As researchers gradually discovered this problem, they began to establish uniform training and test datasets. Currently, 80% of the total dataset are training datasets, and the remaining 20% are test datasets. There are 1160 images in the SSDD dataset. Therefore, the number of images in the training dataset is 921, and the number of images in the test dataset is 239. For further refinement, images whose names end with digits one and nine are set as test datasets. In this way, the performance of various detection algorithms can be evaluated in a targeted way.
The HRSID dataset was released by the University of Electronic Science and Technology of China (UESTC) in January 2020. HRSID is used for ship detection, semantic segmentation, and instance segmentation tasks in high-resolution SAR images. The dataset contains 5604 high-resolution SAR images and 16 951 ship instances. Its label file settings are the same as those of the mainstream Microsoft common objects in context (MS COCO) dataset.
B. Evaluation Criteria
To evaluate the detection performance of the algorithm model, we adopted the evaluation criteria AP, AP50, AP75, APs, APm, and APl of the MS COCO dataset. Average precision (AP) is the area under the precision-recall curve and is calculated from precision and recall, which are given in formula (13). Note that AP is the mean value over IoU = 0.50 : 0.05 : 0.95 (the primary challenge measure), AP50 is the AP at IoU = 0.5 (the PASCAL VOC measure), and AP75 is the AP at IoU = 0.75 (the strict measure). APs, APm, and APl represent the AP of small, medium, and large targets, respectively, where small targets have an area less than 32^2 pixels, medium targets an area between 32^2 and 96^2 pixels, and large targets an area greater than 96^2 pixels:

P = TP / (TP + FP) × 100%,  R = TP / (TP + FN) × 100%   (13)

Here, TP (true positive) is the number of ships correctly detected, FP (false positive) is the number of detections incorrectly classified as ships, and FN (false negative) is the number of ships that were missed. AP is defined as

AP = ∫_0^1 P(R) dR   (14)

where P represents precision and R represents recall, i.e., AP equals the area under the precision-recall curve. In addition, floating point operations (FLOPs) and Params are adopted in this article to evaluate the computational cost and the number of training parameters; FLOPs can be used to measure the complexity of the model. Frames per second (FPS) is adopted to evaluate the running speed: it measures the number of images processed per second (equivalently, the time required to process one image); the shorter the time, the faster the detection.
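For reference, the sketch below computes precision, recall, and AP as the area under the precision-recall curve using the standard monotone-envelope integration; it is a generic illustration rather than the MS COCO toolkit's exact 101-point protocol.

```python
# Minimal sketch of the metrics in formulas (13) and (14).
import numpy as np

def precision_recall(tp, fp, fn):
    # Formula (13); multiply by 100 to express as percentages as in the paper.
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r

def average_precision(precisions, recalls):
    """precisions/recalls: arrays sampled along the curve (recall ascending, values in [0, 1])."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    # Make precision monotonically non-increasing from right to left (envelope).
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas where recall changes: AP = sum (r[i+1]-r[i]) * p[i+1].
    idx = np.where(r[1:] != r[:-1])[0]
    return np.sum((r[idx + 1] - r[idx]) * p[idx + 1])
```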
C. Experimental Settings
All experiments were implemented in PyTorch 1.6.0, CUDA 11.2, and cuDNN 7.4.2 with an Intel Xeon Silver 4110 CPU and an NVIDIA GeForce TITAN RTX GPU. The PC operating system is Ubuntu 18.04. Table I presents the computer and deep learning environment configuration for our experiments.
The algorithm model in this article is based on the MMDetection framework. We trained the proposed method based on Faster RCNN using the stochastic gradient descent algorithm for 12 epochs, with two images per mini-batch.
The initial learning rate was set to 0.01, the weight decay was 0.0001, and the momentum was 0.9. Our code is available at https://github.com/LITTERWWE/AC4S.
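For readers using MMDetection, the stated settings would correspond to a config fragment like the sketch below; the field names follow MMDetection 2.x conventions, and the warmup, step, and worker values are the framework's common 1x-schedule defaults, assumed here rather than taken from the authors' released config.

```python
# Sketch of an MMDetection-style training config for the stated settings
# (SGD, lr 0.01, momentum 0.9, weight decay 0.0001, 12 epochs, 2 images/batch).
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(policy='step', warmup='linear', warmup_iters=500,
                 warmup_ratio=0.001, step=[8, 11])   # standard 1x steps (assumed)
runner = dict(type='EpochBasedRunner', max_epochs=12)
data = dict(samples_per_gpu=2, workers_per_gpu=2)    # 2 images per mini-batch (single GPU assumed)
```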
D. Ablation
After the analysis in Section II, we determined three influencing factors that distinguish positive and negative samples: 1) IoU ; 2) L cen ; 3) L shape . In this section, we will study the influence of different influencing factors on the experimental results.
1) Selection of Weight: To verify the influence of different parameters on the construction of the quality function, we set different weights for formula (16) on the SSDD dataset in an ablation experiment. Additionally, instead of using the adaptive threshold, we fixed the threshold at 0.5. The detection performance is presented in Table II. First, the fifth row of Table II shows the results of the original Faster RCNN algorithm. It can be clearly seen from Table II that after adding L_shape, some of the hyperparameter settings achieve better results than the original Faster RCNN algorithm in AP75, APs, APm, APl, and recall. We can also see that the detection performance with L_cen is better than that of the manual settings and the original Faster RCNN algorithm. Finally, to further illustrate the superiority of L_cen over manual settings, we plot PR curves in Fig. 10. The L_cen curve (black line) is superior to the other manual-setting methods and the original Faster RCNN algorithm at different recall rates; moreover, when recall is greater than 0.8, our curve declines more smoothly. These results show that L_cen can replace the manual settings and the original Faster RCNN algorithm.
2) Effect of Adaptive Threshold: Because the fixed threshold is not friendly to the small target, adaptive positive and negative sample selections are performed in this article. Our adaptive threshold is similar to the method in ATSS. However, the difference is that ATSS takes the sum of the mean and standard deviation of IoU as the adaptive threshold. Meanwhile, the proposed method only uses standard deviation because the mean score is very small, leading to a small adaptive threshold and an excessive number of positive samples, thus affecting the training results.
We conducted extensive experiments on the SSDD dataset. The experiment was divided into two parts: 1) The first part did not use standard deviation; 2) the second part used standard deviation. The experimental results are shown in Table III. In the first part, the threshold was set from 0.6 to 0.8 due to the large mean value of L shape . In the second part, we lowered α after using the variance because the variance was in the range of 0∼0.4.
By comparing the two parts, we found that AP50 was higher when the variance was used to form the adaptive threshold; in particular, when α was equal to 0.5, our method showed a clear improvement. We performed a detailed analysis of the experimental results. Because a small target has a small standard deviation, the proposed adaptive threshold relatively lowers the threshold for small targets and increases their number of positive samples. Additionally, because a large target has a large standard deviation, raising its threshold helps the large target select positive samples of higher quality. Therefore, detection performance can be improved through the adaptive threshold. According to the above analysis, our adaptive threshold method plays an effective role in sample selection. In the following work, thre is set to 0.55 and the variance is used. With this configuration, we achieved 96.3% AP on the SSDD dataset.
E. MaxIoUAssign versus Our Method
We selected images from three scenarios, as shown in Fig. 11, where Figs. 11(1), (2), and (3) represent small, large, and dense targets, respectively. We observe the differences between the proposed method and MaxIoUAssign of RoIs on these images in three forms, as follows. Fig. 11(b) represents the scatter diagram of QS (red sphere) and IoU (blue sphere) at different positions of RoI in Faster RCNN. X and Y represent the coordinates of RoIs on the image, and Z represents the QS or IoU value. As the figure shows, the red sphere's maximum value is larger than that of the blue sphere, while the minimum value is almost the same. This phenomenon shows that, on the one hand, high-quality RoIs show higher QS scores than IoU. On the other hand, low-quality RoIs performed almost identically on QS and IoU. It will lead to some high-quality RoIs standing out when QS is used because the gap between them and low-quality RoIs becomes larger, thus facilitating the screening of high-quality RoIs. Then, our method creates a clear dividing line between RoI QSs, which makes it easier for the model to select high-quality RoIs.
The phenomenon mentioned above may be difficult to see in the scatter plots, so we smoothed the scatter diagram; the result is shown in Fig. 11(c). In the figure, the colored surfaces are the IoU distribution surfaces of all RoIs, whereas the gray surfaces are the QS distribution surfaces. It is evident that the QS surface is significantly higher than the IoU surface at the center point. Moreover, the closer to the central point, the more significant the gap between QS and IoU; the farther from the center, the smaller the gap, until the two surfaces almost overlap.
It should be noted that the value range of both QS and IoU is [0, 1]. Although the difference between QS and IoU may not seem evident in Fig. 11(c), taking Fig. 11(1)(c) as an example, the difference between QS and IoU at the central point is between 0.15 and 0.2. This gap may seem small visually, but it is enough for the neural network to reliably select high-quality RoIs, thus making model training more effective.
Finally, consider the heat maps, where a darker color indicates a larger QS or IoU. Take Fig. 11(1)(d) and (1)(e) as an example. Fig. 11(1)(d) shows the proposed method: the color at the center point is red, while positions away from the center are white. Fig. 11(1)(e) shows the MaxIoUAssign method: the color at the center point is red, and positions away from the center are light red. This contrast shows that our method makes differences in sample quality much clearer.
F. Training and Inference Times
To show that the proposed method hardly reduces the training speed or increases the inference time while improving detection performance, two recognized indicators, FLOPs and Params, are adopted to evaluate the computational cost and complexity of the model. For the training time, we recorded the training time of our method and of the baseline Faster RCNN on SSDD and HRSID. For the inference time, FPS was used as the evaluation criterion. The results are shown in Table IV. As seen from the table, the FLOPs and Params of the two methods are identical. Meanwhile, on the SSDD and HRSID datasets, the training time of our method is approximately 30 s longer than that of the baseline method, which is almost negligible. These results show that the proposed method has little effect on training time and does not increase the model's computational load or number of parameters.
In addition, there is almost no difference in FPS between the two methods, which indicates that the inference time of the two methods is almost the same, proving that the proposed method hardly affects the inference time.
G. Experiment on SSDD
To prove the advancement of our method, we conducted extensive experiments on the SSDD dataset. In this section, we take the anchor-based Faster RCNN as the baseline and compare our method with other mainstream CNN-based detectors. Table V shows the test results on SSDD. As can be seen from the table, compared with the original Faster RCNN algorithm, our method improves recall by 1.5%, AP50 by 1.7%, and APs, APm, and APl by 1.1%, 1.7%, and 17.3%, respectively. In addition, by comparing our method with current mainstream target detection algorithms, we find that it is superior to almost all of them. Specifically, compared with Dynamic RCNN, our method improves AP50 by 2.1%, APs by 0.8%, and APl by 0.4%. Compared with Cascade RCNN, our method improves recall by 2.9%, AP50 by 5.2%, and APs, APm, and APl by 1.7%, 3.6%, and 8.2%, respectively. Compared with NAS FCOS, our method improves recall by 9.0%, AP50 and AP75 by 11.5% and 1.7%, and APs, APm, and APl by 3.0%, 3.6%, and 8.2%, respectively. Fig. 12 also shows that the proposed method is superior to the other algorithms under different recalls; when recall is greater than 0.8, the decline of the proposed method's curve is more stable than that of Dynamic RCNN and NAS FCOS. The first row in Fig. 13 shows that the proposed method detects the target accurately and does not suffer from repeated detection, unlike Faster RCNN and NAS FCOS. The second row shows that the proposed method avoids repeated and erroneous detections compared with Faster RCNN and Dynamic RCNN. The third row shows that although none of the three methods can completely detect the ship, the error rate of the proposed method is lower than that of the other algorithms. The above analysis demonstrates that all algorithms suffer from repeated, erroneous, and missed detections, but the detection accuracy of the proposed method is relatively high. Therefore, the proposed method achieves a better detection effect than the Dynamic RCNN and NAS FCOS algorithms.
H. Contrast Experiment With Other Sample Selection Methods
To comprehensively evaluate the performance of the method in sample selections, we compared it with other sample selection methods on the HRSID dataset, including ATSS and AutoAssign algorithms. The results are shown in Table VI. Because AutoAssign is based on FCOS, we did not migrate it to Faster RCNN. ATSS and the proposed method were applied to the Faster RCNN algorithm. These methods focus on sample selection, but their operation and ideas are different, so it has the significance of comparison.
As presented in Table VI, compared with the Faster RCNN algorithm, our AP50 and AP75 improved by 0.9 and 0.4, respectively. Although the effect on HRSID was not as obvious as on SSDD, the method still improved on the original algorithm. Additionally, compared with the mainstream ATSS method, the proposed method improved AP50, AP75, APs, and APm by 8.1%, 0.9%, 6.9%, and 3.4%, respectively, on the HRSID dataset. Compared with the current mainstream AutoAssign, although our method was 1.3% lower in AP50, it was 5.8% higher in AP75, 5.2% higher in APs, and 0.3% higher in APm. Moreover, the algorithmic complexity of AutoAssign is much higher than that of the proposed method. In summary, these results confirm the advantage of our method in sample selection.
I. Experiment on HRSID
To verify the robustness of the proposed algorithm, we conducted extensive experiments on the HRSID dataset. In this section, we again take the anchor-based Faster RCNN as the baseline and compare it with two other CNN-based methods: 1) Dynamic RCNN; 2) NAS FCOS. Table VII presents the test results on the HRSID dataset. As presented in Table VII, compared with the original Faster RCNN algorithm, our method improved recall, AP50, APs, APm, and APl by 1.5%, 1.7%, 1.1%, 1.7%, and 17.3%, respectively. Additionally, our method is superior to current mainstream target detection algorithms. Specifically, compared with Dynamic RCNN, our method improved AP50 by 2.1%, APs by 0.8%, and APl by 0.4%. Compared with NAS FCOS, our method improved recall, AP50, AP75, APs, APm, and APl by 9.0%, 11.5%, 1.7%, 3.0%, 3.6%, and 8.2%, respectively.
To intuitively observe the effect of the proposed method, we marked the detection results in the image, as shown in Fig. 14. HRSID is larger than the SSDD dataset, and there are more dense small targets, so the detection is more difficult, and the detection effect is worse. However, the proposed method is still superior to the other three detection algorithms. Fig. 14 shows that other algorithms often have an error and repeated detections for dense small targets, but the proposed method performs better than them. Fig. 15 shows that the results of the proposed method are better than those of other algorithms at different recall rates, and our curves can almost cover other curves. Therefore, the above analysis demonstrates that the proposed method can also show relatively good effects on the HRSID dataset.
IV. CONCLUSION
In this article, we proposed a new sample selection algorithm for SAR ship detection. To select high-quality proposal boxes whose shape is similar to ground truth, we retained IoU and introduced shape similarity as the evaluation criterion of sample quality. Center distance was used as a weight to balance IoU and shape similarity, which was conducive to obtaining proposal boxes of higher quality. Furthermore, to avoid the fixed threshold, the standard deviation of QS was taken as the variable to regulate the threshold, which promoted the balance of samples. The experimental results showed that the proposed AC4S can effectively improve the performance of target detection and is better than other algorithms.
Self-assembly of a "double dynamic covalent" amphiphile featuring a glucose-responsive imine bond†
Glucose binding via boronate ester linkages selectively triggers imine bond formation between 4-formylphenylboronic acid and octylamine, leading to the formation of vesicular aggregates in aqueous solutions. This "double dynamic covalent assembly" allows facile, selective sensing of glucose against the otherwise serious interferent fructose, without the need to resort to synthetic effort.
The use of dynamic covalent bonds in the construction of complex molecular assemblies is a rapidly expanding area of research. 1 Compared with noncovalent interactions, which are weak and constantly exchanging, dynamic covalent bonds can function effectively in highly competitive media, leading to significantly more stable assemblies that can be further stabilised "temporarily" (e.g. stabilising a hydrazone by increasing medium pH 2 ) or "permanently" (e.g. reducing an imine to an amine 3 ). Differing from the "permanent" covalent bonds used in organic synthesis, dynamic covalent bonds allow component exchange and can be highly responsive to environmental conditions such as temperature, 4 pH, 5 phase separation 6 and molecular recognition events. Of particular interest is the responsiveness of dynamic covalent bonds to molecular recognition events. Many examples have been reported in which receptor structures were optimized through evolution of a library of assembling components in the presence of the substrate of interest as the template. 7 While reported examples have focused on optimisation of the receptor structure from possible library members, little attention has been paid to the effect of substrate binding on the extent of dynamic covalent bond formation. In principle, substrate binding should be able to amplify the formation of originally weak dynamic covalent bonds that assemble the receptor. If molecular recognition between the receptor and the substrate occurs via another dynamic covalent bond (instead of the commonly employed noncovalent interactions), a molecular assembly involving receptor assembly and receptor-substrate binding would form, resulting in simultaneous stabilisation of two or more dynamic covalent bonds. This could be an attractive step towards the creation of complex structures with potential applications such as sensing and drug delivery.
Herein we report such a system, in which formation of an imine bond occurs to a small extent without a bound substrate but is significantly and selectively amplified by glucose binding to the boronic acid moiety via boronate ester linkages, forming a glucose-bound supramolecular assembly (Fig. 1). The "dynamic covalent amphiphile" formed between 4-formylphenylboronic acid (4FBA), octylamine (C8AM) and glucose self-assembles into vesicular aggregates in aqueous solutions, allowing selective glucose sensing simply by mixing commercially available reagents.
It has been well established that monoboronic acids have an intrinsic preference for binding fructose selectively amongst the common monosaccharides, due to the abundance of its boronic acid-accessible β-furanose form. 8 Selective sensing of glucose can be achieved by using diboronic acids that chelate glucose via binding two cis-diol moieties of its α-furanose forms. 9 It has also been reported recently that glucose can induce aggregation of simple boronic acids due to its ability to crosslink two boronic acid molecules. 10 We hypothesized that a glucose-selective sensor could be as simple as an amphiphilic boronic acid, where a hydrophobic group is attached to a hydrophilic boronic acid via a dynamic covalent linkage, preferably an imine bond 11 due to its rapid kinetics. Glucose binding was expected to induce amphiphile aggregation and, as a result, "indirectly" amplify the imine bond formation that is responsible for assembling the amphiphilic boronate ester. To test this idea, we chose to use simple components, 4-formylphenylboronic acid (4FBA) and octylamine (C8AM) (Fig. 1). The ability of 4FBA to form an imine bond with C8AM and a boronate ester linkage with saccharides in aqueous solutions has been confirmed by 1H NMR studies (Fig. S4, ESI†).
To allow imine bond formation while ensuring water solubility of all components, we carried out the self-assembly studies at pH 10.5 (with 100 mM sodium carbonate buffer). Under these conditions, C8AM (pKa 10.65 12 ) is partially protonated and maintains water solubility at 3 mM, while 4FBA (pKa 7.4 13 ) exists completely in its anionic form, which maximizes its saccharide binding affinity. When 4FBA (3 mM) and C8AM (3 mM) were mixed at pH 10.5, the solution remained clear and transparent (Fig. 2a). In the presence of glucose (5 mM), however, the solution became increasingly turbid over the course of 30 min, indicating that amphiphile aggregation took place (Fig. 2a). With galactose (5 mM) used as the saccharide component, a lower degree of turbidity was observed, whereas with fructose (5 mM) the solution remained transparent (Fig. 2a).
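These protonation states follow directly from the Henderson-Hasselbalch relation; the short sketch below simply recomputes the speciation implied by the quoted pKa values at pH 10.5. The resulting fractions are derived numbers, not values reported in the text.

```python
def fraction_protonated(pka, ph):
    """Henderson-Hasselbalch: fraction of the protonated (acid) form at a given pH."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

ph = 10.5
# C8AM (pKa 10.65): fraction present as the ammonium form at pH 10.5
print(f"C8AM protonated: {fraction_protonated(10.65, ph):.0%}")    # roughly 59%, i.e. partially protonated
# 4FBA (pKa 7.4): fraction present as the anionic boronate at pH 10.5
print(f"4FBA anionic:    {1 - fraction_protonated(7.4, ph):.1%}")  # roughly 99.9%, effectively fully anionic
```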
To further investigate amphiphilic aggregation, we employed Nile red, a hydrophobic, environment-sensitive fluorescent dye. In aqueous solutions Nile red is non-fluorescent, but in the presence of amphiphile aggregates (e.g. micelles and vesicles), Nile red can partition into the hydrophobic region of the aggregates and so become strongly fluorescent. Mixtures of 4FBA (3 mM) and C8AM (3 mM) in the absence and presence of varying concentrations of saccharides were incubated for 30 min, treated with a methanol solution of Nile red, and subjected to fluorescence measurements. The results are shown in Fig. 2b. Very weak fluorescence from Nile red was observed without saccharides or with fructose (0.1-10 mM), confirming that little or no amphiphile aggregation occurred. In contrast, the presence of glucose and galactose (to a lesser extent) led to dramatic enhancement of Nile red fluorescence. These results agree well with those of the turbidity assay, confirming that under the described conditions little or no aggregation occurred without saccharides or with fructose, but glucose and galactose induced self-assembly of the amphiphile formed between C8AM and 4FBA. Assembly of a control compound, 4-formylbenzoic acid, with C8AM was also examined using the Nile red assay, which showed no saccharide dependence in the amphiphile aggregation (Fig. S3, ESI†). This confirmed the role of saccharide binding to the boronic acid group in the 4FBA/C8AM system. It is well known that the α-furanose forms of glucose and galactose can simultaneously bind two boronic acid moieties, whereas the β-fructofuranose can only bind a single boronic acid moiety. Therefore binding of glucose and galactose can lead to formation of "Gemini-type" amphiphiles, which have a greater ability to aggregate than the "single-tail" amphiphiles formed with 4FBA and C8AM, or additionally fructose (Fig. 1). This explains why induction of aggregation was observed only with glucose and galactose. The weaker ability of galactose to induce aggregation is probably due to the unfavorable orientation of the two cis-diol moieties in α-galactofuranose as compared with those in α-glucofuranose. 14 It should be noted that although the fluorescence intensity (which depends on the amount of Nile red) leveled off at 5 mM of glucose and galactose, the formation of amphiphile aggregates is still far from saturation, as will be demonstrated in the imine formation study below.
Since glucose and, to a lesser extent, galactose induce aggregate formation, they are also expected to influence the equilibrium of imine bond formation, which should depend on the aggregation process. It has been reported by van Esch and coworkers that amphiphile aggregation can drive imine bond formation, 15 and this is likely to be true for this system as well. We employed 1H NMR spectroscopy to measure imine bond formation. Characteristic imine proton NMR resonances at 8.3 ppm were observed in the absence and presence of saccharides, providing direct evidence of imine bond formation (Fig. S6-S9, ESI†). Although the NMR signals from the aggregates cannot be quantified by conventional liquid-state NMR techniques owing to broadening (Fig. S7 and S9, ESI†), indirect measurement of the percentage of imine formation is possible by calculating the consumption of 4FBA from integration of its 1H NMR resonances. To enable this calculation, we added N,N-dimethylformamide (DMF, in an amount equal to 4FBA and C8AM) as an internal reference. By comparing the integrations of the 1H NMR signals of the aldehyde (CHO) and DMF, the percentage of imine formation was calculated and is summarized in Table 1.
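As a worked illustration of this bookkeeping, the snippet below converts the aldehyde and DMF integrals into a percentage of imine formed, assuming DMF is present at the same concentration as 4FBA and that the compared resonances correspond to one proton each; the integral values in the example are hypothetical, not data from Table 1.

```python
def percent_imine_formed(i_cho, i_dmf, n_cho=1, n_dmf=1):
    """Estimate % imine formation from 1H NMR integrals.

    Assumes DMF (the internal reference) is present at the same concentration
    as 4FBA and is inert, so its integral fixes the scale for one equivalent
    of 4FBA; n_cho and n_dmf are the numbers of protons behind each resonance
    (one each if the two formyl signals are compared)."""
    residual_4fba = (i_cho / n_cho) / (i_dmf / n_dmf)  # fraction of 4FBA still unreacted
    return 100.0 * (1.0 - residual_4fba)

# Hypothetical integrals for illustration only (not values from Table 1)
print(percent_imine_formed(i_cho=0.35, i_dmf=1.00))   # ~65% imine formed
```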
Interestingly, imine bond formation was indeed found to be enhanced dramatically by glucose binding and, to a lesser extent, by galactose binding. The enhancement with fructose, which cannot promote amphiphile aggregation, was far less significant and is likely due to a minor influence on the intrinsic reactivity of the aldehyde group. These results can be explained by (i) amphiphile aggregation shifting the equilibrium of imine bond formation in the forward direction, and (ii) binding of glucose and galactose promoting amphiphile aggregation through the formation of "Gemini-type" amphiphiles. Notably, glucose binding via the boronate ester linkage exerted an influence on the imine bond despite the spatial separation between the two dynamic covalent bonds. This is possibly because the supramolecular aggregation requires and stabilises both the imine bond and the boronate ester linkages (with the "divalent" binder glucose). This "indirect" interplay is conceptually distinct from the known synergistic binding of 2-formylphenylboronic acid (2FBA) to an amine and a cis-diol component, 16 which arises because cis-diol binding makes the boron center more acidic, 17 thus enhancing the boron-nitrogen interaction.
The aggregates formed with glucose were further characterised by transmission electron microscopy (TEM) and dynamic light scattering (DLS) techniques (Fig. 2c and d). The spherical morphology and dark exterior shown by the TEM image revealed that the aggregates formed are vesicles. DLS measurements revealed an average hydrodynamic diameter (Dh) of 678 nm. The anionic nature of the aggregates, resulting from the anionic boronate head groups (Fig. 1), was supported by the negative measured zeta potential of −33.3 mV.
This system may be used for glucose sensing via the appearance of solution turbidity, which can be detected by the naked eye or quantified by measuring light scattering with an absorption or fluorescence spectrometer. Alternatively, the incorporation of Nile red allows sensitive fluorescence sensing of glucose at sub-mM concentrations. We were interested in testing the ability of this glucose sensing ensemble to tolerate the presence of saccharide interferents. Promisingly, the presence of 0.2 mM fructose or galactose resulted in little interference with sensing of 1 mM glucose (Fig. S14, ESI†), a significant improvement compared with other reported systems based on self-assembly, 10b,c although fructose and galactose at higher concentrations did lead to significant interference. Note that the boronic acid component 4FBA has a 24-fold fructose/glucose binding selectivity. 13 The improvement in glucose selectivity demonstrated by the ensemble highlights the role of two synergistically acting dynamic covalent bonds coupled to supramolecular polymerization.
In summary, we have demonstrated in a simple system that the equilibrium of a dynamic covalent bond that assembles a receptor can respond to a molecular recognition event via a different dynamic covalent bond, and such an assembly has been used for sensing applications. A mixture of 4FBA, C8AM and glucose formed a dynamic "Gemini-type" amphiphile that self-assembled to form vesicular aggregates, featuring simultaneous formation of an imine bond and boronate ester linkages with glucose. Interestingly, there is a large spatial separation between the two dynamic covalent bonds, and their mutual influence is made possible by the amphiphile aggregation and multivalent binding with glucose. Our study also relates to the interesting question of integrating dynamic covalent chemistry into supramolecular polymerization. 18 The reported system allows glucose sensing simply by mixing commercially available reagents, representing the first example in which the intrinsic fructose-over-glucose selectivity of boronic acids is overcome without resorting to synthesis. This suggests that the structural complexity required for creating selective synthetic receptors or other functional materials can be achieved by in situ dynamic covalent assembly of simple components.
This work was supported by the NSF of China (grants 21275121, 21435003, 91427304, 21521004), and the Program for Changjiang Scholars and Innovative Research Team in University, administrated by the MOE of China (grant IRT13036). XW thanks the China Scholarship Council and the University of Southampton for a PhD studentship. PAG thanks the Royal Society and the Wolfson Foundation for a Research Merit Award. We thank Wenqiang Zhang and Hualu Zhou (Xiamen University) for their help with TEM, and Dr Neil J. Wells (University of Southampton) for performing 11B NMR.
High cholera vaccination coverage following emergency campaign in Haiti: Results from a cluster survey in three rural Communes in the South Department, 2017
Oral cholera vaccine (OCV) has increasingly been used as an outbreak control measure, but vaccine shortages limit its application. A two-dose OCV campaign targeting residents aged over 1 year was launched in three rural Communes of Southern Haiti during an outbreak following Hurricane Matthew in October 2016. Door-to-door and fixed-site strategies were employed and mobile teams delivered vaccines to hard-to-reach communities. This was the first campaign to use the recently pre-qualified OCV, Euvichol. The study objective was to estimate post-campaign vaccination coverage in order to evaluate the campaign and guide future outbreak control strategies. We conducted a cluster survey with sampling based on random GPS points. We identified clusters of five households and included all members eligible for vaccination. Local residents collected data through face-to-face interviews. Coverage was estimated, accounting for the clustered sampling, and 95% confidence intervals calculated. 435 clusters, 2,100 households and 9,086 people were included (99% response rate). Across the three communes respectively, coverage by recall was: 80.7% (95% CI:76.8–84.1), 82.6% (78.1–86.4), and 82.3% (79.0–85.2) for two doses and 94.2% (90.8–96.4), 91.8% (87–94.9), and 93.8% (90.8–95.9) for at least one dose. Coverage varied by less than 9% across age groups and was similar among males and females. Participants obtained vaccines from door-to-door vaccinators (53%) and fixed sites (47%). Most participants heard about the campaign through community ‘criers’ (58%). Despite hard-to-reach communities, high coverage was achieved in all areas through combining different vaccine delivery strategies and extensive community mobilisation. Emergency OCV campaigns are a viable option for outbreak control and where possible multiple strategies should be used in combination. Euvichol will help alleviate the OCV shortage but effectiveness studies in outbreaks should be done.
Introduction
Cholera remains a significant problem globally, with 42 countries reporting a total of 172,454 cases, including 1304 deaths, in 2015, alongside periodic epidemics. There are three WHO pre-qualified oral cholera vaccines (OCVs) available: Dukoral, Shanchol and the most recent addition, Euvichol [1], which was prequalified in 2015. All three vaccines use two-dose regimens. To mitigate shortages, a global stockpile of OCVs was created in 2013 for use in emergencies, with 2,242,800 doses shipped in 2015. There is growing international experience of using mass OCV as an outbreak control measure. Previous campaigns in Haiti achieved high uptake [2][3][4] and demonstrated effectiveness [5,6].
In response to the increased incidence of cholera observed in the aftermath of Hurricane Matthew on October 4, 2016, a two-dose OCV campaign was conducted by the Ministry of Public Health and Population (MSPP), targeting residents aged over 1 year in 16 Communes in the Departments of Sud and Grande Anse. This was the first campaign to use the recently pre-qualified OCV, Euvichol. Médecins Sans Frontières (MSF) supported the vaccination campaign in three Communes of the Sud Department: Chardonnières, Côteaux, and Port-à-Piment. They delivered the first dose in all three Communes in November and December 2016 (about 4 weeks after the hurricane in Chardonnières and Port-à-Piment, and 8 weeks after in Côteaux) and provided logistical support to the MSPP for the second-dose campaign in May 2017 (seven months after the hurricane). Door-to-door and fixed-site strategies were employed for both doses, and mobile teams delivered vaccines to hard-to-reach communities, sometimes reachable only on foot. Vaccination coverage estimates using administrative data (based on the number of doses used divided by historical population denominators) suggested that first-dose coverage was 61.5% in Chardonnières, 62.7% in Côteaux and 63.1% in Port-à-Piment; however, there were concerns about the reliability of the denominator given the likelihood of population movements following the hurricane. Hurricane Matthew left about 1.4 million people in need of humanitarian aid and led to significant population displacement [7,8]. A reliable population-based assessment of the campaign's performance was still lacking.
The objective of this study was to estimate the post-campaign vaccination coverage and acceptability in the communes of Chardonnières, Côteaux, and Port-à-Piment in order to evaluate the campaign, inform control measures and guide future outbreak control strategies.
Sampling and study population
We employed a cluster survey design using random GPS points [9][10][11]. The study area included the three communes of Port-à-Piment, Côteaux and Chardonnières. The study population included all individuals eligible for vaccination (those aged over one year) who were living in the selected households during the month of the first or second dose campaign. The sample size calculation was done separately for each commune, based on the narrowest age band of 1-4 years. It was set to achieve 10% precision at 95% confidence, assuming 70% coverage, a design effect of three, and 10% non-participation. This gave a sample size target of 290 children for each commune. Based on the average number of children aged 0-4 years per household in the Demographic and Health Survey 2012 [12], we required 725 households. A one-stage cluster sampling design was used, with a cluster defined as the group of five households closest to each GPS point randomly drawn within georeferenced polygons of inhabited areas, meaning 145 clusters were required per Commune. Only GPS points falling on a roof or within 10 meters of a roofed structure were kept.
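The arithmetic behind these targets follows the standard cluster-survey sample size formula for a proportion; the sketch below reproduces it in Python. The coverage assumption, precision, design effect and non-participation rate come from the text, whereas the children-per-household conversion factor and the rounding conventions are assumptions, so the output only approximates the published targets of 290 children, 725 households and 145 clusters per commune.

```python
import math

def cluster_sample_size(p=0.70, precision=0.10, z=1.96,
                        design_effect=3.0, non_response=0.10,
                        kids_per_household=0.4, households_per_cluster=5):
    """Cluster-survey sample size for a proportion (children aged 1-4 years)."""
    n = z**2 * p * (1 - p) / precision**2       # simple random sample size
    n *= design_effect                           # inflate for clustering
    n /= (1 - non_response)                      # inflate for non-participation
    children = math.ceil(n)
    households = math.ceil(children / kids_per_household)
    clusters = math.ceil(households / households_per_cluster)
    return children, households, clusters

print(cluster_sample_size())   # about (269, 673, 135) under these assumptions
```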
Data collection
The data were collected during face-to-face interviews by trained local investigators using a standardized questionnaire in Creole or French. They collected information on vaccination status, socio-demographic status and reasons for non-vaccination. Vaccination history was based on self-report and checked against vaccination cards. Participants were shown pictures of the administration of the vaccine and the vaccination card to aid recall. When a member of the household was absent, the head of the household or the responsible adult answered the questions on their behalf and showed their vaccination cards when possible. If the household was empty or there was no adult present, a second visit was organized during the same day. If no adult was present during the second visit, the next closest household to the GPS point was selected. If five households could not be identified the GPS point was discarded and a reserve point was used. Data were entered directly onto electronic tablets using KoBo Collect. Data collection lasted from 16 June 2017 to 1 July 2017.
Statistical analysis
For each Commune, we calculated overall and dose-specific vaccination coverage with 95% confidence intervals (95% CI). The variation of vaccination coverage with age, by sex, was estimated by logistic regression using cubic splines, and the 95% CI envelopes were estimated by bootstrap. Every calculation took the sampling method into account and included a finite population correction. The geographical distribution of vaccination coverage was assessed using a generalized additive model, fitting a binomial regression weighted by household size. Vaccination coverage at the household level was the dependent variable, and the location of the household was the independent variable, included as a smoothing spline term. We plotted the vaccination coverage alongside the standard error as an indicator of the uncertainty in the estimates. Data analysis was performed in R 3.3.4 (The R Foundation for Statistical Computing).
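Although the published analysis was carried out in R, the following simplified Python sketch makes the design-based estimation concrete: it computes a cluster-level coverage estimate with a bootstrap 95% CI by resampling whole clusters, mirroring the one-stage cluster design. The finite population correction and the spline modelling described above are omitted, and the example data are hypothetical.

```python
import numpy as np

def coverage_with_ci(cluster_counts, n_boot=2000, seed=0):
    """Vaccination coverage with a clustered bootstrap 95% CI.

    cluster_counts: (n_vaccinated, n_eligible) per cluster. Whole clusters are
    resampled to respect the one-stage cluster design; the finite population
    correction used in the paper is omitted for brevity."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(cluster_counts, dtype=float)
    estimate = counts[:, 0].sum() / counts[:, 1].sum()
    boots = []
    for _ in range(n_boot):
        sample = counts[rng.integers(0, len(counts), len(counts))]  # resample clusters
        boots.append(sample[:, 0].sum() / sample[:, 1].sum())
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return estimate, (lo, hi)

# Hypothetical toy data: ten clusters of roughly 20 eligible people each
example = [(17, 20), (19, 21), (15, 18), (20, 22), (18, 20),
           (16, 19), (21, 23), (14, 17), (19, 20), (18, 21)]
print(coverage_with_ci(example))
```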
Ethical considerations
This survey was conducted as part of the public health response to the cholera outbreak, in order to assess coverage and inform control measures. A formal agreement was obtained from the Ministry of Public Health for the implementation of all the components of this survey. Approval from an ethical review committee was not required. Verbal informed consent was received from participants before starting the questionnaire and documented directly on the digital form. All data were collated and analysed anonymously and no identifiable information was collected other than household coordinates.
Characteristics of participants
The majority of GPS points represented households suitable for inclusion, with just 14/435 (3%) needing to be replaced. Eight households (one in Port-à-Piment, five in Côteaux, and one in Chardonnières) did not consent to participate in the survey. Among the three Communes of Chardonnières, Côteaux and Port-à-Piment, the numbers of households recruited were 688, 709 and 703 respectively (total 2100), and the numbers of individuals recruited were 3081, 3109 and 2896 (total 9086). In the northern zone of Chardonnières, house density was so low that no cluster was selected. The age-sex distribution of the study cohort closely resembled the national population estimates [12].
Vaccination coverage
Self-reported coverage for at least one dose ranged from 91.8% (87.0-94.9) in Côteaux to 94.2% (90.8-96.4) in Chardonnières (Fig 1). Self-reported coverage for two doses ranged from 80.7% (76.8-84.1) in Chardonnières to 82.6% (78.1-86.4) in Côteaux. Card-confirmed coverage for at least one dose ranged from 50.8% (46.7-54.9) in Port-à-Piment to 57.3% (53.0-61.6) in Côteaux (Fig 1). Card-confirmed coverage for two doses ranged from 23.5% (19.9-27.5) in Chardonnières to 36.1% (32.2-40.1) in Côteaux. Coverage was similar across age groups, with self-reported coverage for at least one dose ranging from 91.5% (86.8-94.7) among 15+ year olds in Côteaux to 96.1% (92.5-98.0) among 5-14 year olds in Port-à-Piment. The drop-out rate was similar in the three communes, ranging from 5.3% (3.3-7.3) in Côteaux to 7.4% (4.9-9.8) in Chardonnières. Coverage was similar across both genders (Fig 2). For the first dose, adolescent and young women in Côteaux and Port-à-Piment had slightly lower coverage than men of the same age. For the second dose, young girls in Chardonnières had lower coverage than boys of the same age. There was low uptake of the first dose among very young children of both genders, due to ineligibility. Despite high coverage overall, there was some spatial variation in coverage (Fig 3). Note that the northern zone of Chardonnières was excluded from interpolation because the area is sparsely populated and no households were sampled there.
Population movements
The vast majority of participants in the three communes were already living there at the time of the first dose campaign: 98.6% (97. 6
Preferential vaccine delivery strategy used by the participants
Both door-to-door and fixed-site strategies were widely used. In Côteaux, the majority of participants reported receiving the vaccine from a fixed site: 56.3% (52.5-60.0%) for dose one and 56.4%
Source of information
Information on the vaccination campaigns was mainly obtained through criers, with 52.6% (Table 1).
Reasons for non-vaccination
Among those who did not receive vaccination, the most frequently reported reason was absence/non-availability due to work or illness. In Chardonnières, Côteaux and Port-à-Piment this reason was given by 55.
Discussion
Vaccination coverage was high, with Chardonnières, Côteaux and Port-à-Piment reporting 80.7%, 82.6% and 82.3% receiving both doses and 94.2%, 91.8% and 93.8% receiving at least one dose, respectively. Coverage was similar across each age group and between males and females, though there was some small-area spatial variation. The main reason for non-vaccination was absence due to work or illness. Both door-to-door and fixed-site strategies were widely used to access vaccination, and most people heard about the campaign through local criers. Adverse events were uncommon. This was a well-powered study with a robust sampling strategy and high levels of participation. Adult males had a similar vaccination coverage to females in the same age group, in contrast with previous experience that they are harder to reach because of their occupations [4,13,14]. A decline in economic activity following Hurricane Matthew in these agricultural communes could explain this to some extent and perhaps contributed to the high coverage. The six-month delay for the second dose would have meant reduced protection, but also offered more time to mobilise the community to promote completion of vaccination. The spatial variation observed in coverage should be interpreted with caution in areas that are very sparsely populated, such as the northern part of Chardonnières. Some spatial variation is difficult to avoid completely, especially in contexts with hard-to-reach areas.
The relative use of the door-to-door and fixed-site strategies varied somewhat by Commune, but both were widely utilised. This finding highlights the usefulness of a mixed approach that offers more opportunities to access the vaccine. Door-to-door vaccination may have been particularly important in areas with limited access to healthcare facilities and other fixed vaccination sites. The success of the criers in communicating the vaccination campaigns highlights the importance of community engagement and mobilisation. The survey coverage estimates were substantially higher than the administrative coverage estimates. This was likely due to the limitations of the denominator data, which were not census based but derived from cluster survey methods, and to population movements [12]. The population denominator may have overestimated the true population size post-hurricane. This highlights the limitations of using such administrative data to evaluate vaccination coverage and the need for up-to-date population denominators and, where necessary, specific vaccination coverage surveys. Most study participants were already resident at the time of the first dose campaign, suggesting there was no large inward population movement between the first dose and the time of the survey.
There is growing experience of using mass vaccination in cholera outbreaks, enabled by the creation of the global stockpile. Where there are shortages, modelling suggests even a single dose has demonstrable efficacy and may be important in outbreak control [15]. This high-coverage two-dose campaign likely contributed to preventing cholera cases in the aftermath of Hurricane Matthew [16]; however, protection is unlikely to last beyond three years [17], and the effect of a six-month delay between doses is not known. Vaccine effectiveness remains to be estimated, and further studies will be important to fully evaluate this intervention. There was a low rate of adverse events, as has been seen with Shanchol [18,19].
Limitations
The main limitation of this study is the self-reported vaccination status. The card-confirmed vaccination status is also reported, but this likely underestimates true coverage because cards are often misplaced. Relying on self-reported status could lead to an overestimation of coverage, as people may prefer to report that they have vaccinated their families when asked. On the other hand, relying on self-report could lead to an underestimation, as people may have forgotten about the vaccination or confused it with other vaccination campaigns. Self-report for cholera vaccine is probably more reliable than for most other vaccines, as it is administered orally. In this case, it was also perhaps more memorable for having been offered in exceptional circumstances during an outbreak, alongside a high-profile campaign [20]. Another limitation is that only the head of the household responded to the questionnaire on behalf of their family, and other household members may not have been present during the visit to confirm the information. It is not clear how this limitation would have affected coverage estimates. Finally, no information could be collected on households that had completely left the area, since at least one member needed to be present to answer the questions, but again we do not know whether those families that left are more or less likely to have been vaccinated.
Conclusion
High vaccination coverage was achieved in this campaign. The use of a dual strategy to deliver the vaccines and extensive community mobilisation made it possible to achieve a high coverage in this rural setting with limited accessibility. This experience supports the use of mass vaccination during outbreaks in similar settings utilising multiple delivery strategies and community engagement as a feasible control measure. The addition of Euvichol to the stockpile should help alleviate shortages and extend the range of situations where vaccination can be considered as an intervention. Further studies are needed to assess the effectiveness of Euvichol in outbreak control. Where administrative data is limited or where population data is unreliable, cluster surveys provide an effective method to assess vaccination coverage.
Changes to teaching and learning about medication administration within a diploma of nursing course due to the Covid-19 pandemic: A staff and student multi-method evaluation
Aim This project aimed to evaluate student and staff satisfaction with, and perspectives on, changes to delivery and format of the Medication Unit of Competency within a Diploma of Nursing Program. Background Medication safety is an integral component of learning for pre-registration nursing students. The COVID-19 pandemic required rapid changes to be made to the medication competency unit being taught to students within a Diploma of Nursing, pre-registration course. Changes to sequencing of theory, mode of education delivery, class sizes, available resources and learning support were required. Design A multi-method evaluation process was conducted. The project is reported as per SQUIRE-EDU guidelines. Methods Focus groups and survey data were obtained from staff and students in December 2020, to evaluate responses to the changes. Student exam results and the number of learning support consultations accessed were also assessed to identify impact of changes. Results Staff and student evaluation identified mixed responses but overall, the change in sequence of theory and mode of delivery was positively received. Crude comparisons of final assessment results revealed improved marks compared to the previous cohort. The addition of an online medication resource was utilised by most students. The agility of staff in responding to the challenges was highlighted in the staff focus group and additional, flexible learning support was favourably received by students. Conclusions Despite the need for rapid changes being made to the course, positive responses were received from both staff and students. Some students preferred the traditional sequencing of learning as they felt it matched their learning style. An added online resource and extra learning support supported student self-efficacy development of medication competency; however further research is needed to ascertain any associations. The online resource is still being utilised within course curriculum.
Introduction
The onset of the Covid-19 pandemic toward the end of 2020, impacted nursing education in many ways. Nursing staff from nonclinical and/or non-critical areas were rapidly upskilled to return to clinical care (Marks et al., 2021). Furthermore, in some contexts, nursing students were deployed to extended clinical placements (Swift et al., 2020). Health directives on social distancing, restrictions to students' clinical placement experiences (Dewart et al., 2020) and strict lockdowns were challenges that nursing students and educators had to and continue to manage (Carolan et al., 2020). A need emerged to pivot rapidly and provide flexible education delivery modes to manage challenges effectively. This paper reports on the student and staff experience of rapid changes made to a medication competency unit of teaching, undertaken as part of the pre-registration, Diploma level nursing program in one context. Within Australia, a Diploma level nursing student graduates after an 18-month full time program and can then apply for registration as an Enrolled Nurse. The level of education is similar to that of a Licenced Practical Nurse in America, or a Nursing Associate role in the UK (Nursing and Midwifery Council, 2022). Diploma level, pre-registration student nurses are often referred to as 'hands on' learners with their training usually undertaken in a vocational setting (Akhter et al., 2021). In comparison, a Bachelor of Nursing student is required to successfully complete a minimum of three years full time study to apply for registration as a Registered Nurse and have broader knowledge and skills. Despite these differences, both Enrolled and Registered graduate nurses will have responsibility to prepare and administer different types of medications to patients, within their scope of practice (Nursing and Midwifery Board of Australia, 2022). Therefore, regardless of learning pathway, medication knowledge and skills are critical requirements for all nursing students, especially considering reports on the numbers of medication errors and adverse drug reactions (Roughead et al., 2016).
The original design of the Diploma of Nursing medication unit in our context acknowledged Diploma-level student learning needs, providing a multi-modal delivery format in which students were required to attend lectures, take part in simulation (SIM) activities and attend clinical placements. SIM learning is a mandatory physical learning component with associated assessment, and a final learning requirement is completed in the clinical environment, fully supervised, by performing the applicable skills and demonstrating the associated knowledge. In response to challenges presented during the COVID-19 pandemic, curriculum changes were needed that not only addressed student needs but also complied with Government and health regulations and directives.
To respond to the challenges faced due to the pandemic, nurse educators within the Diploma-level course at our centre adopted alternative methods of curriculum delivery. The original education format delivered theoretical content in a face-to-face classroom setting first, then provided simulation training, followed by clinical placement. Changes to the program involved reversing the sequence of learning, i.e., SIM learning followed by theoretical content delivered in a classroom setting; use of online platforms (via Zoom); addition of an online learning resource; and smaller student group sizes for each learning activity.
The new sequencing of learning content for the medication competency unit utilised principles of a flipped classroom approach but also exposed students to digital content to augment their learning and individual self-efficacy for medication management. 'Flipped learning' is an educational method that emerged in the early 2000s, whereby students are responsible for pre-learning educational content prior to actively participating in discussion with their educator in a classroom situation (Betihavas et al., 2016). While flipped classroom strategies appeared to promote learning (Hamdan et al., 2013), there are mixed results regarding their effectiveness in undergraduate Bachelor-level nursing courses in relation to student satisfaction and academic outcomes (Betihavas et al., 2016). Two studies revealed nursing students had better achievement on exams in pharmacology courses using the flipped versus traditional classroom approach (Munson and Pierce, 2015; Sisk, 2011). However, five studies within a systematic review (including 934 undergraduate and postgraduate nursing students) reported that while some students were satisfied with the flipped approach, there was no confirmation it improved academic performance (Betihavas et al., 2016). This project aims to evaluate and report on staff and student perspectives of the learning sequence and other course delivery changes made in response to challenges and restrictions resulting from the Covid-19 pandemic.
Aim
More specifically, this project aimed to answer the question, "What was the impact on students and staff of changes to the Medication unit of competency in response to Covid-19 restrictions?"
Curriculum outline
The Diploma of Nursing (pre-registration) program is conducted within a nationally accredited, independent, hospital-based Registered Training Organisation in Queensland, Australia. The course facilitates acquisition of the skills, knowledge, attitudes and behaviours required to become an Enrolled Nurse within Australia. Students complete their studies within one facility but can undertake placement experiences across local or regional clinical facilities. After successful completion of the 18-month Diploma course, students can apply for National Registration to practice as an Enrolled Nurse through the Nursing and Midwifery Board of Australia (NMBA) (Nursing and Midwifery Board of Australia, 2022) and can pathway into a Bachelor of Nursing course with some advanced standing credit. The complete course comprises 20 core units and five elective units (or subjects). The unit on administering and monitoring medicines and intravenous therapy is therefore one of twenty-five units of competency within the program. Historically, cohort sizes averaged 100 students.
Changes to course delivery
Due to Covid-19, the full cohort of 80 students enrolled from September 2020 was allocated into smaller groups than previous cohorts and exposed to a different sequence and mode of learning than previous cohorts had followed. The changed sequencing of learning content for the medication competency unit utilised principles of a flipped classroom approach (Betihavas et al., 2016) and incorporated varying modes of content delivery, including face-to-face theory, an online platform (e.g. Zoom), simulation (SIM) activities in the clinical simulation lab, and learning during clinical placement, when possible. Changes were made throughout the unit to facilitate meeting social distancing requirements, such as spacing desks and chairs appropriately within the classroom, providing more teaching sessions with smaller class sizes, and considering student numbers when scheduling classes and clinical placements. Extra learning support sessions were scheduled for individual students to meet with educators via Zoom or face-to-face if extra explanation or help was required. Mask-wearing was mandated for both students and staff, according to Government requirements. Additionally, a new digital content resource was added to augment student learning. Med+Safe® is an Australian-designed software program, endorsed by the Australian College of Nursing, that provides a learner-centred, comprehensive program for the accrual of knowledge, skills, and confidence for numeracy in nursing. The program is currently used by many universities and hospitals throughout Australia. Table 1 outlines changes to teaching and learning approaches for each of the three student groups.
Data collection and analysis methods
As this project was evaluating an existing service and using a convenience sample, no formal sample size calculation was required. Data were collected from students via:
• Student satisfaction survey: a 13-item anonymous survey developed by the project team to assess student responses to questions about changes in the sequence of content delivery as well as the incorporation of the online resource to augment their usual learning. As this was an evaluation specific to this unit and context, no prior validated survey was available that would have been suitable. The response format for each question was a 5-point Likert scale, with responses ranging from Strongly Agree to Strongly Disagree. Quantitative data were analyzed using basic descriptive statistics, including counts, percentages and means. A hard copy of the survey was made available to students as they came to their last theory class for the unit. An administration staff member, who was not involved in any education activities with the students, collected and collated the responses for the project evaluation team.
• Focus groups: two semi-structured focus groups were undertaken in December 2020 to gather student perceptions on how the changes in delivery sequence and the introduction of a digital platform impacted their learning of medication administration. The focus groups were scheduled on two days towards the end of the unit, when approximately half of the full cohort (80 students) were attending each day for other classes. All students were invited to attend but reminded that participation was voluntary, and lunch was provided. The focus groups took place in a large room where students were able to move their seats to comply with social distancing, and were facilitated by the primary author, who was the Nurse Researcher at the time and not involved in the education program at all. The discussions were audio-recorded and transcribed verbatim by the facilitator, and the text was examined carefully for recurrent phrases or concepts.
• Assessment marks: student marks from online medication calculation and written assessments, undertaken throughout and at the end of the unit (six assessments in total), were recorded. A crude comparison of overall marks was undertaken against the previous year's cohort (pre-Covid-19), who received the traditional education mode and delivery sequence.
Evaluative data were collected from staff via:
• Staff focus group: one staff focus group was conducted at the end of the semester (January 2021). An invitation to this focus group was extended to course educators, SIM tutors, learning support staff, and education managers involved in the unit. The focus group was facilitated by the Nurse Researcher and took place in a room within the education facility. The discussions were audio-recorded and transcripts of the conversation were analyzed for recurring concepts.
Ethical considerations
A research protocol and application for ethical exemption were submitted to the hospital Human Research Ethics Committee (HREC). The project was determined to be an improvement/evaluation project and not research; therefore, an exemption was approved (Project 69657 - EXMT/MML/69657). Participants were reminded that their decision to take part in any aspect of the study was voluntary and that they could withdraw at any time or choose not to participate, without penalty. Confidentiality was maintained as no identifiable participant details were collected.
Survey data
A total of 28 students (35.3 % of the total cohort) completed the survey. Of these, 18 students (64.29 %) stated that they had no prior healthcare experience; however, half of the participants (n = 14; 50 %) reported having some nursing experience prior to the course. Students were aged from 19 to 51 years, with most being between 21 and 30 years of age. Most students had completed high school as their highest level of education (n = 16; 57.14 %), five (17.86 %) had a Diploma-level qualification from a field other than nursing, and the remaining students (n = 5; 17.86 %) had other formal education qualifications (e.g., an undergraduate degree).
Regarding the changes to sequencing of the unit, a range of responses was received. Overall, 89.2 % (n = 25) of students agreed, strongly agreed or were neutral in response to being comfortable with the change in sequencing supporting their learning. Conversely, three students disagreed with this item and three strongly disagreed, identifying 21.4 % (n = 6) of students as discontented with the changes. No negative responses were received regarding the addition of online resources supporting their learning. All but one student (96.4 %; n = 27) felt that the use of the online medication resource increased their confidence in medication administration. Learning support was favourably reported, with 25 students (89.2 %) either agreeing or strongly agreeing that the learning support sessions were helpful in learning about medication administration. The remaining three responses for this item were neutral. Three students (10.7 %) reported that they were not aware of the extra learning support sessions. Full survey results are presented in Table 2.
Focus group responses
Both focus groups were well attended, with approximately 46 students (58.9 %) participating in total. The demographic consisted of male and female students from a range of ethnic backgrounds, including Aboriginal and Torres Strait Islander students. No time limit was placed on the focus groups, but each group lasted approximately 25 min, which provided time to address the required items as well as some engaged and free-flowing discussion from the students. Responses regarding the impact of, and satisfaction with, the course changes and the added medication resource are presented below.
Impact of changes in learning sequence
Responses to the change in sequence delivery were mixed, but most students seemed to agree that the changes did not impact them negatively. Some students identified that their individual learning style was such that they preferred learning theory first and then doing a practical component. Other students identified that they were 'hands-on' learners and therefore it didn't bother them to be doing a SIM class first, prior to their theory. The concept of 'cementing' learning was raised by some students who did a SIM class before their theory lessons. Some students were concerned about 'missing out' on content by not having a theory lesson first; however, this anxiety was relieved by the tutors, who reassured students that everyone was exposed to the same content by the end of the unit. Having options such as extra learning support sessions, additional learning resources or the ability to discuss concerns or knowledge gaps with educators was reported by the students as useful in assisting them throughout the changes. Examples of responses from the groups supporting these concepts can be seen in Table 3.
[Fragment of Table 1 (group scheduling): from week 4, online theory on Wednesdays, SIM on Thursdays and campus classes on Fridays until the end of term 1 (Christmas break); when Group 1 is in a campus class, Group 2 is taught online, and vice versa; all students attend SIM on the same day but are divided into three smaller groups to reduce numbers in each learning space.]
Satisfaction with medication online resource
Regarding use of the online program, some students admitted that they did not access the program at all. However, data analytics identified that individual students accessed the program between 1 and 900 times prior to their exams. Students appreciated that the online exercises were close to practice and allowed for repetition to support mastery of skills. Students reported that the program could easily be integrated with their work, family or personal responsibilities, although some students only used the program in class. Repeated comments were received that the program helped to build confidence with calculations and medication safety practices, such as checking expiry dates. Student responses regarding the utility and impact of the online resource are shown in Table 4.
Student exam marks
Sixty (n = 60) students sat the final exam at the completion of this unit. Of these, 47 (78.3 %) received 100 % on the exam at the first attempt or after a verbal challenge. Two students did not attend the exam, and the remainder (n = 11; 18.3 %) were offered a learning support session prior to resitting their exam. A crude comparison against the previous cohort identified an improvement in overall pass marks and fewer learning support sessions accessed throughout the semester.
Staff focus group responses
One focus group lasting 45 min was held with five (n = 5) teaching staff to discuss the impact of the curriculum changes. Additional written feedback was received from one other staff member who was unable to attend the focus group but was engaged in the unit and wanted to express thoughts on the changes. The main concepts that arose from the focus group are presented below.
Agility and flexibility
A recurrent theme was the agility of staff in responding to the challenges the pandemic posed for education delivery. As stated, "Covid came along which meant we had to reduce class sizes drastically…" However, staff agreed that offering a variety of resources and learning modes was successful in supporting students throughout changes to the unit.
"One group was doing worksheets, two groups were doing hands-on stuff and we would rotate them around…and they would have someone in that space to support them." Although this had an impact on staff workload, and there was heightened stress about uncertainties, staff agreed that the changes were necessary to ensure students felt supported throughout the semester. As stated by one staff member, "One thing we worked hard on was educator responses, to be about the student experience." Additionally, it was noted, "We had to offer double the workshops to accommodate for the scheduling and the scheduling was a lot more complex than it would usually be, however I think that once we put that effort in, from an administrative point of view, the students didn't miss out in terms of learning support offeringsit looked the same as it would have for any other semester." Maintaining student progress and wellbeing through such a turbulent time was reported as a priority. Staff were also required to deliver/ model more skills demonstrations within class time, especially for students who attended the simulation learning prior to the theory. Flexibility was required to be able to work across different learning environments and staff reported this as a positive change, as it helped with maintaining and updating their own clinical and teaching skills. The addition of the online educational program was seen as part of a range of strategies designed to enhance learning opportunities. ( " Collectively, with everything that was offered, this [the online resource] just augmented their learning."
Ongoing positive changes
Staff also discussed that although they were nervous about managing all the changes so rapidly and unsure about student impact, on reflection, they agreed that having smaller class sizes and incorporating the online resource were changes that they hoped would continue in the future.
"We went from having 45 in the SIM lab and we immediately reduced it to 20 in the SIM lab and we are unlikely to go back to 45 because Covid showed us the benefit of having the smaller sizes." When asked about the overall impact to student engagement and experience following the changes to the unit, the response from staff was that students did not seem to be impacted negatively, as stated, "They (the students) knew why we were doing what we were doing but usually you would expect a small number of people to come to you and say, 'I've got issues with my group, can you swap me?", and we've had none of that. Which is a clear indication to me that it's been managed very well by the educators." Positive responses and gratitude that had been expressed from the student focus groups, was also conveyed to staff during this session.
Discussion
The Covid-19 pandemic resulted in significant changes across nursing and within nursing education internationally. The changes to learning and teaching in the Diploma of Nursing course at our setting presented opportunities for re-thinking and re-imagining teaching and learning approaches, which were, overall, positively received. Through the provision of additional workshops and the addition of an online resource that students could access at their convenience, students still felt supported in their learning, and their final academic results supported the decisions made by the teaching staff. Overall, it appeared that having a variety of learning options was the most positive aspect for students, as this allowed students to select an approach that matched their learning style.
The ability to transform the curriculum quickly and successfully to meet the challenges of Covid-19 was dependent on motivated and flexible staff members, and some of the changes made are now embedded within the course. The need to pivot and present alternative and flexible learning and teaching experiences has been reported in the literature on nursing education throughout the pandemic (Leaver et al., 2022). In our context, some staff were reluctant at first to go ahead with the changes so quickly, as there was no precedent for changing the sequence of learning within this cohort. Adopting a positive mindset was necessary, and on reflection most students and staff felt the changes were handled in a positive and supportive manner. Studies of nursing students conducted during the pandemic identified higher levels of anxiety and stress (Fitzgerald and Konrad, 2021; Majrashi et al., 2021). Concerns about passing courses amidst all the disruptions, along with being separated from usual support networks and social activities, have also been reported (Swift et al., 2020). Increased connections between staff and students, clear communication and flexible learning support were strategies that provided reciprocal benefit to both staff and students.
On further reflection, the changes made seemed to contribute toward developing student self-efficacy for medication administration. An individual's perceived self-efficacy reflects belief in their capability for a task. Bandura's self-efficacy construct (Bandura, 1977, 1997) promotes four sources for developing individual self-efficacy for a specific behaviour, namely mastery opportunities, timely and constructive feedback, vicarious learning (role modelling), and an awareness of one's own reaction to the learning experience. Comments received from students suggest that the delivery of content in this unit supported these sources, regardless of the sequence of delivery. One student stated, "we weren't just drawing up one injection a day we were drawing up like five or six injections in a day so you were going from not knowing how to do the skill to knowing how to … so next time you walked in there I…felt comfortable enough to just go and get what I needed and draw up…", which identifies opportunity for improving mastery. Other students specifically reported a change in their level of self-confidence for preparing and delivering medications before and after the unit. Additionally, staff provided regular feedback, which is another way to support self-efficacy development, as adjustments can be made as feedback is provided (Bandura, 1977). Some students even reported how they felt at the beginning and toward the end of the course, which highlighted an awareness of their own reaction to the changes and their individual self-efficacy development. It is feasible that each of these opportunities for developing self-efficacy was influential toward the students' overall improved academic performance; however, further specific research is needed to determine any associations.
Table 3. Example verbatim student responses to the change in sequence of content delivery (quotes drawn from Focus Groups 1 and 2):
"I just like the in-class stuff first because I like to plan everything, I like to know what I'm going to learn."
"Well I was always theory first and then SIM, it's easier to learn the material first and then practice."
"It felt like we were learning something new straight away so everyone's paying more attention and then we go back to theory and it would be more familiar"
"… I personally liked having the prac first, the SIM, and then when I was actually reading and doing my theory. It then made more sense to me. I prefer that."
"It was much better to do the practical side before the theory"
"It was just a bit weird because you didn't know what to do and the other people knew what to do."
"It was just easier to understand what was being spoken about, if they said, you know, syringes or vessel or this gauge needle, you understood"
"…if I've read it, I know about it, I go in there and I know what I'm doing. Whereas, and I will say it is exacerbated by other people seeming to know and me not knowing, that also kind of knocks your confidence, because you're like, oh hang on a minute, what am I not knowing."
"I think with the change in the learning, to me it didn't really matter so much because the educators were very knowledgeable and helped us in SIM and provided that pick up where we might have learnt more around theory and they just helped us a bit more on those SIM days so to me it didn't really matter."
"I was in a group that got split into two other groups and so originally, we were doing theory first and then SIM, and then we were doing SIM and then theory. Personally, I want theory first and then SIM, …you've got two groups where some people seem to know stuff and some people don't. I found that quite hard."
"Our first couple of weeks, so we broke up into three sessions, so we did hands-on in the lab and then we did a classroom half an hour … and then we did a third one with someone else, but I liked the SIM being split in half - you know people doing half prac, half hands-on, I thought that was really cool the way those first couple of weeks were run before going out on placement…"
"I didn't mind either way. Like there was always an opportunity to catch up if you felt you didn't know the theory or the prac so it kind of didn't really bother me, it interchanged a bit. I think there was always an opportunity to ask questions or read your theory in your spare time so I think for a few of us it didn't make much difference which way we did it."
"I think in the SIMs you were always a little more focused and engaged than going into a classroom as well."
"So you had your SIM and then your theory which was great because you learnt it that way and then the next week it was kind of cemented."
"It's easier, theory first then SIM."
Table 4. Example verbatim responses to use of the online MedSafe™ program (quotes drawn from Focus Groups 1 and 2):
"I thought it was great…"
"The videos … and the resource materials teach you step-by-step what to do and how to do it - it was amazing!"
"I really liked how if I didn't understand something, like the drip rates, you know calculating them, then you had to physically go in there and look at a few examples and do the calculations and it really helped…"
"It was great. With drawing up the syringes and stuff. 'cause we had heaps of worksheets and stuff and we'd been supplied with that but I feel like it really bridged that gap for me."
"I feel like it's a confidence thing too, like I know there were a couple of people who weren't confident enough to put their hand up and say, 'I don't understand' in the middle of the class but then later on at home they could go to MedSafe and do it and understand it more … it's easier for those people who don't have the confidence to speak up in class."
"It also gives you a lot of information if you do get it wrong. It tells you this is how you do it, this is where you went wrong, so it's good feedback."
"I'm more of a practical learner and when you get home and trying to do stuff … I could just whip out my laptop and … just do a few of those things and do practical learning …"
"It was also extremely convenient [laughs], …I would just go on it during times at work when it wasn't busy, so it was just really useful for filling in the gaps."
Several limitations must be acknowledged following this evaluation. The methods of data collection did not capture all the reasons for student performance in the unit; for example, we did not measure students' mathematical ability, which may have influenced their overall marks. Other impacts of Covid-19, and students' coping mechanisms, were also not examined, and these may have affected student and staff engagement with the unit, positively or negatively. As reported by Moxham et al. (2022), coping is an individual experience, and educational institutions must recognise that although many students might enact strategies to manage additional and unexpected stressors such as the pandemic, some students will require additional support and understanding.
A further limitation of the evaluation was the low response rate to the student survey. Student survey response rates can be variable, and although online surveys are reported to be amenable to higher response rates, inconsistencies do arise (Nulty, 2008). Closer follow-up of the response rate, offering an online option, sending out reminders, or extending the response time frame are all strategies that could be employed to promote higher response rates in future evaluation surveys (Nulty, 2008). Finally, the evaluation was of one unit in a single setting, and therefore our experience may not be representative of other centres that made similar changes to education delivery due to the impact of Covid-19.
Conclusions
Making rapid changes to the curriculum that supported students while still meeting the demands imposed by the pandemic was challenging. Overall, however, the changes made elicited positive responses from both staff and students. Some students preferred the traditional sequencing of learning as it matched their identified learning style. The addition of an online resource and extra learning support sessions was utilised by most students and appeared to support student self-efficacy development for medication competency; however, further research is needed to clearly identify any associations. A flexible and motivated teaching team using a student-centred approach was integral to supporting student success in the course.
Funding sources
No external funding was obtained for this project.
CRediT authorship contribution statement
MAR: concept and design, acquisition of data, data analysis, write up and critical revisions, approval of manuscript to be published. KJ: concept and design, interpretation and analysis of data, critical revisions to manuscript, approval of manuscript to be published. SG: interpretation and analysis of data, critical revisions to manuscript, approval of manuscript to be published.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
7B2 facilitates the maturation of proPC2 in neuroendocrine cells and is required for the expression of enzymatic activity.
The prohormone convertase PC2, which is thought to mediate the proteolytic conversion of many peptide hormones, has recently been shown to interact with the neuroendocrine-specific polypeptide 7B2 in Xenopus intermediate lobe (Braks, J. A. M., and G. J. M. Martens. Cell. 78:263. 1994). In the present work we have stably transfected neuroendocrine cell lines with rat 7B2 constructs and found that overexpression of 27 kD 7B2 greatly facilitates the kinetics of maturation of proPC2, both in AtT-20/PC2 cells and in Rin5f cells. The half-life of conversion of proPC2 was reduced from 2.7 to 1.7 h in AtT-20/PC2 cells stably transfected with 27 kD 7B2 cDNA. The previously proposed "chaperone" domain was not sufficient for this facilitation event; however, a construct corresponding to the 21-kD 7B2 protein (which represents the naturally occurring maturation product) functioned well. A 7B2 construct in which maturation of 27 kD 7B2 to its 21-kD form was blocked was unable to facilitate maturation of proPC2. To correlate effects on PC2 maturation with the actual generation of PC2 enzymatic activity, a similar transfection of 21 kD 7B2 was performed using CHO cells previously amplified for the expression of proPC2. Enzymatic activity cleaving the fluorogenic substrate Cbz-Arg-Ser-Lys-Arg-AMC was highly correlated with the expression of immunoreactive 21 kD 7B2 in the conditioned medium; medium obtained from the parent cell line was completely inactive. Enzymatic activity was identified as PC2 on the basis of inhibition by the carboxy-terminal peptide of 7B2, which has previously been shown to represent a potent and specific PC2 inhibitor. Taken together, our in vivo results indicate that the interesting secretory protein 7B2 is a bifunctional molecule with an amino-terminal domain involved in proPC2 transport as well as activation.
Biologically active peptide hormones secreted from neuroendocrine cells are derived through the processing of prohormones through a series of posttranslational modifications. This process begins with cotranslational secretion of precursor molecules into the lumen of the endoplasmic reticulum. The precursor undergoes proteolytic cleavage, oligosaccharide addition, and other required refinements during transport through the secretory pathway before release as bioactive peptide. For most polypeptide hormones, proteolytic cleavage occurs at paired basic residues; this cleavage is mediated by a subset of enzymes in the subtilisin-like enzyme family, known as the prohormone convertases (PCs; for review see Hutton, 1992; Seidah and Chretien, 1992; Steiner et al., 1992). Although much information is available regarding the biochemistry and distribution of the PCs, the biosynthesis and regulatory aspects of these enzymes are not fully understood. The removal of the pro sequence in PC1 appears to occur rapidly and autocatalytically (Lindberg, 1994; Milgram and Mains, 1994; Zhou and Lindberg, 1993; Goodman and Gorman, 1994). The processing of proPC2, on the other hand, is considerably slower than that of proPC1 (Guest et al., 1992; Shen et al., 1993; Zhou and Mains, 1994), and a substantial amount of proPC2 is stored as such in many cell lines (Zhou and Mains, 1994). Since PC2 is thought to cleave intermediates produced by PC1 during peptide processing (Benjannet et al., 1991; Thomas et al., 1991; Breslin et al., 1993; Zhou and Mains, 1994), the regulation of availability of active PC2 could represent an important controlling step in peptide hormone production.
Please address all correspondence to I. Lindberg, Department of Biochemistry and Molecular Biology, Louisiana State University Medical Center, 1901 Perdido Street, New Orleans, LA 70112. Tel.: (504) 568-3370.
Abbreviations used in this paper: AMC, aminomethylcoumarin; PC, prohormone convertase; POMC, proopiomelanocortin; 7B2 CT peptide, human 7B2(155-185).
The 7B2 protein, which was first isolated from porcine and human pituitary glands over a decade ago, is selectively distributed in the central nervous system and in endocrine tissues (Hsi et al., 1982; Seidah et al., 1983; Iguchi et al., 1984). The predominant form of this protein stored in neuroendocrine tissues is a 21-kD species. Recent research has shown that in Xenopus intermediate lobe, newly synthesized 27-kD 7B2 can be coimmunoprecipitated with proPC2 using PC2 antiserum. In line with the idea that 7B2 represents a PC2-binding protein, the amino-terminal region of 7B2 (residues 1-90 of mature 7B2) shares weak amino acid sequence similarities with members of the 60-kD subclass of molecular chaperones, such as human, wheat, and E. coli chaperonin-60 (Braks and Martens, 1994). Interestingly, the carboxy-terminal portion of 7B2 is distantly related to a family of subtilisin inhibitors known as the potato inhibitor I family, and our in vitro experiments have shown that intact 27 kD 7B2, but not the processed 21-kD product, represents a potent and specific inhibitor of PC2. We have recently demonstrated that the inhibitory activity of 27 kD 7B2 resides entirely within the carboxyl-terminal 31 amino acid peptide removed upon maturation of 7B2 to its 21-kD form (Van Horssen et al., 1995). Taken together, these observations suggest an important role for 7B2 in the maturation of proPC2 as well as in the regulation of PC2 activity. To clarify the nature of the interaction of 7B2 and PC2, we have transfected various rat 7B2 constructs into two neuroendocrine cell lines and examined the kinetics of proPC2 maturation. In addition, we have used PC2-expressing AtT-20 and CHO cells to directly demonstrate a role for 7B2 in the generation of enzymatically active PC2.
The resulting PCR products were digested with BamHI and HindIII and ligated into pCEP4.
The second and third primers contained the desired mutations (elimination of the pentabasic processing site). The first and second primers were used as a pair in one reaction, and the third and fourth in a separate reaction, in the first round of PCR. In the second round of PCR, the resulting two individual fragments were purified, mixed, and then amplified using the first and the fourth primers. This fragment was then digested with BamHI and HindIII, and ligated into pCEP4. All inserts derived from PCR were verified by DNA sequencing.
Cell Culture, Transfection, and Selection
An AtT-20 cell line stably expressing PC2 (Zhou and Mains, 1994) was kindly provided by Dr. R. E. Mains (Baltimore, MD); this cell line served as the host for transfection of 7B2 cDNAs described below. Data were confirmed in Rin/PE cells (a derivative of the rat insulinoma Rin5f which has been stably transfected with rat proenkephalin cDNA; Lindberg, I., unpublished results). Rin5f cells were obtained from Dr. Gary Thomas (Portland, OR).
All cell culture media were obtained from GIBCO-BRL (Gaithersburg, MD); cells were cultured at 37°C in an atmosphere of 5% CO2. Culture medium used for AtT-20/PC2 cells consisted of DMEM high glucose medium containing 10% Nuserum, 2.5% FBS, and 200 µg/ml G418, while Rin/PE cells were cultured in low glucose DMEM containing 10% FBS and 500 µg/ml G418. CHO/PC2 cells, which represent a CHO cell line amplified for the expression of mouse PC2 using the dihydrofolate reductase-coupled method (Shen et al., 1993), were grown as previously described.
Transfection of each cell line was accomplished with Lipofectin (GIBCO/BRL). Briefly, ~1 × 10^6 cells in a 10-cm plate were used for each transfection. The cells were incubated in 3 ml Optimem (GIBCO/BRL) containing 30 µg vector DNA and 30 µg Lipofectin for 5 h at 37°C, followed by the addition of 7 ml growth medium containing 100 µg/ml hygromycin (Sigma Chem. Co., St. Louis, MO). Approximately 2 wk later, 10-30 hygromycin-resistant colonies were picked and screened for 7B2 expression either by Western blotting, by radioimmunoassay, or by radiolabeling and immunoprecipitation. Generally, the first experiment employed the two highest-expressing clones, while confirmatory experiments employed only the highest-expressing clone. 7B2-expressing cell lines were found to be quite stable and could be passaged for three months without apparent alteration in expression levels.
Metabolic Labeling and Immunoprecipitation
Half a million cells/well in a six-well plate were labeled with [35S]methionine-labeling mix (Amersham Corp., Arlington Heights, IL) in methionine-deficient DMEM, or with Trans-label Mix (ICN Biomedicals, Costa Mesa, CA) in methionine- and cysteine-deficient DMEM. The cells were pulsed for 10 or 20 min and chased for the indicated times before being subjected to immunoprecipitation. Cells were boiled for 5 min in 0.1 ml boiling buffer (50 mM Na-phosphate, pH 7.4, 1% SDS, 50 mM β-mercaptoethanol, and 2 mM EDTA). These samples were then diluted with 0.9 ml AG buffer (0.1 M Na-phosphate, pH 7.4, 1 mM EDTA, 0.1% Triton, 0.5% NP-40, and 0.9% NaCl) for immunoprecipitation. The samples (0.5 ml each) were preincubated with 0.1 ml 20% Protein A-Sepharose CL-4B (Pharmacia, Sweden; previously hydrated and washed three times with AG buffer) for 1 h, and then centrifuged. Antiserum (5 µl) was then added to the supernatant, along with 25 µl of 10 mM PCMS (p-chloromercuriphenylsulfonic acid) and 25 µl of 100 mM PMSF, and incubated for either 6 h or overnight at 4°C. Protein A-Sepharose (100 µl of 20%, hydrated and washed three times with AG buffer) was added and the samples rocked at 4°C for 1 h. The samples were then washed two times with AG buffer, once with 0.5 M NaCl in PBS, and once with PBS. Immunoprecipitates were resuspended in Laemmli sample buffer and analyzed using either 8.8% (for PC2) or 15% (for 7B2) SDS-PAGE. The quantitative nature of the immunoprecipitation was verified by immunoprecipitating spent extracts; no further 7B2 nor PC2 forms were recoverable after a second round of immunoprecipitation. A similar procedure was used in the coimmunoprecipitation of 7B2 and proPC2, except that cells were lysed by scraping into AG buffer and were frozen and thawed once at −20°C; samples were not boiled before immunoprecipitation. All labeling experiments/immunoprecipitations were repeated at least once, on separate preparations of cells.
The procedure for SDS-PAGE (8.8% gel for analysis of PC2 immunoprecipitations, 15% for 7B2 immunoprecipitations) has been previously described (Shen et al., 1993). After electrophoresis, gels were treated with Entensify (DuPont, New England Nuclear, Wilmington, DE) following the manufacturer's recommendation before fluorography. Quantitation of radioactivity within each band was carried out using a PhosphorImager and ImageQuant software (Molecular Dynamics, Sunnyvale, CA).
7B2 Radioimmunoassay
Polyclonal antiserum to the sequence 7B2(23-39) (Iguchi et al., 1983) conjugated to keyhole limpet hemocyanin was raised in rabbits (Hazleton JRH, Denver, PA). 7B2(23-39) was purchased from Peninsula (Belmont, CA) for use as standard and as radiolabel. The peptide was iodinated using chloramine T and purified on a C-18 Sep-Pak cartridge. The radioimmunoassay was carried out overnight in duplicate using antiserum diluted 1:30,000, 10,000 cpm of iodinated peptide, and samples in a total volume of 300 µl RIA buffer (0.1 M sodium phosphate, pH 7.4, 50 mM NaCl, containing 0.1% BSA and 0.1% sodium azide). 50-µl duplicate samples of conditioned medium were diluted 1:1 with RIA buffer and heated to 100°C for 2 min to destroy potential proteinase activity before radioimmunoassay. Standards were prepared in Optimem (GIBCO/BRL):RIA buffer in a 1:1 ratio. Free radiolabel was separated from bound using polyethylene glycol precipitation (as described in Mathis and Lindberg, 1992). The range of the standard curve was 1-500 fmol and the IC50 was 42 fmol.
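To illustrate how sample values are read off a competitive-binding standard curve of this kind, here is a minimal sketch that fits a two-parameter logistic to hypothetical B/B0 data and inverts it. The data points, the simplified model (top = 1, bottom = 0), and the fitting routine are illustrative assumptions, not the analysis actually used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical standard-curve data: fmol of 7B2(23-39) standard vs. fraction
# of tracer bound (B/B0). The real curve spanned 1-500 fmol with an IC50 of 42 fmol.
std_fmol = np.array([1, 5, 10, 25, 50, 100, 250, 500], dtype=float)
b_over_b0 = np.array([0.97, 0.88, 0.78, 0.60, 0.46, 0.32, 0.17, 0.10])

def competition(x, ic50, slope):
    # Two-parameter competitive-binding curve, assuming top = 1 and bottom = 0
    return 1.0 / (1.0 + (x / ic50) ** slope)

(ic50, slope), _ = curve_fit(competition, std_fmol, b_over_b0, p0=(40.0, 1.0))

def fmol_from_b(b):
    # Invert the fitted curve to convert a sample's B/B0 into fmol of peptide
    return ic50 * (1.0 / b - 1.0) ** (1.0 / slope)

print(f"fitted IC50 ~ {ic50:.0f} fmol; B/B0 = 0.5 corresponds to {fmol_from_b(0.5):.0f} fmol")
```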
Collection of Conditioned Medium from AtT-20/PC2 and CHO Cells and Enzyme Assay
Subclones of AtT-20/PC2 cells expressing no 7B2, 21 kD 7B2, 27 kD 7B2, the pentabasic blockade mutant, or the amino-terminal domain were subcultured at 500,000 cells per well in a 6-well plate. The following day, the wells were washed twice with 5 ml of Optimem and then incubated with 1 ml of Optimem containing 100 µg/ml aprotinin (Miles) overnight. The conditioned medium was removed from each well, centrifuged to remove any floating cells, and 35-50-µl aliquots were then subjected to radioimmunoassay for 7B2 (in duplicate) and enzyme assay for PC2 (in duplicate). Specificity of the enzymatic reaction was monitored in separate reactions containing 100 µM synthetic 7B2 carboxy-terminal peptide (human sequence, residues 155-185), which represents a PC2-specific inhibitor (Lindberg et al., 1995). The experiment was repeated once with similar results.
For the clonal comparison study, various CHO/PC2-7B2 clones were isolated and subcultured to approximately similar confluence (70-80%) in a 12-well plate. After two 2-ml washes with Optimem, 1 ml of Optimem containing 100 µg/ml aprotinin was placed on each well and the plate was returned to the incubator for 16 h. The following morning, medium was collected from each well, centrifuged at low speed to remove any floating cells, and subjected to radioimmunoassay for 7B2 and enzyme assay.
For the comparison of the two best 7B2-expressing CHO clones with the parent cell line, 35-µl duplicate aliquots of 6 h conditioned medium (80% confluent 35-mm dish with 1 ml Optimem, containing 100 µg/ml aprotinin) were incubated in the reaction mixture described above (with or without 100 µM 7B2 CT peptide) for 16 h. The production of free aminomethylcoumarin (AMC) from clones expressing 7B2 was linear over time. To observe potential activation of proPC2 in the control CHO/PC2 cell line by exogenous 7B2, recombinant rat 7B2 was prepared by bacterial expression and included in the medium as described above at a concentration of 100 µg/ml (5.4 µM) during a 16-h incubation at 37°C; a parallel control culture received an equivalent amount of bovine serum albumin. The following day the medium was removed, centrifuged, and tested for enzymatic activity. The experiment was repeated once with similar results.
The assay for PC2 was carried out using 35 µl of each conditioned medium sample in a total volume of 50 µl, containing 0.1 M sodium acetate, pH 5.0, 5 mM calcium, 0.1% Brij 35, 2 µg of bovine serum albumin, an inhibitor mix (1 µM pepstatin A, 100 µM tosyllysyl chloromethyl ketone, 100 µM tosyl phenylalanyl chloromethyl ketone, and 1 µM E-64), and a final concentration of 200 µM Cbz-Arg-Ser-Lys-Arg-aminomethylcoumarin substrate. The liberation of the highly fluorescent product AMC was monitored by fluorescence spectroscopy (380 nm excitation, 460 nm emission).
Results
AtT-20/PC2-7B2 Cells Express 7B2 which Coimmunoprecipitates with proPC2
Braks and Martens (1994) have demonstrated coimmunoprecipitation of 27 kD 7B2 with proPC2 in the Xenopus intermediate pituitary. To investigate the nature of this association in a cell culture system, which offers more experimental control, we stably transfected PC2-overexpressing AtT-20 cells (Zhou and Mains, 1994; kindly provided by R. E. Mains) with a hygromycin resistance-conferring construct containing rat 7B2 cDNA. 7B2-expressing clones were selected by Western blotting, which revealed normal cleavage of the transfected 27-kD 7B2 precursor to the 21-kD product (results not shown). Using radioimmunoassay, the expression level of endogenous 7B2 in the parent cell line was estimated to be at or less than 10 fmol/10^6 cells (the limit of detection of the assay), while the best transfected cell line contained 120 fmol 7B2/10^6 cells. Metabolic-labeling experiments with [35S]methionine were carried out on these cells as well as the parent AtT-20/PC2 cell line. In agreement with the results of Braks and Martens (1994), when cell extracts were immunoprecipitated with PC2 antiserum, 27 kD 7B2 was observed to coimmunoprecipitate with proPC2 (Fig. 1). After excision of the bands corresponding to proPC2 and 7B2 and estimation of the radioactivity in each, the molar ratio of proPC2 and 7B2 was found to be approximately 1.1:1.0 when the number of methionines in each molecule was taken into account (Table I).
7B2 Facilitates proPC2 Maturation in AtT-20/PC2 and RinPE Cells
We next performed a kinetic analysis of proPC2 maturation in control and 7B2-transfected AtT-20 cells. To avoid the possibility of conversion of proPC2 to PC2 during the overnight immunoprecipitation, we harvested the cells in the presence of SDS (Milgram and Mains, 1993), boiled, and then diluted this material into nonionic detergent for immunoprecipitation. Fig. 2 A shows the results of an experiment in which AtT20/PC2-r7B2 or control AtT-20/PC2 cells were pulsed for 20 min with [35S]methionine, then either terminated (lane 0) or further incubated for 1, 2, 3, or 4 h in the presence of unlabeled methionine. This figure demonstrates that the presence of 27 kD 7B2 dramatically facilitated the maturation rate of proPC2. Using phosphorimage analysis, we estimate that the half-life of the newly synthesized proPC2 in 7B2-expressing cells was 1.7 h, whereas in parent cells the half-life was 2.7 h. Similar results were obtained in one other independently derived clone.
Figure 2. AtT20/PC2-7B2 or control AtT-20/PC2 cells were pulsed for 20 min with [35S]methionine, then either terminated (lane 0) or chased for 1, 2, 3, or 4 h. Cell extracts were boiled for 5 min in the presence of 1% SDS and 50 mM β-mercaptoethanol, then diluted 10× with 1% NP-40 for immunoprecipitation with PC2 antiserum. (A) The rate of proPC2 processing was increased in 7B2-transfected cells. (B) The effect of 7B2 expression on the secretion of mature PC2. (C) The ratios of intracellular proPC2 to mature PC2 at 6 h in a steady-labeling experiment.
We also found that the rate of secretion of mature PC2 was increased in cells transfected with 7B2 (Fig. 2 B). Despite this increased secretion, analysis of steady-state labeling of control and 7B2-expressing cells revealed increased storage of mature PC2 in 7B2-expressing lines (Fig. 2 C). However, analysis of the kinetics of processing of newly synthesized proopiomelanocortin (POMC) in AtT-20/PC2-7B2 cells did not reveal any detectable differences from the parent cell line (Mains, R. E., personal communication).
The AtT-20/PC2 cell line has been obtained through artificial engineering of AtT-20 cells with PC2 cDNA (Zhou and Mains, 1993). To extrapolate these results to a cell line which naturally expresses PC2, we performed a similar overexpression of 7B2 in RinPE cells, a derivative of the rat insulinoma cell line Rin5f which has been stably transfected with rat proenkephalin cDNA (Lindberg, I., unpublished results). The parent cell line is known to synthesize large quantities of PC2 (Shen et al., 1993). Fig. 3 shows that the presence of transfected 27 kD 7B2 in these cells was also able to facilitate the maturation of proPC2.
Braks and Martens (1994) have previously proposed that the binding of proPC2 to 7B2 is mediated by an amino-terminal domain weakly homologous to a chaperonin-related domain; this domain was proposed to extend to residue 90. To examine whether this region was sufficient to account for the effects on proPC2 transport, we constructed an expression vector containing this region for transfection of AtT-20/PC2 cells. Clones expressing this construct were selected using a radioimmunoassay against an amino-terminal epitope of 7B2. Fig. 4 depicts the maturation of proPC2 in 1-90 7B2 and 27 kD 7B2 cells as well as in the parent cell line; no detectable effects of 1-90 7B2 were observed. In independent experiments, the remaining carboxy-terminal region of 7B2 (7B2 95-185) was similarly transfected into AtT-20 cells; again, no effect on proPC2 maturation was observed (results not shown). Using PC2 antiserum, no coimmunoprecipitation of either 7B2 1-90 or 7B2 95-185 with PC2 was observed (data not shown). Similarly to the result obtained using intact 7B2, no effects on POMC processing were detected in either the 7B2 1-90 or the 7B2 95-185-expressing cell lines (Mains, R. E., personal communication).
Figure 4. The amino-terminal domain of 7B2 had no effect on PC2 maturation in AtT-20/PC2 cells. AtT20/PC2 (lanes 1 and 4), AtT20/PC2-7B2 (lanes 2 and 5), and AtT20/PC2-NT 7B2 (lanes 3 and 6) were pulsed for 20 min (lanes 1-3) or pulsed and then chased for 2 h (lanes 4-6). The samples were boiled as described before immunoprecipitation. Truncated 7B2 (NT-7B2) could not be coimmunoprecipitated with PC2 (not shown).
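The half-lives quoted above (1.7 h versus 2.7 h) come from phosphorimage quantitation of the proPC2 band over the chase. As a minimal sketch of that kind of estimate, the snippet below fits a single-exponential decay to invented band intensities; both the first-order-decay assumption and the numbers are for illustration only.

```python
import numpy as np

# Hypothetical phosphorimager intensities of the proPC2 band over the chase
# (arbitrary units); real values would come from the quantified gel bands.
chase_h = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
propc2 = np.array([100.0, 68.0, 45.0, 30.0, 20.0])

# Assume first-order loss of the precursor, I(t) = I0 * exp(-k * t),
# and fit ln(I) = ln(I0) - k * t by least squares.
slope, _ = np.polyfit(chase_h, np.log(propc2), 1)
k = -slope                              # decay constant (1/h)
print(f"t1/2 = {np.log(2) / k:.1f} h")  # ~1.7 h for these example numbers
```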
Structure-Function Analysis of 7B2 Indicates a Requirement for the Full 21-kD Protein
The mature form of 7B2, 21-kD 7B2, represents the predominant form of this peptide stored within neuroendocrine cells (Iguchi et al., 1984; Hsi et al., 1984; Ayoubi et al., 1990; Paquet et al., 1994). We therefore transfected a construct containing this region into AtT-20/PC2 cells. Fig. 5 depicts the maturation of proPC2 in cells containing this construct. The results of this 2-h pulse-chase experiment show that the 21-kD 7B2 protein is able to efficiently facilitate proPC2 cleavage. Further analysis revealed that in cells transfected with the 21 and 27 kD constructs, the kinetics of proPC2 cleavage and the kinetics of secretion of PC2 are identical (results not shown). These results suggest that the functional region of 7B2 which accomplishes the facilitation of proPC2 maturation resides within the 21-kD protein.
Blockade of Cleavage of 27 kD 7B2 Results in a Molecule which Cannot Facilitate proPC2 Maturation
To investigate whether the cleavage of 27 kD 7B2 to 21 kD 7B2 and the carboxy-terminal peptide is obligatory for the facilitatory function of 7B2, and what role the carboxy-terminal peptide plays in the binding of 7B2 to PC2/proPC2, we mutated the normal cleavage site (residues 151-155) from RRKRR to SQNSN and transfected AtT20/PC2 cells with the construct. Fig. 6, panel A (denatured immunoprecipitate using 7B2 antiserum) shows that this 7B2 mutant protein is expressed in transfected cells (lane 0); it was not processed intracellularly but instead was rapidly secreted. At 2 h no intracellular 7B2 remained (lane 2; no visible band). Fig. 6, panel B, which represents a nondenaturing coimmunoprecipitation of medium obtained at 2 h of chase (7B2 antiserum), shows that the mutant protein was secreted not as the 21-kD form, but as the 27-kD form, supporting the idea that the normal cleavage site was blocked. This figure further shows that this mutant 7B2 protein can efficiently coimmunoprecipitate proPC2. It is interesting that both proPC2 as well as the unprocessed mutant 27-kD 7B2 were secreted intact into the growth medium (panel B); the secretion of proPC2 was not observed in experiments with other 7B2-expressing cell lines. It is likely that this proPC2/noncleavable 7B2 complex cannot be further processed by the cellular machinery and is secreted as such. Fig. 6, panel C depicts a pulse-chase experiment of PC2 maturation in AtT-20/PC2-blockade mutant cells showing that this unprocessable 7B2 mutant cannot facilitate the maturation of proPC2. We conclude that proteolytic processing of 27 kD 7B2 is required for its facilitatory function.
Kinetics of 7B2 Processing in AtT-20/PC2-7B2 Cells
While the maturation of proPC2 in AtT-20/PC2-7B2 cells is rather slow (with a half-time of ~1.7 h in 7B2-overexpressing cells), the conversion of 27 kD 7B2 to 21 kD 7B2 occurred much more quickly. After 30 min of chase, virtually all of the newly synthesized 27 kD 7B2 had been cleaved (Fig. 7, panel A). Furthermore, the secretion of 21 kD 7B2 was also much faster than that of PC2; even at 20 min, 21 kD 7B2 was detectable in the medium (Fig. 7, panel B). At 120 min of chase, however, ~1/3 of the newly synthesized 21 kD 7B2 still remained inside the cells (Fig. 7, panel A), as judged from phosphorimage analysis.
7B2 Is Synthesized and Secreted More Rapidly than PC2
To calculate the relative synthesis rates of 7B2 and PC2, AtT-20/PC2-7B2 cells were labeled with [35S]methionine for 20 min. The cells were then extracted using 1% SDS and 50 mM β-mercaptoethanol and boiled. Half of the extract was immunoprecipitated with 7B2 antiserum, and the other half immunoprecipitated with PC2 antiserum. The resulting samples were then resolved by SDS-PAGE and visualized by fluorography. The 27/21-kD 7B2 bands and the proPC2 band were excised, the radioactivities measured, and the molar ratio of the two molecules calculated, taking the number of methionines in each molecule into account (the 21- and 27-kD forms of 7B2 have the same number of methionines). These results revealed that the synthesis rate of 27 kD 7B2 is greater than that of PC2; for every 3.6 ± 0.3 7B2 molecules synthesized, only one proPC2 molecule was produced (mean ± SD of three determinations). However, in a 6-h steady labeling experiment, the intracellular molar ratio of all forms of these two molecules (predominantly 21 kD 7B2 and proPC2) was only 1.4:1.0. Therefore, relative to PC2, 7B2 must be disproportionately rapidly secreted.
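The methionine correction described above amounts to dividing each band's counts by the number of methionines per molecule before taking the ratio. A small sketch, with invented counts and placeholder methionine numbers (substitute the actual values from the rat 7B2 and proPC2 sequences):

```python
# Hypothetical band radioactivities from one 20-min pulse (phosphorimager counts)
counts_7b2, counts_propc2 = 9000.0, 5000.0
# Placeholder methionine content per molecule -- not the real sequence counts
met_7b2, met_propc2 = 3, 6

# 35S counts scale with methionine content, so divide it out to get molar amounts
molar_7b2 = counts_7b2 / met_7b2
molar_propc2 = counts_propc2 / met_propc2
print(f"7B2 : proPC2 synthesis ratio = {molar_7b2 / molar_propc2:.1f} : 1")  # 3.6 : 1 here
```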
Secretion of PC2 Activity from AtT-20/PC2 Cells Occurs only in Cells Transfected with 21 or 27 kD 7B2
The above results demonstrate that 7B2 interacts with proPC2 to facilitate the maturation of this enzyme precursor. However, these results do not reveal whether this proteolytic processing event is associated with the production of active enzyme; indeed, the lack of effect on POMC processing could be interpreted as implying that 7B2 overexpression does not result in the net generation of active PC2. We therefore measured the release of active PC2 into the medium from AtT-20/PC2 cell lines expressing the various forms of 7B2. We were not able to measure significant levels of activity in short-term basal medium or medium from phorbol ester-stimulated cells; however, overnight-conditioned medium yielded measurable levels of PC2 activity (Fig. 8). Only cells transfected with 27 or 21 kD 7B2 were able to secrete active PC2 into the medium. The origin of this enzymatic activity as PC2 was confirmed through its ability to be blocked by the PC2-specific inhibitor, h7B2(155-185).
In CHO/PC2 Cells, Expression of 21 kD 7B2 Results in the Generation of Enzymatic Activity
To confirm the above results in a constitutive cell line which potentially offered a greater ability to measure PC2 activity, we tested the effect of 7B2 expression on the production of enzymatically active PC2 from CHO cells amplified for mouse PC2 expression using the dihydrofolate reductase-coupled method (Shen et al., 1993). These cells have previously been shown to produce large quantities of proPC2 which is slowly secreted into the medium and is enzymatically inactive (Shen et al., 1993). CHO/PC2 cells were transfected with the expression vector containing 21 kD 7B2 described above, and clones were selected on the basis of secretion of immunoreactive 7B2. Table II shows an analysis of cleavage of the fluorogenic substrate Cbz-Arg-Ser-Lys-Arg-aminomethylcoumarin by medium conditioned for only 6 h by control CHO/PC2 cells or by two independent clones expressing 21 kD 7B2. As expected from previous results, medium obtained from non-7B2-transfected CHO/PC2 cells was enzymatically inactive; however, medium derived from each of the two 21-kD 7B2-expressing clones exhibited extremely high levels of activity. A high degree of correlation of secreted PC2 activity with 7B2 expression was observed using independent clones (Fig. 9). Incubation of CHO/PC2 cells overnight with 100 µg/ml recombinant rat 21 kD 7B2 did not result in the generation of detectable enzymatic activity in the conditioned medium (not shown).
Figure 7. Kinetics of maturation and secretion of 7B2. AtT20/PC2-7B2 cells were pulsed with [35S]methionine for 10 min and chased for the times indicated. (A) The conversion of 27 kD 7B2 to the 21-kD form is completed within 30 min. (B) The kinetics of secretion of 7B2 into the medium; secretion of mature 7B2 (21 kD) occurred within 20 min.
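As a toy version of the clone-by-clone comparison summarized in Fig. 9, the snippet below computes r² for paired 7B2/activity measurements; the eleven value pairs are invented stand-ins, not the paper's data.

```python
import numpy as np

# Hypothetical per-clone measurements (11 clones, as in Fig. 9): secreted 7B2
# immunoreactivity (fmol) and PC2 activity (pmol AMC released).
imm_7b2 = np.array([40, 80, 120, 150, 200, 260, 300, 360, 420, 480, 520], float)
pc2_act = np.array([6, 10, 16, 18, 27, 30, 40, 43, 55, 57, 68], float)

r = np.corrcoef(imm_7b2, pc2_act)[0, 1]   # Pearson correlation coefficient
print(f"r^2 = {r ** 2:.2f}")              # the paper reports r^2 = 0.90 for its clones
```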
Discussion
The proteolytic activation of PC2 has been the subject of several previous investigations. Studies using oocyte extracts and site-directed mutants of proPC2 have shown that the conversion of proPC2 (75 kD) to mature PC2 (64 kD) is autocatalytic, and that it is apparently mediated through an intermolecular reaction which is extremely slow in this system (Matthews et al., 1994; Shennan et al., 1995). In previous experiments using a CHO cell overexpression system, we were unable to demonstrate any autocatalysis of proPC2, and proPC2 was largely secreted as an intact, inactive form (Shen et al., 1993). Efforts to activate proPC2 in vitro, for example using recombinant PC1, were not successful (Zhou, Y., and I. Lindberg, unpublished results). These findings strongly implied that PC2 required a separate cofactor, either for activation or activity; or, as suggested by Braks and Martens (1994), that PC2 required the presence of another protein, namely 7B2, for proper folding. The results presented above provide experimental support for the involvement of 7B2 in proPC2 activation. Our data demonstrate that 27 kD 7B2 can be coimmunoprecipitated with proPC2 using anti-PC2 antiserum in extracts of 7B2-overexpressing, PC2-containing cells, confirming the similar association reported by Braks and Martens (1994) in Xenopus pituitary. In addition, this association is correlated with a profound acceleration of maturation of proPC2 to mature PC2 in cells expressing 7B2, with the half-life decreasing from ~2.7 h to 1.7 h. The observation of stoichiometric binding of proPC2 to 27 kD 7B2 further supports the idea that 7B2 represents a PC2-binding protein involved in the maturation of proPC2 to PC2.
We found that proPC2 required 2-3 h for half-maximal conversion to mature PC2 in AtT-20/PC2 cells; these kinetics are in good agreement with those obtained by Zhou and Mains (1994) for the same cell line. Similarly slow processing of proPC2 was observed in other neuroendocrine cell lines and in rat pancreatic islets (Guest et al., 1992; Shen et al., 1993; Benjannet et al., 1993; Shennan et al., 1995). (An exception may exist for Xenopus intermediate lobe, in which proPC2 appears to be cleaved much more rapidly than in neuroendocrine cell lines and in pancreatic islets; Braks and Martens, 1994.) Previous studies from several laboratories have shown that the subcellular site of conversion of proPC2 to mature PC2 is likely to be a late secretory granule compartment such as the trans-Golgi network or immature secretory granules (Guest et al., 1992; Shen et al., 1993; Zhou and Mains, 1994; Braks and Martens, 1994; Shennan et al., 1995). Similarly, 27 kD 7B2 is converted to the 21-kD form in a brefeldin-sensitive, late secretory pathway cellular compartment, most probably by furin (Paquet et al., 1994). The fate of the remaining carboxy-terminal 31 amino acids of 7B2, which represent the inhibitory peptide, is uncertain. This peptide could be recovered intact from AtT-20 cells (Paquet et al., 1991); however, it may also be proteolyzed in certain tissues, since Sigafoos et al. (1993) have described the isolation of a carboxy-terminal fragment of this peptide from bovine adrenal medullary granules.
Figure 8. Cells were plated at 500,000 cells/well in a six-well plate; 2 d later, serum-free medium was placed on the cells and assayed in duplicate the following day for the expression of enzymatic activity. The total amount of immunoreactive 7B2 in the conditioned medium was 440 fmol (control); 790 fmol, 27 kD 7B2; 770 fmol, 21 kD 7B2; 452 fmol, blockade mutant (block); and 530 fmol, amino-terminal domain (amino). Variance between duplicates was always less than 5%.
Confirming previous results obtained by Braks and Martens (1994) and Paquet et al. (1994), we observed rapid maturation of newly synthesized 27 kD 7B2 (t1/2 = 15 min). In AtT-20/PC2 cells, secretion of newly synthesized 7B2 into the medium was also unexpectedly rapid, in contrast to results obtained using vaccinia virus vectors (Paquet et al., 1994) and Xenopus intermediate lobe. We found that following only 20 min of chase, newly synthesized 21 kD 7B2 could already be detected in the medium, and by 120 min, only a third of the labeled 7B2 protein remained inside the cells; this remaining 7B2 may represent the pool which binds to proPC2. Expression of 7B2 also increased the rate of secretion of mature PC2 from the cells; however, not all of the additional mature PC2 produced by 7B2 expression was secreted, since the steady-state intracellular content of mature PC2 was slightly, but consistently, increased in 7B2-expressing cells.
Table II footnote: *70-80% confluent 35-mm wells of two 21-kD 7B2-transfected cell lines and the parent, nontransfected cell line were incubated with Optimem for 6 h. The conditioned medium was then removed, centrifuged, and assayed in duplicate for PC2 activity as described in the text. The specific PC2 inhibitor used was h7B2(155-185). Control reactions containing only Optimem (not conditioned by cells) contained 58 pmol AMC. Variance between duplicates was always less than 5%.
Based upon homology with chaperonins, Braks and Martens (1994) suggested that amino acids 1-90 of 7B2 would represent the putative "chaperone domain." Our data showing that expression of this protein is not sufficient to facilitate proPC2 cleavage indicate that further structural information must be required. We found that expression of the first 151 amino acids, which encode the 21-kD natural processing product of 7B2, was sufficient for the facilitation of proPC2 processing. These data indicate an affinity of this truncated protein for proPC2, and support in vitro experiments which show that recombinant 21 kD 7B2 can bind to proPC2 immunoprecipitates. Our data also show that expression of 21 kD 7B2 is sufficient for the actual activation of PC2, as demonstrated by PC2 activity experiments using AtT-20/PC2 and CHO/PC2 cells, in which only cells which have been cotransfected and express 7B2 secrete active PC2. Which enzyme actually performs the activation of proPC2, or whether proPC2 is autocatalytically activated, is as yet unclear. Since postsynthesis activation of CHO cell-secreted proPC2 with recombinant 21 kD rat 7B2 was not successful, it appears that 7B2 must be present intracellularly, either during synthesis of proPC2 or shortly thereafter (however, it should be noted that postsynthesis activation was attempted only at neutral pH). These CHO/PC2-7B2 cells express considerably more proPC2 (expressed via the DHFR-coupled amplification) than 7B2 (expressed through simple transfection and selection). However, enzyme activity produced by CHO/PC2-7B2 cells has proven to be sufficient for the purification and characterization of active recombinant PC2, and such studies are now in progress (Lamango, N., and I. Lindberg, unpublished results).
Figure 9. Secretion of PC2 activity is correlated with the expression of 21 kD 7B2 in CHO/PC2 cells. Overnight-conditioned medium samples obtained from 11 independent clones of CHO/PC2 cells stably transfected with 21 kD 7B2 cDNA were assayed for both 7B2 immunoreactivity and for PC2 activity. The correlation coefficient r² was 0.90.
The carboxy-terminal peptide of 7B2 has been shown to be inhibitory to PC2 activity as well as to proPC2 activation (Lindberg et al., 1995). The fact that both the 21-kD and the 27-kD 7B2 proteins exhibit a similar facilitatory function on proPC2 processing leads to the question of the normal role of the carboxy-terminal peptide in the processing of proPC2. When processing of 7B2 to the 21-kD protein and the carboxy-terminal peptide was blocked by mutation of the normal pentabasic processing site, the mutated protein bound to proPC2, but did not facilitate proPC2 processing. Moreover, expression of the blocked 7B2 protein prevented the cleavage of the PC2 proregion, since in these cell lines proPC2 was secreted intact into the medium (a process which was never otherwise observed). It appears that the removal or cleavage of the carboxy-terminal peptide is required for the autocatalytic cleavage of proPC2, most probably because the carboxy-terminal peptide occupies the active site of proPC2 (thus preventing autoactivation). We observed that the binding of cleavage-site-blocked 7B2 to proPC2 is tighter than that of 21 kD 7B2 (unpublished results), indicating that the presence of the inhibitory carboxy-terminal peptide (which has been shown to have a strong affinity for proPC2; Lindberg et al., 1995) increases the affinity of 7B2 for proPC2. The mechanism for removal of the 7B2 carboxy-terminal peptide is unclear at present.
Based on the results discussed above, we propose the following model for the interaction of PC2 with 7B2 in AtT20/PC2-7B2 cells (see Fig. 10, which depicts the proposed mechanism of interaction of PC2 and 7B2). (1) Soon after synthesis, a portion of the available 27 kD 7B2 and proPC2 bind each other in the endoplasmic reticulum; this binding results in the translocation of the complex to the TGN/secretory granules. The presence of the carboxyl-terminal inhibitory peptide of 27 kD 7B2 prevents the premature activation of proPC2 before arrival of the complex in the Golgi apparatus. The remainder of the 27 kD 7B2 (i.e., the portion not bound to proPC2) is quickly transported through the secretory pathway, with a half-time of conversion of less than 15 min, and is secreted.
(2) In the TGN/immature secretory granule compartment(s), the 7B2 within the proPC2/27-kD 7B2 complex is rapidly cleaved to 21 kD 7B2 and the carboxy-terminal inhibitory peptide. The resulting complex, with the various molecules weakly associated, is now competent for cleavage to the active form of PC2, possibly by autoactivation (Matthews et al., 1994; Shennan et al., 1995). The rate of activation of each molecule of proPC2 may depend on the rate of dissociation/cleavage of its associated 7B2 carboxy-terminal inhibitory peptide.
(3) Mature PC2 resulting from the above process is then free to cleave prohormone substrates/intermediates.
No effects on POMC processing were observed in cell lines expressing either intact 27 kD 7B2, 21 kD 7B2, or the carboxy-terminal domain alone (Mains, R. E., unpublished data). The lack of effect on peptide processing of the 21- and 27-kD 7B2 constructs, both of which increased the production of mature PC2, may be explained by positing that POMC cleavage is already maximal in the parent cell line, and that it cannot be increased beyond this maximum by the introduction of further active enzyme. This interpretation is supported by the finding of almost complete cleavage of ACTH to α-MSH-sized peptides in AtT-20/PC2 cells. The expression of endogenous 7B2 in these cells is apparently sufficient to support this high rate of cleavage. The lack of effect on POMC processing of cell lines overexpressing a carboxy-terminal inhibitory domain (residues 95-185) was expected, since expression of this construct had no effect on the kinetics of maturation of proPC2.
In conclusion, it is likely that complex regulatory mechanisms limit the rate of peptide production, involving control of the availability of active PC2 enzyme molecules in the correct subcellular compartment and regulation of PC2 enzyme activity through the association/dissociation of inhibitor molecules from activated and zymogen forms of PC2. Several questions regarding the interaction of PC2 and 7B2 remain to be answered, for example the site and mechanism of dissociation of the inhibitory 7B2 carboxy-terminal peptide from proPC2. Future pulse-chase studies employing antisera to this portion of the 7B2 molecule will provide a better understanding of these cellular events and may one day provide a practical basis for the manipulation of peptide hormone levels.
This work was supported in part by a Research Scientist Development Award from the National Institute on Drug Abuse.
Received for publication 2 February 1995 and in revised form 28 March 1995.
Classification of Mental Stress Levels by Analyzing fNIRS Signal Using Linear and Non-linear Features
Background: Mental stress is known as one of the main influential factors in the development of different diseases, including heart attack and stroke. Thus, quantification of stress level can be very important for preventing many diseases and for human health. Methods: The prefrontal cortex is involved in regulating the body in response to stress. In this research, functional near infrared spectroscopy (fNIRS) signals were recorded from the FP2 position in the international electroencephalographic 10–20 system during a stressful mental arithmetic task to be completed within a limited period of time. After extracting the brain’s hemodynamic response from the fNIRS signal, different linear and nonlinear features were extracted from the signal, which are then used for stress level classification both individually and in combination. Results: In this study, a maximum accuracy of 88.72% was achieved in classification between high and low stress levels, and 96.92% was obtained for the stress and rest states. Conclusion: Our results showed that using the proposed linear and nonlinear features it is possible to effectively classify stress levels from fNIRS signals recorded from only one site in the prefrontal cortex. Compared to other methods, the proposed algorithm outperforms previously reported approaches by using nonlinear features extracted from the fNIRS signal. These results clearly show the potential of the fNIRS signal as a useful tool for the early detection and quantification of stress.
Introduction
Today, almost all humans are familiar with the term "stress", as it has become an inseparable part of human life. Stress refers to conditions or emotions in which the person perceptually believes that the sum of wants and expectations of them is beyond the facilities, resources, and abilities at their disposal. Hans Selye, the father of stress research, defines stress as the body's or mind's nonspecific response to any need to change. 1 Stress is controlled in the human body through activation of the hypothalamic-pituitary-adrenal (HPA) axis and the limbic system, which cause secretion of stress hormones (adrenaline and cortisol) into the bloodstream. Circulation of these hormones in the body via the bloodstream causes different physiological changes. Heart rate increases compared with the normal state, thus more blood is pumped towards the muscles and other organs. Blood pressure rises and the respiration rate increases. The small air tubules in the lungs dilate, causing the lungs to pull more oxygen into the body with each respiration. The heightened oxygen that enters the brain causes an increased level of consciousness. Also, adrenaline causes release of glucose and fat from their temporary storage regions in the body, such as the liver, into the bloodstream. This provides all parts of the body with the required energy, preparing the body to react to stress. The limbic system includes the hippocampus, amygdala, and several other regions including the basal ganglia; the prefrontal cortex lies near this system and has effective communication with it. Each part of the limbic system plays a special role in controlling HPA axis function. The prefrontal cortex is involved in regulating the body during stress. Studies indicate that the role of the prefrontal cortex of the brain is remarkably complex. For example, large damage limited to the right prefrontal cortex causes diminished cortisol responses to stress, while left-side damage has no effect on its secretion. Various studies have also shown that there is a direct relationship between the activity of the right prefrontal cortex and stress. 2,3 Stress affects human health. 4-6 Animal and human studies have indicated the harmful effects of glucocorticoids (cortisol and corticosterone are the most important glucocorticoids in the body of humans and many animals) on PFC functions. 7 They have identified the PFC as a region in the brain which has an active reaction against stress. 12-14 Accordingly, if one can inform a person about the development of a stress state in them, then through training in stress-reduction techniques it is possible to prevent its damaging effects on health.
The level of stress is clinically measured by questionnaires and interviews, though such measures are highly dependent on the person. 15 On the other hand, physical and physiological changes associated with stress are also used as objective indicators of stress. For example, stress physically causes pupil dilation and changes in blink rate and facial gestures. 16,17 Indicators related to the autonomic nervous system have also been used. 18-21 In measuring the level of stress using physical indicators or indicators related to the autonomic nervous system, several indicators are typically used concurrently to enhance accuracy and efficiency, which increases mobility constraints and the complexity of the equipment.
The human brain plays a major role in the stress response, and thus by processing its recorded data, one can assess the level of stress. 22 Electroencephalography (EEG) has been used for this purpose. 23-26 However, considering the type of equipment required for recording it, this method has some limitations in the real world, since the person cannot perform their daily activities freely.
Functional near infrared spectroscopy (fNIRS) is a relatively new noninvasive neuroimaging technique to detect hemodynamic changes in the brain cortex. Today, this technique has attracted a great deal of attention thanks to its various advantages compared to other techniques. 27-29 fNIRS is based on the fact that activation of a certain part of the brain results in an increase in oxygen consumption in that region, which is accompanied by enhanced total blood flow, regional blood volume, and regional blood oxygenation. 30,31 This leads to a change in the concentration of local oxygenated haemoglobin (oxy-Hb) and deoxygenated haemoglobin (deoxy-Hb). 32 Since oxy-Hb and deoxy-Hb have distinct optical properties in the near-infrared light range (700-900 nm), the changes in the concentration of these chromophores during neurovascular coupling can be detected noninvasively using the fNIRS technique. By choosing appropriate wavelengths with regard to the absorption coefficients of oxy-Hb and deoxy-Hb, it is possible to calculate variations in the concentrations of these chromophores using the modified Beer-Lambert law (MBLL). 33,34 In this research, fNIRS signals recorded from the prefrontal cortex of the brain are used for stress level quantification. Considering the special features of the fNIRS system and the recording of signals from the prefrontal cortex, using this technique to measure stress in the real world becomes feasible to a good extent. In previous studies in which fNIRS has been used, typically only linear analysis of this signal has been employed. In this research, nonlinear analysis of the fNIRS signal has also been performed and its effect on the classification of stress levels has been examined. Our goal in this research is to utilize the fNIRS signal to classify stress levels and to employ nonlinear analysis of the signal, which enhances its efficiency and has not been addressed in previous research. Furthermore, given the features of the fNIRS system, quantifying the level of stress without imposing many limitations in the real world becomes possible.
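To make the MBLL step concrete, the sketch below converts two-wavelength intensity data into oxy-/deoxy-Hb concentration changes. The extinction coefficients and differential pathlength factors (DPF) are approximate assumptions of the kind found in commonly used tabulations, not values taken from this paper.

```python
import numpy as np

# Approximate extinction coefficients [1/(mM*cm)] at 730 and 850 nm,
# rows = wavelengths, columns = [oxy-Hb, deoxy-Hb] (assumed values)
E = np.array([[0.39, 1.10],    # 730 nm
              [1.06, 0.69]])   # 850 nm

def mbll(intensity, baseline, separation_cm=3.0, dpf=(6.0, 6.0)):
    """Modified Beer-Lambert law: intensity/baseline arrays of shape
    (n_samples, 2), one column per wavelength; returns mM changes."""
    delta_od = -np.log10(intensity / baseline)      # attenuation change
    path = separation_cm * np.asarray(dpf)          # effective path per wavelength
    conc = np.linalg.solve(E, (delta_od / path).T)  # solve E @ dC = dOD/path
    return conc[0], conc[1]                         # (delta oxy-Hb, delta deoxy-Hb)

# Example: a small intensity drop at both wavelengths relative to baseline
d_hbo, d_hbr = mbll(np.array([[0.98, 0.97]]), np.array([[1.0, 1.0]]))
```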
In the rest of this paper, we first describe the participants, the designed stress task, and the fNIRS signal recording. We then present the signal preprocessing used to extract the brain hemodynamic response, the extracted features, the method for selecting the optimal feature subsets, and the data classification. Finally, the obtained results are discussed.
Participants
Fifty healthy male volunteers participated in this research (mean age = 22.6 ± 3.21 years). None of them had physical or psychological conditions, including neurological, respiratory, or cardiovascular diseases, and none used medications that affect the brain. All volunteers gave written consent before the task. Before performing the task, the level of stress was assessed using the STAI questionnaire to ensure that each person had a normal level of stress.
The Stress Task and Procedure
The stress task designed in this research is based on the Montreal Imaging Stress Task (MIST). 22 In this task, the subjects sit on a chair in front of a monitor and respond, within a limited period of time, to mathematical statements appearing on the monitor, consisting of sums and subtractions of several single-digit numbers (e.g., 4+8-6+3-7+5). The result is a single-digit number, which the subject selects by clicking with a mouse. The graphical representation of the task, designed using Visual Basic software, is shown in Figure 1. In part (A), the mathematical statement to which the subject should respond is presented. In part (B), the digits 0-9 are shown, from which the volunteer selects the answer by clicking on the intended number. In part (C), the result of the subject's response appears as "correct", "wrong", or "time over". In part (D), the time allowed for responding to each statement is shown as a graphical incremental bar. The task has three phases: training, rest, and stress, described as follows.

Training phase: in this phase, the subjects respond to 10 mathematical statements without any time limitation, and the response time for each statement is recorded by the software. This phase has two main objectives: (1) familiarizing the subject with the graphical user interface of the task and its procedure, and (2) calculating the average time the subject needs to respond, in order to apply a time limitation later in the stress phase.
Stress phase: in this phase, two different levels of stress, low and high, are induced in the subject. Here, the subjects respond to the mathematical statements under a time limitation. For the low and high stress levels, this limit is 90% and 80%, respectively, of the average time measured in the training phase. The time limitation is conveyed to the subject by the filling of a graphical incremental bar and by a ticking sound played through headphones, whose frequency grows over time. The result of the response to each statement is shown to the subject in written form.
Rest phase: before each stage of the stress phase, the subjects rest for 1 minute; they are asked to sit comfortably on a chair in a quiet room while relaxing music is played through the headphones. Figure 2 shows the block diagram of the different stages of the task.
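As an illustration of the statement format used in the task, the short sketch below generates MIST-style arithmetic strings whose result is constrained to a single digit. It is a hypothetical helper, not the Visual Basic implementation used in the study.

```python
import random

def make_statement(n_terms=6):
    """Generate a MIST-style arithmetic statement such as '4+8-6+3-7+5'
    whose result is a single digit (0-9); retry until the constraint holds."""
    while True:
        digits = [random.randint(1, 9) for _ in range(n_terms)]
        ops = [random.choice('+-') for _ in range(n_terms - 1)]
        result, expr = digits[0], str(digits[0])
        for op, d in zip(ops, digits[1:]):
            result = result + d if op == '+' else result - d
            expr += op + str(d)
        if 0 <= result <= 9:
            return expr, result

print(*make_statement())  # e.g. 4+8-6+3-7+5 7
```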
Data Acquisition
During the stress task, the fNIRS signal was recorded using the fNIRS system designed and developed in the NIR Laboratory of the School of Electrical and Computer Engineering at the University of Tehran. 35 This system operates at two wavelengths, 730 and 850 nm, with a sampling frequency of 3.8 Hz. In this research, one detector was placed 1.5 cm (near channel) and another 3 cm (far channel) away from the light source. The probes were placed on the forehead at the FP2 position of the international 10-20 electroencephalographic system (Figure 3).
Pre-processing
For the fNIRS signals obtained at 730 and 850 nm, trends were removed by computing the output of a moving-average filter over 120-s windows before and after each temporal sample and subtracting the filter output from the initial signals.
Thereafter, the oxy-Hb and deoxy-Hb signals were extracted by applying the modified Beer-Lambert law. High-frequency noise was removed from the oxy-Hb and deoxy-Hb signals using a 6th-order Butterworth low-pass filter with a cutoff frequency of 0.9 Hz. Motion artifacts in the signals, which arise from head movements and probe displacement, were removed using the wavelet-based method proposed by Molavi and Dumont. 36

One of the main problems with the fNIRS signal is extracting the hemodynamic response associated with brain activity, because the frequency of variations caused by the respiratory system and the Mayer wave overlaps with the frequency of variations related to brain activity. To remove these physiological interferences from the fNIRS signal, we used a method presented in our previous research. 37 In this method, hemodynamic changes related to brain activity are extracted using the simultaneous recordings of the near and far channels. Figure 4 shows a sample of the preprocessed oxy-Hb and deoxy-Hb signals recorded during the stress task.
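The detrending and low-pass filtering steps described above can be sketched as follows; the exact window handling at the signal edges and the filter implementation in the original work may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 3.8  # sampling frequency of the fNIRS system (Hz)

def detrend_moving_average(signal, window_s=120.0, fs=FS):
    """Remove slow drift by subtracting a centred moving average computed
    over +/- window_s seconds around each sample (shorter at the edges)."""
    half = int(window_s * fs)
    trend = np.array([signal[max(0, i - half):i + half + 1].mean()
                      for i in range(len(signal))])
    return signal - trend

def lowpass(signal, cutoff=0.9, fs=FS, order=6):
    """Zero-phase 6th-order Butterworth low-pass filter at 0.9 Hz."""
    b, a = butter(order, cutoff / (fs / 2), btype='low')
    return filtfilt(b, a, signal)
```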
Feature Extraction
The features extracted from fNIRS signals are typically linear features in the time domain, including the mean, variance, kurtosis, and skewness. In this research, in addition to the linear features, nonlinear features were extracted that are rarely seen in other studies: approximate entropy, 38 fractal dimension, 39 detrended fluctuation analysis, 40 and recurrence quantification analysis measures (recurrence rate, determinism, entropy, laminarity). 41 Since previous research has shown that brain hemodynamic changes are more evident in the oxy-Hb signal, these features were extracted from the oxy-Hb signal only.
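Of the nonlinear features listed above, approximate entropy is the most algorithmically involved; a minimal implementation following Pincus's definition is sketched below. The embedding dimension m = 2 and tolerance r = 0.2 × SD are common defaults and may differ from the settings used in this study.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy of a 1-D signal: a nonlinear regularity measure,
    where lower values indicate a more predictable signal."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])   # embedded vectors
        # Chebyshev distance between all pairs of embedded vectors.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)                     # match fractions
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```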
In this research, 50 signals were recorded from 50 subjects overall. After segmenting and labeling each signal, the linear and nonlinear features described above were extracted from the parts corresponding to the low-stress, high-stress, rest, and baseline states. Thereafter, to make the features independent of the individual, each feature extracted in the rest and stress stages was divided by the value of that feature obtained from the baseline state.
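The baseline normalization step amounts to an element-wise division; a small helper illustrating it is shown below.

```python
import numpy as np

def normalize_by_baseline(stage_features, baseline_features):
    """Divide each feature computed on a rest/stress segment by the value of
    the same feature computed on the subject's baseline segment, making the
    features comparable across individuals."""
    baseline = np.asarray(baseline_features, dtype=float)
    baseline[baseline == 0] = np.finfo(float).eps  # guard against zero division
    return np.asarray(stage_features, dtype=float) / baseline
```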
The Method for Selecting the Optimal Feature Subset and Classification

Selecting an optimal feature subset from all features reduces the error of the classification results. In this research, the mutual information (MI) method was used to find the optimal feature subset. 42 In this method, the features are ranked by the extent of their dependence on the output class; the goal is therefore to find the first n features in the ranking that yield the best classification results. For data classification, considering the limited amount of data, the leave-one-out (LOO) method and an SVM classifier were used. Figure 5 shows the block diagram of the procedure for selecting the optimal feature subset and classifying the data. All data are divided into three categories: training, selection, and test data. One sample is set aside as the test data (according to the LOO method); of the remaining data, 80% are used for training and 20% as selection data. The following stages are then performed:
1. The features are ranked by the MI method using the training data.
2. A classifier is trained on the first n features of the ranking, with n varying between 1 and the total number of features; at this stage, n trained classifiers are obtained.
3. Using the classifiers trained in stage 2, the selection data are classified for each number of features.
4. According to the LOO method, stages 1-3 are repeated for all data. The results are then averaged for each number of features, and the first n features in the ranking for which the classification of the selection data is maximized are selected as the optimal feature subset.
5. After the optimal feature subset is specified in stage 4, the test data are classified.
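The procedure above can be sketched with scikit-learn as follows. For brevity, this sketch selects the optimal n inside each fold rather than first averaging the selection results across all folds (stage 4 above); the SVM kernel and the split seed are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import LeaveOneOut, train_test_split
from sklearn.svm import SVC

def loo_mi_svm(X, y, seed=0):
    """LOO evaluation with MI feature ranking: inside each fold, rank
    features on 80% of the remaining data, pick the top-n maximizing
    accuracy on the 20% selection split, then classify the held-out
    sample. X: (samples, features) array; y: class labels."""
    correct = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        X_rem, y_rem = X[train_idx], y[train_idx]
        X_tr, X_sel, y_tr, y_sel = train_test_split(
            X_rem, y_rem, test_size=0.2, random_state=seed, stratify=y_rem)
        ranking = np.argsort(mutual_info_classif(X_tr, y_tr))[::-1]
        best_n, best_acc = 1, -1.0
        for n in range(1, X.shape[1] + 1):
            clf = SVC(kernel='rbf').fit(X_tr[:, ranking[:n]], y_tr)
            acc = clf.score(X_sel[:, ranking[:n]], y_sel)
            if acc > best_acc:
                best_n, best_acc = n, acc
        clf = SVC(kernel='rbf').fit(X_rem[:, ranking[:best_n]], y_rem)
        pred = clf.predict(X[test_idx][:, ranking[:best_n]])[0]
        correct.append(pred == y[test_idx][0])
    return np.mean(correct)  # LOO classification accuracy
```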
Results
The aim of this research is to classify between stress and rest states, as well as between different levels of stress, using the fNIRS signal, and to investigate the effect of nonlinear features in this regard. For this purpose, classification was first performed based on the linear and nonlinear features separately, and then on their combination, for different signal lengths. The results were then compared to determine the effect of the nonlinear features as well as the minimum signal length that yields the best classification result.
To differentiate between the rest and stress states, all stress levels were considered the active state, while the rest states were regarded as the inactive state. The features were extracted from the active and inactive signals at signal lengths of 5, 15, 25, and 35 seconds after the initiation of each stage, in order to find the most suitable signal length for classifying these two states. Table 1 shows the results of classifying the rest (inactive) and stress (active) states, based on the linear and nonlinear features separately as well as in combination. The maximum classification accuracy was 95.34%, 74.66%, and 96.92% at a signal length of 15 seconds for the linear features, the nonlinear features, and their combination, respectively. The results show that as the signal length exceeded 15 seconds, the classification accuracy declined. Table 1 clearly reveals that in separating the active and inactive states, linear features detect the difference far better than nonlinear features, such that combining linear and nonlinear features did not improve the results significantly. The optimal feature subset for which the maximum classification accuracy was obtained in differentiating stress from rest (using the combination of linear and nonlinear features) comprises, in ranking order, the mean, slope, skewness, and approximate entropy. This subset also confirms that linear features are more influential in differentiating active from inactive states: of the four selected features, the first three in the ranking are linear, with only the fourth being nonlinear. Nevertheless, considering the difference in the apparent shape of the signal in the active and inactive states, this is not unexpected, as the signals in these two states differ markedly in slope and mean. Our method therefore functioned properly in selecting the optimal feature subset, choosing the mean and slope features in the first ranks.
We found no other investigation that used the fNIRS signal alone to detect stress whose results could be compared directly with this study. Al-Shargie et al 43,44 employed simultaneous recording of fNIRS and EEG signals to detect stress; their maximum accuracy in differentiating rest from stress using EEG and fNIRS was 91.7% and 84.15%, respectively, rising to 95.1% with the combination of the two signals. Given the limited use of the fNIRS signal in previous research, for better comparison we also considered studies that used the EEG signal for stress detection. Jun et al 45 and Smitha et al, 25 using 14-channel EEG recordings, reported classification accuracies between the inactive (rest) and active (stress) states of 96% and 85.17%, respectively. In this research, using the fNIRS signal alone, the results obtained for the classification of active and inactive states are superior to the findings of this other research.
Table 2 shows the results of classifying the high and low stress levels. The maximum classification accuracy for the linear, nonlinear, and combined features at a signal length of 35 seconds is 79.11%, 83.48%, and 88%, respectively. According to these results, the maximum classification accuracy for the different stress levels was obtained by combining the linear and nonlinear features. The optimal feature subset from which this result is derived includes approximate entropy, kurtosis, fractal dimension, the mean, detrended fluctuation analysis, the recurrence rate, and the entropy measure of recurrence quantification analysis. The results in this table indicate that the linear and nonlinear features, taken separately, are not markedly superior to each other; their combination, however, yields a significant improvement in classifying between low and high stress levels.
Jun et al 45 induced high and low stress levels in volunteers using mathematical and Stroop tasks. During the task, 14-channel EEG signals were recorded, and based on the features extracted from these signals, the best result in classifying low and high stress levels was 75%. Hou et al 46 followed the same approach as Jun et al 45 in terms of signal recording and the stress task when classifying high and low stress levels. In another study, 47 the stress task was based on driving in the real world, which induces stress at high and low levels; ECG, EMG, GSR, and respiration signals were recorded during the task, and based on the extracted features and their combination, an accuracy of 97% was achieved in classifying high and low stress levels. Although this result is remarkable, it should be noted that several different signals were used simultaneously, causing extreme mobility constraints for the person and making the approach impractical in the real world, where the person should be able to perform activities freely and without limitation.
Discussion
In this research, high and low stress levels as well as rest states were classified using the fNIRS signal alone. The fNIRS signal was recorded from 50 volunteers during a stress task. For nonlinear processing of the fNIRS signal, nonlinear features were extracted in addition to linear features and used in the classification process. Furthermore, the minimum signal length needed to achieve the maximum classification accuracy was examined. As noted in the Results section, the majority of related works use multichannel recordings or several different signal types, which limits their applicability in the real world.
In this study, a maximum accuracy of 88.72% was achieved in classifying high and low stress levels, and 96.92% in classifying the stress and rest states. Comparing these results with similar studies, we were able to achieve better results than other similar investigations by recording the fNIRS signal only from a single site over the prefrontal cortex, with near and far channels. These results clearly indicate the potential of the fNIRS signal as a useful tool for early detection of stress in humans in the real world.
Figure 1. The graphical representation of the stress task.
Figure 2. The block diagram of the different stages of the task.
Figure 3. The schema of the locations of the source (red) and optical detectors (blue) relative to the FP2 position of the international 10-20 EEG system (yellow).
Figure 4. The signals of changes in oxy-Hb and deoxy-Hb concentrations of the brain hemodynamic response, recorded during the stress task. The vertical lines separate the different stages of the task.
Figure 5. The general block diagram of the stages of selecting the optimal feature subset and classification.
Table 1. The classification accuracy of the stress (active) state compared to the rest (inactive) state.
Table 2. The classification accuracy of the high and low stress states.
The effects of phytosomal curcumin supplementation on clinical symptoms, and inflammatory and oxidative stress biomarkers in patients with migraine: A protocol for a randomized double-blind placebo-controlled trial
Objective: Migraine is one of the most common diseases. Curcumin with anti-oxidative and anti-neuroinflammatory properties might have beneficial effects in migraine patients. This study will be conducted to evaluate the effects of a phytosomal preparation of curcumin on clinical signs, oxidative stress, and inflammatory parameters in patients with migraine. Materials and Methods: This is a randomized, double-blind, placebo-controlled, clinical trial in which, 60 patients with migraine will be assigned to receive a daily dose of 250 mg of phytosomal curcumin for 8 weeks (intervention group) or 250 mg maltodextrin as a placebo for the same duration (control group). Before and after the study, frequency, duration, and severity of the attacks, quality of life and sleep, mood status, high-sensitivity C-reactive protein (hs-CRP), Nitric Oxide (NO), and oxidative stress factors will be measured. Conclusion: It seems that phytosomal formulation of curcumin (a solid dispersion preparation of curcumin with phosphatidylserine) with high bioavailability, can cross the blood-brain barrier (BBB) and result in decreased neuroinflammation, oxidative stress, and neurotoxicity. This way, phytosomal curcumin might lead to reduction of headaches and other complications of migraine and increase the quality of life of patients with migraine.
Introduction
Migraine, a chronic neurovascular disorder, is one of the most common diseases and is considered a primary cause of disability (Agosti, 2018). The prevalence of migraine is estimated at about 12-14% globally, and it is more prevalent in women and in 35-45 year old individuals (Economics, 2018; Lipton et al., 2007). Based on the Global Burden of Diseases, Injuries, and Risk Factors (GBD) studies, in 2016 about three billion people suffered from migraine or tension-type headaches. The disability weight of migraine, moreover, has been shown to be much higher than that of tension-type headache: migraine caused 45.1 million years of life lived with disability (YLDs), whereas tension-type headache caused only 7.2 million YLDs. Women between 15 and 49 years of age were categorized as the most important age group with migraine headaches (Stovner et al., 2018). Although it was recently reported that the prevalence of migraine is lower in Asian countries than in European countries, North America, and Australia (Karimi et al., 2020), the prevalence of this disease is about 14% in Iran, which is higher than the global range (Farhadi et al., 2016).
About 90% of migraineurs experience moderate or severe pain; during headache attacks, the ability to function is reduced in about 75% of patients, and the remaining 25% need bed rest during the attacks (Lipton et al., 2007). Migraine attacks negatively affect quality of life and productivity in both private and social life (Lantéri-Minet et al., 2011; Farhadi et al., 2016). It has also been shown that panic and anxiety disorders are dramatically more prevalent among migraineurs than among others (Sareen et al., 2006; Jette et al., 2008), and anxiety rates increase notably with the frequency of migraine attack episodes. Indeed, findings from a very recent systematic review showed that anxiety is a major comorbidity of migraine, with a mean of ~43% of patients experiencing comorbid symptoms (Karimi et al., 2020). Another salient fact about migraine is its large economic impact: studies in Europe showed that the average per-person direct and indirect costs of migraine were €1222 per year, for a total of €111 billion per year across 27 EU nations (Linde et al., 2012). In Australia, the cost of migraine was estimated at approximately $35.7 billion in 2018 (Economics, 2018).
It has been proposed that migraine originates in the central nervous system (CNS), while other evidence suggests that neurovascular and metabolic changes in the brain, with dysfunctional intracranial and extracranial blood vessels, trigger migraine (Gerring et al., 2018). Nevertheless, the etiology and pathophysiology of migraine are highly complex and not fully understood, and to date the main cause of migraine attacks remains unclear (Tajti et al., 2011). Diverse parameters have been considered migraine triggers, such as stress, neuroendocrine imbalances, hormones, oxidative stress, inflammation, too much or too little sleep, and unhealthy diets and allergenic foods (Hauge et al., 2010; Finocchi and Sivori, 2012; Theeler et al., 2010; Kursun et al., 2021; Borkum, 2016). Recent evidence suggests that inflammation, through activation of nociceptive sensory neurons, plays a crucial role in the pathology of pain initiation as well as pain persistence (Sommer and Kress, 2004). In this regard, an accumulating body of evidence indicates that stress, oxidative stress, and inflammation play a significant role in migraine attacks (Borkum, 2016; Kursun et al., 2021; Edvinsson et al., 2019). The immune system increases headache through the production of inflammatory factors, namely cytokines, including tumor necrosis factor (TNF), interleukin 1 (IL-1), and adiponectin (Kursun et al., 2021). Injection of TNF induces headache, whereas TNF antibody decreases pain in humans (Bruno et al., 2007). Pro- and anti-inflammatory cytokines increase in the plasma during migraine attacks; higher levels of TNF-α and IL-6 have been observed in migraine patients compared with healthy individuals both during and between attacks (Yilmaz et al., 2010).
Although migraine is becoming one of the major health issues worldwide, to date there is no exclusive and comprehensive treatment approach, medical care, or pharmacological agent for its prevention or treatment (Katsarava et al., 2018).
To date, pharmacological care for the treatment of migraine headaches includes triptans, ergot derivatives, and analgesics (NSAIDs). Some other oral drugs, originally produced for the treatment of epilepsy, depression, or high blood pressure, are also applied for the prevention of migraine attacks. Finally, botulinum toxin A injection was approved in 2013 as a preventive therapy for migraine patients who do not respond to oral preventive medications. Considering the several adverse and unfavorable effects as well as the high costs of these medications (D'Amico and Tepper, 2008; Mayans and Walling, 2018), and given that patients with severe and/or frequent migraine need long-term preventive medication, several non-pharmacological methods, including nutraceuticals, herbal medicine, behavioral techniques, and acupuncture, have attracted significant attention for managing migraine in clinical settings (Puledda and Shields, 2018). Indeed, today, monotherapy is being replaced by multiple therapies in line with the multiplicity of targets (Mythri and Bharath, 2012). To this end, intense investigations are being conducted to assess combinations of modern/conventional pharmacological therapies with medicinal plants, including herbal bioactive compounds and phytochemicals (Kumar, 2006; Sparreboom et al., 2004).
Herbal medicine, as a safe, inexpensive, available, and accessible therapeutic approach, is becoming one of the most attractive fields of study for preventing and treating neurological disorders such as migraine. Medicinal plants can be considered a complementary and alternative method for preventing and treating migraine, since they are an acceptable and favorable therapeutic approach for most patients.
Curcumin is a yellow polyphenolic pigment, the principal polyphenol and key bioactive compound of turmeric, which has been used in traditional medicine for thousands of years. A wide range of beneficial properties is attributed to curcumin (Hewlings and Kalman, 2017; Bagherniya et al., 2018; Farhood et al., 2019; Mortezaee et al., 2019; Panahi et al., 2014; Parsamanesh et al., 2018; Shakeri et al., 2019; Javandoost et al., 2018; Kheiripour et al., 2021; Ghasemi et al., 2022; Atabaki et al., 2022). Its neuroprotective properties, as well as its anti-neuroinflammatory effects, have made it a focus of attention in neuroscience. The neuroprotective effects of curcumin are attributed to its antioxidant, anti-inflammatory, and anti-protein-aggregation activities (Cole et al., 2003). Several animal studies, in particular, have indicated the promising effects of curcumin in neurodegenerative disorders (Park and Kim, 2002; Zhang et al., 2011; Frautschy et al., 2001; Thiyagarajan and Sharma, 2004). Curcumin might have beneficial effects on migraine signs and symptoms: a very recent systematic review showed that curcumin supplementation reduced inflammation and oxidative stress and decreased the frequency, severity, and duration of migraine attacks (Mohseni et al., 2021). However, as noted in that review, the factors that decrease the efficacy of curcumin in the clinical setting are its poor absorption and low bioavailability; other forms of curcumin, such as nano-curcumin, curcumin-piperine, and phytosomal curcumin, are recommended to overcome these limitations (Mohseni et al., 2021).
Curcumin bioavailability increases with the phytosomal formulation of curcumin (a complex of curcumin with phosphatidylserine). Its physicochemical properties, including the amphiphilic nature arising from the abundance of phospholipids in phytosomes, allow dispersion in both hydrophilic and lipophilic media (Mirzaei et al., 2017). Phosphatidylserine, moreover, is abundant in the myelin of the healthy human brain, and its amount in the grey matter doubles from birth to 80 years of age (Glade and Smith, 2015). Phosphatidylserine is needed to maintain the health of nerve cell membranes and myelin. The absorption efficacy of oral phosphatidylserine is high, and it can cross the blood-brain barrier (BBB) after being absorbed into the bloodstream (Glade and Smith, 2015).
Altogether, it seems that phytosomal curcumin, with its high absorption and potential to cross the BBB, could be useful as a novel treatment agent in migraine patients. Different forms of curcumin, such as nano-curcumin, have previously been assessed in migraine patients and produced promising results in terms of clinical symptoms as well as reductions in inflammation, oxidative stress, and other related factors (Sedighiyan et al., 2022; Rezaie et al., 2021; Abdolahi et al., 2021; Parohan et al., 2021). However, to the best of our knowledge, no study has evaluated the effects of phytosomal curcumin in migraine patients. Thus, this study will be conducted to evaluate the effects of phytosomal curcumin on clinical outcomes and on oxidative stress and inflammatory parameters in migraine patients.
Materials and Methods
This protocol was written based on the CONSORT SPIRIT 2013 guidelines (Chan et al., 2013). This study is a parallel, randomized, double-blind, placebo-controlled clinical trial in which a total of 60 migraine patients will be included. The trial design is illustrated in Figure 1. Patients will be recruited from neurology clinics (Imam Moosa, Sadr, and Khorshid) affiliated with Isfahan University of Medical Sciences from July 2022. To be included in the study, subjects will be screened by an experienced neurologist according to our inclusion and exclusion criteria.
Eligibility criteria
Participants will be included in the study according to the following criteria:
Inclusion criteria
a) migraine without aura, diagnosed by an expert neurologist (FK) based on the ICHD-3 criteria (Munoz-Ceron et al., 2019);
b) a history of migraine for at least one year;
c) age between 20 and 60 years;
d) routine and stable treatment for the management of migraine headaches for at least 4 weeks before the start of the study; and
e) willingness to participate in the study and to complete the written consent form.
Exclusion criteria
a) other types of headache, such as tension-type headache, cluster headache, medication-overuse headache, trigeminal autonomic cephalalgias, or menstrual headache;
b) pregnancy or lactation;
c) a history of chronic diseases (i.e., diabetes, high blood pressure, gastrointestinal (GI) disorders such as Crohn's disease and ulcerative colitis, cancer, or liver, kidney, or thyroid disease);
d) any change in pharmacological treatment, such as the type or dose of drugs;
e) migraine with aura;
f) other neurological disorders;
g) taking antioxidant supplements during the 3 months before the study;
h) following a special diet during the 3 months before the study;
i) allergies to herbal medicine, particularly to turmeric and ginger;
j) smoking or alcohol consumption; or
k) poor compliance with the intervention (less than 80%).
Randomization
Eligible patients who fulfill the inclusion criteria will be enrolled in the study. The included patients will first be randomized (1:1) into the intervention or control group. We will use a permuted-block randomization approach, stratified by migraine severity, with a block size of 4, and a table of random numbers to perform the random assignment. Patients will be enrolled and assigned to the intervention and control groups by a well-trained nutritionist, and allocation concealment will be ensured using sequentially numbered containers.
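A minimal sketch of permuted-block allocation with blocks of 4, to be run separately within each migraine-severity stratum; the arm labels and seed are illustrative.

```python
import random

def block_randomize(n_patients=60, block_size=4,
                    arms=('curcumin', 'placebo'), seed=2022):
    """Permuted-block 1:1 allocation: shuffle balanced blocks of size 4
    and concatenate them until n_patients assignments are produced."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_patients:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_patients]

print(block_randomize()[:8])  # first two blocks of the allocation list
```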
Intervention
Patients in the intervention group will receive one capsule/day containing 250 mg phytosomal curcumin (250 mg containing 20% curcuminoids and 20% phosphatidylserine; Indena SpA, Milan, Italy), for 8 weeks. On the other hand, patients in the control group will receive one capsule/day containing 250 mg maltodextrin, for the same duration. Participants will be asked to take each capsule one hour after breakfast. Moreover, we will instruct the patients in both groups to follow their usual diet and physical activity as well as their medication therapy exactly as their neurologist prescribed.
Blinding
Capsules (curcumin and placebo) will be labeled A and B by the company in identical packaging; in addition, the capsules will be manufactured in the same shape, size, color, and odor. Investigators, patients, laboratory staff, outcome assessors, and data analyzers will be blinded to treatment assignment until the final analyses are conducted.
Ethics approval
The whole protocol has been approved and accepted by the ethics committee of Isfahan University of Medical Sciences (IR.MUI.RESEARCH.REC.1400.110). The study is registered in the Iranian Registry of Clinical Trials, IRCT (IRCT20201129049534N2). All patients will be asked to complete the written consent form before being included in the study. Participant data will be stored at a secure site, and a unique ID number will be used for each patient's collected data, laboratory specimens, and reports. Only the research team will have access to the collected data during the course of the research, and the data will remain strictly confidential. The corresponding author will also have access rights to the dataset. In addition, to enable international prospective meta-analyses, the corresponding authors will share the anonymized data with other researchers.
Safety consideration
Curcumin is safe even at high doses: it has previously been shown that administration of 8,000 mg curcumin/day for 3 months was safe, without any toxicity apart from some gastrointestinal discomfort, specifically nausea and diarrhea (Hsu and Cheng, 2007; Chainani-Wu, 2003). Thus, no considerable adverse effects are anticipated from consumption of the curcumin and placebo capsules at the mentioned doses. However, if minor side effects occur, they will be reported to the Ethics Committee of Isfahan University of Medical Sciences for decision-making.
Power calculation and sample size estimates
The sample size was calculated using the formula suggested for randomized clinical trials, with a type I error of 5% (α = 0.05) and a type II error of 20% (β = 0.2; power = 80%). Nitric oxide (NO) level was considered the main outcome, and based on a previous study (Zareie et al., 2020), the required sample size was calculated as 23 persons per group. Allowing for potential dropout, 30 patients will be assigned to each group.
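For reference, the standard per-group sample-size formula for comparing two means is n = 2σ²(z₁₋α/₂ + z₁₋β)²/Δ². The sketch below evaluates it; the standard deviation and expected between-group NO difference shown are illustrative placeholders, not the values taken from Zareie et al. (2020).

```python
import math
from scipy.stats import norm

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (sigma * z / delta) ** 2)

# Illustrative inputs (sigma = 6, delta = 5) yield 23 per group,
# matching the figure stated in the protocol.
print(n_per_group(sigma=6.0, delta=5.0))
```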
Outcome assessment
After screening based on the eligibility criteria, participants will be asked to complete a sociodemographic questionnaire covering sex, age, education, medical history, drugs, and food supplements (Table 1). To determine participants' dietary intake of energy, macronutrients, and micronutrients relevant to migraine, we will use a 3-day food record covering two weekdays and one weekend day. A well-trained nutritionist blinded to the study protocol will instruct participants on how to complete the 3-day food records, which will be obtained at baseline and after 8 weeks of intervention. The data will then be entered into the Nutritionist IV software to calculate the patients' energy and nutrient intake. A 24-hour recall will also be used to determine participants' level of physical activity.
Migraine assessment
The migraine headache characteristics, including the monthly frequency and duration of attacks, will be determined by a neurologist (FH). Patients will then be asked to record the frequency and duration of their attacks in questionnaires during the intervention. The severity of pain will be evaluated using a visual analogue scale (VAS) ranging from 0 to 10: patients will be instructed to mark 0 if they experience no pain and 10 if they suffer agonizing pain (Zareie et al., 2020). The Headache Impact Test-6 (HIT-6) scale will also be used to assess the degree of disability due to migraine at baseline and at the end of the study. This questionnaire has six questions and expresses, as a score, the effect migraine has on a person's daily life. Each question offers five options (never, rarely, sometimes, most often, and always), scored 6, 8, 10, 11, and 13, respectively. The validity and reliability of this questionnaire were confirmed in a previous study (Ghorbani and Chitsaz, 2011).
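The HIT-6 scoring rule described above reduces to a lookup and a sum; a hypothetical helper:

```python
# Option-to-point mapping as stated in the text above.
HIT6_POINTS = {'never': 6, 'rarely': 8, 'sometimes': 10,
               'most often': 11, 'always': 13}

def hit6_total(answers):
    """Sum the six HIT-6 item scores; the total ranges from 36 to 78."""
    if len(answers) != 6:
        raise ValueError('HIT-6 has exactly six items')
    return sum(HIT6_POINTS[a.lower()] for a in answers)

print(hit6_total(['sometimes', 'always', 'rarely',
                  'most often', 'never', 'sometimes']))  # -> 58
```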
Assessment of quality of life
The Migraine-Specific Quality of Life (MSQ) questionnaire will be used to assess the quality of life of the migraine patients. The questionnaire consists of 14 questions assessing patients' quality of life over the last month; its validity in Iran was examined by Zandifar et al. (2013). Each question is scored from one (never) to six (always), and the scores are summed, giving a raw total from 14 (minimum) to 84 (maximum). The raw total is then converted to a scale of zero to one hundred, with a higher score indicating a higher quality of life (Zandifar et al., 2013). The conversion works as follows: the individual raw score minus 14 (the lowest total score), divided by 70 (the distance between the lowest (14) and highest (84) total scores), multiplied by 100.
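The MSQ rescaling formula translates directly into a small function:

```python
def msq_scaled(raw_total):
    """Convert an MSQ raw total (14-84) to the 0-100 scale described above:
    (raw - 14) / 70 * 100; a higher score means a higher quality of life."""
    if not 14 <= raw_total <= 84:
        raise ValueError('MSQ raw total must lie between 14 and 84')
    return (raw_total - 14) / 70 * 100

print(msq_scaled(49))  # -> 50.0
```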
Assessment of mood status (stress, anxiety, and depression)
A version of the DASS-21 questionnaire adapted for Iran will be used to assess mood at the beginning and end of this study (Samani and Joukar, 2007). The DASS-21 consists of 21 questions: seven for stress, seven for anxiety, and seven for depression. Each question is scored from zero (does not apply to me at all) to 3 (absolutely applies to me). The subscale scores are doubled, after which the severity of symptoms can be determined as shown in Table 2 (Lovibond and Lovibond, 1995).
Assessment of sleep quality
It has previously been shown that sleep is associated with episodic migraine (Alstadhaug et al., 2007; Vgontzas and Pavlović, 2018), so we will assess patients' sleep quality before and after the study using the Pittsburgh Sleep Quality Index (PSQI), previously validated for the Iranian population (Farrahi Moghaddam et al., 2012). The Pittsburgh questionnaire consists of 9 questions assessing bedtime, waking hours, the amount of sleep per day, and other aspects of sleep patterns and subjective sleep quality. Each question is scored between 0 and 3 (0 = not in the past month, 1 = less than once a week, 2 = once or twice a week, and 3 = three or more times a week).
Laboratory assessment
A well-trained phlebotomist will collect a 10 ml blood sample from each subject after 12 hours of fasting, before and after the study. The blood samples will then be centrifuged for 10 min at 2500 rpm at room temperature. Enzyme-linked immunosorbent assays (ELISA) will be used to measure serum levels of high-sensitivity C-reactive protein (hs-CRP), total antioxidant capacity (TAC), total oxidant status (TOS), malondialdehyde (MDA), superoxide dismutase (SOD), and nitric oxide (NO) (KiaZist, Hamedan, Iran).
Statistical methods
The analysis will be performed on both an intention-to-treat (ITT) and a per-protocol (PP) basis using SPSS software, version 22 (SPSS Inc., Chicago, IL, USA). Quantitative data will be reported as mean ± standard deviation (SD), and qualitative data will be presented as frequencies and percentages. Normality of the data will be assessed using the Kolmogorov-Smirnov test. Paired t-tests will be applied to evaluate within-group differences before and after the intervention, and independent t-tests will be used to compare the two groups at baseline and endpoint. Analysis of covariance (ANCOVA) will be applied to examine differences between the two treatment groups after adjusting for confounding variables. P-values less than 0.05 will be considered statistically significant.
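A sketch of the planned comparisons using open-source equivalents of the SPSS procedures (paired t-test, independent t-test, and baseline-adjusted ANCOVA); the column and group names are illustrative.

```python
from scipy import stats
import statsmodels.formula.api as smf

def analyze(df):
    """df: pandas DataFrame with columns 'group' ('curcumin'/'placebo'),
    'NO_before', and 'NO_after'. Returns the three planned tests."""
    # Within-group change across the whole sample (paired t-test).
    paired = stats.ttest_rel(df['NO_before'], df['NO_after'])
    # Between-group endpoint difference (independent t-test).
    grp = df.groupby('group')
    indep = stats.ttest_ind(grp.get_group('curcumin')['NO_after'],
                            grp.get_group('placebo')['NO_after'])
    # Group effect adjusted for the baseline value (ANCOVA via OLS).
    ancova = smf.ols('NO_after ~ NO_before + C(group)', data=df).fit()
    return paired, indep, ancova.summary()
```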
Discussion
This study is the first clinical trial assessing the effects of phytosomal curcumin on migraine symptoms and complications. Considering the high prevalence of migraine and its several complications, it is indispensable to find novel and safe treatment strategies to reduce pain and migraine attacks, which might result in increased quality of life and improved functional capacity of patients with migraine. Thus, considering the diverse range of beneficial effects of curcumin on different aspects of human health, particularly its promising effects on neurological disorders (Mohseni et al., 2021), the findings of this trial might be useful in the management of patients in the clinical setting.
In animal migraine models, nitric oxide concentrations were reported to be reduced in response to curcumin administration, suggesting that this reduction is mediated through the antioxidant and antinociceptive effects of curcumin (Bulboacă et al., 2017; Bulboacă et al., 2019). Inflammation and oxidative stress play significant roles in the pathogenesis of migraine (Parohan et al., 2019). Importantly, as previously documented, the antimigraine properties of curcumin might be related to its anti-inflammatory and neuroprotective properties (Mohseni et al., 2021). Indeed, the antioxidant effects of curcumin are mediated directly through the scavenging of reactive oxygen species (ROS) and indirectly through induction of the expression of antioxidant/detoxifying enzymes and scavengers, including catalase, superoxide dismutase, glutathione peroxidase, and HO-1, via an Nrf2-dependent pathway (Tapia et al., 2012). Furthermore, the anti-inflammatory properties of curcumin are due to its role in the activation of peroxisome proliferator-activated receptor-γ (PPAR-γ) and inhibition of the NF-κB signaling pathway (Jacob et al., 2008). Moreover, curcumin decreases neurotoxicity while increasing autophagy and brain-derived neurotrophic factor (BDNF), nerve growth factor (NGF), and glial cell line-derived neurotrophic factor (GDNF). These promote the growth, maturation (differentiation), maintenance, and survival of neurons (Mohseni et al., 2021; Borkum, 2018; Allen et al., 2013), which might lead to the reduction of migraine pain and other related complications (Figure 2).
Beneficial effects of curcumin phytosomes against a variety of conditions, including diabetic microangiopathy and retinopathy, cancer, osteoarthritis, and inflammatory diseases, have been documented (Mirzaei et al., 2017); hence, it may exert promising effects in migraine patients in our study.

Figure 2. Schematic summary of pathways depicting the possible effects of phytosomal curcumin on migraine and its potential related mechanisms. As shown in the figure, curcumin, a natural, accessible, safe, and inexpensive phytochemical, has some limitations, including low bioavailability, instability at physiological pH, and low solubility in water; thus, its combination with phosphatidylserine can effectively reduce the limitations of using curcumin. Phytosomal curcumin might have beneficial effects on migraine attacks and complications through several mechanisms, including reductions in oxidative stress, neuroinflammation, and neurotoxicity. On the other hand, it increases autophagy and brain-derived neurotrophic factor (BDNF), nerve growth factor (NGF), and glial cell line-derived neurotrophic factor (GDNF), all of which promote the growth, maturation (differentiation), maintenance, and survival of nerve cells (neurons).
Although this study will be the first randomized double-blind clinical trial investigating the effect of phytosomal curcumin on migraine, some limitations should be acknowledged, such as the relatively short duration of the intervention and the lack of long-term follow-up. In addition, owing to ethical considerations, it will not be feasible to assess the effect of phytosomal curcumin as a monotherapy.
This paper describes the protocol for a double-blind, placebo-controlled trial in which the effects of phytosomal curcumin supplementation on clinical symptoms as well as inflammatory and oxidative stress biomarkers will be assessed in patients with migraine. It is expected that oral supplementation with 250 mg/day of phytosomal curcumin for 8 weeks will relieve migraine signs and symptoms and reduce inflammation and oxidative stress. The findings of the current study will provide evidence-based information on the efficacy of curcumin as a complementary treatment in patients suffering from migraine.
Thrombin generation abnormalities in Quebec platelet disorder
Abstract Introduction Calibrated automated thrombograms (CAT) with platelet‐poor (PPP) and platelet‐rich plasma (PRP) have provided useful insights on bleeding disorders. We used CAT to assess thrombin generation (TG) in Quebec platelet disorder (QPD)—a bleeding disorder caused by a PLAU duplication mutation that increases platelet (but not plasma) urokinase plasminogen activator (uPA), leading to intraplatelet (but not systemic) plasmin generation that degrades α‐granule proteins and causes platelet (but not plasma) factor V (FV) deficiency. Methods Calibrated automated thrombograms was used to test QPD (n = 7) and control (n = 22) PPP and PRP, with or without added tranexamic acid (TXA). TG endpoints were evaluated for relationships to platelet FV and uPA, plasma FV and tissue factor pathway inhibitor (TFPI) levels, and bleeding scores. Results Quebec platelet disorder PPP TG was normal whereas QPD PRP had reduced endogenous thrombin potential and peak thrombin concentrations (P values < .01), proportionate to the platelet FV deficiency (R 2 ≥ 0.81), but unrelated to platelet uPA, plasma FV, or bleeding scores. QPD TG abnormalities were not associated with TFPI abnormalities and were not reproduced by adding uPA to control PRP. TXA increased QPD and control PRP TG more than PPP TG, but it did not fully correct QPD PRP TG abnormalities or improve TG by plasminogen‐deficient plasma. Conclusion Quebec platelet disorder results in a platelet‐specific TG defect, proportionate to the loss of platelet FV, that is improved but not fully corrected by TXA. Our study provides an interesting example of why it is important to assess both PRP and PPP TG in bleeding disorders.
| Thrombin generation
Calibrated automated thrombogram (CAT) analyses of PPP and PRP TG were performed as recommended, using autologous PPP to adjust PRP to 150 × 10⁹ platelets/L. 1 For some experiments, gel-filtered platelets (GFP) were tested after resuspension in commercial FV-deficient plasma (George King Bio-Medical), as described, 25 using platelet count-matched controls if samples contained <150 × 10⁹ platelets/L. TG was also assessed with purchased plasminogen-deficient plasma (Affinity Biologicals Inc.).
Thrombin generation (triplicate estimates) was assessed in accordance with ISTH recommendations and completed within 3-4 hours of sample collection. 1 PRP test wells contained 80 μL PRP and 20 μL PRP reagent (containing 0.5 pmol/L TF). Assays were done with or without added 10 mmol/L TXA (final concentration; Sigma-Aldrich), which fully blocks the profibrinolytic effects of QPD platelets. 15 TXA effects were also tested by replicate determinations (n = 14 sets of triplicate determinations, with and without drug) of plasminogen-deficient plasma. For some experiments, control PRP was tested with or without added 300 ng uPA per 10⁹ platelets (Research and Diagnostic Systems, Inc.) to mimic complete uPA release by QPD platelets (2 sets of triplicate determinations). Ten minutes before adding the FluCa reagent, some PRP samples were preincubated with platelet agonists (a combination of [final concentrations]: 10 µg/mL Horm collagen, Helena Laboratories; 10 µmol/L ADP, Sigma-Aldrich; and 50 mmol/L SFLLRN, Bachem Bioscience Inc). For all tests with additives, additives were added to both test and calibrator wells. TG was started by dispensing 20 μL of FluCa reagent (Stago Canada Ltd.) into each well. TG measurements were taken at 20-second intervals for 60 minutes at 37°C to evaluate the endogenous thrombin potential (ETP, nmol/L × min), peak thrombin concentration (nmol/L), time to peak (minutes), and lag time (minutes). 1 Areas under the curve were calculated manually for TG curves that did not return to baseline.
Plasma (n = 6 QPD, n = 14 controls) and platelet (n = 5 QPD, n = 11 controls) TFPI levels were assessed by an ELISA that uses a monoclonal capture antibody and a polyclonal detection antibody and recognizes natural and recombinant TFPI (Human TFPI DuoSet ELISA, R&D Systems). Total platelet protein (DC protein assay; Bio-Rad Laboratories) was determined as described. 7 Megakaryocyte TFPI was evaluated using previously generated megakaryocyte RNA-seq data (n = 3 QPD, n = 3 controls). 8
| Statistical analyses
Quebec platelet disorder data were compared to age- and sex-matched controls, and to all controls after analyzing the control data for age and sex differences. First-sample data were analyzed for participants with multiple determinations. Two-tailed Mann-Whitney tests, with Bonferroni correction for multiple comparisons, were used to assess differences in TG endpoints and protein levels, and in % differences in TG endpoints for simultaneous tests ± TXA. Linear regression was used to assess relationships between: (a) bleeding scores and TG endpoints in PPP and PRP. Replicate variability was also acceptable for PPP (based on 19 replicates for normal pooled plasma, 3 replicates for 1 QPD, and 3-5 replicates for 3 controls).

Figure 2. Thrombin generation findings for Quebec platelet disorder (QPD) and control platelet-rich samples. Top and middle panels respectively compare QPD and control thrombograms for PRP and GFP tested in FV-deficient plasma (all tested at platelet counts of 150 × 10⁹/L), and QPD and control PRP TG ETP endpoints (P values as indicated). Lower panels summarize associations (R², P values, and 95% confidence limits, as indicated) between QPD platelet FV antigen and QPD PRP TG findings for ETP and peak thrombin concentration.

As PPP TG data showed no significant sex differences in TG endpoints (P > .13) and no relationship to age (R² < 0.55, P > .29), QPD TG findings were compared to all control data. The addition of exogenous uPA to control PRP samples did not reproduce QPD PRP TG findings (Figure 3). Platelet-activating agonists increased TG by control and QPD PRP, without accentuating QPD TG abnormalities (Figures 3 and 4). Although TXA did not significantly improve TG by plasminogen-deficient plasma (respective P values: ETP P = .062, peak thrombin concentration P = .12, time to peak P = .49, lag time P = .64), it significantly improved TG by both QPD and control PRP, without correcting QPD abnormalities in ETP and peak thrombin concentrations (Figures 3 and 5). TXA improved TG by QPD PRP more than PPP, but this was also evident with control samples, for all endpoints except ETP (Figures 1, 3 and 5). The improvement in peak thrombin concentration with added TXA appeared greater for the "off-TXA" sample (18% vs 4% for the 2nd sample drawn "on-TXA").
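A minimal sketch of the endpoint comparisons described in the statistical analyses above, i.e. two-tailed Mann-Whitney tests with a Bonferroni adjustment; the endpoint names and data structures are illustrative.

```python
from scipy.stats import mannwhitneyu

def compare_tg_endpoints(qpd, control):
    """Two-tailed Mann-Whitney test per TG endpoint with Bonferroni
    correction for the number of endpoints compared; `qpd` and `control`
    map endpoint names (e.g. 'ETP', 'peak') to lists of values."""
    k = len(qpd)  # number of comparisons
    adjusted = {}
    for name in qpd:
        _, p = mannwhitneyu(qpd[name], control[name],
                            alternative='two-sided')
        adjusted[name] = min(1.0, p * k)  # Bonferroni-adjusted p value
    return adjusted
```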
| DISCUSSION
The main goal of our study was to reassess QPD TG abnormalities.
We found that QPD is associated with significant PRP, but not PPP, TG abnormalities, with strong associations between QPD platelet (but not plasma) FV levels and QPD PRP ETP and peak thrombin concentration. Interestingly, platelet-activating agonists improved PRP TG, as anticipated, 29 without accentuating the QPD PRP TG defect.
The addition of uPA to control PRP, to mimic full uPA release by QPD platelets, appeared to accelerate TG rather than recapitulate QPD abnormalities, possibly because the added uPA increased plasmin generation, which enhanced platelet activation, 30 and because, unlike QPD platelets, the control platelets were not deficient in FV or active PAI-1. We unexpectedly observed that TXA significantly improved the TG of both control and QPD PRP samples, more than it improved PPP TG. As TXA did not improve TG by plasminogen-deficient plasma, we suggest that TXA improves TG by reducing plasmin generation during ex vivo assessments of TG. We noted that TFPI (which has emerged as an important determinant of TG 3,4,31) is normal in QPD plasma and platelets, and that plasma TFPI showed an association with the lag time for QPD PRP samples. The QPD TFPI findings are interesting, as plasma FV serves as a carrier for TFPI and FV-TFPI binding inhibits FV activation and prothrombinase activity. 5,32-34 We did not find any associations between QPD ISTH-BAT scores and TG endpoints, platelet FV, platelet uPA levels, or plasma or platelet TFPI levels, but this is not surprising, as having QPD is the main predictor of QPD bleeding. 16

Figure 4. The effect of platelet-activating agonists on thrombin generation endpoints for Quebec platelet disorder (QPD) and control platelet-rich plasma. Panels compare control and QPD data for different endpoints (P values indicated).

Our findings suggest that the impaired QPD PRP TG is largely due to the platelet FV deficiency. However, TXA (which is very effective in treating and preventing QPD bleeding) did improve how well QPD platelets support TG ex vivo. Furthermore, some QPD PRP TG abnormalities (eg, reduced peak thrombin concentration) resemble those of PAI-1-deficient PPP tested with added tPA. 24 It remains possible that other pathological changes to QPD platelets contribute to their TG abnormalities, including the deficiency of MMRN1. We studied TG without thrombomodulin, as recommended for bleeding disorder investigations. 1,2

Figure 5. The effect of added tranexamic acid on thrombin generation by Quebec platelet disorder (QPD) and control platelet-rich plasma and platelet-poor plasma. The table inset compares TG endpoints for control and QPD PRP, tested with and without added drug (TXA). The panels compare the effects of TXA on PRP vs PPP as % differences in TG endpoints compared to baseline (P values indicated) for control and QPD samples.
TEACHING METHODS IN MBA AND LIFELONG LEARNING PROGRAMMES FOR MANAGERS
Teaching methods in MBA and Lifelong Learning Programmes (LLP) for managers should be up to date both in the content they convey and in the teaching methods they use. In terms of content, an integral part of MBA and Lifelong Learning Programmes for managers should be the development of participants' leadership competencies and their understanding of current leadership concepts. The teaching methods in educational programmes for managers as adult learners should correspond to the strategy of learner-centred teaching, which focuses on the participants' learning process and their active involvement in class. The focus on the participants' learning process also raises questions about whether the programme's participants perceive the teaching methods used as useful and relevant for their development as leaders. The paper presents the results of the analysis of the responses to these questions in a sample of 54 Czech participants in the MBA programme and lifelong learning programmes at the University of Economics, Prague. The data was acquired through written or electronically submitted questionnaires and was analysed in relation to the usefulness of the teaching methods for understanding the concepts of leadership, for leadership skills development, and for respondents' personal growth. The results show that the respondents most valued the methods that enabled them to get feedback, kept them active throughout the programme, and involved them in discussions with others in class. Implications for managerial education practices are discussed.
students.Teaching methods in managerial education should motivate participants to engage continuously in the learning process.They should also enable participants of the education programme to build upon their experience, reflect on it, and add theoretical background that will make them more effective in their managerial work and help them to develop their selfawareness (Waddock and Lozano, 2013).Managerial education also focuses on developing leadership competencies and skills.Although management and leadership are different in many ways (Kotter, 1996), they also overlap.At the level of the individual, this means that managers are expected to provide leadership and to acquire leadership knowledge, skills, and abilities, i.e. competencies (Day and Dragoni, 2015).For organizations, competent leaders are one of the basic requirements of their survival in the turbulent conditions of the modern world.The topic of leadership is thus an integral part of managerial education, because organisations cannot afford to have managers without leadership competencies.People in managerial positions tend to evaluate their leadership behaviour higher than when it's judged by their subordinates (see, for example, Mehdinezhad and Sardarzahi, 2015).Teaching methods in managerial education should thus provide participating managers with the opportunity not only to understand theoretical concepts and to develop specific leadership skills, but also to self-reflect and gain valuable feedback on their leadership behaviours.
Introduction
Managerial education and development takes place in formal programmes outside of the workplace as well as in informal training opportunities at work in a managerial position.Managers learn or adopt knowledge and skills that will allow them to carry out or improve their current or future professional roles (Sadler-Smith, 2006: 2).Studying for an MBA or in shortterm lifelong learning programmes are examples of formal programmes.The teaching methods used in them have gone through certain changes that can be described briefly as a shift from teacher-centred to learner-centred teaching.Learner-centred teaching focuses on the participants' learning process.The teacher's role is not to transmit knowledge from the instructor to the students, but to facilitate their learning.The emphasis is on using and communicating knowledge effectively to address enduring and emerging issues and problems in real-life contexts (Huba and Freed, 2000).This "facilitative" style of teaching creates an inspiring and psychologically safe environment in which learners explore the subject by themselves as well as in peer groups.This teaching style works best when learners already have prior knowledge of the subject as well as experience or existing skills (Beevers and Rea, 2010).It's very important for managerial education to respect its participants' actual learning needs and provide them with learning opportunities that are clearly linked to their everyday work.Managers approach education (similarly to adult learners) in a more utilitarian way than students or undergraduate Printed ISSN: 2336-2375 Leadership could be defined as a "process whereby an individual influences a group of individuals to achieve a common goal" (Northouse, 2016: 6).For a leader to influence others, he or she must be a person that others are willing to follow.According to Hogan and Kaiser (2005), people seek four essential characteristics in leaders: integrity, judgement, competence, and vision.The most important of these characteristics is integrity, which creates trust between the leader and his or her followers.According to the above-mentioned authors trust in one's superior predicts the entire range of desirable organisational outcomes: productivity, job satisfaction, and organisational commitment (Hogan and Kaiser, 2005).Integrity must be understood as a personal trait of being honest with oneself and others; it's aligned with one's values system and ethic beliefs.It's also connected to the ability of self-insight, of being open to feedback and willing to perceive the wider context and consequences of one's behaviour.In terms of the managerial education programmes, there are demands for them to contain ideas for the personal development of their participants that could contribute to a greater extent to their personal integrity and thus trustworthiness as leaders.According to Hall (2004: 154) "leader development is largely personal development" while a crucial aspect of personal development is self-awareness (Hall, 2004).Managerial learning thus should include not only the acquisition of relevant knowledge and skills, but also opportunities for increasing selfawareness.Which of the teaching methods can be used to achieve this?One appropriate framework is the so-called whole person learning, which is an extended model of experiential learning (also known as Kolb's learning cycle) that has gradually been advanced since the 1980s.Whole person learning exposes participants in learning programmes to "both direct and vicarious modes of 
participation" and enables them "cognitively, emotionally, and behaviourally to process knowledge, skills, and/or attitudes in a high intensity learning situation characterized by a high level of active involvement" (Hoover et al. 2010: 195).Managerial education programmes, such as the MBA or short-term LLP, should therefore use a wide spectrum of teaching methods that facilitate the cognitive processing and understanding of leadership concepts as well as the adoption of leadership skills, and encourage self-development.This is in accordance with Conger (1992), who suggested four primary approaches to leadership development: conceptual understanding, skills building, personal growth, and feedback.This situation, such as it is described, raises questions on the methods used in management programmes for leaders' development and their frequency, relevance, and effectiveness.With regard to learner-centred focus, an important criterion of the evaluation of the teaching methods applied is the managerial education programmes' participants' own perception and assessment of those methods.These are important questions to ask in all managerial education programmes.In our study, we focused on managerial programmes realised by the International School of Business and Management (ISBM) of the University of Economics in Prague.Therefore, the objective of the presented study was the analysis of the frequency and perceived usefulness of the teaching methods in the examined managerial programmes.The research questions were as follows: how frequently are the particular methods used in MBA programmes and short-term lifelong learning programmes (LLP)?What specific teaching methods did the participants in managerial education programmes, MBA programmes, and short-term lifelong learning programmes (LLP) consider useful for the conceptual understanding of leadership, leadership skills development, and personal development?How does the frequency of the methods used differ from their perceived usefulness for different purposes (namely the conceptual understanding of leadership, leadership skills development, and personal growth)? 1
Materials and Methods
The data used for this paper was collected within the Norway funds project on the basis of a questionnaire survey that took place in the spring of 2015. The respondents were participants of managerial education programmes (MBA and lifelong learning programmes) from both partner institutes involved in the project, i.e. the University of Economics, Prague (VŠE, CZ) and Sogn og Fjordane University College (NO). The collected results for both countries were first published at the EGPA Annual Conference in August 2015 (Bukve et al., 2015). For the purposes of this paper, only data for the Czech Republic was used, i.e. the answers of the participants of the lifelong learning and MBA programmes that are taught at the International School of Business and Management (ISBM) of the Faculty of Business Administration, VŠE. LLP programmes are one-offs, tailored to their participants' needs (as part of company training), and are one semester long. The length of study in the MBA programme is 2.5 years (a total of 90 ECTS). In order to approach the above-mentioned questions, we designed a survey. Prior to developing the survey, we identified the 14 different teaching methods used in the programmes under study. The list of methods is adapted from Daniel Jenkins's list of instructional strategies (Jenkins, 2013), taking into account the methods with relevance to the programmes under study. An appendix containing the definitions of all the relevant teaching methods was attached to the questionnaire to prevent misunderstandings. The methods were described as follows:
• Case study: participants examine written or oral stories highlighting a case of effective or ineffective leadership or of managing an organisation.
• Large group discussion: the instructor facilitates sustained discussion, asks or answers questions concerning the given topic with the entire class.
• Interactive lectures: the instructor presents information in 10-20-minute time blocks with periods of structured interaction and discussion between mini-lectures.
• Lectures: participants listen to instructor presentations lasting most of the class session.
• Reflective/experience writing: participants develop written reflections and analyses on their experiences (usually experience in the role of leader/manager).
• Self-assessment questionnaires: participants complete questionnaires or other diagnostic instruments designed to enhance their self-awareness in a variety of areas (e.g. communication style, personality type, leadership style, etc.).
• Role-playing: participants engage in activities where they act out roles according to a given scenario. The goal is to develop the desired (managerial) skills.
• Small group discussions: participants take part in small group discussions on the topic of leadership or other aspects of managerial practice, sharing their own experiences.
• Feedback: participants receive feedback from the lecturer or their colleagues.
• Simulations, model situations: participants engage in activities simulating complex problems and requiring final decision-making (e.g. simulations of team decision making, meetings, etc.).
• Research projects: participants actively research a leadership theory or other topic and present findings in writing.
• Short written exercises: participants complete given sentences, answer written questions, etc., designed to enhance understanding of the course content.
• Exams, knowledge tests: participants complete tests or exams designed to appraise their level of understanding of the given topic.
• Oral presentations: based on individual or team preparation, participants present knowledge of the area of management or leadership in oral presentations to other participants.
Respondents were asked about how often the teaching methods were used in their programme, and how useful they found the methods for different purposes. In this paper, we analysed the teaching methods' usefulness for the conceptual understanding of leadership, the development of leadership skills and personal growth. These three purposes are based on Conger's primary approaches to leadership development (Conger, 1992), with the exception of feedback, which is included in the list of teaching methods (see above). The students filled out the questionnaires online (based on a link sent to them) or on paper, always after completing a course or a part of the programme devoted to leadership. The overall number of completed questionnaires for the CZ was 54 (the response rate was
55%); of those, 60.7% were women and 30.5% men. Most of the respondents were participants in lifelong learning programmes (66.6%), the others were students of the MBA programme (33.3%). Descriptive statistical characteristics (mean, standard deviation, minimal and maximal values) were calculated in the statistical analysis of the collected surveys. The analysis was performed in the statistical language R (R Core Team, version 2017). The differences between the teaching methods were analysed using within-subject ANOVA.
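As an illustration of the analysis just described, the sketch below reproduces its two steps (descriptive statistics per teaching method, followed by a within-subject ANOVA across methods) in Python rather than in R, which the authors used; the file name and the column names "respondent", "method" and "rating" are assumptions introduced only for this example.

```python
# Minimal sketch of the analysis described above, assuming long-format data with one
# row per respondent x teaching method (rating = the 1-4 frequency score or the
# 1-5 usefulness score). Column and file names are hypothetical.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

ratings = pd.read_csv("ratings_long.csv")  # columns: respondent, method, rating

# Descriptive statistics per teaching method (mean, SD, min, max).
descriptives = (
    ratings.groupby("method")["rating"]
    .agg(["mean", "std", "min", "max"])
    .sort_values("mean", ascending=False)
)
print(descriptives)

# Within-subject (repeated-measures) ANOVA: the teaching method is the within-subject
# factor because every respondent rates all 14 methods.
anova = AnovaRM(
    data=ratings,
    depvar="rating",
    subject="respondent",
    within=["method"],
).fit()
print(anova.anova_table)  # F statistic, degrees of freedom, p-value
```

Post-hoc pairwise comparisons between methods, as reported in the Results below, would then typically be run on the same long-format table with a correction for multiple testing.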
Results
The frequency of the various methods used in teaching
The respondents were asked to report the frequency of the usage of each teaching method from the prepared list using a rating scale of 1 to 4 (1 = never, 2 = rarely, 3 = sometimes, 4 = often). The results are presented in Table 1. The respondents found discussions in small groups (3.50), larger groups (3.35), and feedback (3.35) to be the methods most frequently used in their type of training. The analysis revealed that there are significant differences between the frequencies of usage of the individual methods; F(13, 663) = 19.18; p < 0.001. The post-hoc tests showed specific differences between the individual methods (see the graph of the average frequency of the usage of the individual methods; line segments designate the standard error of the mean). The further development of the effective use of the teaching methods can be undertaken in the field of so-called reflective methods, which can, especially for managers with a lot of experience, significantly contribute to their further development as leaders. This is also supported by remembering to give feedback on activities carried out during full-time study as well as on homework assignments.
The usefulness of the methods for the conceptual understanding of leadership
Respondents were asked to evaluate the usefulness of the specific teaching methods for the conceptual understanding of leadership. They were provided with a five-level scale (from 1 = useless to 5 = very useful). See Table 2 for the results. The respondents of our research designated the so-called experience-based methods as the most useful for their conceptual understanding of leadership. These are, especially, feedback (4.52), simulations and model situations (4.50), small group discussion (4.33) and case studies (4.33). Short written exercises (2.94) and exams and knowledge tests (2.67) were seen as least useful for their conceptual understanding of leadership. Small group discussion (4.33) and oral presentation (4.07) seem to be useful for the conceptual understanding of leadership, i.e. methods that include sharing experience, but so are interactive lectures (4.11), which fittingly combine the instructor's contribution with discussion and with the experience and opinions of students, and case studies (4.33). Research projects (3.07), seldom used in this type of study programme, can also be judged as useful. In contrast, the least useful according to the respondents are the methods from the group of exams and knowledge testing.
There are significant differences among the represented methods in the assessment of their usefulness for the conceptual understanding of leadership (F(13, 689) = 27.08; p < 0.001). Post-hoc tests revealed specific differences between the individual methods (see the graph below; line segments designate the standard error of the mean).
The usefulness of the methods for leadership skills development
Respondents were asked to evaluate the usefulness of the teaching methods for leadership skills development. They were provided with a five-level scale (from 1 = useless to 5 = very useful). See Table 3. There are significant differences among the represented methods in the assessment of usefulness for leadership skills development (p < 0.00001). The analysis also revealed significant differences in the assessment of the usefulness of the teaching methods for leadership skills development (F(13, 689) = 27.85; p < 0.001). Post-hoc tests showed specific differences between the individual methods (see the graph below; line segments designate the standard error of the mean).
Knowledge level is an important basis for the further development of managers, but the focal point of the instruction is gradually shifting to the level of skills. It's not enough just to "know" or "memorize"; it's also necessary to know how to use and apply, i.e. to acquire a wide spectrum of skills (social, managerial, and others). For example, the concept of versatile leadership (Pavlica, Jarošová and Kaiser, 2015) places an emphasis on managers' need to adopt various different, even contradictory, but mutually complementary skills, together with their versatile, wide-ranging application in practice.
The usefulness of the methods for personal growth
Respondents were asked to evaluate the usefulness of the particular methods for their personal growth. They were provided with a five-level scale (from 1 = useless to 5 = very useful). The analysis also revealed significant differences in the perception of the usefulness of teaching methods for personal growth (F(13, 686) = 27.10; p < 0.001). Post-hoc tests showed specific differences between the individual methods (see the graph below; line segments designate the standard error of the mean). A two-way analysis reveals that the individual methods are assessed differently in terms of their frequency of usage and influence on personal growth (F(13, 667) = 4.98; p < 0.001).
The next graph shows how the frequency of the usage of the individual methods differs from the usefulness of the methods for personal growth. Especially Feedback (p = 0.003), Oral Presentation (p = 0.001), Reflective Methods (p < 0.001), Research Projects (p < 0.001), Role-Playing (p = 0.007) and Self-Assessments (p < 0.019) are among the methods that could stimulate participants' personal growth if they were used in teaching more often. Self-knowledge is understood as the cornerstone of leadership as well as of the further development of managerial skills. It also entails knowledge of one's own typical behaviour patterns, as well as awareness of one's strengths and weaknesses (Pavlica, Jarošová and Kaiser, 2015). One must deepen one's self-knowledge and so-called self-acceptance in order to continue one's personal development and to get to know and understand others (Rogers, 1961, in Pavlica, Jarošová and Kaiser, 2015).
Discussion
The limitations of this study may be perceived in the fact that the respondents of the survey were MBA and lifelong learning programmes participants who all come from the same educational institution, so the results can't be seen as representative.Also, only the participants, and not the course instructors, were asked to fill in the survey.However, as the methods used in the research were based on prior research studies from abroad, it is possible to discuss the results in their context.The studies mentioned are those of Allen and Hartmann (2009) and Jenkins (2013).Both are inspired (as is our study) by Conger's primary approaches to leadership development (personal growth, conceptual understanding, skill building and feedback), which were combined with different sources of learning commonly found in leadership development activities.
In the Allen and Hartmann study, the respondents were undergraduates who were asked to share their opinion on the way in which they would like to learn about leadership.The students showed a preference for developmental activities where the primary learning objective was individual personal growth and skill building.Jenkins's study brings an overview of leadership programmes from the perspective of educators.Three hundred and three leadership instructors from the USA, teaching inclass academic credit-bearing undergraduate leadership courses, were asked to participate.The instructors showed a preference for instructional strategies that emphasise class discussion, forms of conceptual understanding, and personal growth.On the other hand, they seldom used skill-building instructional strategies or traditional assessment.These results indicate that even leadership educators, though they pay less attention to skill development, lend a significance to in-class interaction and communication in leadership development programmes and do not overestimate traditional ways of assessment, such as tests.
Since our research study was focused on analysing the methods used in specific educational programmes, we concentrated on those that were relevant to them.Therefore we chose Jenkins's overview of teaching methods (Jenkins, 2013) as the basis for our survey.The research study thus didn't strive to provide a complete summary of teaching methods and approaches that can be used in educational programmes for managers.Other methods that are appropriate for programmes that develop leadership competencies were described and analysed by quite a few authors.Inspirational examples include peer-led team learning, in which specific problem-solving workshops comprised of small groups of students led by a specially trained peer leader (Dobson, Frye and Mantena, 2013) are included in the MBA programme, or the "live-case" intervention method, which consists of a CEO bringing to the classroom a strategic issue that he/she is currently struggling with to be discussed with students in real time (Rashford and De Figueiredo, 2010).The authors cited, like others (for example De Dea Roglio and Light, 2009) emphasised the significance of teaching methods for leaders' development that enable participants to actively participate in the learning process, but also reflect their current work and experience.Similar conclusions can also be inferred from our research study.
The statistical analysis showed a significant correspondence in the differences between the frequency of the teaching methods and their perceived usefulness for various purposes (conceptual understanding of leadership, leadership skills development, and personal growth).The results reveal the respondents' increased need to present their thoughts, opinions, and experiences (individually and in a team) orally in class.Reflective methods, research projects, role-playing, and simulations are other methods that should be used much more than they have been in the programmes assessed.In the case of leadership skills development and personal growth, the greatest difference between frequency and perceived usefulness was found in the feedback provided by a teacher or other class participants.If we consider that feedback is used rather often in the programmes assessed (see Table 1), it seems apparent that the participants consider the possibility of getting feedback as the greatest stimulus for the development of leadership competencies.
Conclusion
The results of the survey have confirmed the trends in education introduced in the introduction of this article, especially the limitation of traditional "teacher-oriented" teaching, and strengthening the use of modern "learner-oriented" teaching methods.Managerial education should entail, among other things, the development of its participants' leadership competencies.
The respondents of the research - MBA and lifelong learning programme students at the University of Economics, Prague - found the most useful methods to be those that keep them active over the course of the training and enable them to develop their understanding of leadership concepts, their skills, and their personal growth through getting feedback, sharing experiences and discussion, and solving various problem situations through role play, simulations or case studies. The results also suggest that respondents appreciate it most when the included activities are, once completed, followed by feedback from the instructor or other participants. The findings are also supported by the comparison of the frequency of the usage of the given methods and their perceived usefulness. It was shown that the participants of the assessed programmes would prefer most of the teaching methods (especially the interactive ones) to be used more often, except for lectures and exams (and, in the case of personal growth, even interactive lectures), where the frequency of usage was higher than their perceived usefulness. This leads to reflections on the best way for managerial education programmes to impart information and knowledge, or to the question of whether they should use other methods of assessment (for example research projects).
The results of the study have practical implications for similarly specialised educational programmes.We can recommend that instructors consider a wide range of teaching methods to meet the various purposes in leadership development while designing managerial training programmes.While they are teaching, they must bear in mind the stock that the participants place in the possibility of getting feedback for their individual inputs in the teaching process through active teaching methods.Using these teaching methods places an emphasis on creating the right atmosphere in the group to support the active participation of all education programme participants, as well as providing and accepting feedback.
The use of modern "learner-oriented" teaching methods places greater demands on the instructor, the level of his or her preparation, the ability to adapt the content and type of activities directly to the target group, and the ability to plan time. Meeting these demands isn't easy. If managerial education programme instructors don't have the proper training in relevant pedagogical competencies, don't work on their further development, or don't get professional feedback about their teaching, their adequate usage of the teaching methods is highly unlikely.
1
This article is the expanded version of an article published at the ERIE 2016 conference (13th International Conference on Efficiency and Responsibility in Education 2016) held at the Czech University of Life Sciences in Prague. The data was then analysed further. It contains other results that weren't part of the conference article in 2016.
Figure 1: Average frequency of the usage of the individual methods (source: own research)
Figure 2: Average usefulness of teaching methods for the conceptual understanding of leadership (source: own research)
If we carry out a two-way analysis, it reveals that the individual
Figure 3: Comparing the frequency of usage of the methods and their usefulness for the conceptual understanding of leadership (source: own research)
Figure 4: Average usefulness of the methods for leadership skills development (source: own research)
If we carry out a two-way analysis, it reveals that the individual methods are assessed differently in terms of their frequency of usage and influence on skills development (F(13, 663) = 4.82; p < 0.001). The next graph shows how the frequency of the usage of the individual methods differs from the usefulness of the methods for leadership skills development. The results indicate that increasing the frequency of the usage of certain methods (for example Feedback (p = 0.004), Oral Presentation (p < 0.001), Reflective Methods (p = 0.002), Research Projects (p < 0.001), Role-Playing (p = 0.014), Simulations (p = 0.028), etc.) could have an influence on the perception of the usefulness of the methods for skills development.
Figure 5: Comparing the frequency of the usage of the methods and their usefulness for leadership skills development (source: own research)
Figure 6: Average usefulness of the teaching methods for personal development (source: own research)
Figure 7: Comparison of the frequency of the usage of the methods and their usefulness for personal growth (source: own research)
Table 1 : Frequency of the methods used in teaching, 2015 (source: own research)
The results of the survey of the frequency of the various types of teaching methods used in MBA and lifelong learning programmes at the University of Economics, Prague can be considered very encouraging (see Tab. 1). It can be said that traditional teaching methods such as lectures (3.07), exams and knowledge tests (2.43), and short written exercises (2.30) are techniques that are less frequently used compared to learner-centred interactive methods such as small/large group discussion (3.50/3.35), case studies (3.19), and interactive lectures (3.26). Due to the target group of learners, one can appreciate the emphasis on sharing and exchanging experience, especially through discussions, but also the relatively frequent use of tailored preparative activating methods, such as simulation (3.22), role-play (3.02), and case studies (3.19).
Mechanical Impedance of Cerebral Material
The variation of the mechanical impedance of a cylindrical sample of cerebral material with frequency has been measured using a laser vibrometer. The studied matter is assumed to be homogeneous, isotropic and stationary. A multilayered mechanical model has been associated with the studied sample to simulate its vibration. The theoretical expression of the mechanical impedance has been determined on the basis of the mechanical/electrical analogy. A good adjustment of the theoretical model parameters allowed us to obtain good agreement between theory and experiment for the variation of the mechanical impedance with the sample vibration frequency.
Introduction
The head is the part of the body most threatened by fatal injuries in accidents. Brain injuries cause approximately 56,000 deaths and 83,000 disabilities in the United States each year [1]. The typical duration of loading in road accidents is between 1 ms and 50 ms, depending on the rigidity of the impacted area. This interval corresponds approximately to frequencies between 20 Hz and 1000 Hz. It is therefore essential to carry out measurements inside this frequency band. Because of the technical and legal impossibility of in vivo studies on humans, they have been supplanted by in vitro studies performed in a small proportion on humans [2][3][4], and largely on animals such as pigs [5][6][7][8] and monkeys [9][10][11].
The aim of this work is to develop a model to simulate the variation of the mechanical impedance (Z = force/velocity) of porcine cerebral material (a viscoelastic material [12]) over a frequency range from 60 Hz to 580 Hz. The studied material is assumed to be homogeneous, isotropic, and stationary.
Method
The sample, a mixture of grey and white matter taken from the cerebral cortex of a pig brain, is cylindrical, with diameter d = 2 cm, height h = 3 mm and mass M = 1 g. It is cut with a coring tool that extracts cylindrical samples by a helical descending motion. This method is commonly used in soft-tissue biomechanics [13] because it yields a cylindrical geometry suited to uniaxial tests, allowing the hypothesis of a uniform, unidirectional stress field along the axis of the cylinder. The sample is excited at its bottom surface with a sinusoidal force F = F0 exp(j2πft) provided by a vibrating pot (shaker) on which the sample is deposited. The vibration loading is therefore a normal tension/compression loading, in addition to the sample's own weight. A force sensor in contact with the vibrating plate of the shaker enables measurement of the force F applied to the underside of the sample. The velocity v of the top of the sample is measured by a laser velocity sensor. Processing of the experimental data in MATLAB allows us to deduce the force amplitude F0 as well as the velocity amplitude v0.
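The following is a minimal sketch of that processing step, written in Python rather than MATLAB: it estimates the force and velocity amplitudes at the excitation frequency and deduces the impedance modulus |Z| = F0/v0. The sampling rate, file names, and the assumption that both channels are recorded simultaneously with the same length are illustrative only.

```python
# Illustrative sketch: estimate the force amplitude F0 and velocity amplitude v0 at
# the excitation frequency, then deduce |Z| = F0 / v0. All file names, the sampling
# rate and the excitation frequency below are assumptions made for the example.
import numpy as np

def amplitude_at(signal, fs, f_excitation):
    """Amplitude of the spectral component of `signal` closest to f_excitation (Hz)."""
    n = len(signal)
    window = np.hanning(n)                      # reduces spectral leakage
    spectrum = np.fft.rfft(signal * window)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_excitation))
    # Normalising by the window sum (times 2 for the one-sided spectrum) converts
    # the FFT bin magnitude into an approximate sinusoid amplitude.
    return 2.0 * np.abs(spectrum[k]) / window.sum()

fs = 10_000.0            # sampling rate of the acquisition board (assumed), Hz
f_excitation = 400.0     # shaker excitation frequency for this run (assumed), Hz
force = np.loadtxt("force_sensor.txt")       # F(t) samples from the force sensor (assumed file)
velocity = np.loadtxt("laser_velocity.txt")  # v(t) samples from the laser vibrometer (assumed file)

F0 = amplitude_at(force, fs, f_excitation)
v0 = amplitude_at(velocity, fs, f_excitation)
print(f"|Z| = F0 / v0 = {F0 / v0:.3e} N.s/m at {f_excitation:.0f} Hz")
```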
Experimental Device
The experimental device (Figure 1) mainly includes a vibrating pot (shaker), a force sensor, a helium-neon laser velocity sensor and a computer with its acquisition board (HPVEE). Note that the experiment was carried out ten times, with different samples of the same dimensions taken from the same region (the cerebral cortex).
Modeling of the Mechanical Impedance
To determine the theoretical mechanical impedance Zth = F/v of the studied system, we associated with it the 5-layer mechanical model (Kelvin-Voigt type) (Figure 2) that we used in 2009 [14] to characterize porcine brain matter in terms of elastic modulus and internal damping. The vibration of the sample is then equated with the vibration of 5 cylindrical layers stacked one on top of the other. Each layer then has height h5 = h/5, diameter d and mass m5 = M/5.
To solve the problem we used the mechanical/electrical analogy (force-voltage analogy). Thus, our mechanical model was replaced by an electrical model (Figure 3) with resistors R5, inductances L5 and capacitors C5. In this circuit, V0 and V5 represent the input and output voltages of the circuit, respectively, and I0 and I5 are the input and output currents. Since the studied mechanical system is excited at a single end (the lower surface) while the other end (the upper surface) is free (not subject to any constraint), the output of our electrical model must be short-circuited (V5 = 0). The transfer matrix of this circuit relates the input quantities (V0, I0) to the output quantities (V5, I5), where Z̄5 and Z5 are the equivalent impedances of the inductance and of the resistance in series with the capacitance, respectively (Equations (2) and (4)).
It follows that, with the output short-circuited, the input and output quantities are related by Equations (5) and (6). These two equations allow us to deduce Equation (7). By returning to our mechanical model (KV model) and taking into account the mechanical/electrical analogy, the theoretical expression of the mechanical impedance Zth = F/v can then be written in the form of (7), except that the electrical impedances of (2) and (3) are replaced by their mechanical analogues.
Determination of the Parameters of the Model
By introducing the values of k5(f) and α5(f) already calculated in 2009 (Table 1) [14], we could compute, using MATLAB, the values corresponding to each vibration frequency of the sample. The variation of the modulus of the theoretical mechanical impedance of the 5-layer mass KV model as a function of frequency is then determined.
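To make the layered model concrete, the sketch below evaluates a driving-point impedance Zth(f) = F/v for a stack of five identical mass plus Kelvin-Voigt elements, working directly with two-port transfer matrices on the state (F, v); in the force-voltage analogy this corresponds to the electrical ladder of Figure 3 with a short-circuited output. The placeholder functions k5(f) and alpha5(f), the numerical values they return, the ordering of the mass and Kelvin-Voigt element within a layer, and the treatment of α5 as an equivalent viscous damping coefficient are assumptions of this sketch, not the values or exact formulation of [14].

```python
# Sketch of a 5-layer mass / Kelvin-Voigt cascade evaluated with mechanical two-port
# (transfer) matrices acting on the state (F, v). The free top surface (F = 0) plays
# the role of the short-circuited output of the electrical analogue.
import numpy as np

M_total = 1e-3      # total sample mass, kg (1 g)
n_layers = 5
m5 = M_total / n_layers

def k5(f):      # layer stiffness, N/m  -- placeholder for the tabulated values of [14]
    return 2.0e4

def alpha5(f):  # layer damping, N.s/m  -- assumed equivalent viscous damping
    return 5.0

def layer_matrix(f):
    """Two-port matrix of one layer: a rigid mass followed by a Kelvin-Voigt element."""
    w = 2.0 * np.pi * f
    z_kv = alpha5(f) + k5(f) / (1j * w)      # impedance of spring and damper in parallel
    mass = np.array([[1.0, 1j * w * m5],      # F_bottom = F_top + j*w*m5 * v (rigid mass)
                     [0.0, 1.0]])
    kv = np.array([[1.0, 0.0],                # velocity drop F/z_kv across the KV element
                   [1.0 / z_kv, 1.0]])
    return mass @ kv

def impedance(f):
    """Driving-point impedance F/v at the excited face, with a free (F = 0) top surface."""
    T = np.eye(2, dtype=complex)
    for _ in range(n_layers):
        T = T @ layer_matrix(f)
    # [F_bottom, v_bottom] = T @ [F_top, v_top] with F_top = 0  =>  Z = T[0,1] / T[1,1]
    return T[0, 1] / T[1, 1]

freqs = np.linspace(60.0, 580.0, 200)
Z = np.array([impedance(f) for f in freqs])
print(freqs[np.argmax(np.abs(Z))], "Hz  |Z|max =", np.abs(Z).max())
```

Sweeping this over 60-580 Hz while updating k5(f) and α5(f) at each frequency is, in outline, how a theoretical curve comparable to the measurements in the Results section can be produced.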
Results
The next curve represents the overlay of the theoretical and experimental mechanical impedance moduli.
From Figure 4 we can see that the values of k5(f) and α5(f) already determined in 2009 [14] model the mechanical impedance of the studied system very well.
Conclusions
Our 5-layer mass model of Kelvin-Voigt type, with frequency-dependent parameters, simulates very well the variation of the mechanical impedance modulus of a cylindrical sample of porcine brain tissue.
The theoretical mechanical impedance Zth = F/v used in this study can be employed in a finite element model of the brain to simulate its mechanical impedance.
This mechanical impedance reaches a maximum around 400 Hz, which means that the deformation is maximal at this frequency.
So, we can note that vibration at this frequency can cause damage to the brain tissue.
Figure 1. (a) The experimental device; (b) Diagram of the experimental device.
Figure 3. Electrical model analogous to our KV mechanical model.
The marxist philosophical basis of socialist literature and art
Abstract: Socialist literature and art are constructed on the basis of Marxist philosophy. The “people character” of socialist literature and art is based on the fact that Marxist philosophy abandons individualism and focuses on a wider population’s freedom and happiness. Therefore, socialist literature and art take people rather than individuals as the subject of expression. Realism, the basic principle of socialist literary and artistic creation, is based on the materialism of Marxist philosophy. The “typical environment and typical characters” proposed by Marxist classic writers for socialist literary and artistic creation are the method and principle formed by applying the philosophical concepts of the organic combination of the objective world and human subjectivity, and the organic combination of nature and social history to the concrete literary and artistic creation in Marxist theory. Marxist philosophy regards sensibility and aesthetics as important elements for literature and art to play a practical role. It believes that sensibility and aesthetics are not limited to the subjective world of people, but have the possibility of connecting with the external objective world. Therefore, socialist literature and art must pay attention to aesthetics.
Introduction
Socialist literature and art are a kind of literature and art form constructed by socialist ideology with Marxist theory as the guiding principle.To some extent, socialist literature and art are also a kind of practice that Marxist theory uses literature and art to participate in the construction of socialism.Any practice is built on a certain philosophical basis, and the philosophical basis of socialist literature and art is naturally Marxist philosophy.Essentially, the characteristics, creative principles and aesthetics of socialist literature and art are determined by its Marxist philosophical basis.Analyzing the Marxist philosophical basis of socialist literature and art can not only better understand the characteristics of socialist literature and art, the principles of creation and the internal decisive factors of their realization path, but also present the manifestation form of Marxist philosophy in literature and art in a clearer way, so as to have a new angle of understanding Marxist philosophy.In addition, socialist literature and art are a concept that is constantly being constructed.It should reject some sort of essentialist fixed explanation.However, no matter how it is constructed, its most basic principle content should always be fixed.For example, what is the goal of socialist literature and art and how to achieve the goal should be the core content that has always been adhered to in the process of dynamic construction.The analysis of the Marxist philosophical foundation of socialist literature and art may clarify these contents.
The Principal Position of Man and the Total Liberation of Mankind:
The marxisT philosophical foundaTion of The "people characTer" of socialisT liTeraTure and arT Marxist philosophy has been criticized for "seeing history but not humanity" (Peng, 2015, p. 148-152), which actually has some misunderstandings.It is not that Marxism does not pay attention to "people", but the "people" that Marxism pays attention to are not abstract people, but people in specific historical situations.Marxism does not only focus on the concept of "people" constructed on the basis of the self, but on a wider group of people (TAN, 2016, p. 30-37), that is, in Marxism, people are not the self, but the world.In the Marxist philosophical system, the common people in the world, in the specific historical reality, are always in the most central principal position.All the theoretical systems of Marxist philosophy are to find a path for comprehensive liberation of this specific group of "people".Marxist philosophy pays more attention to the unity of human liberation and the individuals' overall freedom, rather than pursuing the freedom and liberation of oneself and the groups in the same class (XU, 2012, p. 25-28).
Marx believed that theoretical deductions confined to words cannot make "people" free and liberated.Only through practice can we enter the real world and change the production relations that oppress most people.
Only then can human liberation and personal freedom be realized (MARX;ENGELS, 1960, p. 22-23).But this does not mean that Marx denies the important role that theory and art can play in the realization of human liberation and individual freedom.
As pointed out by some researchers, what Marx opposed was just the empty talk of self-closing in the cage of language instead of entering the real world and putting it into practice (LU, 2014, p. 125-134).That is why he criticizes the Young Hegelians: "They just use words against words (MARX; ENGELS, 1960, p. 40)."He hoped that theories and literature and art could influence people's thinking and understanding, thereby promoting people to enter the real world and realize social change.In this process, literature and art can play the role of "weapon of criticism (MARX; ENGELS, 1960, p. 9)".This forms the ideological motivation of socialist literature and art.This ideological motivation can unfold on two levels.One is in the field of spiritual production.Marx once regarded literary and artistic creation as a kind of spiritual production, and the products produced are to meet people's spiritual needs.In this field, how to realize the principal position of man in the sense of Marxist philosophy?If it is only to express the literary and artistic creators' subjective feelings, regardless of the life feelings of more people in the real world, only a few people can obtain spiritual pleasure and enjoyment in appreciating literary and artistic works.Most of them cannot have spiritual resonance, that is not the principal position of man in the sense of Marxist philosophy.Socialist literature and art are not created for a small number of people, instead, its benefits should benefit a wider group of people.The other is the role of socialist literature and art at the level of "weapon of criticism", that is, to promote changes in the real world.In order to achieve this goal, socialist literature and art must also leave the abstract "human" and "human" in a small range, and then pay attention to the larger group of "human" in historical reality.In socialist literature and art, this large-scale group of "human" in history is called "people" by some socialist literature and art theorists."People character" has become an important or even fundamental feature of socialist literature and art (WEN, 2022, p. 10-12).
The "people character" of socialist literature and art has undergone a process of dynamic development, and its meaning has changed in different historical periods.In the 1940s, "people character" was more developed at the level of who is literature and art for, that is, it emphasized that literature and art serve workers, peasants and soldiers, rather than serving the bourgeoisie and landlord class.In his Speech at the Forum of Art and Literature in Yan'an, Chairman Mao once criticized some works "from the standpoint of the petty bourgeoisie, and they created their works as the self-expression of the petty bourgeoisie", while literary and artistic works should "gradually move to the side of the workers, peasants and soldiers, and to the side of the proletariat in the process of going deep into the masses of workers, peasants and soldiers and their actual struggles and in the process of studying Marxism and studying society (MAO, 1991, p. 865-867)".After the founding of New China in 1949, "people character" has continued to have such a political meaning.During the 17-year period from 1949 to 1966, a large number of literary and artistic works were created with workers, peasants and soldiers as the main body of expression, while a few works expressing petty bourgeois sentiments were criticized politically.After the reform and opening up, the national ideology began to change its attitude towards literature and art, and the meaning of "people character" also changed to a certain extent.It is no longer a concept filled with polar opposites.There is no clear division of which part of the people is not the "people", and the expression of personal life experience by writers and artists in their works is no longer considered to be the opposite of people's literature and art.Therefore, the performance themes of literary and artistic works in this period have diversified characteristics.
It was in 2014 that China once again emphasized the "people character" on the ideological level.President Xi pointed out in his speech, at the symposium on literature and art work, "Socialist literature and art, in essence, are the literature and art of the people"; "Literature and art cannot deviate from the question of who they serve (XI, 2015, p. 22)".This seems to be re-emphasizing the people's literature and art tradition of the 1940s, but it is not simply copying the people's literature and art of the 1940s.What should attract people's attention most is that the people's nature of literature and art proposed in 2014 is no longer a bipolar concept, that is, it no longer implies the confrontation and struggle between people's groups and nonpeople's groups.The ideological department of China once required satellite TV stations to broadcast main-theme TV dramas only during the prime time period, so the TV dramas broadcast during these time periods can be regarded as works that meet the socialist literary and artistic requirements advocated by the state ideology.In recent years, TV dramas, such as the Ode to Joy Series, which reflect the theme of urban life, have been broadcast in prime time (LIU, 2019, p. 46-51).The main characters highlighted in these TV dramas include both the ordinary working class in the city and the very wealthy capitalist class.These characters of different classes live and struggle together in the works, and there is no confrontation and struggle among different classes at all.There are also TV dramas, such as Feather Flies To The Sky, that have won the Best Works Award (an award set by the national ideology department to enhance the influence of mainstream ideology), which also takes entrepreneurs as the object of expression (LIU, 2018, p. 58-66).Entrepreneurs are capitalists who are the former workers, peasants and soldiers' opposite.We can see from this that the concept of "people" in contemporary China no longer causes confrontation among the masses of workers, peasants, soldiers, the petty bourgeoisie and capitalists, but instead integrates different social strata groups, showing inclusiveness and harmony.
Although "people character" has been endowed with different meanings in different periods, it is essentially determined by the political and ideological motives of socialist literature and art, that is, it is also for the purpose of realizing the comprehensive liberation of mankind (GU, 2022, p. 25-33).Therefore, socialist literature and art refuse to sing and whisper in the small circle of creators, but focus on the broader "people" outside the self.Generally speaking, the "people character" of socialist literature and art contains three levels of content.First of all, it means that "people" are the subject of literary and artistic expression.It describes the people's life world, especially the majority of ordinary people in the people, expressing the people's joys and sorrows, rather than just showing the creator's personal feelings and life experience.Xi Jinping put forward a series of requirements and expectations for socialist literature and art workers, such as "making the people the central focus of development", "going deep into the masses and life", "loving the people", and "All aspiring and pursuing literary and art workers should follow the footsteps of the people, go out of the world, read the world, and let their hearts always beat with the hearts of the people (XI, 2016)" and so on.
In addition, "people character" also includes a correct attitude towards the people.First, socialist literature and art should praise the people and truly reflect the truth that the people are the creators of history and the decisive force in promoting social development.Wanton attacking, slandering and vilifying the people are not allowed by socialist literature and art.This does not mean that the people cannot be criticized for their shortcomings.Criticisms made for the purpose of exposing the problems and attracting the attention of healing, and sincerely hoping to awaken the people to become better people, are also in line with the purpose of serving the people.Lu Xun, for example, a famous modern Chinese writer, criticized the national character, but his essence was to criticize the feudal ideology and culture that caused the bad root of the national character, hoping that, after changing the ideology and culture, the people could get rid of their shortcomings and become better people.Therefore, instead of being criticized by socialist ideology, Lu Xun was identified as "the standard bearer of culture", "the soul of the nation" and "the backbone of the nation".In addition, socialist literature and art should help the people improve their art appreciation.Lu Xun's literary works are definitely not popular literature and art, and the admiration of his works also shows that socialist literature and art do not exclude high-level minority literature and art and blindly advocate popular literature and art that meet the public's taste.
In fact, the constructors of socialist literature and art have always paid attention to the dialectical relationship between the popularization and improvement of literature and art, emphasizing that literature and art should be understandable to the people, and also emphasize the need to help the people improve their appreciation of literature and art.In Chairman Mao's Speech at the Forum of Art and Literature in Yan'an, "how to serve the masses" is one of the key issues discussed, and the relationship between "popularization and improvement" is the foothold for his discussion of this issue (QIN, 2020, p. 21-28).President Xi pointed out in his Speech at the Forum of Art and Literature that socialist literature and art must have a sense of high-quality works, and that producing more high-quality works is the key to the prosperity of socialist literature and art.This further shows that socialist literature and art do not blindly pursue popularization and reject high-quality literature and art, and do not blindly cater to the people, but emphasize the dialectical unity of satisfying the people's spiritual needs and helping the people improve their appreciation of literature and art (ZHAO, 2015, p. 20-26).
Moreover, "people character" is also reflected in the people's position of literary and artistic creators.In the process of creation, literary and artistic creators must not forget that they are also a member of the people.They must not think that their cultural level, vision and knowledge are higher than the ordinary people's ones.So they criticize the people from a high position in their works, and criticize the people's shortcomings.Criticizing the people's shortcomings and, at the same time, elevating oneself, this invisibly forms the creators and the masses' separation, and creates a confrontation between intellectuals and ordinary people.It's not that you can't criticize the people, but don't forget that you are also a member of the people while criticizing.
For example, Lu Xun, whom Chairman Mao highly admired, often does not view, criticize and educate ordinary people in his works from the intellectuals' perspective.Instead, he regards himself (expressed as the narrator of the novel in his works) as a member of the common people, and always shares the same fate with the people.Even when criticizing the people's shortcomings, it is still as a member of the people, criticizing and reflecting together.
Realism on the Basis of Materialism: The Marxist Philosophical Basis of
The principles of socialisT liTeraTure and arT creaTion Marx's materialism was established on the basis of criticizing Hegel's idea of absolute spirituality.Marx was dissatisfied with the unrestrained ideas and language of the Hegelian school in the field, but was incapable of doing anything in the real world or even paid no attention to the transformation of the real world.So he parted ways with the Hegelians he believed in in his early years, and established a materialism that focuses on the objective world.From this point of view, Marx's materialism is to get rid of the tendency to attach only importance to people's subjective spiritual imagination while ignoring the objective real world.This seems to be the same as Husserl's phenomenology that, removing the cover of subjective ideas on the real world, the objective world will naturally appear (REN, 2015, p. 20-26).And Heidegger's "ontology", that "existence" is an objective world that is not covered and invaded by the subject concept, has the same meaning direction (WANG, 2022, p. 81-86).However, under the common intention of focusing on understanding the "objective world" or "objective existence", there are important differences between Marx's materialism and Heidegger's "existence".The difference is that Heidegger's "existence" is a pure free space that completely removes the obscuring and interference of various subjective concepts, and in the process of removing the obscuring, human's subjective initiative to change the world also disappears, while the former is full of strong will and practical efforts to transform objective existence with human subjective initiative (CHEN, 2022, p. 140-143).The practice theory of Marxist philosophy maintains that people can understand the objective world through practice, and the process of practice is the process of giving full play to human subjectivity.That is to say, the objective world will not be known automatically by people, but can only be discovered and known by people through the people's subjective care.People LIU, Z. must start from certain subjective concepts to come to the understanding of the objective world.Therefore, "existence" in Marx's materialist vision is the objective world under the care of human subjectivity, rather than a pure objective "image".
The Marxist view of literature and art established on the basis of Marxist materialism naturally does not insist that literature and art depict the face of the natural world in a completely objective and calm way, and cannot require literary and art creators to restore the reality of the world with zero emotion, but pursues a "typical reality" under the care of the Marxist world view.Perhaps this is why when Engels commented on literary and artistic works, such as Jijingen and City Girl, that truly described the life of ordinary people, he pointed out that these works should strengthen the working people's tragic situation at the bottom to portray more truly and deeply (MARX; ENGELS, 1960, p. 589-592).This is the expectation of these works: by describing the people's suffering at the bottom, the work becomes a weapon to criticize capitalist exploitation and the people's oppression.In fact, the Marxist theory of literature and art believes that literature and art should not only reflect the real life world, but also become an ideological weapon to change the objective world.Socialist literary and artistic works should express the laws of social and historical development through "typical environments" and "typical characters", which not only deeply reveal historical truth, and allow people to accept socialist ideology, thereby generating material forces that change the world and promote social development.
It is on this basis that socialist realism has been established as the basic principle of socialist literary and artistic creation.Realism is an important part of the Marxist literary thought system.Engels once advocated the creative method of realism and pointed out that "typical environments and typical characters" should be used to expose and criticize the workers' exploitation and oppression by capitalism, and to show the real life conditions of the working class (MARX; ENGELS, 1960, p. 578-579).Therefore, the Chinese socialist literature and art constructed with Marxist theory, as the philosophical basis, will naturally take realism as the basic creative principle.However, in different periods, the understanding and implementation of "realism" in socialist literature and art are not the same.
From the 1930s to before the founding of New China, socialist literature and art were largely used as part of the political propaganda work of the Communist Party of China, so realism, in this period, emphasized that literature and art should reflect the political themes of the time.Since its founding, New China has taken the Soviet Union as a learning object for its socialist construction.Not only did it learn from the Soviet Union in terms of politics and economy, but it also imitated the Soviet Union a lot in terms of cultural construction.The realist writers promoted by the ideology of the Soviet Union, such as Gogol, Turgenev, Tolstoy, Chekhov and Gorky, and Soviet realist works, such as Iron Flow, Destruction, How Steel Was Made, Young Guards, etc., are widely known in China.In 1951, three Chinese works won the Stalin Prize in literature and art in the Soviet Union: Ding Ling's novel The Sun Shines on the Sanggan River (second prize); He Jingzhi and Ding Yi's opera White Haired Girl (second prize); Zhou Libo's novel Storm (third prize).These winning entries are all "realistic" works.Zhou Yang, Vice Chairman of the Chinese Writers Association (then called the National Writers' Association), wrote a the article Socialist Realism -The Way Forward for Chinese Literature for the Soviet literary magazine "Banner".This happened after his Chinese works won awards, pointing out that the socialist literature and art that China is building must be "Learning from the Socialist Realism of Soviet Literature (Wu;Ma, 2016, p. 167)".Subsequently, the Writers Association organized leaders, writers and critics of literary and art work to study the theory of socialist realism, and designated Marx's, Engels', Stalin's, Mao Zedong's, etc. 22 works on literary and artistic issues as required reading.Since then, "socialist realism" has basically become the basic method of socialist literary and artistic creation.In 1956, the literary and art circles launched another discussion on socialist realism, and published Zhou Bo's on Realism and Its Development in the Socialist Era and Zhang Guangnian's Socialist Realism Exists and Develops in newspapers and periodicals, such as Changjiang Literature and Art, Literary and Art Newspaper.Since then, socialist realism has been established as the basic creative principle of Chinese socialist literature and art (HONG, 1996, p. 60-75).
However, once socialist realism was established as a principle with a decisive impact on literary and artistic creation, many negative effects followed. The most prominent problem is that the artistry of literary and artistic creation was suppressed by ideology. Literary and artistic creators had to reflect real life according to political themes, yet the real life that political themes required them to reflect was often the life of other people with whom they were not familiar. Many writers and artists were therefore forced to give up their familiar life experience in order to write about life worlds they did not know, and some works turned out very stilted. Some writers and artists spent a great deal of energy collecting folk songs, that is, going to the countryside or other grassroots settings to learn about and become familiar with the people's lives, and then reflected those lives in their works so as to meet the requirements of socialist realism. There were also writers and artists, famous in the 1930s, who chose to stop engaging in literary creation because they could not write about other people's real life. Obviously, this stifled the creativity of some writers and artists to a certain extent (DING, 1999, p. 58-64).
After entering the 1980s, the national ideology no longer required "socialist realism" in literary and artistic creation, but instead encouraged writers and artists to use the methods they were good at to produce works meeting the requirements of the country's "main theme", that is, to "promote the main theme and advocate diversification". Over the next thirty years, Chinese literature and art developed simultaneously in two directions. One was the rise of the individualistic, modernist and postmodernist creations that had previously been suppressed. The other was the continued development of realistic creations focusing on society and the people's livelihood. The latter not only received strong support from the national ideology (most of the winning works of the various government-sponsored literary awards, for example, were of this kind) but also strong praise from academic critics, who believed that these works assumed the social responsibility of intellectuals (LIU, 2014, p. 171-172).
At a time when marketization increasingly affects literary creation and the logic of capital exerts ever greater control over literature and art, the realism advocated by socialist literature and art begins to show its positive significance. As one researcher has pointed out: "From the perspective of creation, socialist realism literature suppresses individuality, but the writer's enthusiasm for deeply intervening in reality, in an attempt to construct an era in the text and influence the people of an era, has practical significance. Perhaps in a state lacking artistry this kind of enthusiasm is not important to some people and is just the result of politics over art. However, if we put aside our prejudices and return some issues to the level of the writer's professional attitude and creative resources, we may get new inspiration from socialist realism literature. From the perspective of production and dissemination, the realist mode of creation for the public has formed, in a non-market state, a remarkable echo relationship with the production mode of the film and television dramas, comics, and games that we are familiar with. They are by no means the same thing, but they also remind us that, in this era of increasingly niche literature, the reason why 'popularization' and 'creation for the people' are difficult to achieve lies in the barriers of concepts and the limitations of abilities, not in insoluble problems themselves" (ZHANG, 2022, p. 68-73).
This seems to indicate that, in today's cultural context, people's views on realism have undergone fundamental changes. In their view, realism once suppressed the individuality of literary and artistic creation; yet with some reframing, it is entirely possible to make the word glow with a positive meaning. This positive meaning can enable literature and art to maintain their independence under the impact of the logic of capital. It can therefore be said that the inner meaning and forms of expression of "realism" have changed greatly compared with those of the 1940s to the 1970s. This change brings realism closer to the true face of Marxism and more in line with the spirit of Marxist philosophy. When Marx and Engels advocated that literature and art reflect social history and the life world of the people at the bottom, most people interpreted this as meaning that literature and art should serve political purposes. In fact, we should notice that Marx and Engels hoped to liberate the oppressed majority from the oppression of capital. Realism is a means for literature and art to achieve this goal, but for a long time during the development and construction of socialist literature and art, this means was treated as an absolute principle and dogma, which instead became a constraint on realizing the goal of human liberation. Fortunately, in recent years more and more people have realized this. Literary and artistic creators and critics are reconstructing "realism", integrating modern consciousness and the spirit of realism in the new era into a new "realism". It is expected that this effort at reconstruction will enable "realism" to lead socialist literature and art to find the channel of human liberation in the siege of capital.

This new understanding of "realism" is also reflected in the national ideology's supervision of socialist literature and art in recent years. The approval process for the 2017 TV drama In the Name of the People, which attracted wide attention, illustrates this point. Zhou Meisen, the writer who created the drama, sent it to the state authorities for approval expecting to be asked to delete at least five episodes and make a thousand revisions, but the review was approved in just ten days, with few major changes or cuts required (LIU, 2019, p. 46-51). The fact that the series later became popular proved that it was correct not to make large numbers of cuts and changes. Main-theme socialist literature and art should fully consider the people's concerns and reflect the real situation of the real world, rather than be written according to political intentions.
The realist creative principle of socialist literature and art is a particular way of dealing with real situations, constructed on the basis of Marxist materialism. The Marxist view of literature and art hopes that realism can make literature and art reflect the true state of the objective world. It is just that the purpose of reflecting reality later turned into a political intention, so that socialist literature and art mistakenly turned realism into the reflection of political intentions, thus deviating from objective reality. This actually runs counter to the original intention of Marxist materialism. When materialism becomes political doctrine and the objective becomes subjective, literature and art embark on a narrow path. When realism returns to Marx's original expectations, subjective political intentions no longer interfere with it excessively. When it truly reflects the realities of life of the broad masses of the people, socialist literature and art may regain their vigor and vitality.
The Power of Sensibility: The Marxist Philosophical Basis of the Aesthetics of Socialist Literature and Art
In German classical philosophy, sensibility and the feeling of beauty are suppressed to a certain extent, because classical philosophy respects ideas and thinking and despises the power of sensibility. Hegel recognized the revolutionary power of aesthetics, which can awaken people's awareness of resistance and has an emancipatory nature, but he believed that beauty is the product of people's rational thinking rather than something directly triggered by sensibility: a literary image is a reflection of an idea, existing in the form of an image. Others broke through the cover of reason and pointed out that sensibility is an essential way of realizing aesthetics. Schiller, for example, held that people's perceptual impulses play a more fundamental role in the aesthetic process than rational thinking (SCHILLER, 1984, p. 106). Marxist philosophy is obviously influenced by these theories. However, the materialist stance of Marxist philosophy makes the position of the subjective "idea" in aesthetics decline significantly, while the importance of objective "sensibility" in aesthetics becomes more prominent.
Philosophers who respect and value sensibility are often prone to belittle rationality and to lose the rational connection with the external objective world, thus maintaining a tense relationship with that world and indulging in perceptual aesthetic worlds of their own construction, so that their philosophical propositions lose the possibility of changing the real world. Freud, for example, indulged in "daydreams" of his own construction (SCHILLER, 2009, p. 113), and his critical force with respect to reality was greatly reduced; Heidegger lived poetically in the illusory world of language, but could not generate real-world power from within the cage of language. Unlike philosophers who admire sensibility and regard sensibility and aesthetics as a beautiful paradise, Marxist philosophy regards sensibility and aesthetics as an important shaking force that can break through the old order. The introduction of the concept of "practice" in Marxist philosophy breaks the traditional "subject-object" mode of thinking and puts forward the thesis that practice is the objectification of human nature, thus connecting subject and object through practice (ZHU, 2014, p. 26-33). Sensibility and aesthetics are therefore no longer limited to people's subjective world but gain the possibility of connecting with the external objective world, so that they can become a driving force for social change.
When Marx and Engels criticized the constraints of ideology on people, they keenly pointed out that it is impossible to win a complete victory by opposing words with words, and that really eliminating the influence of erroneous ideas on people must be done by changing conditions rather than by theoretical deduction. Sensibility and aesthetics are a way of affecting people that differs from rational logic. The rich diversity of human feeling means that it cannot be completely bound by any simplistic idea, so sensibility and aesthetics always find it easy to discover the flaws in false ideas, prompting people spontaneously to reflect on and doubt the interpretation of the world that false ideas provide. High hopes are therefore also placed on socialist literature and art: through literature and art, people's rich feelings are aroused to question false ideologies, and subversive forces are thereby generated.
Therefore, on the one hand, Marxism emphasizes the "historical standard" of literature and art; that is, it advocates that literature and art should reflect social history and real life, express the laws of social and historical development through "typical environments" and "typical characters", and profoundly reveal historical truth, rather than merely express personal experiences. This is the inevitable requirement of Marxism for realizing the revolutionary nature of literature and art. At the same time, however, we should note that Marxism also emphasizes the "aesthetic standard" of literature and art. Marxist philosophy fully recognizes that literature and art are not the words and phrases of theoretical speculation; they activate people's sensibility through literary and artistic images and produce aesthetic pleasure, a process that subtly affects and even changes people's ideas. Marx and Engels therefore paid great attention to the irrational aesthetics of literary and artistic works. Marx believed that the art of the ancient Greek period was the peak of human art: just as a child's innocence can bring artistic achievement to a very high level, adults with mature reason lose that innocence and can never surpass the artistic achievements of childhood. This fully shows that, for Marx, the aesthetics of literature and art lies not within rational logic but within another, perceptual logic. When evaluating literary and artistic works, Engels also pointed out that literary and artistic works should express writers' and artists' ideas and concepts, but that the more subtle the expression, the better the effect. Both Marx and Engels thus paid full attention to the influence of sensibility and aesthetic power on people. Literature and art differ from theoretical speculation: they arouse people's sensibility through literary and artistic images, subtly forming certain concepts in the course of aesthetic experience. In this lies a huge revolutionary force capable of shaking false ideas.
In the practice of building socialist literature and art in China, there have been cases where political goals overwhelmed the aesthetics of literature and art. In the 1930s, for example, Chinese revolutionary literature paid special attention to the political purpose of literature and art and used them as means of propaganda. There was nothing wrong with this in itself, but when the pursuit of political propaganda suppresses the aesthetics of literature and art, and theoretical dogma suffocates readers' perceptual experience of them, the revolutionary power of perceptual aesthetics contained in literature and art is actually weakened. In the 1930s, when Mao Dun, a famous modern Chinese writer, was writing his novel Midnight, he followed the advice of Qu Qiubai, an early leader of the Communist Party of China and an outstanding literary theorist, and revised the novel according to political standards. Many literary critics, however, held that the revised part, unlike the rest of the novel, was not narration based on a real feeling of life and read too bluntly. When readers encounter such novels, their perceptual responses are not activated and no real psychological identification with the political ideas follows; they feel only estrangement. The emotional power of literature and art is thus lost, and the task of revolution cannot be accomplished effectively. During the "Seventeen Years" literature period after the founding of the People's Republic of China, many literary and artistic works overemphasized their political nature, resulting in a weakening of sensibility and aesthetics.
A similar situation also occurred in the creation of model dramas during the Cultural Revolution after the founding of New China. The attempt to publicize and interpret political ideas overwhelmed the sensibility and aesthetics of literature and art, so that the model plays, although they illustrate political concepts, lose the perceptual aesthetic power proper to literature and art.
Actually, the loss outweighs the gain, because the publicizing and interpretation of political ideas can be carried out within theoretical logic through propaganda, education and other forms. It is a unique advantage of literature and art to affect people's cognition profoundly, within the logic of literature and art, through sensibility and aesthetics. The loss of this advantage is a great loss for socialist literature and art.
In fact, Marxist philosophy particularly emphasizes and values the aesthetics of literature and art. Not only did Marx, Engels and others repeatedly stress in their comments on literary and artistic works that attention should be paid to the aesthetic nature of literature and art when revealing reality, but the use of aesthetics to reflect reality was also realized in the works of some socialist literary and artistic creators. Qu Qiubai, for example, who emphasized the political nature of literature and art, not only put forward many theoretical principles about socialist literature and art to guide creation, but also engaged in the creation of prose himself. His two collections of essays, The Chronicles of Hungry Township and The Heart of Chidu, not only achieved the realm of "typical truth" expected by Engels by depicting the living conditions of ordinary people, but also used real insights to explain Marxist theory concretely, integrating real and simple emotions into meticulous description that resonates with readers at the perceptual level, so that readers perceive and identify with Marxist theory at the aesthetic level in the course of reading. Another example is Chairman Mao's poetry, likewise a model of the high unity of politics and aesthetics. All these examples prove that it is entirely possible to achieve political goals through the sensibility and aesthetics advocated by Marxist philosophy, and that this is the correct path for socialist literature and art.
In a country like China, where socialist ideology already dominates, is it still necessary to develop the aesthetics of socialist literature and art? The answer is yes. Today's China faces the problem, pointed out by the Frankfurt School, of capital controlling people at the cultural level. The logic of capital has penetrated literature and art, leaving literary and artistic creation swayed by commercial logic: "searching for novelties, blindly kitsch, and vulgar tastes, treating works as a 'cash cow' for chasing profits and as an 'ecstasy' for sensory stimulation". To obtain commercial benefits quickly, some creators "make things up, rough and far-fetched", producing piles of rubbish; some "pursue luxury, over-package, and are ostentatious; there are also some literary and artistic works that, while deliberately staying away from commercial logic, are also divorced from current real life" and "make a big fuss over a minor issue" (LU, 2006, p. 50-54). In such an environment, socialist literature and art have been entrusted with a major mission: to guide people out of the shackles of the logic of capital and onto the road of liberation. The combative spirit of socialist literature and art therefore still needs to be emphasized, but this combativeness and aesthetics are unified rather than opposed. Opportunistic literary and artistic creation, which seizes a political theme and then conceptualizes it bluntly, is incapable of fulfilling the mission of socialist literature and art. Only high-level creation built on the accumulation of the creator's true emotions can resonate with readers in the aesthetic and emotional dimensions, and only this can make it possible for people to embark on the road to liberation. Perhaps this is what Engels meant: the more subtle the subjective intention expressed in a literary or artistic work, the better the effect. That is to say, the more implicit the concept, the less the power of sensibility and aesthetics is suppressed, the more strongly that power erupts, and the more readily people accept the concepts contained in the work.
Conclusion
Marxist philosophy regards existing society as a reality that should be transformed. Socialism means that human beings can enjoy more freedom and greater happiness within it. Socialist literature and art are an important part of the socialist system. As a fundamental determining force, Marxist philosophy determines some of the most basic characteristics and norms of socialist literature and art. The philosophical foundation of Marxism guarantees from within that socialist literature and art always pay attention to the people's freedom and happiness, and it plays a unique role in realizing the Marxist philosophical ideal of human liberation.
Ecology of Soil Arthrobacters in Clarion-Webster Toposequences of Iowa
Toposequence variations in soil properties were characterized and related to variations in populations of total isolatable bacteria and arthrobacters. Increases in soil NO3-N, available phosphorus, NO3-N-producing power, Arthrobacter counts, and the percentage of the total counts represented by arthrobacters were correlated with decreases in soil acidity. The total bacterial counts were not correlated with soil acidity but were associated with percentage of soil organic matter and percentage of clay. The percentage of the total counts represented by arthrobacters was lowest at the summit position and increased downslope to the highest value in the toeslope position. Factor analysis of the data revealed that 67 to 81% of the total variance exhibited by all variables per site-sampling period could be accounted for by soil acidity, soil structure, soil fertility, soil moisture, and bacterial factors. A selective medium was developed for soil arthrobacters and tested on a wide variety of central Iowa soils to determine its potential as a medium for enumeration as well as isolation. The medium developed in this study was found to be superior to the other available direct-isolation media for soil arthrobacters.
Various studies have shown that members of the genus Arthrobacter are often among the more numerically predominant bacteria routinely isolated from soils (15,25). These soil arthrobacters are nutritionally very diverse (20,27), and many isolates can be found that exhibit the ability to degrade various pesticides (9,14,24). Very little work has been done, however, to determine possible correlations between variations in soil properties and variations in any particular group of soil bacteria. Soil pseudomonads have been found associated with slightly acid rhizosphere soil samples, whereas arthrobacters have been associated with slightly alkaline non-rhizosphere soil samples (22). Increased numbers of arthrobacters have been associated with soil samples adjusted to higher moisture contents, whereas pseudomonads have been predominant in soil samples adjusted to lower moisture contents (21).
Topography is a very complex soil formation factor that could affect the bacterial populations by influencing certain soil properties through climate or drainage-related functions (11). Proceeding downslope from the shoulder position to the toeslope in Clarion-Webster toposequences of Iowa, the percentage of soil organic matter increases to a maximum while the mean particle size decreases to a minimum. The thickness of the A-horizon and the depth to carbonates or mottles decrease as the slope gradient becomes steeper (29). The toposequence soils in Clarion-Webster toposequences represent a gradation in textural classes; several studies have associated nematode populations with texture variations (18,19), but no attempts have been made to do this with bacterial populations. Soil organic matter levels are interrelated with other soil properties, and little is known about the effects of this relatively stable soil property on bacterial populations.
Arthrobacters have normally been isolated from soils by using enrichment techniques or by randomly picking and identifying isolates from media used to determine total counts (15). Mulder and Antheunisse (16) developed a selective procedure for arthrobacters involving two separate media, in which identification was based on observation of a morphological cycle possessed by members of this genus. Their method was not intended to serve as a means of enumerating Arthrobacter populations and, because of the lack of a suitable enumeration procedure, one was developed in our study.
This study investigated variations in total isolatable bacteria and Arthrobacter populations in two toposequences in the Clarion-Webster soil association area in Iowa. Toposequence variations in soil properties were characterized in relation to their effects on the total bacterial and Arthrobacter populations.
MATERIALS AND METHODS
Site location and description. The two toposequences were located in the Clarion-Webster soil association area in north-central Iowa and are described in Table 1. Site I was in a corn-soybean rotation from 1963 to 1968 and in continuous corn from 1968 to 1973, whereas site II was in a corn-soybean rotation from 1963 to 1973. During this interval site I received no lime treatments, whereas site II received the appropriate amount of lime to maintain the soil pH at 6.9.
Sample collection and sampling periods. Four adjacent rows of corn that extended parallel to the toposequence transect were chosen at each sampling site, and nine core samples were removed, three each from the middles of the furrows between adjacent rows of corn. Three sampling sites were chosen in each of the four soil types comprising the toposequences. The core samples were obtained and processed individually on 30 August at site I (soil temperature, 33 C) and on 25 October at sites I and II (soil temperature, 26 C). All core samples were taken from the Ap-horizon at a depth of 10 cm at both sampling sites.
Total bacteria analyses. All core samples were placed in plastic bags and transported to the laboratory, and platings were performed on the same day that each core sample was obtained. A 5.0-g sample was aseptically removed from the previously unexposed center of each core sample and suspended in 495 ml of sterile 0.5% peptone broth. Each sample was then agitated in a Waring blender for 3 min at low speed, serial dilutions in 0.5% peptone broth were made, and 0.1-ml portions of appropriate dilutions were spread over the surface of sterile media in petri plates. Total counts were made from a medium containing 0.1% peptonized milk (Difco), 0.1% yeast extract (Difco), 0.01% Acti-Dione (Upjohn Co.), and 1.5% agar. The pH was adjusted to the pH of the particular soil being plated, and plates were incubated at 25 C for 10 days, after which colonies were counted. All platings were done in triplicate.

Arthrobacter selective medium and analyses. Seventeen named Arthrobacter strains from the American Type Culture Collection (ATCC, Rockville, Md.) and 20 Arthrobacter, 6 Bacillus, 6 Micrococcus, 4 Nocardia, 4 Streptomyces, 4 Flavobacterium, and 6 Pseudomonas soil isolates were screened on 31 dyes, 13 antibiotics, and 11 assorted compounds to determine possible selective properties for the arthrobacters. The soil isolates were taken from the medium used to determine total counts and were identified to the genus level according to procedures outlined by Buchanan and Gibbons (5). All 67 cultures were tested on a wide range of concentrations of each of the 55 potential selective agents to detect any differential as well as selective properties. The screening was performed by incorporating the various concentrations of the potential selective agents in either Trypticase soy agar (BBL), peptonized milk agar, or nutrient agar (Difco). The three basal media were tested at a variety of concentrations with additions of various amounts of yeast extract as well as with the potential selective agents. Those agents that were heat sensitive were filter-sterilized and added aseptically to the cooled, autoclaved media. The media were adjusted to a variety of pH values ranging from 5.0 to 8.5. The cultures were transferred to the surface of the media with a multipoint inoculation device (7). All plates were incubated at 30 C for 72 h, after which the plates were examined.
Those compounds that exhibited either selective or differential properties for the arthrobacters were retested on various concentrations of the three basal media at varying pH values. A total of 720 different variations were examined to determine the best possible combination of a basal medium plus yeast extract plus various concentrations of different selective ingredients.
The best selective medium had the following composition: 0.4% Trypticase soy agar, 0.2% yeast extract, 2.0% NaCl, 0.01% Acti-Dione, 150 µg of methyl red (Harleco) per ml, and 1.5% agar. The methyl red was filter-sterilized and added aseptically to the autoclaved, cooled medium (see Results and Discussion). Soil samples were diluted and plated, and plates containing the selective medium were incubated at the temperature used for the total count medium.
The pH was adjusted to the pH of the particular soil being plated. The selective medium was tested on a variety of soils to determine what percentage of the isolates were arthrobacters by subculturing and microscopically examining all the colonies on various randomly selected plates for each soil type. The colonies were transferred to Trypticase soy agar plus 0.2% yeast extract and examined microscopically for possession of a rod-to-coccus morphological cycle, snapping division, pleomorphism, and V-forms (5). The same procedure was also performed on plates containing the total count medium and the media developed by Mulder and Antheunisse (16). For enumeration of the arthrobacters in the four soils examined in the ecological survey (Table 1), 78% of the counts on the selective medium was taken as the Arthrobacter counts.
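The enumeration arithmetic implied above is ordinary spread-plate scaling with a genus-correction factor. The following sketch is not from the original paper; the function name and the example counts are illustrative, while the 0.78 correction factor and the 0.1-ml plating volume come from the text. It assumes the dilution is expressed relative to the original soil mass.

```python
def arthrobacter_cfu_per_g(mean_colonies, dilution, plated_ml=0.1,
                           arthrobacter_fraction=0.78):
    """Estimate Arthrobacter CFU per gram of soil from spread-plate counts
    on the selective medium.

    dilution: grams of soil per ml of the plated suspension (the initial
        5.0 g / 495 ml step, roughly 1:100, must already be included).
    arthrobacter_fraction: share of selective-medium colonies taken to be
        arthrobacters (0.78 in this study).
    """
    total_cfu_per_g = mean_colonies / (dilution * plated_ml)
    return total_cfu_per_g * arthrobacter_fraction

# Illustrative numbers only: a mean of 64 colonies from 0.1 ml of a
# suspension containing 1e-4 g of soil per ml
print(f"{arthrobacter_cfu_per_g(64, 1e-4):.2e} CFU/g")  # ~5.0e+06 CFU/g
```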
Soil analyses. After the bacterial analyses were performed, eight portions were taken from each core sample, two each for the following determinations: soil NO3-N (4), soil NH4-N (2), NO3-N-producing power (26), and soil moisture (28). One particle size analysis was performed on each core sample by using a modified pipette method (28). The remainder of each core sample was air-dried and screened through a 4.0-mm sieve, and two replicate determinations for all procedures were made on each core sample. Soil pH, total exchangeable bases, exchangeable hydrogen (10), soil organic matter (6), available phosphorus (17), and soluble salts and saturation percentage (3) determinations were then performed.
Statistical analyses. Simple correlation matrices were computed, and a preliminary set of factor-loading values for the factor analysis was computed from these matrices by using the principal components method (8). These factor-loading values were subjected to a varimax rotation (12) to maximize the factor loadings without changing the specific variance of each variable.
The linear factor analysis model (23) used for each of the 16 variables was $z_i = a_{i1}F_1 + a_{i2}F_2 + a_{i3}F_3 + c_iE_i$. This model equation expresses each variable, $z_i$, in terms of three factors, $F_1$ to $F_3$, and an error factor $E_i$. The factor loadings, $a_{ij}$ and $c_i$, indicate the extent to which each factor participates in the test. The specific variance of the error factor for each variable indicates how much of the variation exhibited by the variables is not explained by the three factors.
This particular factor analysis model was used because the results of a test of significance for the total number of factors indicated that there were not more than three factors involved at any one site-sampling period (13). In using this model we assumed that the sample size of 108 toposequence samples per site-sampling period was large enough to avoid sampling error. To insure this, only factor-loading values larger than 0.50 or smaller than -0.50 were considered significant correlational values.
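As a concrete illustration of this pipeline (standardize, compute the correlation matrix, extract three principal components, rotate by varimax, and keep loadings of at least 0.50 in absolute value), here is a minimal numpy sketch. It is not the authors' code; the random matrix is only a stand-in for the 108 x 16 matrix of toposequence samples and measured variables.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation of a factor-loading matrix (Kaiser's criterion)."""
    p, k = loadings.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # gradient of the varimax criterion with respect to the rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L * (L ** 2).sum(axis=0) / p)
        )
        R = u @ vt
        crit = s.sum()
        if crit_old != 0 and crit / crit_old < 1 + tol:
            break
        crit_old = crit
    return loadings @ R

# Stand-in data: 108 toposequence samples x 16 measured variables
rng = np.random.default_rng(0)
X = rng.normal(size=(108, 16))
Z = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize each variable
corr = np.corrcoef(Z, rowvar=False)         # 16 x 16 correlation matrix

# Principal components extraction, keeping three factors as in the paper
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1][:3]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

rotated = varimax(loadings)                 # rotated factor loadings
significant = np.abs(rotated) >= 0.50       # the paper's |a| >= 0.50 cutoff
```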
RESULTS AND DISCUSSION
Of all the compounds tested as possible selective agents, only a few exhibited any selective properties for the arthrobacters. The combination of Acti-Dione at 0.01% and NaCl at 2.0% effectively inhibited all fungi and most streptomycetes, nocardiae, and gram-negative bacteria. The methyl red at 150 µg/ml inhibited other gram-positive bacteria (bacilli and micrococci) but did not affect the arthrobacters. The pH of the medium, between 5.0 and 8.5, did not affect its selectivity, and the combination of Trypticase soy agar at 0.4% and yeast extract at 0.2%, with the selective ingredients added, gave a higher yield of arthrobacters than the other basal media (data not shown).
In testing the selective medium (Table 2), the percentage of the colonies identified as arthrobacters was much higher (74%) than that on either the total count medium (14%) or the nutritionally poor medium (24%). In the soils tested, approximately 25% of the colonies on the selective medium were not arthrobacters, and microscopic examination was necessary to distinguish them. In examining the selective medium for enumeration potential (Table 3), the percentages of arthrobacters from the selective medium were close to or slightly higher than the percentages from the total count medium. (Percentages of arthrobacters for the NPM and SM were obtained by using the numbers of arthrobacters from both of these media as determined from the data in Table 2; these figures were then compared with the total counts for each soil type to obtain the percentage of the total counts represented by arthrobacters for each of the media.) Because of this close agreement, it was decided to use the selective medium for enumeration purposes by taking a percentage of the colonies growing on the plates as the Arthrobacter counts and comparing these with the counts from the total count medium to arrive at the percentage of arthrobacters contained in any one sample. The nutritionally poor medium (Table 3) was not suitable for enumeration purposes. Further tests on the four soils used in the ecological survey (data not shown) indicated that 78% of the counts on the selective medium was a suitable figure for determining the Arthrobacter counts from the respective soils.

The largest total bacterial and Arthrobacter populations occurred at the toeslope position of both toposequences during each sampling period. The smallest total bacterial and Arthrobacter populations occurred at the backslope position and increased down to the toeslope and up to the summit position (Table 4). The percentage of the total counts represented by arthrobacters was lowest at the summit and increased downslope to the highest percentage in the toeslope position.
Pronounced changes in soil variables accompanied these variations in bacterial populations at each toposequence (Table 4). However, due to the higher pH caused by the limed conditions in the soils at site II, the variation in most of the variables was not as great as for either sampling period at site I. Proceeding from the summit to the toeslope position, there were increases in soil pH, percentage of clay, percentage of silt plus percentage of clay, soluble salt levels, percentage of organic matter, soil NO3-N, NO3-N-producing power, available phosphorus, total exchangeable bases, percentage of moisture relative to field capacity, and percentage of moisture relative to percentage saturation. There were decreases downslope in exchangeable hydrogen and soil NH4-N (Table 4).
Due to the interpretational method of factor analysis, each of the factors was arbitrarily named, depending upon which variables appeared to be consistently interrelated (Table 5). The soil fertility factor name was chosen because NO3-N is the end product of nitrification and therefore is a useful indicator of the ability of the soil to supply plant-available NO3-N. The other factors were named according to the obvious combinations of variables composing them. More of the variation in the soil fertility, acidity, structure, and bacterial variables was accounted for in the factor analysis than in the soil moisture variables, as indicated by the higher specific variance values of the soil moisture variables compared with the other variables (Tables 6-8). At site I during both sampling periods (Tables 6 and 7), the soil acidity factor was positively correlated with soil NO3-N, NO3-N-producing power, available phosphorus, Arthrobacter counts, and the percentage of the total counts represented by arthrobacters, and negatively correlated with soil NH4-N. The soil structure factor was negatively correlated with percentage of moisture relative to field capacity, total bacterial counts, and Arthrobacter counts. The absence of a soil fertility factor at site I on 30 August (Table 6) was probably due to interference by the roots of the corn plants with the soil fertility variables (uptake of available P and NO3-N). By 25 October (Table 7) the roots were dead, the interference was removed, and a soil fertility factor, which was positively correlated with percentage of moisture relative to percentage of saturation, was generated.
At site II (Table 8) the soil acidity factor was positively correlated with the same variables as at site I, but the degrees of correlation were not as great. The soil structure factor was not correlated with any variables, whereas the soil fertility factor was negatively correlated with the percentage of moisture relative to field capacity and positively correlated with the percentage of moisture relative to percentage of saturation, total counts, and Arthrobacter counts.
At site I, increased soil acidity resulted in decreased soil NO3-N and increased soil NH4-N (Table 5). This was a measure of the lessened activity, due to acid sensitivity, of the nitrifying bacteria Nitrosomonas and Nitrobacter (1). The increased soil acidity was responsible for the decreased Arthrobacter counts and percentages of the total counts represented by arthrobacters. The total bacterial counts were influenced strongly by the soil structure factor (percentages of clay and organic matter) and, to a lesser degree, by the soil acidity factor. The Arthrobacter counts were positively correlated with acidity, but the degree of correlation was not as great (Tables 6-8) as was that of the percentage of the total counts represented by arthrobacters. This was due to the effects of the soil organic matter and clay content on the Arthrobacter counts, especially at the shoulder position (Table 4), whereas these variables did not significantly affect the percentage of the total counts represented by arthrobacters. The same relationships were found at site II, but the significant correlations were not as great due to the decreased variation in many of the variables caused by the limed conditions.
During the 30 August sampling period at site I, 76.80% of the total variance removed by all factors was accounted for, whereas 80.92% was accounted for in the 25 October sampling period at site I (Table 9). At site II, 67.84% of the total variance was accounted for. The increased contribution of the soil acidity factor at site I accounted for the greater total percentage of the variance removed as compared with site II. The 20 to 30% of the total variation unaccounted for represented other unmeasured and/or unknown environmental factors.
Factor analysis of the data from the toposequence soils examined in this study indicated that the arthrobacters in these soils were acid sensitive and that their numbers decreased in a cause-and-effect relationship with increasing acidity. At site I on 25 October (Table 4), the percentage of the total counts represented by arthrobacters was 4.68% at the summit (pH 5.79) and increased to 23.39% at the toeslope as the acidity decreased (pH 7.42). At site II on 25 October (data not shown), the variation was less due to the limed conditions: at the summit, 14.81% of the total counts were arthrobacters (pH 6.91), increasing to 20.26% at the toeslope as the acidity decreased (pH 7.38). The total bacterial counts were not correlated with soil acidity during any of the site-sampling periods, which probably indicated that as the acidity increased and the Arthrobacter counts decreased, the numbers of some type of acid-tolerant bacterium were increasing.
This study demonstrated that the distribution and abundance of certain types of bacteria (in this case, arthrobacters) were, to a large extent, determined by certain ecological variables. If the assumption is valid that microbial populations are selected by their environments, then the methodology applied in this study might find additional uses in determining which environmental variables most strongly influence the distribution of any of a wide range of microorganisms.
Mapping of the Volgograd agglomeration territory for its arrangement by the green construction methods
The problems of mapping the territory of the Volgograd agglomeration for the planning of green building works are presented. The efficiency of applying geoinformation systems (GIS) technologies to mapping the degradation and restoration processes of urban landscapes is assessed. The results of developing methods for geoinformation mapping of agglomeration landscapes using large-scale satellite images are presented.
Introduction
Research into the mapping of urban landscapes is complicated by the need to study, evaluate, and use their geomorphological, ecological, and socio-economic characteristics, data on which are presented mainly in the form of qualitative indicators, as well as by the insufficient development of methods for comprehensively assessing the environmental factors of the arid zone and the difficulty of overlaying them on one another in space-time dynamics.
Currently, ways of overcoming these difficulties with the use of geoinformation technologies (GIS technologies), implemented with the help of modern electronic means of information processing, are being developed. GIS technologies make it possible to integrate cartographic and aerospace monitoring, methods of mathematical modeling, and computer mapping into a single process that provides a qualitatively higher level of research results [1].
Their basic discipline is geoinformatics, which studies natural and socio-economic geosystems (their structure, connections, and dynamics of functioning in space and time) by means of computer modeling on the basis of geographic information databases. It includes technologies for the collection, storage, transformation, display, and distribution of spatially coordinated information, supporting the inventory, optimization, and management of geosystems. Another area of research is the development of hardware and software products for creating the databases and data banks of control systems and standard systems for different purposes and problem orientations [1].
The relationship between cartography and geoinformatics is manifested in the following aspects: a) topographic and thematic maps are the main source of spatial and temporal information; b) the system of zonal rectangular and geographical coordinates, together with cartographic mapping, is the basis for the coordinate referencing of all information received and stored in a GIS; c) maps are the main means of geographical interpretation and organization of remote sensing data and other information used in GIS; d) cartographic analysis is a universal method of identifying patterns, relationships, and dependencies in the formation of databases included in a GIS; e) mathematical and cartographic computer modeling is a leading tool in decision-making and in forecasting the development of geosystems; f) the cartographic image is the most convenient and effective form of information [2].
Developed at the junction of geoinformatics and mapping, geoinformation mapping integrates the achievements of remote sensing, space mapping, the cartographic method of research, and mathematical-cartographic modeling.
The results of the interaction between cartography and geoinformatics are used most actively in the complex application of geoinformation and automated cartographic technologies and in automated (including digital) cartography [3].
One of the main sources of data for GIS is remote sensing material obtained from space- and aircraft-based carriers [4]. At present, along with the aerial photographs traditionally used in meliorative mapping at scales of 1:10,000 to 1:15,000, other types of survey materials are used more widely: high-resolution television and scanner images taken from artificial satellites. On scanner images of good quality, especially color-synthesized ones, the same objects are clearly distinguished as on photographic ones, while periodic repetition of the survey and convenient automated entry into digital databases are also provided.
In surveys for landscape melioration, including green building, black-and-white and multizone surveys performed in the narrow zones of the visible spectrum (600-700 nm) with various equipment on different types of film provide the best results for landscape mapping [5]. Computer mapping technologies, which have provided a breakthrough in research, design, and economic activity, are used mainly in agriculture and forestry and are considered promising in urban planning. But the methods of using geoinformation technologies in planning green construction for the arrangement of urban landscapes require refinement and detailing.
Based on the critical analysis of scientific information on the existing problem, taking into account the experience of using information technologies in aerospace monitoring and reclamation mapping [6], we have developed a methodology for studying and mapping degradation processes in urbanized landscapes.
Methodology
The geomorphological structure of the territories of Volgograd and its environs is quite complex. Its elements are highly diverse, causing significant differences in particle size distribution, in the presence of soluble salts toxic to woody vegetation, in the water regime, and, consequently, in soil fertility, which determines the conditions for the growth and development of greenery. Within the Volgograd agglomeration, valley, slope, and watershed landscapes are distinguished.
Most of the Volga valley is represented by a 20-40-meter abrasion-accumulative terrace, which in the southern part of the city passes into the Beketov lowland and the low Sarpinskaya plain. At the base of the terrace there are groundwater outlets.
The length of the slope of the Volga Upland in the northern part of the city averages about 10 km; in the south it decreases to 1.0-1.5 km. On the slope there are two above-floodplain terraces with average relative heights of 10-15 and 50-60 m, above which local structural steps are observed. The surface of the abrasion-accumulative terrace is mainly composed of Mechetka deposits. On the Volga side, areas with Khvalynsk chocolate clays adjoin it. In the northern part of the city, a fifty- to sixty-meter above-floodplain terrace developed in the Mechetka sands. To the west of it there are local structural steps 80-90 m high. To the north of the Kuporosnaya gully there is a 10-15-meter terrace; to the south it manifests itself in the Otradnaya gully as a wide marshy surface, bounded from the west by the high Yergeni ledge [7].
Because of a combination of unfavorable conditions for plant growth, the urbolandscapes of the Volgograd agglomeration have extremely low resistance to man-made and recreational impacts. As a result, rapidly developing land degradation processes occur in its territory, sometimes turning into desertification. Successful arrangement of disturbed urban lands through green building measures is possible only with careful analysis of the current situation, which is determined by the peculiarities of the degradation processes in the various categories of territory occupied by industrial enterprises, their infrastructure, residential areas, and various linear engineering structures [8].
In this regard, a landscape-cartographic approach based on aerospace photography and scanning, in combination with geoinformation technologies, is becoming the most important methodological basis for planning green construction, making it possible not only to monitor the state of the land and the dynamics of degradation processes, but also to develop a system of integrated land-improvement measures.
To solve the existing problems, the following tasks were worked out: 1. Development of the basics of a landscape-cartographic approach to the reclamation of degraded urban land.
2. Development of the concept and technology for applying cartographic and aerospace monitoring of degradation and restoration processes in urbanized landscapes differing in degree of landscaping.
3. Development of methods for the qualitative and quantitative assessment of degradation and restoration processes on the basis of remote indicators and biotic criteria for land degradation and restoration.
4. Development of an integrated scale for assessing the degree of degradation of urban soils based on a logistic approach.
5. Clarification of the methods for compiling landscape-typological maps of urbanized territories from satellite imagery for the purposes of green building.
The technology of integrated mapping proposed by B.V. Vinogradov [9], including field research and desk analysis of the results of remote surveys of territories, was used. As in combined forest reclamation mapping, the most effective approach proved to be a five-stage technology of work, consisting of preliminary decoding, field calibration and extrapolation (including field control), final decoding, and mapping.
It was found that, to achieve good mapping of urbolandscapes, the dimensions of the investigated polygons should not go beyond images at a scale of 1:10,000 to 1:15,000. In conducting the research, priority is given to visual-instrumental and computer analysis of large-scale space images, as they reflect not only the physical components of the landscape, its infrastructure, and linear structures, but also the landscape as a whole.
An important section of geoinformation mapping is solving the problems of the theory and practice of landscape interpretation, including, in parallel with topographic interpretation, areas of special interpretation: geomorphological, soil, geo-ecological, geobotanical, and others. These are based on identifying the relationship between the properties of an object and the features of its image in the pictures. At the same time, the efficiency of decoding largely depends on the completeness of the decoder's information about the landscapes of the studied territory.
The recognition of objects during interpretation is determined by the peculiarities of the visual perception of their representation in the image, including the photographic reproduction of the optical and geometric properties of the elements of the urban landscape [10]. With sufficiently good shooting quality and high resolution, it is possible to model both the internal and external structure of the landscape from a space photograph. It is advisable to carry out the decoding of the urban landscape in three stages: preliminary, topographic, and landscape proper.
At the preliminary stage, a general scheme of landscape differentiation of the agglomeration territory is outlined from the available information sources, and a preliminary classification of its landscapes is drawn up. As a result of the topographic decoding of the elements of the images, landscape objects are oriented and bound, with the coordinates of their characteristic points determined and their physical-geographical characteristics given. This reveals the main indicators of the structure of the territory: the morphostructure of the surface, its dissection, the degree of drainage and watering, the nature of the built-up areas, the presence of green spaces, and the location of street carriageways, sidewalks, and other types of land use.
To clarify the procedure for mapping urban landscapes, geoinformation interpretation of several satellite images of suburban areas was carried out; their detailed study was of considerable complexity.
Topographic interpretation provided orientation and binding of landscape types to the cartographic basis and made it possible to identify the structural features of their image on satellite images. In the landscape interpretation proper, large mapping units were identified and delineated: landscapes and types of terrain.
The features of the image of the various objects of urban areas on satellite images necessitated a detailed study and refinement of the existing methods of their landscape interpretation. Taking into account the fact that space images integrally reflect the morphological structure of the landscape, which is perceived as a combination of dominant localities, the main emphasis was placed on identifying complex interpretive signs of the dominant localities. Based on the analysis of large-scale satellite images, these were distinguished in all major groups of landscape types.
Results and Discussion
It was established that the spatial differentiation of the natural conditions of suburban and sparsely built urban landscapes is determined, first of all, by the mesorelief. The following types of geosystems were distinguished: a) near-watershed surfaces, b) near-valley and near-gully slightly eroded slopes, c) floodplain terraces, d) floodplains, e) gullies, and f) slopes strongly dissected by ravines.
Drawing the boundaries of the types of terrain on the overview map of the contours of erosion-denudation landscapes made it possible to proceed to delineating the boundaries of groups of landscapes. The boundaries of landscape types were refined and specified on the basis of a conjugate analysis of images and topographic maps. Landscape interpretation of space photographs of natural geosystems of the landscape-terrain rank, combined with conjugate analysis of topographic and thematic maps, allowed the classification and mapping of the studied landscapes of the agglomeration. Landscape mapping of geosystems of the tract-facies rank was then carried out.
The recognizability of the objects under study is strongly influenced by the reflectivity of their surface. The magnitude of the reflection is not the same for different rays of the solar spectrum. Using this fact allows more accurate decoding of urban and suburban areas with a large set of objects to be displayed, since many types of situations that are poorly recorded in the visible spectral range show contrast in the invisible infrared range. As in the suburban areas, geosystems of facies rank in the steppe urbolandscapes of the agglomeration are characterized by high brightness and show a spectral maximum over green areas with tree and shrub vegetation.
A separate study of each facies in large-scale mapping is very laborious because of the wide variety of woody vegetation, which differs in biometric and landscape-architectural indicators. Combining facies of similar status and biocenosis into groups and types made it possible to identify areas with similar types of natural conditions. Each type of landscape corresponds to a certain structure of tracts, which is reflected in the images by a certain type of pattern and image texture. In the structure of suburban landscapes, the dominant tract forming the general background is usually distinguished. Against this background, a number of minor tracts of small area are often observed; in the photographs, a portion of one tone with spots of another tonality is then visible.

An important step in aerospace mapping is the field calibration of images in key areas laid out within the polygons under study. In taxonomic terms, these are tracts or groups of facies that can be distinguished in photographic images, characterized, and extrapolated within the polygons. For the field interpretation of aerial and space photographs of suburban areas, it is advisable to use the method of integrated or landscape profiling. The profile should cover all types of tracts. Profile lines are pre-marked on the images and then refined on the ground. A breakdown and leveling of the traverse are performed, and profiles of the territory are drawn in the selected direction. On each contour selected in the image, the relief forms, soil and plant conditions, and the nature of modern exogenous processes must be determined, paying particular attention to the rapidly developing processes of landscape degradation (excessive recreational load, impact of pollutants, erosion, deflation, etc.). Within the landscape profile, a description of the components of the landscape is made in each facies or sub-tract. Extrapolation includes operations to decipher untested territories according to the established criteria. Field control is performed by sample checks of the reliability of the interpretation.
The final interpretation and mapping, as in forest reclamation studies [11], should include all the operations stipulated by generally accepted programs for desk processing of the material and for mapping at the given scale and subject matter.
Summary
It has been established that for creating thematic maps describing the growth conditions of green plantings, the isoline method of mapping a geographical field is most suitable. In the mathematical sense, a geographic field is the distribution over the earth's surface of a certain quantitative assessment, each point of which is characterized by a specific indicator (scalar) [12]. The scalars describe the morphometric parameters of objects, as well as indicators of the intensity of land degradation or restoration processes. If the indicators change from point to point, they can be characterized by a spatial field; when they also change over time, a space-time field is used in the dynamic mapping of landscape degradation and restoration processes.
Isoline cartographic materials allow the processes of changing landscape states to be represented as models: integral, continuous images of scalar fields on which various mathematical operations can be performed. As a result of data processing, a point model of the distribution of process values is created. It is advisable to automate this operation using digitizers, scanners, or automatic coordinate readers.
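As a minimal sketch of how such a scalar "geographic field" can be turned into an isoline map from scattered field observations, the snippet below interpolates hypothetical point measurements onto a regular grid and draws contours. The station coordinates, indicator values, and grid resolution are illustrative assumptions, not data from the study.

```python
# Minimal sketch (not from the original study): building an isoline map of a
# scalar geographic field from scattered point observations.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Hypothetical field observations: (x, y) positions in metres and a scalar
# indicator at each point (e.g., a degradation-intensity score).
rng = np.random.default_rng(0)
points = rng.uniform(0, 1000, size=(60, 2))
values = np.sin(points[:, 0] / 300) + 0.5 * np.cos(points[:, 1] / 200)

# Regular grid over the study polygon.
xi = np.linspace(0, 1000, 200)
yi = np.linspace(0, 1000, 200)
X, Y = np.meshgrid(xi, yi)

# Interpolate the scattered scalars onto the grid (the "point model").
Z = griddata(points, values, (X, Y), method="cubic")

# Draw isolines of the interpolated field.
cs = plt.contour(X, Y, Z, levels=10, colors="black")
plt.clabel(cs, inline=True, fontsize=7)
plt.scatter(points[:, 0], points[:, 1], s=8, c="red", label="observations")
plt.legend()
plt.title("Isoline map of an interpolated scalar field (illustrative)")
plt.show()
```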
Combining isoline maps of urbanized territories with different content (geomorphological, soil, natural vegetation, etc.) opens up great opportunities to study the interrelationships of spatial phenomena using the apparatus of multiple correlation and regression, factor analysis, and component analysis.
Overlaying maps of the state of the urban landscape (the rate, acceleration, or deceleration of degradation processes) makes it possible to compile predictive maps of the dynamics of recreational and man-made degradation of geosystems. These maps can be used to plan measures that halt degradation processes and then to convert degraded urban landscapes into cultural ones through the arrangement of urban areas with green-building methods.
|
2019-12-19T09:11:26.471Z
|
2019-12-18T00:00:00.000
|
{
"year": 2019,
"sha1": "abdf8f93451ac6f49e1e17d997e8ab023e2ed83a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/698/5/055011",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "1ab16544b951048a6c37a50fd922babc6f3eb35b",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Geography"
]
}
|
266584747
|
pes2o/s2orc
|
v3-fos-license
|
Accelerating Materials Discovery: Automated Identification of Prospects from X‐Ray Diffraction Data in Fast Screening Experiments
New materials are frequently synthesized and optimized with the explicit intention to improve their properties to meet the ever‐increasing societal requirements for high‐performance and energy‐efficient electronics, new battery concepts, better recyclability, and low‐energy manufacturing processes. This often involves exploring vast combinations of stoichiometries and compositions, a process made more efficient by high‐throughput robotic platforms. Nonetheless, subsequent analytical methods are essential to screen the numerous samples and identify promising material candidates. X‐ray diffraction is a commonly used analysis method available in most laboratories which gives insight into the crystalline structure and reveals the presence of phases in a powder sample. Herein, a method for automating the analysis of XRD patterns, which uses a neural network model to classify samples into nondiffracting, single‐phase, and multi‐phase structures, is presented. To train neural networks for identifying materials with compositions not matching known crystallographic structures, a synthetic data generation approach is developed. The application of the neural networks on high‐entropy oxides experimental data is demonstrated, where materials frequently deviate from anticipated structures. Our approach, not limited to these materials, seamlessly integrates into high‐throughput data analysis pipelines, either filtering acquired patterns or serving as a standalone method for automated material exploration workflows.
Introduction
X-ray diffraction (XRD) has long been regarded as an indispensable tool for the characterization of material samples, which is capable of analyzing a wide array of substances, ranging from metals, ceramics, polymers, to thin films and nanostructured materials. [1] One of the key factors behind the prevalent use of XRD is its ability to provide a comprehensive analysis of various distinct properties. For instance, the XRD technique allows for determining the material's phase composition, crystal structure, lattice parameters, texture, and strain, among other characteristics. [2] Moreover, the diffraction analysis is a nondestructive technique, safeguarding the integrity of the material for further studies. Given these advantages, XRD instruments are ubiquitously present and essential for materials research workflows.
In the field of materials discovery, the primary goal is to develop materials with enhanced or unique properties that can outperform existing materials. Due to the inherent limitations of existing materials in aspects such as performance, cost, and sustainability, the development of new substances is imperative for propelling technological advancements and elevating living standards. A prevalent approach to discovering these novel materials includes the intentional addition of foreign atoms or ions to existing components. This can lead to enhanced properties, such as thermal stability or electrical conductivity, or it can serve to replace scarce or environmentally harmful substances. One of the most effective methods for identifying such novel materials is the combinatorial approach, in which a multitude of different substances are systematically combined in varying proportions and configurations for rapid screening of vast material composition spaces. [3] Nevertheless, a large fraction of these configurations unfortunately results in materials that exhibit inconsistent and inhomogeneous properties that are not desirable. [4] Here, XRD is an essential tool for the identification of amorphous, phase-pure, and multi-phase samples, as well as further characterization of the crystalline properties of the produced materials.
The analysis of the data generated from the XRD technique, however, poses a considerable challenge. In the traditional analysis of powder XRD patterns, similarity metrics such as the figure-of-merit (FOM) are typically used to compare measured signals with reference phases, as obtained from databases such as the ICSD or the COD. [5,6] However, the presence of experimental artifacts, such as measurement noise and background signals, complicates the analysis process and necessitates manual preprocessing steps. [2] Additionally, the incorporation of multiple elements into a single-crystal structure in newly developed multicomponent materials can lead to significant lattice distortions and reflection shifts, posing a challenge due to crucial deviations from the reference phases. Given the exponential surge in data volume generated by newly developed high-throughput systems, [7,8] manual analysis of powder XRD data using the traditional FOM method becomes highly time-consuming and practically unfeasible. Consequently, the automation of XRD analysis becomes a necessity, enabling researchers to efficiently process and interpret large datasets, accelerating the pace of material discovery.
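To make the similarity-based matching described above more concrete, the sketch below broadens a reference stick pattern onto the measurement grid and scores it against a measured scan with a cosine similarity. This is a hedged illustration only: real FOM definitions differ between software packages, and the peak positions, intensities, and width used here are arbitrary assumptions.

```python
# Hedged sketch of similarity-based phase matching between a measured pattern
# and a broadened reference stick pattern.
import numpy as np

def broaden(two_theta, peak_pos, peak_int, fwhm=0.3):
    """Render stick peaks as Gaussians on the 2-theta grid."""
    sigma = fwhm / 2.355
    y = np.zeros_like(two_theta)
    for p, i in zip(peak_pos, peak_int):
        y += i * np.exp(-0.5 * ((two_theta - p) / sigma) ** 2)
    return y

def similarity(measured, reference):
    """Cosine similarity between (roughly) background-subtracted patterns."""
    m = measured - measured.min()
    r = reference - reference.min()
    return float(np.dot(m, r) / (np.linalg.norm(m) * np.linalg.norm(r) + 1e-12))

two_theta = np.arange(10, 60, 0.015)
reference = broaden(two_theta, peak_pos=[28.1, 30.2, 39.0], peak_int=[1.0, 0.8, 0.3])
measured = reference + 0.05 * np.random.default_rng(1).normal(size=two_theta.size)
print(f"similarity = {similarity(measured, reference):.3f}")
```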
As an alternative to the manual data analysis, artificial neural networks have demonstrated promising results in the accurate and fast interpretation of unprocessed powder XRD data. Neural networks are trainable mathematical models that use interconnected neurons, layers, weights, and activation functions to map input data to output predictions. During training, the network adjusts the weights to minimize the difference between its predictions and the desired output, gradually learning to recognize complex patterns and make accurate predictions for new data. Within the domain of XRD analysis, Park et al. first developed a neural network to determine the crystal system, extinction group, and space group for scans of phase-pure samples. [9-13] Beyond phase identification tasks, neural networks have shown promising performance in other applications, such as the determination of scale parameters or lattice constants from the XRD scans. [14] Expanding upon the foundational research, the application of neural networks to XRD data has extended to include their use for novel material discovery in experimental settings. For instance, Velasco et al. used a neural network to determine the crystal structure of unique compositions in complex multicomponent systems. [7] Furthermore, Massuyeau et al. introduced a neural network capable of differentiating between perovskite and non-perovskite materials through their XRD patterns. [15-18] Additionally, Szymanski et al. deployed a neural network to identify target and intermediate phases in material synthesis experiments, enabling their optimization algorithm to determine the most suitable precursors and experimental parameters for the effective synthesis of the target phase. [19] A common trait of these studies [11-19] is the approach of generating synthetic diffraction patterns from crystallographic database entries, incorporating variations and experimental artifacts characteristic of actual experimental patterns, to ensure that models trained on simulated data effectively transfer their performance to actual experimental scans. The primary aim of this methodology is to address the difficulty of obtaining a sufficiently large dataset containing high-quality XRD scans with their specific phase identification results, crucial for training the neural network. As an alternative to simulating the training data, Velasco et al. acquired phase-pure signals of the essential structures in their study and systematically altered these signals to enlarge the data basis. [7]
Nonetheless, the existing studies on applying neural networks for the analysis of powder XRD patterns present some limitations. First, the exemplary data needed to train the network models is typically not at hand for novel materials. While databases provide reference materials from past studies, they are not typically equipped with information on newly synthesized materials. Alternatively, the method of altering measured patterns from phase-pure samples, as introduced by Velasco et al., [7] requires the synthesis of pristine samples, which is not a trivial task for complex materials. Second, an appropriate network structure is required to handle the peculiarities of the diffraction patterns. For instance, we evaluated commonly used neural network structures for the analysis of XRD in a recent study and identified deficiencies in detecting minor peaks in the diffraction patterns, [20] which extend to the recognition of multi-phase samples in the material discovery data. Additionally, amorphous phases have not been considered in prior works, so modifications to the architecture of established networks are necessary to handle such components.
Therefore, we present a universal approach for the rapid identification of prospects from XRD data using a neural network structure.The model categorizes the samples into nondiffracting (including amorphous) and crystalline samples and accurately distinguishes between referenced and highly distorted structures that exhibit nonideal properties, such as the formation of multiphase compounds.Training data is generated by simulating diffraction patterns based on a theoretical description of the desired structure in the form of a crystallographic information file (cif ), eliminating the need for an initial production of pristine reference samples.The simulation of XRD patterns and training of the neural network takes less than 5 min on consumer-level hardware, so models are readily available for use cases at hand.In this work, we demonstrate the application of our approach on distinct material structures: multimetallic spinels and doped copper oxides.
Results
To train a neural network for the automated analysis of the acquired powder XRD patterns, we present a universal data generation pipeline that simulates realistic signals. Accordingly, Figure 1 provides an overview of our presented method. First, synthetic patterns are generated based on variations of a description of a structure in the form of cif. Our model generates realistic variations of the base structure without the requirement of modeling the exact lattice and occupancy changes, providing a general approach to represent altered structures. Generally, for each variation, the position of the peaks, the ratio of peak heights, and the shape of the peaks are varied to account for naturally occurring variations. Prior research demonstrated that such variations are crucial to generate adequate training data for the application of neural networks to measured XRD patterns. [12] In the context of doping experiments, for instance, the structure variations are depicted by lattice contractions and expansions that are reflected in the synthetic diffraction patterns without specifying the exact type and concentrations of the doping material. Furthermore, altered scattering factors of the incorporated species are reflected by varying the intensity ratios of the peaks in the pattern, and the width of the peaks is randomly chosen to mirror the varying crystallite sizes and defects. To depict multi-phase samples, the simulated patterns of the varied structures are complemented with arbitrary, additional diffraction peaks placed randomly. Finally, samples that lack a periodic atomic arrangement, including amorphous materials, are represented by patterns that only contain a diffuse background intensity without characteristic reflections.
Utilizing the simulated data, a specialized neural network is trained for automated classification of the XRD patterns. While this model is developed to categorize the three distinct classes encountered in fast screening experiments, we have elected to divide the classification task into two separate predictions. The initial model output discerns between nondiffracting and crystalline samples, while the second output differentiates between single-phase and multi-phase patterns. Both outputs use a sigmoid activation function (scaled between 0 and 1), allowing the predicted values to be interpreted as probability estimates for their respective classification tasks. In this context, the initial output predicts the sample's crystallinity, while the secondary probability estimate quantifies the likelihood of a multi-phase compound's presence. Should the initial output's predicted value fall below 0.5, the sample is designated as nondiffracting (amorphous), irrespective of the secondary output.
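For concreteness, the two probability estimates map onto the three sample categories roughly as follows. This is a minimal sketch of the decision rule as described in the text; the threshold of 0.5 follows the text, while the function name is ours.

```python
# Minimal sketch of the two-output decision rule: the first probability gates
# crystallinity, the second flags multi-phase compounds.
def categorize(p_crystalline: float, p_multiphase: float,
               threshold: float = 0.5) -> str:
    """Map the two sigmoid outputs onto the three sample categories."""
    if p_crystalline < threshold:
        return "nondiffracting/amorphous"
    return "multi-phase" if p_multiphase >= threshold else "single-phase"

# Example: a crystalline sample with a weak impurity signal.
print(categorize(0.93, 0.41))   # -> "single-phase"
```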
In the following sections, the adaptability of our approach is presented by applying trained neural network models to experimental data.Therefore, the doping of copper oxides and composition variations to form a spinel-type structure are tested in fast screening experiments, which enable the compilation of large and diverse datasets for the evaluation of our method.The networks have been trained using our generalized data generation pipeline with reference structures obtained from the ICSD, [5] and no modifications are required to apply the presented approach for the different datasets.
Spinel Structures
First, the described method is applied to identify spinel-type MgAl2O4 structures that incorporate a multitude of different elements. The respective materials class is called "high-entropy oxides", related to a high configurational entropy that is formed when many different elements are incorporated into a single-phase structure. Between the different elements, interactions arise, called cocktail effects, which can give these materials unique properties that can differ completely compared to the parent materials. In this study, the parent structure Fe3O4 (Fe(II)Fe(III)2O4) was used and the divalent and trivalent Fe replaced by other elements, forming, for example, (CuMg)(FeMnCr)2O4. The samples are produced and characterized on a high-throughput platform, which allows for parallel synthesis and analysis of 99 specimens (11 × 9 grid) using a robotic synthesis platform and a high-throughput sample holder for a Ga-jet X-ray source. [7] Accordingly, a cif (ICSD code 13 859), representing the spinel structure of MgAl2O4, is used to generate the synthetic training data.
A brief, manual screening of the acquired data reveals the different outcomes of the unique precursor combinations, which include amorphous or multi-phase compounds instead of the intended, phase-pure spinel structure. Figure 2a illustrates exemplary XRD patterns of the three classes, which have been shifted on the intensity axis for clarity of presentation. In the case of the amorphous/nondiffracting class (gray), the XRD patterns mainly exhibit a background signal, with minor diffraction peaks observed in a few cases. Here, the 9 × 11 grid contained positions that were unoccupied, producing diffraction signals devoid of reflections, which were grouped with the amorphous class during analysis to simplify the process. In contrast, the single-phase XRD pattern (blue) shows diffraction peaks that stand out from the noise and background, with the major peak being located at 29° 2θ. Likewise, the multi-phase samples (red) exhibit the same peaks as the single-phase structure in addition to other unassociated diffraction peaks.
While manual screening of the patterns is only feasible for limited sizes, the developed neural network analysis approach was applied to categorize the patterns within seconds.Figure 2b shows XRD patterns for identified crystalline samples and the corresponding multi-phase probability estimates (blue: low, red: high), as predicted by the model (second output).A visual assessment of the measured patterns alongside the predicted confidence scores demonstrates that the network has learned to detect multi-phase samples based on the presence and prominence of additional peaks.Signals that align precisely with the diffraction pattern of the identified structure are categorized as single phase (represented in blue), whereas irregular intensity baselines yield heightened multi-phase confidence predictions (muted blue to red colors).Regarding the light-blue pattern in Figure 2b, the marginally elevated background between 27 and 28°is ambiguous and could be due to noise or minor impurity peaks.Consistently, the corresponding pattern is classified as single-phase, since the multi-phase confidence lies below the detection threshold.In comparison, the light-red pattern exhibits even higher intensities in this range and is assigned to the multiphase class by the model.Following this trend, patterns with even more distinct impurity peaks yield higher multi-phase probability estimates.Similarly, the model accurately identified the amorphous samples, as well as patterns that resulted from the empty grid positions.
Doped Copper Oxides
Using the identical robotic platform for sample preparation and subsequent characterization via XRD, the doping of copper oxide (CuO) is examined in a second experimental series. While the first experiment examined the unique precursor combinations to form the spinel structure, we additionally show how our approach can be used for automatic determination of the dopant concentration thresholds, thus avoiding the formation of multi-phase compounds and preserving the material's desired properties. Doped copper oxides have been produced and analyzed in various studies, [21-24] but typically, only a few compositions are tested. Depending on the concentration, the dopant material is either fully incorporated or forms an impurity phase, but diverging results have been reported with respect to the critical dopant concentration for synthesizing phase-pure samples. For example, Al-Amri and colleagues reported phase-pure oxides with Ni doping concentrations ranging from 1 to 7%, [23] while Meneses et al. detected impurity phases for the same Ni-doped CuO nanoparticles, even for low concentrations of the dopant. [21] While both studies analyzed the structures of the identical nanoparticles, the differing groups used distinct experimental routes and configurations to produce the doped structures (i.e., differing temperatures for the synthesis). Doped copper oxides in the form of Cu1-x(Zn,Ni,Mn)xO were produced with systematically incremented doping concentrations ranging between 0% and 25%. Moreover, the materials were generated and examined within three discrete experimental conditions, with samples being calcined at temperatures of either 500, 600, or 700 °C.
Figure 3a shows several XRD patterns from the fast-screening experiments with varying dopants and compositions. While the ideal CuO sample exhibits two major diffraction peaks at about 28° and 30° 2θ (for the Ga-jet radiation source) and further, minor reflections at 25° and 39°, the doped copper oxides show additional reflections due to impurity phases, as highlighted by the black triangles on top of the respective patterns. High concentrations of the dopants (here 25%) cause the sample to form multi-phase compounds with ZnO, NiO, or MnO2 being present, in addition to the intended CuO phase. [21,22,24] The detection of those additional phases, however, remains a challenging task due to overlapping diffraction peaks of the doped copper oxide structure and the impurity phase. For example, the additional ZnO phase peaks are almost overlapping with one of the major peaks from the CuO structure (around 28°). In this series, all materials exhibit clear reflections in the XRD signal, so there are no amorphous samples produced for the doped copper oxides.
We applied our novel method for the doped copper oxides by generating varied structures from the CuO ICSD entry (code 16 025) and training a neural network for the classification of the respective XRD patterns.Using our trained network, we were able to analyze the 225 synthesized samples (75 unique compositions and 3 distinct temperatures) within milliseconds.In addition to the outstanding speed of the automated analysis, the model proved to be sensitive to those additional reflections, even if the extra peak positions aligned almost perfectly with the diffraction patterns of the doped copper oxides.The additional peaks display as shoulders of the diffraction peaks, and the neural network was able to identify those nonsymmetric peaks for the accurate detection of multi-phase compounds.Notably, our presented method does not require detailed information regarding occurring impurities or the possibility of overlapping peaks and is even applicable to further dopants without the need for retraining, highlighting the versatility and adaptability of our approach.
The fast-screening experiment and analysis of the XRD patterns revealed interesting properties of the copper oxide with respect to incorporating dopants, especially while considering the temperature during the synthesis process. Figure 3b shows the diffraction patterns for the sample with 5% Ni dopant (in shades of brown), together with the signal obtained for the pure CuO specimen (green). While the XRD pattern of the sample synthesized at 500 °C concurs with the signal of the phase-pure material, the patterns of the samples synthesized with higher temperatures exhibit minor, additional diffraction peaks at 29° and 33° 2θ. Despite the relatively indistinguishable peaks submerged within the noise, the trained network demonstrated proficiency in accurately classifying these patterns. This simultaneously confirms both results from Al-Amri et al. and Meneses et al., which observed the presence and absence of additional phases for Ni-doped copper oxides that have been produced at similar temperatures. [21,23] Accordingly, Figure 4 shows the classification of our network for the XRD patterns with respect to synthesis temperature and dopant. The colors correspond to the output of the network, which ranges between 0 (single-phase, blue) and 1 (multi-phase, red), with the impurity classification threshold at 0.5 (white). For Cu1-x(Zn, Ni, or Mn)xO and 500 °C, about 7% dopant can be incorporated into the copper oxide, while still forming a phase-pure material (blue region). The output of the network correlates with the significance of the additional peaks, so there is an initial dopant concentration region with only minor additional peaks (white, multi-phase threshold), before the impurity phases are distinctly detectable (red). The lighter shades of blue correspond to single-phase predictions with elevated multi-phase probabilities, which we identified as patterns with higher noise levels or minor irregularities of the baseline intensities. For higher temperatures, the detected multi-phase threshold decreased for all three dopants, so, presumably, lower concentrations can be incorporated without forming multi-phase compounds.
To verify the predictions of the neural network, some XRD scans have been analyzed manually. Using the Rietveld refinement method, the weight percentages of the primary and impurity phases were determined to identify those samples that contain multi-phase structures. Instead of performing the refinement for all scans, the model's prediction allowed for the selection of a subset of the patterns. Therefore, only the samples calcined at 500 and 700 °C have been evaluated, as it was determined that multi-phase thresholds in the 600 and 700 °C test series exhibited substantial similarities. Moreover, according to the prediction of the model, only the dopant concentration
range of 1-10% held crucial significance for the formation of multi-phase structures (1-8% for the 700 °C samples).This model-driven insight notably reduced the number of samples necessitating manual analysis, streamlining our focus to the most pertinent data subsets.Manual analysis showed agreement with the results predicted by the model.Detailed information for the Rietveld refinement can be found in the Supporting Information.
An unexpected observation of our tests is that the threshold for dopant concentrations that yield multi-phase compounds declines with increased synthesis temperatures.Concurrently, samples synthesized at these higher temperatures display narrower peak shapes attributable to larger crystallite sizes.While narrow peaks stand out from the noise, broader diffraction peaks can merge indistinguishably with noise and background.This indicates that at lower synthesis temperatures, it is not that higher dopant concentrations were incorporated, but rather the resulting impurity phases became undetectable due to the broad peak shapes.However, neither the neural network model nor the manual Rietveld refinement identified suspected impurity phases at lower dopant concentrations in the 500 °C samples.Such a limitation underscores the requirement for extended acquisition times, which enhance signal-to-noise ratios and could consequently facilitate the detection of impurity phases.
Conclusion
To facilitate material discovery experiments, we present a method for the automatic analysis of XRD patterns in fast screening experiments.The XRD technique provides information about the crystalline structure of the analyzed sample and allows the distinction between single-phase and multi-phase structures.Single-phase materials are of particular interest because they possess uniform properties and behavior, which can be critical for certain applications.The neural network we developed automatically separates the produced samples into three categories: nondiffracting/amorphous, single-phase, and multi-phase.
We demonstrate the fitness of our approach on two distinct experimental series: spinels (Fd-3m) and doped copper oxides (C2/c).Using our unified data generation approach and a cif-file of the desired structure, models were trained for automated analysis of the XRD scans.The accuracy of the predicted classifications was validated manually through Rietveld refinement and visual examination of XRD patterns.While a quantitative Rietveld refinement analysis necessitates the identification of precise phases to ascertain weight percentages and detect impurities, our method operates at a more general level, bypassing the need to explicitly define impurity phases.Consequently, the speed of materials discovery experiments can be significantly enhanced using our universal approach.This method swiftly filters out unsuitable materials, ensuring that only prospective materials advance to subsequent stages of analysis or are considered for future experimental series.
Moreover, our methodology lessens the burden of manual analysis in expedited screening experiments.Given that the model's output aligns with the significance of additional reflections, experts can cherry-pick samples with high multi-phase probability estimates, which exhibit distinct diffraction peaks, thereby facilitating the phase identification process.Alternatively, manual examination of dopant concentration thresholds can be strategically limited to samples near the predicted multi-phase detection boundary, rather than analyzing the entire spectrum.Therefore, our method not only paves the way for full automation of the analysis process but can also effectively complement human expertise and promotes a synergistic relationship between AI and human experts for more nuanced and efficient investigations.
Experimental Section
Generation of Training Data: To train neural networks for identifying prospective material samples, it is essential to have reference data.As the materials synthesized in our experiments were novel, experimental data for these materials did not exist and, therefore, must be simulated.Crystalline materials are characterized by their structure, as described by the lattice, and the atoms that constitute the crystal.Databases such as the ICSD or the COD store this information and provide it in the form of text files or database entries, [5,6] which are parsed from the database in commercial software for the analysis of the experimental data.One example for such text files is the crystallographic information file (cif ) format, which contains information about the crystals, including lattice parameters, space group, and coordinates for each atom in the unit cell.In the materials discovery experiments described here, the reference material and its structure are known, which serves as a starting point for generating training data.
A variety of software packages and libraries are available for handling crystallographic information, including parsers for cifs. We chose to build on the well-established Python library pymatgen for generating synthetic data. [25] While it is possible to accurately describe the resulting structures of the synthesis (regardless of stability) given exact information about experimental parameters like doping material and size of substituting atoms, we decided to take a more general approach. When foreign atoms or ions substitute positions within the structure, or when they are incorporated, the lattice is influenced by factors like the atom size of elements present in the precursors. These factors can either compress or extend the lattice, thereby impacting its overall structure. Thus, we introduced random variation (up to 1%) to the lattice parameters, while maintaining restrictions defined by the crystal system of the reference structure.
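A hedged sketch of this structure-variation step, assuming pymatgen's Structure API, is shown below. The variation magnitude (up to 1%) follows the text; the helper name and the isotropic-strain simplification are illustrative choices, not the authors' code.

```python
# Hedged sketch of the structure-variation step using pymatgen.
import numpy as np
from pymatgen.core import Structure

def vary_structure(cif_path: str, max_strain: float = 0.01, rng=None) -> Structure:
    """Load a reference structure and apply a small random lattice strain."""
    rng = rng or np.random.default_rng()
    structure = Structure.from_file(cif_path)
    # An isotropic strain keeps the symmetry restrictions of a cubic reference
    # (such as the spinel) intact; anisotropic strain would need
    # crystal-system-aware handling, which is omitted in this sketch.
    strain = rng.uniform(-max_strain, max_strain)
    structure.apply_strain(strain)
    return structure

# Example usage (file name is a placeholder for a locally exported cif):
# varied = vary_structure("MgAl2O4.cif")
```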
In addition to parsing crystallographic information, the pymatgen package also provides tools to simulate X-ray powder diffraction patterns using the XRDCalculator object.This tool is designed to calculate the positions and intensities of diffraction peaks, using the specified structure and wavelength as input.While the variation in lattice dimensions accounts for the shift in peak positions, it is equally essential to represent the changes in relative intensities that arise due to differences in form factors introduced by foreign species.Given the uncertainty in the variance range of form factors, we chose to model peak intensity variations with a separate effect, preferred orientation, which occurs when certain particle orientations are overrepresented, thus altering the XRD pattern's relative intensities.Accordingly, preferred orientation is introduced to the training set to account for these variations in relative intensities.
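Building on the paragraph above, the sketch below uses pymatgen's XRDCalculator to turn a (varied) structure into stick-pattern peak positions and intensities for the Ga-jet wavelength used in this work. The crude random rescaling of relative intensities stands in for preferred-orientation and form-factor effects; it is an illustrative assumption, not the authors' exact model.

```python
# Minimal sketch: structure -> stick pattern with perturbed relative intensities.
import numpy as np
from pymatgen.analysis.diffraction.xrd import XRDCalculator

def simulate_sticks(structure, wavelength=1.2079, rng=None):
    """Return peak positions (2-theta, degrees) and rescaled intensities."""
    rng = rng or np.random.default_rng()
    calc = XRDCalculator(wavelength=wavelength)
    pattern = calc.get_pattern(structure, two_theta_range=(10, 60))
    positions = np.asarray(pattern.x)
    intensities = np.asarray(pattern.y)
    # Mimic preferred orientation / altered scattering factors by perturbing
    # the relative intensities within a bounded range.
    intensities = intensities * rng.uniform(0.5, 1.5, size=intensities.shape)
    return positions, intensities / intensities.max()
```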
Finally, the width and shape of the diffraction peaks in the recorded signal depend on the sizes of the crystals in the powder sample. The relation between crystallite size and full width at half maximum of the peaks is described by the Scherrer equation, [26] so we generated synthetic powder diffraction patterns with varied peak shapes related to grain sizes between 10 and 100 nm. In consideration of the diverse instrumental broadening effects arising from the use of different equipment, our data simulation pipeline used a pseudo-Voigt diffraction peak profile to encapsulate the distinct optical characteristics inherent to each instrument, thereby accounting for the diverse appearances of peaks observed in the acquired signals. Additionally, Gaussian and Poisson noise were added to the simulated patterns to accurately represent the variation of the measured signals. Moreover, the baseline of the measured XRD patterns was simulated using Chebyshev polynomials, as is common practice to replicate XRD data. [12] Accordingly, Figure 5 provides an overview of our simulation approach. The presented data generation pipeline takes a cif-file as input (either from a database or a description of an arbitrary structure) and generates multiple variants by varying the lattice parameters, texture, and crystallite sizes. Additionally, artificial noise and a baseline intensity were added to account for experimental artifacts. By comparison of the simulated and measured patterns, it is shown that the simulation approach depicts the realistic variation that occurs in such fast-screening experiments.
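The rendering step described above could be sketched as follows: pseudo-Voigt peak profiles with a Scherrer-style width, a Chebyshev baseline, and Gaussian plus Poisson noise. The parameter ranges and scaling factors are illustrative assumptions, not the values used in the published pipeline.

```python
# Hedged sketch of rendering a stick pattern into a realistic 1D XRD signal.
import numpy as np
from numpy.polynomial import chebyshev

def pseudo_voigt(x, center, fwhm, eta=0.5):
    """Mixture of a Lorentzian and a Gaussian with the same FWHM."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    gauss = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    lorentz = 1.0 / (1.0 + ((x - center) / (fwhm / 2)) ** 2)
    return eta * lorentz + (1 - eta) * gauss

def render_pattern(positions, intensities, wavelength=1.2079, rng=None):
    rng = rng or np.random.default_rng()
    two_theta = np.arange(10, 60, 0.015)          # 3334 points, as in the text
    grain_nm = rng.uniform(10, 100)                # crystallite size range
    signal = np.zeros_like(two_theta)
    for pos, inten in zip(positions, intensities):
        # Scherrer equation: beta = K * lambda / (L * cos(theta)), in radians.
        theta = np.radians(pos / 2)
        fwhm = np.degrees(0.9 * wavelength / (grain_nm * 10 * np.cos(theta)))
        signal += inten * pseudo_voigt(two_theta, pos, fwhm, eta=rng.uniform(0, 1))
    # Smooth Chebyshev baseline plus Gaussian and Poisson noise.
    baseline = chebyshev.chebval(np.linspace(-1, 1, two_theta.size),
                                 rng.uniform(0, 0.2, size=4))
    noisy = signal + np.abs(baseline) + rng.normal(0, 0.01, two_theta.size)
    noisy = rng.poisson(np.clip(noisy, 0, None) * 500) / 500.0
    return two_theta, noisy
```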
In addition to generating single-phase XRD pattern training data, the automatic discrimination system must be capable of handling multi-phase and amorphous XRD patterns.Generating amorphous XRD patterns is straightforward; instead of adding a baseline intensity to the simulated pattern, the background function alone can act as an example of an amorphous structure.Multi-phase structures, on the other hand, are based on single-phase patterns that are complemented with additional, random peaks.We added a few diffraction peaks at random positions to generate multi-phase examples while ensuring that those positions do not overlap completely with the peak positions of the single-phase pattern.As a result, our simulated dataset represents the three classes to identify: amorphous phases, single-phase patterns, and multi-phase patterns.
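The composition of the two remaining classes could look roughly like the sketch below: multi-phase examples are single-phase signals with a few extra peaks away from the main-phase positions, and amorphous examples are background-only patterns. The peak-count and placement rules are illustrative assumptions.

```python
# Hedged sketch of composing multi-phase and amorphous training examples.
import numpy as np
from numpy.polynomial import chebyshev

def add_impurity_peaks(two_theta, signal, positions, rng=None):
    """Turn a single-phase signal into a multi-phase example."""
    rng = rng or np.random.default_rng()
    augmented = signal.copy()
    for _ in range(rng.integers(1, 4)):
        # Pick a random extra peak position away from the main-phase peaks.
        while True:
            extra = rng.uniform(two_theta.min() + 1, two_theta.max() - 1)
            if np.min(np.abs(np.asarray(positions) - extra)) > 0.5:
                break
        fwhm = rng.uniform(0.1, 0.5)
        sigma = fwhm / 2.355
        augmented += rng.uniform(0.05, 0.5) * np.exp(
            -0.5 * ((two_theta - extra) / sigma) ** 2)
    return augmented

def make_amorphous(two_theta, rng=None):
    """Background-only pattern representing nondiffracting samples."""
    rng = rng or np.random.default_rng()
    baseline = chebyshev.chebval(np.linspace(-1, 1, two_theta.size),
                                 rng.uniform(0, 0.2, size=4))
    return np.abs(baseline) + rng.normal(0, 0.01, two_theta.size)
```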
Network Architecture: Established approaches apply convolutional neural networks (CNNs) to powder XRD classification. [11-14] These CNNs use convolutional layers to apply a kernel that slides linearly across the input, identifying position-independent features, such as diffraction peaks, that stand out from background noise. By doing so, the CNNs can suppress baseline intensity and noise while matching varying shapes of diffraction peaks. Pooling operations often follow the convolutional layers to reduce input dimensionality, thereby minimizing peak position variations.
However, we conducted a recent study that revealed the lack of sensitivity with respect to identifying minor peaks in patterns for established network structures. [20]The detection of multi-phase peaks is of great importance for this work, necessitating modifications to the network architecture to improve performance in minor peak identification.Although single-phase structure peaks regularly occur in the training data, multiphase peaks are inserted at random positions, making them outliers from the expected results.To identify minor outliers, a common strategy involves scaling data points according to mean and standard deviation, resulting in an amplification of irregular peak intensities.Nevertheless, scaling cannot be applied to the raw input, which includes noise, background, and peak position shifts.
Thus, we present a modified network, which is illustrated in Figure 6. This network takes Min-Max scaled XRD patterns as input, so the first layer
of the network matches the dimensionality of the signals. For instance, XRD signals collected from our robotic platform were measured from 10° to 60° 2θ with a step width of 0.015°, resulting in 3334 data points. The input then passes through the convolutional stage, which contains multiple convolutional layers and pooling operations to identify peaks and reduce the dimensionality of the input. The exact configuration of weights in the respective layers depends on the properties of the data. Here, we used a kernel size of 17 in 4 convolutional layers and 32 filters to identify the relevant features, but different parameters could be necessary for other instrument configurations (e.g., larger kernels for patterns with smaller step sizes).
Following the convolutional stage of the network is our custom global-positional feature extractor (GPFE) that combines positional features and unique textures that appear globally in the patterns. The position of the peaks is crucial for identifying crystalline structures; therefore, it is essential to maintain the integrity of positional information to differentiate between various patterns. Additionally, the detection of exceptional features, such as the almost-overlapping peaks of the ZnxCu1-xO structures, necessitates additional paths in the network that are not related to the positions. Hence, our GPFE extracts both types of features simultaneously and combines the diverse information for the following layers. Utilizing GlobalMaxPooling layers, we identified the largest activation both across the channel dimension (thereby preserving positional information) and within each channel (thus pinpointing unique features). The resultant information was then condensed, serving as a compressed input for the subsequent layers. A more detailed explanation of the GPFE's functionality can be found in the Supporting Information.
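One way such an architecture could be assembled is sketched below in Keras. This is a hedged illustration only: the paper does not disclose its deep-learning framework, and the dense-head widths, normalization placement, and pooling sizes beyond the stated kernel size, layer count, and filter count are our assumptions.

```python
# Hedged Keras sketch of the described architecture: 4 Conv1D layers
# (kernel 17, 32 filters) with max pooling, a GPFE that max-pools both across
# channels (positional features) and across positions (global features), and
# two sigmoid heads.
import tensorflow as tf
from tensorflow.keras import layers

def build_model(n_points: int = 3334) -> tf.keras.Model:
    inputs = layers.Input(shape=(n_points, 1))
    x = inputs
    for _ in range(4):                                   # convolutional stage
        x = layers.Conv1D(32, kernel_size=17, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=2)(x)

    # Global-positional feature extractor (GPFE):
    positional = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1))(x)  # max over channels
    global_feat = layers.GlobalMaxPooling1D()(x)                        # max over positions
    features = layers.Concatenate()([positional, global_feat])

    # Output 1: nondiffracting vs. crystalline.
    crystalline = layers.Dense(1, activation="sigmoid", name="crystalline")(
        layers.Dense(64, activation="relu")(features))

    # Output 2: single- vs. multi-phase, after normalization + ReLU to
    # amplify anomalous activations.
    anomalies = layers.ReLU()(layers.LayerNormalization()(features))
    multiphase = layers.Dense(1, activation="sigmoid", name="multiphase")(
        layers.Dense(64, activation="relu")(anomalies))

    model = tf.keras.Model(inputs, [crystalline, multiphase])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```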
The network splits the three-class categorization task into two separate outputs.First, the model distinguished between nondiffracting (amorphous, empty sample holders) and crystalline structures depending on the extracted activations of the positional and global features.For signals without relevant reflections, the feature maps should be mostly zero, as noise and the baseline intensity were filtered from the inputs.Therefore, the model identifies patterns that match the defined reference structure based on the positions of the extracted diffraction peaks.
Multi-phase samples, on the other hand, exhibit XRD signals with additional reflections, which are an anomaly from the typical phase-pure pattern.Thus, the normalization layer that scales the respective features according to the mean and standard deviation is placed right before the second output that classifies single-phase and multi-phase samples.By amplifying the exceptional activations, the network facilitates the detection of anomalies, hence, crucially improving the accuracies of analyzing XRD patterns in fast-screening experiments.Additionally, a rectified linear unit (ReLU) is used to clip intensities below the learned means that are not relevant for the identification of additional reflections, which stabilizes the training process.
While our neural network architecture presents a significant deviation from established structures, the adaptations were necessary for robust classification of those subtle multi-phase peaks. In Table S3, Supporting Information, we compared our developed network to a similar network presented by Szymanski et al. for the identification of XRD patterns. [13,19] Our model performed better on both presented datasets. While the performance of their model nearly matches ours on the spinels dataset, the network by Szymanski and colleagues fails to successfully extract the almost overlapping peaks that appear in the doped copper oxides dataset, resulting in considerably worse performance metrics. Even for the spinel dataset, the reference model mostly detected the patterns with clear multi-phase peaks, while failing to correctly classify those signals with only minor impurity reflections. This highlights the application of our GPFE and the subsequent normalization, which allows for the detection of unique textures and subtle peaks.
Details: Spinel-type oxide synthesis: Water-based nitrate salt precursor solutions (0.…) were prepared. For both studies, the nitrate salt solutions were mixed in different combinations in a standard 360 μL 96-well plate using an automated pipetting robot (Opentrons OT-2). To initiate coprecipitation, the respective precursor solutions were mixed with ammonia (Sigma Aldrich, 28-30%) at a ratio of 1:2 on a carrier substrate (two-sided polished (100) Si wafers) suitable for calcination and further XRD analysis. The spinel-type oxides were calcined at 700 °C for 5 h in air, and the Cu doping study materials were calcined at 500, 600, and 700 °C in air for 3 h. For both studies, a constant heating rate of 300 °C h−1 and natural cooling to room temperature inside the oven were used.
Automated X-ray Diffraction (XRD): Automated XRD measurements were performed at a STOE Stadi P diffractometer, equipped with a Ga-jet X-ray source (Ga-Kβ radiation, 1.2079 Å) and a custom-built XY stage for automated sample measurement. XRD patterns were obtained in transmission mode. Patterns were collected between 10° and 60° 2θ with a step size of 0.015°. The powder samples on the (100) Si wafer were fixed with Kapton film, and the Si wafer was held by an in-house designed holder.
Figure 1 .
Figure 1. Concept of the presented method for training and application of a neural network to automatically categorize XRD patterns in materials discovery experiments. The neural network is trained with simulated signals that depict the range of variation in the experimental data. Subsequently, the model classifies measured XRD patterns into the amorphous, single-phase, and multi-phase categories. The spinel structure was obtained from the materials project. [27]
Figure 2 .
Figure 2. Experimental XRD patterns from the MgAl2O4 spinel-type structure experimental series. a) The diffraction patterns show examples of amorphous/nondiffracting (gray), single-phase (blue), and multi-phase (red) structures. b) Predicted confidences (blue: low; red: high) by our model grouping the exemplary XRD patterns into single- and multi-phase. The patterns with muted colors are close to the multi-phase detection threshold (0.5). The predicted value corresponds with the prominence of those extra peaks.
Figure 3 .
Figure 3. Experimental XRD patterns of pure and doped copper oxides (CuO). a) The dopants Zn, Ni, and Mn have been tested with concentrations up to 25% and cause additional peaks in the diffraction signal due to forming multi-phase compounds, as highlighted by the black triangles. b) Depending on the synthesis temperature, the XRD signals show either single-phase or multi-phase patterns for identical compositions. Here, the XRD patterns for the CuO samples with 5% Ni doping show signs of additional diffraction peaks at about 29° and 33° 2θ (as indicated by the black arrows and dashed lines), only if the material was synthesized at temperatures higher than 500 °C.
Figure 4 .
Figure 4. The predictions of our neural network separating the XRD patterns of dopants for CuO into single-phase samples (blue) and multi-phase compounds (red) are here visualized for all temperatures. The intensity of the color corresponds to the confidence of the neural network. For 500 °C, about 7% dopant can be incorporated, and the thresholds shift for higher temperatures.
Figure 5 .
Figure 5. Pattern simulation approach. Based on a provided cif (here: MgAl2O4, ICSD Code 13 859), artificial patterns are generated that depict the typical variation that occurs in experiments that test the doping of base materials. To account for variations and experimental artifacts, the lattice, the simulated peak heights, and the crystallite sizes are varied, and a baseline intensity and noise are added.
Figure 6 .
Figure 6. Network architecture for the model presented in this work. The normalized XRD patterns are fed into the network, and multiple pairs of convolutional layers and max-pooling operations condense the input to fewer data points. Subsequently, the relevant features are compressed in the global-positional feature extractor (GPFE), which simultaneously identifies peaks with respect to the positions in the signal and globally relevant features, such as exceptional peak shapes. By concatenating the two separate types of features, the network conditions the information for the following classification. The first output (1) classifies nondiffracting versus crystalline structures based on the activations of the extracted features. To amplify the anomalies of multi-phase samples, a normalization and subsequent ReLU layer are used for rescaling of the intensities before the second output (2), which distinguishes between single-phase and multi-phase signals.
|
2023-12-29T16:22:32.611Z
|
2023-12-24T00:00:00.000
|
{
"year": 2024,
"sha1": "a5e0fe6cdae6c69cb94ca1316cd5f4688b9d60cd",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/aisy.202300501",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b53a2985b645bf88d5393ad6b08afb9086bf9a72",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
}
|
208145035
|
pes2o/s2orc
|
v3-fos-license
|
Exploring the utility of robots in exposure studies
Obtaining valid, reliable quantitative exposure data can be a significant challenge for industrial hygienists, exposure scientists, and other health science professionals. In this proof-of-concept study, a robotic platform was programmed to perform a simple task as a plausible alternative to human subjects in exposure studies for generating exposure data. The use of robots offers several advantages over the use of humans. Research can be completed more efficiently and there is no need to recruit, screen, or train volunteers. In addition, robots can perform tasks repeatedly without getting tired allowing for collection of an unlimited number of measurements using different chemicals to assess exposure impacts from formulation changes and new product development. The use of robots also eliminates concerns with intentional human exposures while removing health research ethics review requirements which are time consuming. In this study, a humanoid robot was programmed to paint drywall, while volatile organic compounds were measured in air for comparison to model estimates. The measured air concentrations generally agreed with more advanced exposure model estimates. These findings suggest that robots have potential as a methodology for generating exposure measurements relevant to human activities, but without using human subjects.
Introduction
Robot use is increasing in the workplace and throughout society in general [1,2]. The complexity of tasks (any scenario the robot executes) performed by robots continues to grow, and as more affordable, functional robots become available, there are increasingly more routine applications that robots can perform at home, in the workplace, for emergency response activities, research, therapy, and mundane day-to-day tasks [2-4]. Within occupational settings, like the food industry, robots have been designed and programmed to perform specific tasks to promote the health and safety of workers [1,5]. Nevertheless, robots have had limited use in exposure science research, such as estimating airborne particle exposures [6,7] and simulating infant behavior to obtain a more accurate indoor infant exposure profile given the challenges of sample collection in this age group [8-13]. With the rapid and accelerating advancement in robotic technology, a proof-of-concept study was designed to determine the utility of robots to perform activities that lead to exposure and to facilitate the collection of air samples to estimate inhalation exposure.
The primary focus of the study was to determine if robots can be used as a new methodology for generating exposure data as a plausible alternative to employing human subjects in exposure studies. If robots are effective in this domain, as indicated by this study, they can help close exposure knowledge gaps where data are:
• currently lacking (e.g., new products or formulations, infrequent tasks),
• needed as specific inputs for exposure models, or
• difficult to acquire (e.g., consumer product safety).
This robotic-based e-methodology would be very useful where staged simulations may have ethical considerations related to intentional exposure or when repetitive actions are desired to evaluate the distribution of exposures. Human-based studies are subject to institutional review board (IRB) protocols and approval requirements, as well as considerations of data privacy laws related to the confidentiality of human subjects. Developing a way to address these issues can save time, facilitate changes in protocols while studies are ongoing, and reduce the administrative overhead needed for tracking subjects and outcomes, as well as for justifying risk-benefit ratios for intentional exposures. Robots do not have to adhere to the aforementioned considerations and requirements. They have the ability to complement human-subject exposure studies by providing exposure ranges and uncertainty estimates for specified activities, thereby advancing the field of exposure science.
This study explored the feasibility of using a robot in place of humans to generate exposure data for exposure estimation. In addition, the measured exposure profile, specifically of air concentrations during robotic activity, was evaluated by comparison to model estimates.
Exposure chamber and monitor setup
A dual-arm humanoid robotic platform was programmed to perform a simple task of painting drywall. Painting was chosen because of the readily available consumer products (i.e., paint) and manageable necessary programming to generate painting motions using a roller. One type of low-VOC water-based paint (WBP) was used as received throughout the study. All painting trials (daily robot activity including setup and postpainting sampling) performed in this study were executed inside Rutgers University's Controlled Environmental Facility (CEF) to ensure uniform conditions (i.e., temperature and humidity) throughout the study. During each of the six painting trials (A-F), short term (trial duration) and long term (8 h from the start of painting) air samples were collected using a combination of personal volatile organic compound (VOC) real-time monitors, thermal desorption (TD) tubes, and consumer indoor air quality (IAQ) monitors 1 . Other direct reading instruments with data logging, for measuring total hydrocarbons, relative humidity, and temperature were collocated for 24 h of continuous measurement during and after the trial completion.
Prior to the start of the first painting trial, continuous monitoring was conducted using the total hydrocarbon (THC) analyzer for 24 h to evaluate the release of VOCs from the drywall. Prior to and upon completion of each painting trial, the pre/post mass of paint and the length and width of the painted drywall area was measured and recorded. The temperature and relative humidity were kept at 25 ± 0.46°C and 40 ± 6%, respectively. Four new ultralight gypsum drywall panels were placed inside the chamber before each painting trial as described in Fig. 1. Each drywall panel was trimmed to measure 4′ × 6′ × ½″.
Robotic platform
The experiments were performed using a Yaskawa (Kitakyushu, Fukuoka Prefecture, Japan) Motoman SDA10F bimanual robotic manipulator with seven joints in each arm, and a single revolute joint at the robot's torso. The torsional degree of freedom allows the robot to rotate its arms and provides increased reachability. In order to apply paint on the wall at a realistic rate with minimal requirements in terms of computational reasoning and sensory information, a novel robotic end-effector was designed with a spring-loaded mechanism connected to a standard paint-roller (Fig. 2, right). The physically adaptive nature of the spring-loaded mechanism allows the necessary pressure to be applied on the drywall in a sensor-less manner without damaging the robot or the drywall. The entire experimental setup was first generated in simulation (Fig. 2, left). The robot and drywall were modeled in the simulation environment and motions were computed using a motion planning software framework [4]. The objective of this process was to generate motions that maximized the area of drywall covered by paint, using human-like sweeping motions over contiguous vertical planes in front, and to the side of each arm. Motions were also generated to dip the paint-rollers into the paint container. All generated motions are disallowed from colliding with any object in the simulated scene to ensure that the robot does not risk damaging the setup. Once the motions were generated in simulation, they were transferred and subsequently executed by the real-robot in the CEF. Repeated identical painting motions were performed for the duration of the trials, involving both arms, one at a time, painting the drywall panels in front of it and to its side.
Painting trial parameters
During each painting trial, the robot performed seven painting cycles (a set of generated painting motions) over the same drywall area for all trials except the first, where it performed ten cycles, for an average of 53 ± 6.2 min at a high air exchange rate (AER) of 11-12 h−1 or at a low AER of 6-8.5 h−1. AER was determined by measurements of the decrease of CO2 added to the CEF at an initial concentration of 1900 ppm (Table 1). The AER are consistent with the AER found in buildings when the windows are open to provide appropriate ventilation during painting [14]. At the beginning of each painting cycle, the roller was dipped into the paint for 5 s and then held for 1 min at a 45° angle over the paint container to let the extra paint drip off before it touched the drywall. Next, a sequence of four generated motions was executed: first, the left arm painting in front; next, the left arm painting to the left; then, the right arm painting in front; and lastly, the right arm painting to the right (Fig. 1). A dry run-through, consisting of the robot performing one cycle of the generated motions, was conducted before each trial to adjust the placement of boards to assure the robot touched the surface of the drywall with the roller.
Exposure models
In order to evaluate the robotic methodology, paint exposures were estimated using a range of consumer and worker modeling tools, including models covering low tier (most conservative) to high tier (most realistic) exposure predictions. Lower tier models are easy to use and are based on readily available data; however, they are usually built on conservative assumptions and typically overestimate exposures. Higher tier models can be more difficult to use, but they can reduce or quantify uncertainty. Among the tools applied were … [19] and the Advanced Reach Tool (ART) [20]. These models were used to estimate average VOC air concentrations for the duration of the trial for three separate trials.
Fig. 2: The robot's motion is first programmed and tested in simulation for safety and effectiveness before being deployed in the real setup. Accurate-enough reproductions of the geometries of the robot, the paint roller, the paint bucket, and the walls, as well as corresponding software, are needed to produce motions that (a) avoid undesirable collisions and (b) result in contact between the roller geometry and the target wall. Right: the figure shows the 3D digital model of the specially designed compliant paint-roller, which was attached to the robotic arm, right next to the real one. The real system was constructed from 3D-printed components based on the digital model. The key feature of the paint-roller is that it has an internal spring-loaded mechanism (highlighted in the digital model) which provides compliance and robustness to positioning errors. This makes it possible to use the robot for the intended purpose without the need for expensive sensors and a complex sensor-monitoring process.
Models were run to approximate conditions for three specific trials, selected to span the range of measured results over the trials. The trials selected included the days with maximum and minimum amount of paint used under the higher ventilation conditions (Trials A and B), and the maximum amount of paint used under the lower ventilation conditions (Trial D). Not all parameters in the low tier models were adjustable. Higher tier models were set to match the experimental conditions as closely as possible. Detailed information on the tools, the assumptions used, model inputs, and model outputs are included in the supplemental material (Supplemental Material 1).
Total hydrocarbon analyzer
A Thermo Scientific™ (Waltham, MA, USA) 51i THC analyzer was used to measure the concentration of total nonmethane hydrocarbons present throughout the duration of the study. The THC analyzer records data points averaged over 60 s intervals and responds to a wide range of volatile compounds, including VOCs off-gassing from paint. The THC analyzer uses a flame ionization detector to detect organic compounds via combustion with a hydrogen flame [21] and was calibrated with propane. The THC analyzer inlet was located in the back of the chamber at the top of the ceiling, behind the robot (Fig. 1). The observed background readings for the THC analyzer were generally <0.1 ppm and did not exceed 0.2 ppm.
VOC monitors
Ion Science (Fowlmere, Royston, UK) CUB monitors [22] were used as personal VOC monitors to detect the concentration of VOCs in the general "breathing zone" area of the robot during short term (trial duration) and long term (8 h) durations. A measurement reading was taken approximately every 20 s. These VOC monitors use a photoionization detector (PID) to detect compounds with a part-per-billion (ppb) limit of detection. The PID response varies with compound molecular structure and functional group and is most sensitive to aromatics and olefins [22]. It is important to note this limitation since the VOCs of interest have a range of structures which can account for some of the differences in the air concentrations measured by multiple instruments shown in the results section [22]. Three VOC monitors were placed on the front of the robot in-between the arms on the center of the robot's body that rotated with movement (Fig. 1). The personal VOC monitors were calibrated before and after each trial with isobutylene.
Thermal desorption tubes
The thermal desorption tubes (TD tubes) are active air samplers used to collect VOCs during the painting trials for subsequent Gas Chromatography/Mass Spectrometry (GC/MS) analysis. Two multibed (graphitized carbon black and carbon molecular sieve adsorbents), Carbotrap 300 equivalent TD tubes (Supelco, Bellefonte, PA, USA) were attached to an air pump and set on a ~3 foot high table in the back of the CEF behind the robot (Fig. 1). The air pumps were set to an average flow rate of 60.5 cc/min for all trials except the first, which had a flow rate of 30 cc/min. The total collection time was equivalent to the duration of the painting trial (short term), in order to collect an average of 3.42 ± 0.73 L of air, sufficient to detect organic compounds via GC/MS at expected concentrations of low µg/m³. Quantitative results were not used for the statistical analysis because the calibration mix of the EPA TO-17 method did not include the compounds coming from the paint. The qualitative results were used for comparison with a GC/MS headspace analysis (Supplemental Material 2) of the paint to confirm which compounds were attributable to the paint. These compounds were then used in the model estimates.
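As a quick plausibility check of the sampled volumes (simple arithmetic, not taken from the study's code), volume equals pump flow rate times sampling time:

```python
# Sampled air volume = flow rate x collection time; a trial lasting ~53 min at
# 60.5 cc/min collects roughly 3.2 L, consistent with the reported 3.42 +/- 0.73 L.
def sampled_volume_liters(flow_cc_per_min: float, duration_min: float) -> float:
    return flow_cc_per_min * duration_min / 1000.0  # cc -> L

print(round(sampled_volume_liters(60.5, 53), 2))  # ~3.21 L (typical trial)
```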
Data analysis
Temperature and relative humidity data were continually collected. The TD tubes were analyzed using EPA method TO-17 for determining toxic organic compounds, on a GC/MS, to verify the identity of compounds emitted from low-VOC WBP. The chromatogram peaks that increased during painting were identified through a match of the mass spectrum to a NIST library (Wiley 1.01, 9th edition). Background measurements were collected pre- and post-painting for each trial. Background subtraction was performed for both the THC analyzer and VOC monitors. To accurately determine the background for the THC analyzer and VOC monitor data, an average of each dataset collected 10 min prior to the start time of each individual painting trial was used as the background for that trial. This calculation accounted for factors that could potentially change the background concentration throughout the study. The VOC monitor and THC analyzer data were normalized to the amount of paint used in each trial.
As stated previously, the direct-read, real-time monitors used in this study do not have equivalent responses to all categories of VOCs and were calibrated with different gases. In order to compare the datasets, all the VOC monitor and THC analyzer data were converted to ppm per carbon (ppm-C) as methane equivalents. For the VOC monitors, isobutylene (4-carbon) was used for calibration so the data were multiplied by four to get the ppm-C as methane equivalent values. For the THC analyzer, the data was multiplied by three to convert to ppm-C as methane equivalents because propane (3-carbon) was used for calibration. While this conversion allows for a better comparison of the data between monitors, it does not account for the differential monitor responses to diverse classes of compounds. This limitation is explored further in the discussion section.
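A minimal sketch of these data-reduction steps follows, using plain Python lists and made-up example values; the background window and the calibration-gas carbon counts follow the description above, while the variable names are illustrative.

```python
# Background subtraction (mean of the 10 min before the trial), conversion to
# ppm-C as methane equivalents (x4 for isobutylene-calibrated PIDs, x3 for the
# propane-calibrated THC analyzer), and normalization to the paint used.
from statistics import mean
from typing import List

def background_subtract(trial: List[float], pre_trial_10min: List[float]) -> List[float]:
    bg = mean(pre_trial_10min)
    return [max(r - bg, 0.0) for r in trial]

def to_ppm_c(readings_ppm: List[float], carbons_in_cal_gas: int) -> List[float]:
    return [r * carbons_in_cal_gas for r in readings_ppm]

def normalize_to_paint(readings: List[float], paint_used_kg: float) -> List[float]:
    return [r / paint_used_kg for r in readings]

# Example: an isobutylene-calibrated VOC monitor (x4) in a trial using 1.5 kg of paint
raw = [0.12, 0.55, 0.60, 0.58, 0.57]      # ppm, during the trial
pre = [0.10, 0.11, 0.09]                  # ppm, 10 min before the trial
ppm_c = to_ppm_c(background_subtract(raw, pre), carbons_in_cal_gas=4)
print(normalize_to_paint(ppm_c, paint_used_kg=1.5))
```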
Since few personal exposure measurements during painting with low-VOC WBP were identified in the literature, our experimental results were compared with model estimates to help determine if exposure data collected during the robotic painting are representative of the exposure during human painting. To compare measured data to model estimates, VOC air concentrations were converted from ppm-C to mg/m³ using the average molecular weight (MW) of compounds detected in the headspace analysis of the WBP and the ratio of the total atomic weight of carbons in the compound, W_c, to MW. For example, the molecular weight (MW_total) of butyl propionate is 130 g/mol; there are seven carbons, so the total weight of carbon in the compound (W_c) is 84 g/mol. Calculating the ratio of MW_total to W_c (130/84 g/mol) results in 1.55. The average ratio of the four compounds identified as coming from the paint detected in the headspace analysis (Table 2) is ~1.5, resulting in the final conversion formulas shown below.

Standard conversion formula for ppm to mg/m³ based on 25 °C and 1 atm:

Y = X × MW_total / 24.45    (1)

where the units of Y and X are the concentration in mg/m³ and ppm, respectively.

Conversion formula for average VOC air concentration using the headspace analysis result:

Y = X × AW_c × (MW_total / W_c) / 24.45    (2)

where AW_c is the atomic mass of carbon, 12 g/mol, MW_total is the compound molecular weight, W_c is the total weight of carbon in the compound, X is the VOC air concentration in ppm-C, and Y is the same as in Eq. (1). Table 2 outlines the parameters used in the calculation (described in Eqs. (1)-(2)) to determine the vapor pressure (VP) of each compound found in WBP. The VP and MW of the compounds were used in the models.
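The two conversions can be written compactly in code. The sketch below implements Eqs. (1)-(2) as reconstructed above, using the butyl propionate values given in the text; function names are illustrative.

```python
# Converting ppm (Eq. 1) and ppm-C (Eq. 2) to mg/m3 at 25 C and 1 atm.
MOLAR_VOLUME_25C = 24.45  # L/mol, ideal gas at 25 C and 1 atm
AW_C = 12.0               # atomic mass of carbon, g/mol

def ppm_to_mg_m3(x_ppm: float, mw_total: float) -> float:
    """Eq. (1): Y = X * MW_total / 24.45."""
    return x_ppm * mw_total / MOLAR_VOLUME_25C

def ppm_c_to_mg_m3(x_ppm_c: float, mw_total: float, w_c: float) -> float:
    """Eq. (2): Y = X * AW_c * (MW_total / W_c) / 24.45."""
    return x_ppm_c * AW_C * (mw_total / w_c) / MOLAR_VOLUME_25C

# Butyl propionate: MW_total = 130 g/mol, 7 carbons -> W_c = 84 g/mol (ratio ~1.55)
print(round(ppm_c_to_mg_m3(1.0, mw_total=130.0, w_c=84.0), 2))  # ~0.76 mg/m3 per ppm-C
```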
Exposure evaluation
VOCs encompass a large number of carbon-containing compounds spanning multiple chemical classes and can be released as vapors from a variety of products and materials, including paints [23,24]. Some VOCs have been linked to acute and/or chronic health effects, such as headaches, respiratory tract and eye irritation, and liver and kidney damage, and some are known carcinogens [23,24]. Low-VOC paint can contain up to 50 g/L of VOCs [25]; the VOCs commonly found in WBP are ethylene glycol, texanol, and propylene glycols [26][27][28][29].
The results for VOC air concentrations during the painting trial for the THC analyzer and the personal VOC monitors are summarized in Table 3. The background measurements for the THC and VOC monitors, temperature, and relative humidity indicate that the controlled environment had a background VOC air concentration of <0.2 ppm-C [30][31][32] (Supplemental Material 2: Fig. S1). Temperature and humidity were kept constant (Supplementary Material 2: Figs. S1, S4-6, S9-11). The THC analyzer measured the highest air concentrations, even though it was farther away from the paint and drywall compared with the VOC monitors, most likely due to its higher response to saturated and oxygenated hydrocarbons than the PID deployed in the VOC monitors (Fig. 3, Table 3). The THC analyzer and VOC monitors measured the same trends in VOC air concentrations during (Fig. 3) and after (Supplemental Material 2: Figs. S2, 3, S7, 8) the painting trial. The VOC air concentration rose rapidly at the beginning of the trial, leveling off within 15-20 min after the painting started and remaining nearly constant for the duration of the painting. Once the painting was completed and the unused paint was removed, the air concentration declined in an exponential fashion returning to background levels within 2 or 3 h. The periodicity in the air concentration during the painting may be due to the robot obtaining more paint to conduct a new paint stroke. The same trend was observed for all seven painting trials (Fig. 3, Supplemental Material 2: S2, 3, S7, 8).
A strong correlation, with R 2 values of 0.89, 0.86, 0.85 for the right, middle, and left VOC monitors, respectively, was observed, showing an increasing trend for the average VOC air concentration per amount of paint used (Fig. 4). The R 2 value for the THC analyzer was 0.56, indicating less of a correlation between the average VOC air concentrations and amount of paint (Fig. 4) compared with the VOC monitors. This is most likely due to a combination of the laminar flow of air within the CEF, its differential response to compound classes, and proximity to the painted drywall. However, at high AER, the VOC air concentrations for the THC analyzer show an increasing trend with increasing amount of paint. The same increase is observed for the VOC monitors at high AER. The average VOC air concentrations at low AER for both the THC analyzer and VOC monitors are clustered together and do not visually show a distinct pattern. This phenomenon could be due to more mixing at the higher AER.
Five major volatile components were identified based on comparison of a library mass spectral match in the head space above the paint: acetone, methyl methacrylate, butyl acetate, n-butyl ether, and butyl propionate. While all five were present in the TD tubes, acetone was also present in background samples in high concentrations. The four other compounds were found in the TD tube samples collected during the painting trials but were absent in the background and blank TD tube samples. Thus, the average VP and MW of these four compounds were used in the models to estimate VOC air concentrations for three separate trials as described in the next section.
Model estimates
Since limited exposure data using low-VOC WBP have been reported in the literature and those reported are dependent on the painting conditions, a series of mathematical models were run, using a tiered approach, for comparison to experimental measurements. Lower tier models were run first, as this was a proof of concept study looking at the use of robots for exposure data generation. Higher tier models were then run using more of the experimental conditions to provide more refined estimates.
The model estimates and the measured results from the THC analyzer and VOC monitors are summarized in Table 4 and Fig. 5. In general, the models used in this study provided expected results based on their respective tiers. Lower tier models (TRA, EGRET, CS instantaneous) overestimated VOC concentrations, mid-tier tools had less overestimation (CS constant rate, CS evaporation Langmuir isotherm, WMB), and higher tier model estimates were within the range of measured concentrations by~1 order of magnitude (CS evaporation Thibodeaux E-FAST wall paint specific algorithm, ART).
Evaluating the utility of robotics technology for exposure studies
The average measurements of the resulting painted areas for all trials are shown in Table 5. The complete set of measurements for each trial are shown in Supplementary Material 2: Table S1. As expected, the area painted by the robot was highly repetitive with standard deviations within 1-2 cm for each drywall panel. The amount of paint used for trials B-F was 1.3-1.8 kg but trial A was performed for a longer period of time so it used more paint (2.4 kg), which resulted in excess dripping. This issue was corrected in later trials by reducing the duration of painting and therefore the amount of paint applied. The variation observed for trials B-F is likely due to small differences in pressure applied to the roller while painting, slight changes in how much paint dripped off the roller prior to touching the drywall during the trial, and the two different air exchange rates used. The differences in the width on different drywall panels can be attributed to the robot's reachability region, given the placement of the drywall panels in front of it and to its side. The software aimed to maximize the region on each drywall panel that was painted by the robot, given the CEF dimensions and the robot's reachability. The setup allowed for more space (width) in the front than on the sides of the robot. The robot was able to rotate to paint the side panel but not able to bend down at the waist. The robot painted an average width within 9 cm and the height within 1 cm for 28 drywall panels. Even though the robot did not have pressure sensors on the arms, approximately the same amount of pressure was applied by the arm during painting. However, the pressure of the roller could have changed due to minor differences in the distance between the robot and drywall. This is not unlike human painters who likely apply different pressure to the roller to achieve complete coverage of paint. More sophisticated robotic setups could utilize sensors to minimize pressure changes. This would reduce the variability in the amount of paint applied and allow for simulation of a wider range of human behaviors while performing specific tasks to more accurately evaluate how they might alter exposure.

[Fig. 3 caption: Real-time THC and VOC monitor data normalized to the amount of paint used in each trial, shown for the painting trial (short term) for trials a-c with the high ACH (top four graphs) and trials d-f with the low ACH (bottom four graphs).]
In the current setup, care was taken to place the drywall panels on stable frames at the target distance from the robot. The motions, originally generated in simulation, were replayed in the real-world experimental setup. No sensing was used to adapt the motions to any minor variations of the drywall placement between experiments. The physically adaptive nature of the paint roller accounted for the lack of sensing and ensured application of paint on all the surfaces across all the experimental runs.
During the study, some advantages and disadvantages of using robots for exposure estimation were identified. The chief advantages are (i) safety, (ii) repeatability of the activities and resulting exposures, and (iii) a robot's ability to be programmed to test a large set of predetermined variations in behavior, in the context of repetitive tasks. The use of robots instead of human subjects eliminates intentional exposures to emissions during the activities of staged exposure studies, thus eliminating the need for full IRB involvement and for compliance with privacy laws related to human subject considerations. Modern robot manipulators, like the one used in this study, provide accurate repeatable motions within a few millimeters. In particular, a robot allows for a precise evaluation of how various conditions can alter exposures. In addition, this study aims to encourage the development and deployment of specially designed robotic platforms for exposure studies. This will allow the precise control of the robotic movement and sensing of its environment so that researchers can easily define different types of behaviors and measure any variability that might be of interest in a controlled manner (e.g., variations between different human subjects such as pressure and amount of paint). Further, a robotic platform has no limit on the number of samples and repetitions performed in a uniform fashion, a limitation of using human studies brought on by general fatigue of the subjects.
The types of actions that can be included and measured in the experiment are limited by the capabilities of the robot and the setup. For instance, a mobile robot could paint large stretches of walls compared to a stationary model that can reach only its immediate surroundings. In this study, the portion of painted drywall was less than originally anticipated due to the lack of bending capabilities in the robot's waist. The kinds of motions that a robot can perform are unique to the specific hardware platform. Careful choice of the robot's design and setup must be made to ensure a comparable and faithful emulation of a human doing the same task.
Even though a robot provides significant advantages, there is an overhead cost associated with setting up the experiment. At this point in time, specialized robotics operators and programmers are required to design the motions. When deploying robots to truly unstructured scenes for more complex studies, sophisticated sensing and planning abilities are required because the task can become arbitrarily complex for the robot. Robust sensing and vision systems might be required to make the robots adapt to changes in the environment. Moreover, robotic platforms and logistical support might not be readily available to all researchers currently. The above issues are quickly changing through the introduction of increasingly affordable and capable robots to be made available for research and industrial use. Furthermore, robot control interfaces are being increasingly simplified to allow for the use of robots by nonexperts.
Conclusion
There is a convergence of developments in robotics, which allows the adaptation of this study's methodology to a wide variety of tasks. The types of exposure scenarios that can be evaluated using this approach will increase as the field of robotics advances. For example, exposure data generated by robots can be used to assess tasks in which the outcome depends on how the worker or consumer performs the task (e.g., welding, painting, cleaning, and spraying). This approach can also be used to evaluate the impacts to exposures from the use of different substances in the same setup (e.g., new formulations or products), or impacts from other exposure determinants (e.g., controls, secondary sources). Examples of key technologies may include: activity recognition of human actions from visual data and mapping of the corresponding operations to human motion; learning from demonstration, where robots are trained by humans in order to replicate specific tasks; and tactile sensors placed on human hands, which can provide highfidelity data for delicate operations performed by people that can assist the mapping of the motion to robots.
The current study provides a working motivation for using robotic platforms in exposure studies, especially to fill exposure data gaps mentioned above. The reproducibility and applicability of the robot were demonstrated through a simple task: painting drywall. The potential exposure estimates generated from the robotic platform are consistent with higher-tiered modeled estimates for painting. With advances in the capabilities of robotic platforms, and their ubiquitous and affordable availability, it is expected that robots will provide a safe and reliable platform for exposure estimation in the future.
Author contributions JS conceived and designed the study; EFC and KM performed experiments and collected the monitoring data; RS programmed and operated the robot; HK designed and created the robotic end effector; KEB supervised the design and programming of the robot; RTZ and CPW provided overall technical advice and guidance; EFC completed the experimental data analysis; RTZ and JS completed the exposure modeling and comparison; EFC and JS jointly prepared the manuscript with technical support and input from RTZ, CPW, and KEB.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons. org/licenses/by/4.0/.
Behavioral Monitoring of Sexual Offenders Against Children in Virtual Risk Situations: A Feasibility Study
The decision about unsupervised privileges for sexual offenders against children (SOC) is one of the most difficult decisions for practitioners in forensic high-security hospitals. Given the possible consequences of the decision for society, valid and reliable risk management of SOCs is essential. Some risk management approaches provide frameworks for the construction of relevant future risk situations. For ethical reasons, it is not possible to evaluate the validity of constructed risk situations in reality. The aim of the study was to test whether behavioral monitoring of SOCs in high-immersive virtual risk situations provides additional information for risk management. Six SOCs and seven non-offender controls (NOC) walked through three virtual risk situations, each confronting the participant with a virtual child character. The participant had to choose between predefined answers representing approach or avoidance behavior. The frequency of chosen answers was analyzed with regard to the participants' knowledge about coping skills and the coping skills focused on during therapy. SOCs' and NOCs' behavior differed in only one risk scenario. Furthermore, in 89% of all cases SOCs showed behavior that did not correspond to their own beliefs about adequate behavior in comparable risk situations. In 62% of all cases, SOCs did not behave in correspondence with the coping skills they stated that therapists had focused on during therapy. In 50% of all cases, SOCs behaved in correspondence with the coping skills therapists stated that they had focused on during therapy. Therapists predicted the behavior of SOCs in virtual risk situations incorrectly in 25% of all cases. Thus, virtual risk scenarios provide the possibility for practitioners to monitor the behavior of SOCs and to test their decisions on unsupervised privileges without endangering the community. This may provide additional information on therapy progress. Further studies are necessary to evaluate the predictive and ecological validity of behavioral monitoring in virtual risk situations for real-life situations.
Measures
The participants were asked to rate each virtual character on a 6-point Likert scale with regard to realism (0 = not realistic at all, 5 = very realistic). The participants were further asked to rate each virtual character on a 6-point Likert scale with regard to sexual attractiveness (0 = not sexually attractive, 5 = very sexually attractive). Without the knowledge of the subject, the time from stimulus onset until the end of the sexual attractiveness rating was measured (Viewing Time, VT). The stimulus onset was defined as the first time point at which the virtual character was in the field of view of the subject. The VT is a well-established method to assess (deviant) sexual interests (Schmidt et al., 2017). The so-called viewing time effect (VT effect), which is typically found when viewing time is assessed, shows that subjects take a significantly longer time to look at sexually attractive stimuli than at sexually non-attractive stimuli. It has been shown that these effects can be observed with regard to sexual orientation as well as sexual age preference (Harris et al., 1996; Imhoff et al., 2010, 2012; Schmidt et al., 2017). Fromberger et al. (2015) recently demonstrated that the VT effect can also be replicated in high-immersive virtual environments.
Procedure
When arriving at the lab, a short questionnaire was applied in order to assess basic data (educational level, age, current neurological or psychiatric disorders). The participants were then asked to fill in the Kinsey Scale (Kinsey et al., 1948). Next, the VR equipment and its usage were explained before the participant was asked to fill in the pre-test of the Simulator Sickness Questionnaire (SSQ; Kennedy et al., 1993). Afterwards, the participant was equipped with the HMD, the Wand, and a headset in the experiment room. The headset allowed the participant to talk with the investigator, who monitored the progress of the experiment from a separate room and gave instructions via the headset. To ensure that the cable of the HMD did not trouble the participant, a second investigator was present in the experiment room. After the start of the virtual environment, the participant had the opportunity to become familiar with the virtual environment and its controls. The participant was instructed via the headset that he could move freely within the virtual greenhouse and that he could press the buttons of the console with his virtual hand. He was further told that his task was to rate the virtual characters regarding their sexual attractiveness and their realism on a 6-point Likert scale (0 = not sexually attractive at all / not realistic at all, 5 = very sexually attractive / very realistic). The instruction emphasized that the subjective feeling of the participant was of interest, rather than how he thought others would feel about the character. Furthermore, he was told that he could look at the virtual characters in more detail if he walked near to or around the virtual character. After all instructions, the participant walked through four test trials (two clothed virtual female and two clothed virtual male characters) in order to get comfortable with the task and the controls of the experiment. The test trials followed the same rationale as the main trials afterwards. Overall, five trials with stimuli from each category (virtual women, men, boys, and girls) were presented in a fully randomized order. Each trial started with an instructional text on the virtual screen, which prompted the participant to turn around and look at the virtual character behind him. The participant had to look at least once at the virtual character before he could start rating the character. The virtual character was positioned four meters behind the participant and animated with a neutral idle animation. In order to rate the virtual character, the participant had to go back to the virtual screen and was prompted to touch the number on the console with his virtual hand that corresponded to his subjectively experienced sexual attractiveness. In order to confirm his choice, he additionally had to press a virtual enter button. Without the knowledge of the participant, the time from the start of the trial until the end of the attractiveness rating was assessed in each trial (Viewing Time). After the sexual attractiveness rating, the participant was prompted by the screen to rate the realism of the virtual character in the same way. A randomized inter-stimulus interval of between 10 and 15 seconds was applied. Every fifth trial, the participant had the opportunity to pause the experiment and leave the virtual environment. After finishing the experiment, the participant was asked to fill in the post-test of the SSQ, the IPQ, the SPQ, and the VRSRS.
Data analysis
Before entering the main experiment phase, the results of the initial rating were automatically analyzed at the individual level in order to identify the virtual adult character with the shortest viewing time and lowest attractiveness rating (most unattractive virtual adult character) and the virtual child character with the longest viewing time and the highest attractiveness rating (most attractive virtual child character). The individual most unattractive adult character was presented during the baseline scenario; the individual most attractive child was presented during all risk scenarios. In order to validate the identification algorithm, ANOVAs with the within-group factor Condition (virtual characters presented in the baseline condition vs. virtual characters presented in the scenarios) and the between-group factor Group (NCAs vs. CAs) were performed for all virtual characters chosen for the main experiment with regard to the Viewing Time, the attractiveness rating, and the realism rating. Figure S2 shows the means and SDs for the sexual attractiveness rating as a function of condition and subject group. The ANOVA for the attractiveness rating revealed no significant main effect of Condition (F(1,11) = .53, p = .480, η² = .05), main effect of Group (F(1,11) = .37, p = .556, η² = .03), or Group x Condition interaction (F(1,11) = .53, p = .480, η² = .05). Figure ?? shows the means and SDs for the realism rating as a function of condition and subject group. The ANOVA for the realism rating revealed a significant main effect of Group (F(1,11) = 5.28, p = .042, η² = .32). NOCs (M = 3.57, SD = 1.02) rated all virtual characters used in the baseline and the scenarios as significantly more realistic than SOCs (M = 1.92, SD = 1.62). The main effect of Condition (F(1,11) = 2.46, p = .145, η² = .18) and the Group x Condition interaction (F(1,11) = .18, p = .677, η² = .02) were not significant. Figure ?? shows the means and SDs for the viewing time as a function of condition and subject group. The ANOVA for Viewing Time revealed a significant main effect of Condition (F(1,11) = 6.37, p = .028, η² = .37) and a significant Group x Condition interaction (F(1,11) = 5.79, p = .035, η² = .34). The main effect of Group was not significant (F(1,11) = .04, p = .85, η² < .01). Post-hoc t-tests revealed that SOCs tended to look longer at child characters (M = 31088.82 ms, SD = 14408.53 ms) than at adult characters (M = 17880.66 ms, SD = 10777.90 ms; t(5) = 2.50, p = .055). NOCs showed no difference between the Viewing Time for child characters (M = 23402.16 ms, SD = 12340.17 ms) and adult characters (M = 23090.18 ms, SD = 12155.80 ms; t(6) = 0.15, p = .886).
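The individual-level selection step described above can be illustrated with a short sketch. How the study combined the rating and viewing-time criteria when they disagree is not specified here, so the tie-breaking rule below (rating first, then viewing time) is an assumption, as are the data structures and example values.

```python
# Minimal sketch of selecting the most attractive child character and the
# most unattractive adult character from an individual's initial ratings.
from dataclasses import dataclass
from typing import List

@dataclass
class Rating:
    character_id: str
    is_child: bool
    attractiveness: int      # 0-5 Likert rating
    viewing_time_ms: float   # stimulus onset until end of the rating

def most_attractive_child(ratings: List[Rating]) -> Rating:
    children = [r for r in ratings if r.is_child]
    return max(children, key=lambda r: (r.attractiveness, r.viewing_time_ms))

def most_unattractive_adult(ratings: List[Rating]) -> Rating:
    adults = [r for r in ratings if not r.is_child]
    return min(adults, key=lambda r: (r.attractiveness, r.viewing_time_ms))

trials = [
    Rating("child_03", True, 1, 31000), Rating("child_02", True, 0, 24000),
    Rating("adult_01", False, 1, 18000), Rating("adult_04", False, 0, 15000),
]
print(most_attractive_child(trials).character_id)    # used in the risk scenarios
print(most_unattractive_adult(trials).character_id)  # used in the baseline scenario
```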
Discussion
The main goal of the initial rating was to detect the virtual child character with the highest sexual salience for each individual CA. The developed virtual situation only becomes risky for SOCs if the presented virtual child character has a high sexual salience and can therefore serve as a trigger. The sexual attractiveness rating of the virtual characters showed no significant difference between the subject groups or between adult and child characters. Furthermore, the sexual attractiveness ratings were very low (all ratings were on average below 1 on a 6-point Likert scale between 0 and 5). For healthy homo- and heterosexual subjects, Fromberger et al. (2015) showed that the sexual attractiveness rating of naked virtual adult characters corresponds with the sexual orientation of the subjects. In addition, sexual attractiveness was significantly higher in high-immersive presentation modes (HMD) than in a 2D presentation mode (desktop monitor). Possibly, the higher sexual attractiveness ratings in the Fromberger et al. (2015) study in comparison to the current study are a consequence of the more prominent secondary sexual characteristics of the naked virtual characters compared with the clothed characters in the current study. Furthermore, the characters in the current study were animated with neutral poses and facial expressions. Dennis et al. (2014) demonstrated that heterosexual subjects responded with significant sexual arousal (measured via penile plethysmography) only when virtual adult female characters were depicted as sexually open (e.g., joyful or seductive), rather than sexually closed or neutral. Thus, neutral animations of the virtual characters may have reduced the sexual salience of the stimuli. The detection of the most sexually salient child character was based on the VT paradigm, which assumes that the virtual character with the longest VT also has the highest sexual salience (Schmidt et al., 2017).
Recently, it was shown that the viewing time effect can be replicated for heterosexual and homosexual healthy subjects in high-immersive environments and with virtual characters (Fromberger et al., 2015). In the current study, the viewing time paradigm was used for the first time in a high-immersive environment with forensic inpatients and in order to detect the most attractive virtual character at an individual level. To the best of our knowledge, the VT paradigm has until now only been used to detect the sexual interest of SOCs based on the average VT, obtained by building the mean of the VT for several pictures of the same category (e.g., children vs. adults) (Schmidt et al., 2017). Thus, the validity of the approach used in the study to detect the most sexually salient child character can be criticized. Nevertheless, the results demonstrated that, in the group of SOCs, the VT for the child characters was significantly longer than the VT for the adult characters used in the risk scenarios. NOCs rated the adult and child virtual characters used in the risk scenarios as more realistic than SOCs did. However, in both groups there was no significant difference in subjective realism between adult and child characters. Thus, one can assume that a different realism level of child and adult characters was not the reason for the significant viewing time effect within the SOC group. Therefore, it is reasonable to assume that the virtual child character used in the risk situations had the highest individual sexual salience for each SOC.
The Transmission Patterns of the Endosymbiont Wolbachia within the Hawaiian Drosophilidae Adaptive Radiation
The evolution of endosymbionts and their hosts can lead to highly dynamic interactions with varying fitness effects for both the endosymbiont and host species. Wolbachia, a ubiquitous endosymbiont of arthropods and nematodes, can have both beneficial and detrimental effects on host fitness. We documented the occurrence and patterns of transmission of Wolbachia within the Hawaiian Drosophilidae and examined the potential contributions of Wolbachia to the rapid diversification of their hosts. Screens for Wolbachia infections across a minimum of 140 species of Hawaiian Drosophila and Scaptomyza revealed species-level infections of 20.0%, and across all 399 samples, a general infection rate of 10.3%. Among the 44 Wolbachia strains we identified using a modified Wolbachia multi-locus strain typing scheme, 30 (68.18%) belonged to supergroup B, five (11.36%) belonged to supergroup A, and nine (20.45%) had alleles with conflicting supergroup assignments. Co-phylogenetic reconciliation analysis indicated that Wolbachia strain diversity within their endemic Hawaiian Drosophilidae hosts can be explained by vertical (e.g., co-speciation) and horizontal (e.g., host switch) modes of transmission. Results from stochastic character trait mapping suggest that horizontal transmission is associated with the preferred oviposition substrate of the host, but not the host’s plant family or island of occurrence. For Hawaiian Drosophilid species of conservation concern, with 13 species listed as endangered and 1 listed as threatened, knowledge of Wolbachia strain types, infection status, and potential for superinfection could assist with conservation breeding programs designed to bolster population sizes, especially when wild populations are supplemented with laboratory-reared, translocated individuals. Future research aimed at improving the understanding of the mechanisms of Wolbachia transmission in nature, their impact on the host, and their role in host species formation may shed light on the influence of Wolbachia as an evolutionary driver, especially in Hawaiian ecosystems.
Introduction
The Hawaiian Drosophilidae, long recognized as a striking example of adaptive radiation, are of considerable interest as model systems for understanding the underlying mechanisms of insular speciation [1]. Comprising up to 1000 species in two major genera (Scaptomyza and Drosophila), which are believed to have diverged within the Hawaiian archipelago approximately 23.4 million years ago, this taxonomic grouping represents approximately 10% of the insect fauna endemic to the Hawaiian Islands [2,3] and one third of the world's Drosophila species [4]. Numerous mechanisms have been proposed to explain the explosive lineage diversification of Hawaiian Drosophilidae, including isolation, niche availability [5], sexual selection [6], and host plant and substrate shifts [1,3]; however, data are lacking on the potential role of symbiont pressures, despite recognition that symbionts, especially those associated with reproduction, could be a major contributor to insect species formation [7]. In particular, a growing body of empirical evidence suggests that the reproductive endosymbiont Wolbachia may play a role in the speciation process of some arthropods [8][9][10], including Drosophila [11].
Wolbachia is a widespread and common α-proteobacterium (order Rickettsiales) that infects arthropods and nematodes [12]. The relationship between Wolbachia and its host can span from parasitism to facultative or obligate mutualism to ultimate mutualism, and in some cases, beneficial and detrimental effects can be simultaneously conferred [13]. Wolbachia strains possess a remarkable ability to significantly alter the reproductive functions of their host in ways that serve to enhance the rate of Wolbachia's transmission, be it through cytoplasmic incompatibility, male-killing, feminization of genetic males, increased fecundity of the host, or parthenogenesis [13,14]. Thus, through multiple mechanisms, Wolbachia possess the means to give rise to reproductive isolation barriers, which could contribute to the divergence of populations into new species [15]. Consistent with that notion, cytoplasmic incompatibility is known to have a direct effect on gene flow and can serve as a mechanism of reproductive isolation between populations [11,16,17].
The primary mode of Wolbachia infection is vertical transmission to the host's progeny through the cytoplasm of the egg [14]. Horizontal transmission is believed to occur as well, especially in arthropods, as evidenced by the widespread distribution of Wolbachia and its potential to infect new host species [8,18], phylogenetic incongruence between hosts and endosymbionts [12,19], and evidence for species sweeps [20,21]. In contrast, within filarial nematode hosts, strict vertical inheritance of Wolbachia endosymbionts is evidenced by high levels of co-phylogenetic concordance for certain clades [22,23]. At present, the community-level interactions required for Wolbachia strains to be successfully transmitted horizontally and become stable within a new host species remain largely unknown, but in some cases, they are believed to involve transfer through plant tissues or parasitoids of insects [24,25].
Molecular methods have been invaluable for the study of Wolbachia because of an inability to culture it outside of its host or host cells, owing to its obligate intracellular status [14]. Based on molecular diversity analysis, the genus Wolbachia is subdivided into at least 17 possible supergroups [26,27], with terrestrial arthropods most commonly infected by Wolbachia belonging to supergroups A and B [28]. Estimates for the incidence of Wolbachia in terrestrial arthropod species worldwide range between 40-76% [13,29,30], whereas within-species estimates for Wolbachia incidence indicate that infection rates tend to be either exceedingly high (>90%) or considerably low (<10%), depending on the surveyed insect system [13,30]. In native Hawaiian insects, the overall incidence of Wolbachia infection at the species level was estimated to be ~14%, and for native Dipteran species (e.g., Drosophilidae and Calliphoridae), 12% [2].
Although many mechanisms have been proposed to explain the rapid and extensive diversification of the Hawaiian Drosophilidae, the potential contribution of Wolbachia as a driver of speciation and patterns of Wolbachia transmission have yet to be examined. Using a single gene marker, Wolbachia surface protein (wsp), Bennett et al. [2] found the incidence of infection within Hawaiian Drosophilidae, including the genera Drosophila and Scaptomyza, was ~18%. Wolbachia's presence in the Hawaiian Islands, and the knowledge of the potential impacts that it can have on host reproductive strategies, give rise to the question: could Wolbachia have played a role in the diversification of the native Hawaiian insects? To begin to address this larger question, in this study we conducted genetic analyses of Wolbachia and its Hawaiian Drosophilidae hosts to examine: (1) the Wolbachia strain diversity and phylogenetic affiliations; (2) the co-phylogenetic diversification patterns of Wolbachia and hosts; and (3) Wolbachia host-switching mechanisms through stochastic character trait mapping to construct host ancestral traits.
Biological Specimens Screened for Wolbachia Endosymbionts
The Hawaiian Drosophila, many of which are primarily single island endemics that have high levels of host plant specificity, can be subdivided into 4 main groups: modified mouthparts, haleakalae, picture wing, and the AMC clade (comprising the groups antopocerus, modified tarsus, and ciliated tarsus) [31]. The genus Scaptomyza is divided into 21 subgenera, 10 of which contain native Hawaiian species [31]. A total of 399 Hawaiian Drosophilidae specimens representing a minimum of 136 species of Drosophila and 14 species of Scaptomyza collected from Kaua'i (n = 50), Lāna'i (n = 1), Maui (n = 68), Moloka'i (n = 17), O'ahu (n = 29), and the Island of Hawai'i (n = 234) were screened for Wolbachia infections (Supplementary S1). A number of undescribed morphospecies in the Scaptomyza, modified tarsus, and modified mouthparts groups of Drosophila are included. These Drosophilidae specimens were components of biological collections described in Magnacca and Price [3]. Additional screens for Wolbachia infections were conducted from DNA extracts of three species of insects that have invaded the Hawaiian archipelago: D. suzukii (n = 68 specimens from Kaua'i, O'ahu, and the Island of Hawai'i [32], Supplementary S1), Aedes albopictus (n = 1, collected on the Island of Hawai'i), and Culex quinquefasciatus (collected on the Island of Hawai'i, sample 6771, [33]). The Wolbachia DNA was sourced from whole-body soaks or digests of individual body parts (e.g., genitalia or abdomen) and DNA extractions were performed using Qiagen DNeasy Blood and Tissue Extraction Kits. For notation purposes, Wolbachia strains having published lineage assignments are denoted by their host following established practices, e.g., a Wolbachia endosymbiont of Drosophila recens is written as wRec, or in the case of this study, sample number followed by host species name.
Using seven Wolbachia amplification targets (see below) and Sanger sequencing, individual specimens were classified as testing positive for a Wolbachia infection if any single amplification target was visible by gel electrophoresis and the sequenced amplicon matched to a Wolbachia sequence contained in the National Center of Biotechnology Information (NCBI) GenBank nucleotide sequence repository (approximate search dates: February 2018 to March 2019).
Amplicon Sequencing and Primer Redesign
We aimed to characterize Wolbachia allele diversity and determine the phylogeny of Wolbachia by sequencing seven gene targets, five of which are components of the widely accepted universal Multi-Locus Sequence Typing (MLST) system that assigns Wolbachia to a strain type using five housekeeping genes: coxA [cytochrome C oxidase subunit A], fbpA [fructose-bisphosphate aldolase], hcpA [hypothetical conserved protein], ftsZ [cell division protein], and gatB [aspartyl/glutamyl-tRNA aminotransferase subunit B] [34]. The sixth and seventh gene targets were wsp [Wolbachia surface protein] [34], which is duplicated in Wolbachia endosymbionts of Drosophila, and its paralog, denoted wspB [Wolbachia surface protein (duplicate)] [35]. The gene targets were amplified from DNA extracts using polymerase chain reaction (PCR), visualized using electrophoresis with 1.5% agarose gels, and amplification products purified in preparation for Sanger sequencing on an Applied Biosystems 3500 Genetic Analyzer (see Supplementary Information for details). The chromatograms were viewed and edited using Sequencher version 5.2.4 (Gene Codes Corporation). Based on chromatogram visualization, samples that showed evidence of a double Wolbachia infection were sequenced from clones generated with a TOPO-TA Cloning Kit using One Shot Chemically Competent TOP 10 Escherichia coli cells.
Preliminary amplification results showed high rates of amplification failures; therefore, to increase primer specificity, we redesigned primers for supergroups A and B in insect hosts. Primer re-design efforts utilized a combination of sequence data obtained from: (a) the n = 31 sequences generated in this study using original primer pairs, (b) wDrosophila gene sequences (n = 195) downloaded from the National Center for Biotechnology Information (NCBI), and (c) nucleotide sequences extracted in silico from five wDrosophila reference genomes belonging to supergroups A and B (Table 1). Those included: Wolbachia endosymbionts of D. recens (wRec), D. melanogaster (wMel), D. simulans (wNo), D. suzukii (wSuzi), and D. ananassae (wAna). Target regions within genomes were identified by BLASTn (v 2.2.30) using the 231 available sequences as queries, with the per-gene number of query sequences ranging from three (fbpA) to 141 (wsp) (accessions available from Supplementary S2). The BLASTn hits were filtered using a threshold e-value < 0.001, and gene target regions were excised in silico along with 200 base pair regions flanking the 5′ and 3′ reading frames. Next, multiple sequence alignment was conducted for each gene in MEGA7 [36] and candidate primers were designed across sites internal or external to the MLST gene targets. Finally, all pairwise combinations of redesigned and original primers were tested for improved amplification and sequencing efficiency (see Supplementary Information). These efforts increased data for hcpA, fbpA, and ftsZ by 54 sequences obtained from 93 additional amplifications, yet the re-designed primers failed to improve amplifications for genes coxA and gatB. The overall poor amplification success for wsp and wspB (consistent with findings by Wu et al. [35]) led to the exclusion of those two genes from phylogenetic and strain typing analyses, while poor amplification of gatB led to the exclusion of that gene from phylogenetic analysis. The primer design strategy, PCR conditions for the original and modified primers, primer sequences, and those re-designed for this study are available from Supplementary Information.
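A hypothetical sketch of the in silico excision step (filter BLASTn tabular hits by e-value < 0.001, then cut the hit region plus 200 bp flanks from the genome) is shown below. The BLAST outfmt 6 column layout is assumed, and this is an illustration rather than the authors' actual pipeline.

```python
# Filter BLASTn outfmt 6 hits by e-value and excise hit regions plus 200 bp flanks.
from typing import Iterator, Tuple

E_VALUE_CUTOFF = 1e-3
FLANK = 200

def parse_hits(outfmt6_lines) -> Iterator[Tuple[str, int, int, float]]:
    """Yield (subject id, start, end, e-value) from standard outfmt 6 rows."""
    for line in outfmt6_lines:
        f = line.rstrip("\n").split("\t")
        sseqid, sstart, send, evalue = f[1], int(f[8]), int(f[9]), float(f[10])
        yield sseqid, min(sstart, send), max(sstart, send), evalue

def excise_with_flanks(genome: dict, hits) -> dict:
    """Return {region label: subsequence} for hits passing the e-value cutoff."""
    regions = {}
    for contig, start, end, evalue in hits:
        if evalue >= E_VALUE_CUTOFF or contig not in genome:
            continue
        seq = genome[contig]
        lo = max(0, start - 1 - FLANK)   # convert 1-based start to 0-based, add flank
        hi = min(len(seq), end + FLANK)
        regions[f"{contig}:{lo + 1}-{hi}"] = seq[lo:hi]
    return regions
```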
Wolbachia Sequence Datasets
The final Wolbachia dataset included MLST genes amplified and sequenced from DNA extracts of native Drosophila spp., Scaptomyza spp., and invasive species D. suzukii, C. quinquefasciatus, and A. albopictus hosts as described above, plus published Wolbachia nucleotide sequences downloaded from the MLST database or extracted from genomes (Table 1). The published sequences were used as references for assigning Wolbachia alleles to supergroups and as outgroups in phylogenetic reconstructions, and represent Wolbachia endosymbionts of Drosophila hosts and mosquitoes sampled from around the world. After aligning sequences in MEGA7 using the ClustalW algorithm [36], the sequences were manually adjusted to ensure that all codons were in the correct reading frame and trimmed so that each sequence began and ended with a codon. The Wolbachia sequence data generated for this study are available from Supplementary Datafile S1. Previous studies have shown that phylogenetic clustering of individual MLST genes is sufficient for the classification of Wolbachia alleles into supergroups A and B [45]. To evaluate if sequence data from re-designed MLST primers performed similarly well, we reconstructed single-gene phylogenies using our sequence data and eight published reference sequences. These included the following: supergroup A, wMel, wSuzi (strain valsugana), endosymbionts of D. simulans (wHa) and A. albopictus (wAlbA); supergroup B, wNo, endosymbionts of C. quinquefasciatus wPip (sample 6771, [33]); and supergroup D and F outgroup sequences from Wolbachia endosymbionts of B. malayi (nematode, wBm) and C. lectularius (bed bug, wCle) (Table 1). Phylogenetic patterns for individual gene trees were inferred using a Bayesian methodology implemented in MrBayes (v3.2.5) [46] and the Maximum-Likelihood methodology implemented in RAxML (v1.5b2) [47].
Wolbachia Strain Typing
The MLST strain typing protocol established by Baldo et al. [34] defines an 'allele' as a nucleotide sequence that differs by at least 1 nucleotide base, and it classifies a 'strain' as unique if any individual possesses at least one different allele across any of the five loci, with data at all five loci required for strain assignment. We were unable to apply established MLST conventions (http://pubmlst.org/wolbachia; accessed on 1 July 2017; [34]) for allele and strain categorizations for two reasons: the universal MLST primer sets failed to produce amplifications at all five loci across the majority of our samples, and the amplicon products produced with redesigned primers did not span the full length of MLST gene sequences. Therefore, we categorized each allele by supergroup affiliation based on single-gene trees and assigned each allele an arbitrary numeric code, which permitted comparison of allele variability and supergroup designations across species.
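The ad hoc allele coding used here can be illustrated as follows; the data structures and sample names are invented purely for illustration.

```python
# Within each gene, every distinct nucleotide sequence (>= 1 bp difference) gets
# an arbitrary numeric allele code, stored with its supergroup call from the
# single-gene tree so allele variability can be compared across species.
from collections import defaultdict

def assign_allele_codes(sequences_by_gene):
    """sequences_by_gene: {gene: {sample_id: (sequence, supergroup)}}"""
    codes = defaultdict(dict)       # gene -> sample_id -> (allele_code, supergroup)
    for gene, samples in sequences_by_gene.items():
        seen = {}                   # distinct sequence -> allele code
        for sample_id, (seq, supergroup) in samples.items():
            code = seen.setdefault(seq, len(seen) + 1)
            codes[gene][sample_id] = (code, supergroup)
    return codes

data = {"coxA": {"sample_001": ("ATGGCTAAC", "B"),
                 "sample_002": ("ATGGCAAAC", "B"),
                 "sample_003": ("ATGGCTAAC", "A/B conflict")}}
print(assign_allele_codes(data)["coxA"])
```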
Phylogenetic Reconstructions
Evolutionary relationships and genetic similarity of Wolbachia strains can be inferred through phylogenetic analyses, and phylogenetic concordance between host and symbiont phylogenies can indicate co-speciation or horizontal transfer events between the two groups. We performed phylogenetic reconstruction for Wolbachia strains and their hosts, including Hawaiian Drosophilidae, invasive Drosophila flies and mosquitoes, and outgroup taxa, using Bayesian methodologies implemented in MrBayes (v3.2.5) [46] and the Maximum-Likelihood methodology implemented in RAxML (v1.5b2) [47]. Model selection and procedures are available from Supplementary Information, and the final set of trees was visualized and edited in FigTree v1.4.3 [48].
Wolbachia Phylogenetic Signals
The five Wolbachia MLST gene targets were not successfully amplified in all samples. Therefore, to assess the impact of missing sequences on phylogenetic reconstructions, we examined concordance of Wolbachia supergroup designation based on single and concatenated gene trees. Phylogenetic reconstructions for 5-, 4-, and 3-gene MLST data sets revealed that strain assignments and tree topologies were consistent in nearly all cases (see Supplementary Information); therefore, we applied the 3-gene MLST dataset for co-phylogenetic reconciliation analysis and stochastic character trait mapping.
Host Sequence Data Set
Phylogenetic reconstruction for Hawaiian Drosophila and Scaptomyza was inferred using a sequence data set previously shown to produce a well-resolved Hawaiian Drosophilidae phylogeny [3]. However, we used only four of the five genes published in that study (EF1g [elongation factor 1-γ], Gpdh [glycerol-3-phosphate dehydrogenase], Pgi [phosphoglucose isomerase], Yp2 [yolk protein 2]). The gene Fz4 (frizzled 4) was excluded because of high levels of missing data in the original published dataset, which had negligible effects on the tree topology (compared to [3]). Only Hawaiian Drosophilids having confirmed Wolbachia infections with three or more sequences were utilized for phylogenetic reconstructions, along with host sequences obtained by a BLASTn search of genome contents for D. suzukii, D. melanogaster, D. simulans, and two mosquito species, A. albopictus and C. quinquefasciatus (accessions available from Table S1). Searches for genes in mosquitoes recovered genes EF1g, Gpdh, and Pgi but not Yp2 (or Fz4). The concatenated host sequence data set totaled 1812 bp across the 4 genes (EF1g [507 bp], Gpdh [363 bp], Pgi [306 bp], Yp2 [636 bp]).
Co-phylogenetic Assessment of Host Species and Wolbachia Strains
To evaluate biological events that might influence associations between host and symbiont phylogenies, we conducted co-phylogenetic reconstruction analyses for Wolbachia and the Hawaiian Drosophilidae, as well as Wolbachia and the 2 mosquito host species collected on the Island of Hawai'i. By considering five possible biological events (co-speciation, duplication, duplication and host switch, loss, and failure to diverge) and assigning each a cost, JANE [49] used a heuristic approach to evaluate and find minimal cost solutions that best explain associations between host and endosymbiont phylogenies [49]. Two models were considered by setting the co-speciation cost parameter to 0 or 1, while keeping all other parameters fixed as follows: loss, failure to diverge, and duplication were each set to a cost of 1, and the parameter duplication and host switch was set to a cost of 2 [49,50]. The genetic algorithm parameters were set to a population size of 23 and the number of generations set to 45, as suggested by Conow et al. [49]. Additional statistical parameters included selecting the random tip mapping procedure with 1000 replicates. Data inputs included host and endosymbiont trees based on Bayesian inference using the codon position data set for the host species and the 3-gene, gene + codon position data set (coxA, hcpA, and ftsZ) for Wolbachia (see Supplementary Information for justification). Additionally, a co-phylogenetic tanglegram was produced using the cophylo function in the phytools v0.6-44 package in R [51].
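JANE itself is a separate Java tool, so it is not reproduced here; the sketch below only illustrates how a candidate reconciliation would be scored under the two cost schemes described above, with hypothetical event counts.

```python
# Event-cost bookkeeping behind the two JANE models: co-speciation costed at 0
# or 1, all other events fixed (duplication 1, loss 1, failure to diverge 1,
# duplication-and-host-switch 2). Event counts below are made up for illustration.
EVENT_COSTS = {
    "cospeciation_cost_0": {"cospeciation": 0, "duplication": 1,
                            "duplication_and_host_switch": 2, "loss": 1,
                            "failure_to_diverge": 1},
    "cospeciation_cost_1": {"cospeciation": 1, "duplication": 1,
                            "duplication_and_host_switch": 2, "loss": 1,
                            "failure_to_diverge": 1},
}

def total_cost(event_counts: dict, scheme: str) -> int:
    costs = EVENT_COSTS[scheme]
    return sum(costs[event] * n for event, n in event_counts.items())

solution = {"cospeciation": 6, "duplication_and_host_switch": 4, "loss": 3,
            "duplication": 1, "failure_to_diverge": 0}
for scheme in EVENT_COSTS:
    print(scheme, total_cost(solution, scheme))
```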
Stochastic Character Mapping
Potential host-switching mechanisms were evaluated using stochastic character trait mapping [52], which characterizes associations between Wolbachia phylogenies and host species characteristics. When co-speciation can be explained by a particular host trait, evolutionarily conserved characters of the hosts are reflected in the phylogenetic reconstruction of their endosymbionts. Data inputs included three host species traits (island of collection, host plant families, and preferred ovipositional substrate [3]), with analyses conducted using the Wolbachia 3-gene and gene + codon position data set (coxA, hcpA, and ftsZ) (see Supplementary Information for justification). The contemporary host character traits are depicted on branch tips as a pie chart, with a priori known character traits indicated by 1.0 probability (i.e., 100%) and unknown character traits depicted as equal probability across all possible categories (e.g., 0 ≤ x ≤ 1), with the sum of all character state probabilities equaling 1. The internal nodes (also a pie chart) depict the posterior probability of each host character trait being the ancestral state, which reflects the strength of the association between that host trait and the endosymbiont phylogeny. This analysis was performed using the phytools v0.6-44 package in R [51]. A total of 225 stochastic character maps were constructed using a model of even rates, as it was indicated to be the best model based on the computed Akaike Information Criterion (AIC) values using the phytools fitMK function (Table S3).
Incidence of Wolbachia Infection
Among the 150 species of Hawaiian Drosophilidae screened (including undescribed morphospecies), Wolbachia infections were confirmed for 30 species (20.0% species infection rate), and across the entire data set, infections were confirmed for 41 of 399 specimens (10.3% overall specimen infection rate) (Table S2). At a genus level, infection frequencies were higher in Scaptomyza (seven of 14 species screened, 50.0%) than Drosophila (23 of 136 species screened, 16.9%). An additional 24 Hawaiian Drosophilidae specimens belonging to 17 species (including five undescribed) showed evidence of infection by presence of PCR bands, but infection by Wolbachia could not be confirmed owing to the amplicons failing Sanger sequencing. Had those samples been included in the Wolbachia infection tally (65/399), the overall infection rate would increase to 16.3%. Some insights into the variability of infection status by species (and sequencing success) can be gleaned from species having data from multiple samples. For example, among 13 species with five or more samples screened (excluding the taxa resembling D. basimacula, a complex of undescribed species), the proportion of within-species infections ranged from 0% to 29% (Table 2). A caveat to these findings is that within-species infection rates are known to vary widely (i.e., 10-90%), and a sample size larger than what was available in our specimen collection is required for a robust assessment of infection rates. Screens of the invasive D. suzukii indicated that 8 of 68 (11.8%) individuals possessed a Wolbachia infection, and that 20 additional individuals may have been infected based on PCR amplification alone. A record of PCR amplicons and sequencing is provided in Table S2.
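The confirmed and PCR-adjusted infection rates quoted above reduce to simple proportions; the counts in the snippet below are taken directly from the text, and the helper simply reproduces the reported percentages.

```python
# Reproducing the infection-rate arithmetic reported above.
def pct(numerator, denominator):
    return round(100 * numerator / denominator, 1)

species_rate = pct(30, 150)        # 20.0% of screened species
specimen_rate = pct(41, 399)       # 10.3% of screened specimens
adjusted_rate = pct(41 + 24, 399)  # 16.3% if PCR-band-only samples counted as infected
suzukii_rate = pct(8, 68)          # 11.8% of invasive D. suzukii

print(species_rate, specimen_rate, adjusted_rate, suzukii_rate)
```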
Wolbachia Strain Typing and Supergroup Designations
A complete MLST profile (5 genes: coxA, fbpA, gatB, hcpA, and ftsZ) was obtained for Wolbachia endosymbionts of only 9 individual Hawaiian Drosophilidae, all of which belonged to supergroup B, plus wBm and wCle outgroup taxa belonging to supergroups D and F. The gatB gene failed PCR amplification across the majority of individual Drosophilidae and was not recovered from endosymbiont genomes belonging to hosts D. suzukii, A. albopictus, and C. quinquefasciatus, leaving only genes coxA, fbpA, hcpA, and ftsZ available for analytical inferences across the majority of Wolbachia datasets. Individual-gene phylogenetic reconstructions of Wolbachia based on coxA, fbpA, hcpA, and ftsZ gene sequences (n = 46, 33, 44, and 28 sequences, respectively) showed strong support for the clustering of alleles by supergroup, although supergroup sister status and placement relative to the supergroup D and F outgroups was inconsistent across trees, and placement of some individuals within supergroup clusters varied slightly (Figures S1 and S2, Bayesian and Maximum-Likelihood trees).
Phylogenetic reconstructions of Wolbachia based on the concatenated data set comprising the coxA, hcpA, and ftsZ genes, and the 25 individuals with data available at all three genes (including outgroups), showed clear separation between supergroups A and B (Figure S3), consistent with the four-gene dataset (Figure S4). However, the three-gene dataset showed supergroup B placed interior to supergroup A (instead of sister), possibly driven by inclusion of the additional set of Wolbachia sequences (247wD.engyochracea, 266wD.Hawaiiensis, and the invasive wAlb collected on the Island of Hawai'i) that had conflicting supergroup assignments and were positioned intermediately between supergroups A and B. Given that the three-gene data set recovered a reasonable degree of phylogenetic structure, and allowed use of the maximum available data, we selected that dataset, using the Bayesian method and partition scheme 'gene and codon position', for co-phylogenetic reconciliation analyses and stochastic character trait mapping. The analysis method (Bayesian versus Maximum-Likelihood; Figures S3 and S4) had little effect on tree topologies, and no significant statistical differences were detected between their top likelihood scores (see Supplementary Information for model selection justification).
Strain Typing
A total of 41 Hawaiian Drosophilidae were confirmed as having Wolbachia infections, with four individuals (w16 D. large spots, w208 D. apodasta, w215 D. nr. perissopoda #1, w250 D. engyochracea) doubly infected (Table 3, Table S2). Among the 44 Wolbachia typed with MLST markers, a minimum of 27 unique strains were present based on Wolbachia allelic diversity analysis. This minimum number of strains is conservative because only nine Wolbachia (representing seven unique strains) could be sequenced across all five gene targets (Table 3). Patterns of infection varied by species; for example, one individual of D. engyochracea was doubly infected, one was singly infected, and one showed a PCR amplification, but the PCR product failed to sequence. The majority (30/44, 68%) of Wolbachia alleles belonged to supergroup B across all loci (Table 3), based on individual gene trees, while only five (5/44, 11%) belonged to supergroup A, including two from within the doubly infected D. engyochracea. A modest proportion (9/44, 20%) of Wolbachia strains were characterized as having supergroup A and B alleles that conflicted across individual gene trees, including two Drosophila spp. (of four) with double infections. The hcpA allele 11 was responsible for seven of the nine observed A/B allelic conflicts, and one allele (allele 3) did not clearly assign to supergroup A or B in the single-gene phylogeny (Figures S1 and S2). Additional patterns of interest were that the hcpA allele 14 was shared by the Wolbachia endosymbionts of native S. undulata and invasive D. suzukii hosts, and that allele 13 was detected in Wolbachia of two distantly related invasive host flies sampled in Hawai'i: D. suzukii and D. simulans. For C. quinquefasciatus host specimens collected on the Island of Hawai'i, South Africa, and Sri Lanka, only a single strain of Wolbachia was detected. Two alleles, at two genes (coxA, allele 13; hcpA, allele 11), were detected in Culex and also in >10 Hawaiian Drosophilidae, but in no case were those two alleles observed in the identical combination in flies as was observed in mosquitoes. Conversely, wAlb, isolated from the A. albopictus specimen collected on the Island of Hawai'i (sequenced for this study), had no alleles in common with the other wAlb sample [34] or even with any Hawaiian Drosophilidae. A limitation of our study is that we were unable to match allele names to those contained in the online MLST database curated by Baldo and colleagues (http://pubmlst.org/wolbachia/, [34]) because we had to use redesigned primers to successfully sequence the genes in Hawaiian Drosophila. Therefore, the gene sequences in our dataset are of different lengths compared to the MLST database, and we could not determine whether the alleles sequenced in this study are "novel" to Hawai'i or to what parts of the world they are most similar.

Table 3 .
A list of Hawaiian Drosophilidae, invasive mosquito, and outgroup host species screened for Wolbachia infections using PCR amplification and verified by Sanger sequencing. The five gene targets were amplified using a modified version of the multi-locus strain typing (MLST) approach for strain assignment to supergroup (see text for details). For each gene, alleles were assigned to a supergroup based on single-gene phylogenetic reconstructions, and unique sequences were assigned an arbitrary allele number. In some cases, supergroup assignments were discordant across alleles, and alleles that could not be assigned to a supergroup are denoted as (?). Wolbachia endosymbionts of double-infected hosts are denoted by bold font. MLST genes that failed amplification and/or sequencing are denoted as '---'.

Patterns of Wolbachia strain diversity corresponded to host relatedness in some, but not all, cases. Two closely related, sympatric host species, D. hawaiiensis and D. engyochracea, were possibly infected with the same, or if not the same, a similar Wolbachia strain (at 3 identical alleles, Table 3). Furthermore, within the same population, an additional D. engyochracea specimen was doubly infected with one Wolbachia strain identical to those of D. engyochracea and D. hawaiiensis (at two alleles), plus a second strain with two unique alleles, both belonging to the uncommon supergroup A. Evidence of infection by identical Wolbachia strains (at five loci) was found for the distantly related host species S. caliginosa and D. seclusa, both collected on the Island of Hawai'i. Interestingly, it was also found that five members of the D. basimacula/perissopoda "bristle tarsus" complex were each infected by a different Wolbachia strain, while a sixth (D. nr. perissopoda #5) was not infected. Each is represented by only one or two individuals, but the strains appear to be the same within each taxon.
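The "minimum number of unique strains" reasoning above amounts to collapsing allele profiles that are identical at every locus with data into a single putative strain; the sketch below illustrates that conservative bookkeeping on made-up profiles, not the actual Table 3 data.

```python
# Conservative strain counting from MLST allele profiles (profiles are made-up examples).
# None marks a locus that failed amplification/sequencing; profiles that match at every
# shared locus are collapsed into a single putative strain.
profiles = {
    "host_A": {"coxA": 13, "fbpA": 2, "hcpA": 11, "ftsZ": 5},
    "host_B": {"coxA": 13, "fbpA": None, "hcpA": 11, "ftsZ": 5},   # compatible with host_A
    "host_C": {"coxA": 7,  "fbpA": 4, "hcpA": 14, "ftsZ": None},
}

def compatible(p, q):
    shared = [g for g in p if p[g] is not None and q[g] is not None]
    return bool(shared) and all(p[g] == q[g] for g in shared)

strains = []
for profile in profiles.values():
    for strain in strains:
        if compatible(profile, strain):
            break
    else:
        strains.append(profile)

print("minimum number of unique strains:", len(strains))
```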
Phylogenetic Reconstruction Analysis
Phylograms for Hawaiian Drosophilidae host species showed nearly identical topologies between inferences made with Bayesian and Maximum Likelihood analyses (Figure S5) and were approximately concordant with the Hawaiian Drosophilidae phylogram previously published by Magnacca and Price [3]. The only discrepancy is the placement of the modified mouthparts group (represented here by D. nigrocirrus and D. "large spots") as sister to the picture wing group with the AMC clade basal, rather than with the picture wing group basal as they were found. However, Magnacca and Price [3] noted that the phylogenetic position of the modified mouthparts and AMC clade (outgroups) relative to the picture wing species group were not well supported, and in fact the arrangement found here is the same as in their analysis using BEAST. The addition of A. albopictus and C. quinquefasciatus had minimal effect on tree topology.
Co-Phylogenetic Reconciliation
The co-phylogenetic reconciliation analysis run in JANE [49] determined that the optimal solutions consisted of two main biological events: co-speciation and duplication with host switches (Table 4, Figure 1A). The co-phylogenetic reconstructions for the dataset consisting of only Hawaiian Drosophilidae and their Wolbachia endosymbionts resulted in identical optimal solutions, regardless of co-speciation being assigned a cost of 0 or 1, and the pattern of events was similar, differing only slightly by the projected timing of events (Figure S6, Panels A and B). When invasive mosquitoes collected in Hawaii and their Wolbachia endosymbionts were added to the data, the optimal solutions differed slightly by the number of each event, and the optimized cost between the two models differed significantly (p < 0.01), indicating that co-speciation has a significant effect on the overall model (Table 4, Figures 1A and S6). Lastly, a tanglegram illustrates that cophylogenetic relationships between Wolbachia and its host show patterns consistent with both co-evolution (parallel connections) and horizontal transfer (crossed lines) (Figure 1B).
Stochastic Character Trait Mapping
The modeled ancestral state of host ovipositional substrate showed high posterior probabilities (depicted on the interior node) when mapped to the unrooted Wolbachia phylogeny, reflecting a phylogenetic signal for this character among similar Wolbachia strains (Figure 2). The bark and sap flux ancestral traits were, for the most part, conserved among supergroups A and A/B, while the bulk of supergroup B Wolbachia was affiliated with the trait leaf. In contrast, little support was evident for host trait associations to Wolbachia phylogenies for the host traits island of collection and host plant family (Figure S7).
Discussion
Our assessment of Wolbachia within the Hawaiian Drosophilidae family contributes to the understanding of endosymbiont transmission and its potential role in speciation. Using a modified MLST strain typing protocol, and through phylogenetic analyses, we found evidence for both coevolution and horizontal transmission of Wolbachia within Drosophila sampled across the Hawaiian archipelago. Our study complements the singular previous broad-scale study of Wolbachia within natural populations of Hawaiian insect taxa by Bennett et al. [2], in which strain diversity was characterized using a single gene marker, wsp. These studies differed by taxonomic scope, in that our primary focus was to investigate Wolbachia strain diversity among members of native Hawaiian Drosophilidae (and select invasive insects), and we used a modified version of the MLST strain typing scheme developed by Baldo et al. [34]. Despite study design differences, findings across studies were largely concordant, with Bennett et al. [2] determining the species-level incidence of Wolbachia infection for native Hawaiian Drosophilidae to be 18.1%, compared to our finding of 20.0%. Across all samples screened, we found an infection rate of 10.3%, which is lower than Bennett et al.'s [2] incidence of infection at 18.1%. That difference in infection rate can be attributed to the sampling of different taxa, along with uneven sample numbers within individual species. We caution that many species considered in this study were represented by only a single individual; thus, infection status is not representative of the species as a whole. Indeed, we found strong differences in percent infection rate within individual species having data available for five or more individuals. Additionally, although our efforts to re-design Wolbachia MLST primers improved amplification efficiency and increased the number of confirmed infections, the amplification and sequencing of Wolbachia alleles still proved to be difficult and infection rates may thus be an underestimate. A few of the species (namely D. claytonae and D. setosifrons) are also represented only by older specimens with poor DNA extractions, which may not have yielded enough to detect Wolbachia. If specimens with PCR bands only (absent sequencing results) were to be counted as positive infections, the incidence of Wolbachia at both the species and individual level would increase to 28.1% and 16.3%, respectively. Between supergroups A and B, the majority of Wolbachia strains in Hawaiian Drosophilidae were determined to belong to supergroup B (at 68%), consistent with previous screens in native Hawaiian insect taxa, using wsp, at ~75% [2]. Among the species included in Bennett et al.'s [2] study, and also screened here, the Wolbachia supergroup designations were concordant for endosymbionts of D. basimacula, D. nr. basimacula, D. redunca, and D. ancyla, which harbored Wolbachia from supergroup B, and D. nigrocirrus, which harbored Wolbachia from supergroup A. With regards to invasive Drosophila, Bennett et al. [2] found that D. suzukii was infected only by Wolbachia belonging to supergroup A, whereas we found individuals harboring infections belonging to supergroups A (n = 5) and B (n = 3). Interestingly, we observed that a Wolbachia infecting a D. suzukii individual collected from Hawai'i shared at least two identical alleles (coxA and hcpA) with the non-native species D. simulans that was also collected from Hawai'i by Ellegaard et al. [38].
Mechanisms of Wolbachia Transmission
In the case of purely vertical transmission of Wolbachia within the Hawaiian Drosophilidae, the expectation is that Wolbachia strains would be most similar between closely related host species and that phylogenetic reconstructions of the host and endosymbiont would be fully congruent [18]. The alternative hypothesis is that host-switching may play a role in transmission, in which case host and endosymbiont phylogenies would be discordant. Using co-phylogenetic reconciliation analysis, we found that the optimal solutions generated by JANE consistently showed co-speciation (i.e., vertical transmission) and duplication with host switching (i.e., horizontal transmission) events as significant parameters despite the costs associated with them. Further support for both scenarios, vertical and horizontal transmission, comes from the strain typing results. For example, the distantly related species D. seclusa and S. caliginosa possessed seemingly identical Wolbachia strains, and conversely, individual hosts belonging to the same species harbored differing Wolbachia strains (e.g., D. engyochracea). Mechanisms for horizontal transmission are suggested by the stochastic character trait mapping results, which revealed a positive association between phylogenetic patterns of Wolbachia and their hosts' ancestral trait of preferred ovipositional substrate, a trait that is more evolutionarily conserved than affiliations with host plant families [3,31]. For preferred ovipositional substrate, in general, Hawaiian Drosophilidae from the genus Scaptomyza use flowers or rotting fruits (as well as many unusual substrates, such as living Cyrtandra leaves), the AMC clade (i.e., antopocerus, modified-tarsus, ciliated-tarsus) utilizes rotting leaves, the picture wing species group uses rotting bark or sap flux, and the modified mouthparts clade (e.g., D. nigrocirrus and D. large spots) uses a range of ovipositional substrate types [31]. High posterior probabilities for ancestral states of host ovipositional substrate indicated associations between the traits 'bark' and 'sap flux' for supergroups A and A/B and the trait 'leaf' for supergroup B. This pattern was consistent even for the single D. large spots specimen doubly infected by Wolbachia strains belonging to supergroups A and B. Notably, the only other Wolbachia belonging to supergroup A isolated from Hawaiian Drosophila came from D. nigrocirrus, also a member of the modified mouthparts group. The host plant and substrate are unknown for both of these species. Bennett and colleagues [2] noted that, phylogenetically, wsp alleles amplified from Hawaiian taxa tended to group closely together, and they found evidence for sharing of identical or similar wsp alleles between closely and distantly related Hawaiian insect species. They postulated that this observation can be explained by Wolbachia infections persisting through speciation, as well as horizontal transmission occurring between host taxa. An association of Wolbachia supergroup B with the decaying leaf substrate could play a role in one of the evolutionary puzzles of Hawaiian Drosophilidae, namely, why there are so many closely related, sympatric species utilizing the same host substrate. This is most readily seen in the spoon tarsus subgroup on Hawai'i and the bristle tarsus subgroup on Kaua'i. The latter is represented here by six members of the D. basimacula-perissopoda species complex, which can be distinguished by the number and arrangement of thickened bristles on the modified front tarsus of the male. Each was found to carry a different strain of Wolbachia, or none. Novel infection or loss of infection may initiate the localized equivalent of "founder events", leading to rapid speciation and maintenance of species boundaries when combined with the sexual selection for which Hawaiian Drosophila are well known [53].

Consistent with our findings, plants are thought to play key roles in the horizontal transmission of Wolbachia strains between infected and uninfected individuals, as well as between diverse insect species. For example, Sintupachee et al. [54] found that distantly related species of arthropods co-occurring on pumpkin leaves harbored Wolbachia with similar wsp sequences, and Li et al. [25] showed under a controlled experimental laboratory setting that a stable Wolbachia infection could be attained by uninfected whitefly individuals through feeding on the same leaf substrate previously exposed to Wolbachia-infected individuals. In that study, Wolbachia was documented as dispersing to adjacent leaves within just a few days of the initial plant infection, where it remained within the phloem of the plant for a minimum of 50 days [25]. In Hawaiian insects, Bennett et al. [2] found that nearly identical Wolbachia wsp alleles were shared between some Diptera species (e.g., Drosophila forficata) and Hemiptera (Nesophrosyne craterigena), which they propose is explained by a reliance of both Drosophila and Nesophrosyne species on shared host plants across their ranges. Together, plant utilization and feeding habits may help explain why most native Drosophilidae species were infected with Wolbachia from supergroup B, why some members were infected with supergroup A (modified mouthparts group), and why identical alleles were shared between some distantly related taxa. Our findings are thus congruent with Bennett et al. [2], who proposed that horizontal transmission of Wolbachia occurs between Hawaiian taxa at multiple taxonomic scales. Insects that possess piercing-sucking mouthparts may be more apt to transmit Wolbachia to plants through feeding [19,54], and Wolbachia has been found to exist within insect salivary glands in addition to other somatic tissues [24,55]. Additionally, honeydew and infected leaves have been implicated in previous studies as a potential means of horizontal transmission [25,56]. Most non-native Drosophila included in this study were infected with supergroup A; however, infection by supergroup B Wolbachia within non-native D. suzukii individuals could be explained by their occasional use of native plants [31]. Full strain typing profiles, if available, could be used to test this idea.

In other biological systems, although extremely rare, Wolbachia strains have been known to rapidly displace other strains, often in association with insect invasions. For example, the Wolbachia variant wRi rapidly displaced wAu within their host D. simulans [57], and horizontal transmission occurred for Wolbachia endosymbionts and their host silverleaf whitefly (Bemisia tabaci), in which a host shift event occurred in China from indigenous members of the complex to the invader as well as from the invader to indigenous relatives [24]. An alternative explanation to plant-mediated horizontal transfer of Wolbachia is through non-lethal probing of infected and uninfected nymphs by parasitoid wasps ([24], reviewed by Sanaei et al. [58]). That mechanism for transmission is consistent with Bennett and colleagues [2], who postulated parasitoids to be a potential mechanism of horizontal transmission for Wolbachia in Hawaiian taxa, in addition to plant associations. They found that parasitoids, along with native and non-native Drosophila species, were grouped closely together based on the phylogenetic reconstruction of the wsp gene.
Discrepancy in Supergroup Designation of Loci
Whether supergroups can recombine has been the subject of debate. Ellegaard et al. [38] proposed that Wolbachia supergroups are irreversibly separated, and that barriers other than host specialization are able to maintain distinct clades in recombining endosymbiont populations. Their conclusion was based on naturally occurring double infections of Wolbachia strains wHa and wNo, endosymbionts of D. simulans. Recent findings from a survey of 33 genome sequences for Wolbachia strains belonging to supergroups A-F found that strains maintained a supergroup relationship across 210 conserved single-copy genes, yet an analysis of interclade recombination screening revealed that 14 inter-supergroup recombination events had occurred in six of the 210 core genes (6/210 = 2.9%) [59]. Consistent with recombination events, Baldo et al. [60] found evidence for recombination between gatB and fbpA alleles, and intragenic recombination was detected by comparing patterns of gltA to other housekeeping genes [60]. In this study, among the 44 Wolbachia strains isolated from Hawaiian Drosophilidae hosts, conflicting supergroup designations were observed for 20.4% of the strains (with data available at two or more genes), which in some cases resulted in an intermediate phylogenetic placement between supergroups A and B. In particular, coxA and hcpA alleles exhibited discordance in supergroup placement, congruent with discordance in supergroup designation for coxA and hcpA alleles observed within Lepidoptera species collected from West Siberia [61]. Although we cannot fully rule out that allelic discordance across strains may be a result of preferential amplification of certain alleles by primers in the presence of multiple infections (for example, double infections by strains belonging to supergroups A and B were observed to occur within w208 D. apodasta and w215 D. nr. perissopoda), the majority of individuals with conflicting alleles lacked evidence for the presence of a double infection. Therefore, the discrepancy in supergroup assignment between alleles may have resulted from a recombination event that occurred within a doubly infected host species and subsequent fixation of alleles. Further research could help to elucidate the complex interactions of endosymbionts and host taxa occurring within Hawaiian insect communities.
Conservation Implications
The rapid diversification of Hawaiian Drosophila results from a combination of evolutionary-time-scale island isolation, rugged topography, and development of novel host plant associations that have persisted for millions of years [3]. Many species are single-island endemics with narrow ranges and are restricted to the natural distribution of their host plants, which makes populations especially vulnerable to habitat degradation and climate change. At present, the US Fish and Wildlife Service lists 13 Hawaiian drosophilids as endangered (D. aglaia, D. differens, D. digressa, D. hemipeza, D. heteroneura, D. montgomeryi, D. musaphilia, D. neoclavisetae, D. obatai, D. ochrobasis, D. sharpi, D. substenoptera, and D. tarphytrichia) and one as threatened (D. mulli). These listed species represent 14.4% of all listed insects, and 4.8% of all listed invertebrates, within the USA (ECOS Environmental Conservation Online System, https://ecos.fws.gov/ecp, accessed on 5 March 2023). Given Wolbachia's impacts on reproduction, consideration of host-symbiont relationships and infection status might increase the success of breeding programs and ensure that translocation efforts do not suffer from effects of cytoplasmic incompatibility. With regards to climate change, experimental data for Hawaiian Drosophila have demonstrated that species are locally adapted [62,63]; thus, resilience to warming temperatures could perhaps be enhanced by manipulation of the host microbiomes, including Wolbachia endosymbionts. Endosymbiont-mediated responses to temperature stress are known to include transcriptional responses and behavior [64,65].
Conclusions
This study sheds light on the infection status and coevolutionary history of Wolbachia endosymbionts within their Hawaiian Drosophilidae hosts. Co-phylogenetic reconciliations and comparative phylogenetic analyses indicate that the transmission pattern of Wolbachia is best explained by both co-speciation and host-switching events. Future studies that survey Wolbachia from a greater breadth of native Hawaiian arthropod taxa, as well as introduced invasive arthropod taxa, may help to improve our understanding of how Wolbachia transmission has occurred in Hawaiian ecosystems. Insights into Wolbachia infections and strain types could help guide conservation programs, possibly enhancing translocation efforts, impacting host behavioral responses to temperature, and conferring host thermal tolerance.
Supplementary Materials:
The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/genes14081545/s1. Table S1. National Center for Biotechnology Information (NCBI) accession numbers for Hawaiian Drosophilidae gene sequences selected for phylogenetic reconstruction of individual species having a verified Wolbachia infection, and genome accessions for outgroup taxa (also infected) [3,66-70]. Table S2. Records for amplification and sequencing of Wolbachia endosymbionts of Hawaiian drosophilids. Table S3. Record of Akaike information criterion (AIC) values obtained using the fitMk function in the phytools v0.6-44 (Revell 2012) [51] package in R to determine the best rates model to apply to each data set for stochastic character mapping analyses.
Table 4 .
Co-phylogenetic reconstructions implemented in JANE4 (see text for details) with cost-scheme parameters loss, failure to diverge, and duplication each set to 1, duplication and host switch set to 2, and varying the co-speciation cost (Cost) by 0 or 1.
Figure 1 .
Figure 1. (A) Co-phylogenetic reconciliation analysis for Hawaiian Drosophilidae and two species of invasive mosquitoes and their Wolbachia endosymbionts based on the following cost scheme: co-speciation: 1; duplication: 1; duplication and host switch: 2; loss: 1; failure to diverge: 1. The estimated biological events that best describe the data are depicted on the phylogeny [open circle: cospeciation; closed circle: duplication; closed circle with arrow: duplication and host switch; dashed line: loss]. Red indicates that the event is optimally placed, whereas yellow indicates that another placement exists that is equally valid. (B) A tanglegram depicting the co-phylogenetic relationship between the Hawaiian Drosophilidae and invasive mosquito phylogeny (left) and their Wolbachia endosymbiont phylogeny (right).
Figure S1. Phylogenetic reconstruction of Wolbachia housekeeping genes based on Bayesian inference analyses (see main text for details): (a) cytochrome C oxidase subunit A (coxA) [378 bp], with 46 sequences; (b) conserved hypothetical protein (hcpA) [381 bp], with 44 sequences; (c) fructose-bisphosphate aldolase (fbpA) [417 bp], with 30 sequences; and (d) cell division protein (ftsZ) [354 bp], with 28 sequences. Individuals consistent in their supergroup designations across all genes considered are indicated as either pink for supergroup A or purple for supergroup B. Individuals that showed conflicting supergroup designations between genes are shown in grey. Outgroup taxa belonging to supergroups D and F are shown in green. A solid line indicates that supergroup designation was based on three or more genes, whereas a dotted line indicates that data for two or fewer genes were available for supergroup designation. The taxonomic standing is uncertain for Wolbachia endosymbiont host species Drosophila basimacula #5 and #2 (samples 5 and 41), D. quasiexpansa sample 145, D. redunca sample 216, and D. perissopoda sample 215 (see main text for details).
Figure S2. Phylogenetic reconstruction of Wolbachia housekeeping genes based on Maximum Likelihood analyses (see main text for details): (a) cytochrome C oxidase subunit A (coxA) [378 bp], with 46 sequences; (b) conserved hypothetical protein (hcpA) [381 bp], with 44 sequences; (c) fructose-bisphosphate aldolase (fbpA) [417 bp], with 30 sequences; and (d) cell division protein (ftsZ) [354 bp], with 28 sequences. Individuals consistent in their supergroup designations across all genes considered are indicated as either pink for supergroup A or purple for supergroup B. Individuals that showed conflicting supergroup designations between genes are shown in grey. Outgroup taxa belonging to supergroups D and F are shown in green. A solid line indicates that supergroup designation was based on three or more genes, whereas a dotted line indicates that data for two or fewer genes were available for supergroup designation.
Table 1 .
Data for Wolbachia genetic sequences used for the purpose (Purpose) of in silico extraction of sequence from genomes for primer redesign (PR) or Wolbachia allele strain typing and/or phylogenetic analysis (A/P). Shown are Wolbachia host species names, Wolbachia strain abbreviations, host collection locations or laboratory sources if known, National Center for Biotechnology Information (NCBI) accessions, genome references, and Wolbachia supergroup designations.
Table 2 .
A comparison, for individual Hawaiian Drosophilidae (genus Drosophila) species with at least five specimens screened, of the per-species total numbers of individuals screened, the number of individuals with confirmed Wolbachia infections, the number of individuals having no confirmed infections but positive for PCR amplifications that failed sequencing, the total number of individuals having zero amplifications across all loci, and the proportion of infected individuals by species.
n/a = not applicable.
Toward Subtle Manipulation of Fine Dendritic β-Nucleating Agent in Polypropylene
Dendritic β-nucleating agent (β-NA) can readily manipulate the formation of dendritic β-crystals with a unique toughening effect on polypropylene (PP) to drastically enhance the ductility. However, with the current method, the geometric size is too large for the nucleating efficiency to be fully exploited. In this study, by comparatively investigating the effect of the molecular weight of PP and the diffusion of β-NAs in a PP melt, we proposed a novel carrier strategy in which selective enrichment of β-NAs in a PP carrier is followed by directed migration into the polymer matrix. Accordingly, the growth of NAs was controlled by the release from the PP carrier, which decreased the available amount of β-NAs during the growth stage. In this case, the viscosity difference between the PP carrier and matrix determined the interfacial movement of β-NAs. When the PP carrier and matrix had the same molecular weight, the diffusion and release became favorable and facilitated the formation of dense and fine dendritic aggregates. As a result, the relative content of β-crystals reached 92%, a drastic increase of ∼82% under the optimal condition compared to the directly compounded PP/β-NAs sample. This study can open a new avenue to tailor the topologies of β-NAs and the ensuing β-crystals for high-performance PP products.
■ INTRODUCTION
Polypropylene (PP) is a typical polymorphic material, and the crystalline compositions including crystalline modification and topological structure determine the final properties. 1−3 β-Crystal can overcome the intrinsic inferior toughness of α-crystal and, thus, is more attractive in real practical applications. 4,5 With respect to the stable α-crystal formed under common processing conditions, β-crystal is thermodynamically metastable and can be obtained only under some specific crystallization conditions. Generally, incorporation of β-nucleating agents (β-NAs) is the most feasible industrial way to boost the number of β-crystals and enhance the toughness of PP products. 6−8 In recent years, some soluble β-NAs have been found to dissolve in PP melts at high temperature and recrystallize into various morphological aggregations upon cooling, providing a facile and effective tool to tailor the crystalline morphology and the resulting performance of PP products. 9−11 By adjusting the final solubility of β-NAs, Varga first obtained dot, needle-like, and dendritic NA aggregates and revealed the template effect on the formation of dendritic β-crystals. 12 Luo demonstrated that a PP sheet containing dendritic β-crystals exhibited an increment of 76% in impact strength due to better connection between the dendritic crystallites. 13 Although soluble β-NAs and their thermally induced morphologies have been widely studied, 14−16 research on the morphological regulation of β-NAs has mainly focused on relatively high-molecular-weight PP resins. 17,18 In these cases, dendritic β-NAs, which were generated via a dissolution−recrystallization process, often existed in the form of micrometer-sized aggregates with thick stems, affording a low specific surface area for heterogeneous nucleation of β-crystals. As a result, the nucleating efficiency of β-NAs was reduced and some α-crystals were inevitably generated in the blank zones without β-NAs, which had a negative effect on the resulting mechanical properties of PP products. Therefore, it is of great importance to achieve controllable formation of fine dendritic aggregates of β-NAs.
Two factors may be taken into consideration to seize the clue of minifying the dendritic aggregates. On the one hand, similar to polymer crystallization, the formation of dendritic β-NAs aggregates in the PP melt follows the homogeneous nucleation mechanism. 19 The size of the resulting β-NAs is determined by the gap between dissolution and recrystallization temperatures, which, in turn, depends on the concentrations of β-NAs and molecular weights of PP. 20−22 On the other hand, decreasing the available amount of β-NAs can restrict the furcating growth of the β-NAs to generate smaller aggregates. 23 Enlightened by the diffusion-controlled release technology, 24,25 a novel carrier strategy that selective enrichment of β-NAs in a carrier is followed by the directed migration into polymer matrix should be feasible to regulate the growth of the dendritic aggregates. To this end, we first study the impact of the molecular weight on the dissolution and crystallization temperatures of the PP/β-NA system by constructing a ternary experimental phase diagram of temperature/concentration/molecular weight. Then, PP resins rich in β-NAs are proposed as a novel carrier for slowly supplying β-NAs to the PP matrix via thermal diffusion, and the effects of molecular weights of PP carriers were investigated to reveal the diffusion mechanism. Finally, the submicron dendritic β-NAs aggregates with high nucleating efficiency were obtained, providing a practical instruction for the manipulation of PP crystalline composition and morphology by varying the topologies of β-NAs.
■ RESULTS AND DISCUSSION
Ternary Experimental Phase Diagram of PP/β-NAs Sample: Dependence of Concentration and Molecular Weight. To study the morphological evolution of β-NAs, it is necessary to comprehensively understand the underlying correlation between the dissolution and crystallization of β-NAs in PP melts. We first investigated the states of β-NAs in PP melts during the heating/cooling processes. As demonstrated in many studies, 26−28 there are three different physical states in the corresponding PP melts based on the solubility of β-NAs, namely solid, solid-saturated solution coexistence, and unsaturated solution, determining the ensuing morphologies with the recrystallization during the cooling. A typical growth process is presented in Figure 1. When β-NAs do not dissolve in PP melt and stay in solid state at low temperature, they keep the original morphology. Once β-NAs start to dissolve with the increasing temperature, β-NA/PP solution becomes saturated. Upon cooling, the dissolved molecules are prone to crystallization on the nondissolved ones along the preferred direction under the directing effect of hydrogen bonding between the amide groups attached to β-NAs, forming the needle-like morphology. With the complete dissolution, i.e., unsaturated solution, the homogeneous nucleation and growth will result in the highly branched dendritic aggregates. The surface of the aggregates hosts a number of nuclei for the growth of β-crystal, so the topologies of the β-NAs aggregates can be transformed into the morphologies of the resulting βcrystal via epitaxial crystallization on the surface. Accordingly, three morphological β-crystals can be obtained by adjusting the solution state in PP melt: spherulite, fibrous, and dendritic crystals.
The prerequisite for dendritic β-crystals is the complete dissolution of β-NAs in the PP melts during heating, and the size is strongly dependent on the undercooling, which represents the gap between dissolution and recrystallization temperatures. Because the dendritic β-NAs are generated during the cooling process only when they are completely dissolved in PP melts, the heating temperature of the special structure forming is defined as the dissolution temperature (T d ). The recrystallization temperature (T c ) corresponds to the temperature at which the dissolving β-NAs initially re-appear. Accordingly, a ternary experimental phase diagram as function of NAs concentration and molecular weight of PP was constructed, as shown in Figure 2. One can observe that both T d and T c shift to higher values with increasing β-NAs concentration and molecular weight.
The dissolution is a substantial movement and exchange process at the interface between the given solute and solvent. It is reported that solute molecules pass through a stagnant film composed of solvent molecules surrounding the solid solute surface and then diffuse into the solvent. 29,30 Therefore, the diffusion is a limiting factor determining the dissolution of β-NAs in the PP melt. On the basis of Einstein−Sutherland equation, 31 the diffusion rate is related to the dissolution temperature and the viscosity of the polymer melt.
D = k_BT/(6πηr) (1)

where D is the diffusion constant, k_B is Boltzmann's constant, η is the viscosity, T is the absolute temperature, and r is the radius of the solute particle. Accordingly, the variation in the dissolution temperature at different β-NA concentrations and PP molecular weights can be well understood. Based on dissolution equilibrium theory, at a given temperature the solubility of β-NAs in the PP melt is constant, while the thermal molecular motion is intensified and the D value increases with increasing temperature, which is favorable for the dissolution of β-NAs. Obviously, at a high concentration of β-NAs, a higher temperature is required for dissolution. In addition, the viscosity of a polymer melt is related to its molecular weight. With a rise in the molecular weight of PP, the melt viscosity increases, as shown in Figure 3.
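The Einstein−Sutherland relation above can be made concrete with a small numerical sketch; the temperature, viscosity, and particle-radius values below are illustrative placeholders rather than measured values from this work.

```python
# Stokes-Einstein (Einstein-Sutherland) estimate of the diffusion constant:
# D = kB*T / (6*pi*eta*r).  Input values are arbitrary placeholders.
from math import pi

KB = 1.380649e-23  # J/K

def diffusion_constant(T_kelvin, eta_pa_s, r_m):
    return KB * T_kelvin / (6 * pi * eta_pa_s * r_m)

# Higher melt viscosity (higher molecular weight) or lower temperature -> smaller D.
for eta in (1e2, 1e3, 1e4):            # Pa*s, illustrative orders of magnitude for polymer melts
    D = diffusion_constant(T_kelvin=523.15, eta_pa_s=eta, r_m=1e-9)
    print(f"eta = {eta:8.0f} Pa*s -> D = {D:.2e} m^2/s")
```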
The D value decreases and the corresponding diffusion of β-NAs from the boundary layer becomes difficult. Compared to small-molecule solvent, polymer melt featuring long-chain and entanglement networks exhibits a unique cage effect on solute molecules, which need to overcome the large steric hindrance to diffuse through the boundary layer and finally dissolve into the melt. The constraining effect of the network structure on molecular motion has been well-discovered. 32,33 High-molecular-weight polymer possesses a longer chain with more interand intramolecular entanglements, magnifying the cage effect to constraint the movements of β-NAs. Therefore, with the increase in the molecular weight of PP, the dissolution temperature of β-NAs also rises.
Analogous to the results reported by Kristiansen and co-workers 34−36 for PP/α-NAs (di-benzylidene-sorbitol) mixtures, the PP/β-NAs mixture follows typical monotectic phase behavior, as the two components display total immiscibility in the solid state and a homogeneous solution in the liquid state. Upon cooling, the solubility of β-NAs in PP melts decreases, initiating recrystallization. The recrystallization of β-NAs in the PP melt is a homogeneous nucleation and growth process. 19 The supercooling, expressed by the difference between dissolution and crystallization temperatures, dictates the size of the resulting β-NA aggregates. Specifically, small aggregates are only generated at high supercooling. 21,37 The contour map of the gap between T_d and T_c was plotted as a function of the concentration of β-NAs and the molecular weight of PP. As shown in Figure 4a, high supercooling is only observed in the narrow region of concentration lower than 0.2% and molecular weight less than 2.5 × 10 5 g/mol. Therefore, fine dendritic aggregates of β-NAs can only be obtained in PP of low molecular weight. Here, two key issues should be noted. First, for the low-molecular-weight PP, the self-nucleation originating from quick molecular motion is evident, deteriorating the β-nucleating efficiency of β-NAs. 38 In this case, α-crystals prevail over β-crystals, as verified by the dark α-crystals featuring positive birefringence in Figure 4b1. Second, with increasing either molecular weight or β-NAs concentration, the nucleating efficiency is promoted, but the supercooling inevitably decreases. As a result, β-NA aggregates in Figure 4b2−b5 are larger than 200 μm and even reach 600 μm at the lowest supercooling. The β-crystals grow in the zone adjacent to the β-NAs, whereas α-crystals appear in the blank zone without β-NAs. Unambiguously, it is extremely difficult for PP to generate a large number of fine dendritic β-crystals due to the mismatch between supercooling and nucleating efficiency.
Morphological Manipulation of β-NAs via Diffusion-Controlled Release Technology. In the conventional compounding samples, all of the NAs are dispersed homogeneously in the matrix and the size of the resulting β-NAs aggregates is determined by the gap between dissolution and recrystallization temperatures. However, the gap of polar β-NAs in a nonpolar PP, especially a high-molecular-weight one, is low, so the size of β-NAs often is large. Inspired by diffusioncontrolled release of drug, β-NAs were first distributed into one kind of PP to obtain a β-NA carrier (c-NA); then the c-NA was added into pure PP matrix. In the diffusion-controlled release process, the growth of β-NA is controlled by the releasing rate of NAs from PP carrier, which can decrease the available amount of β-NAs during the growth stage. As a result, the smaller aggregates can be obtained. In this study, the widely used PP resin having molecular weight of 477 100 g/mol was chosen as a model because β-NAs can promote the crystallization in the form of β-crystal. 39,40 Expectedly, the growth of dendritic β-NAs in PP matrix can be weakened via the diffusion-controlled release of β-NAs to obtain homogeneous fine dendritic β-NAs and the ensuring β-crystal. First, the effect of the diffusion between same molecular weight PP on the morphology of β-NAs was revealed. In this case, the molecular weight of β-NAs-loaded PP carrier and PP matrix is the same. Figure 5 displays the morphological evolution of β- NAs during the cooling process. Initially, needle-like β-NAs appeared at the interface between the two PP phases (Figure 5b,c). With the decrease in the temperature, the fibrous β-NAs branched and large numbers of three-dimensional dendrites evolved from the melt. Compared to the conventional compounding way in which β-NAs were directly incorporated into PP matrix (Figure 4b3), the mean size of dendritic β-NAs decreased from over 500 to 88.5 μm via the diffusion-controlled release technology. Obviously, the diffusion of β-NAs from the PP carrier containing β-NAs effectively controls the growth of dendritic β-NAs.
It should be noticed that the amount of β-NAs was 1% in the carrier, where β-NAs cannot dissolve completely in the PP melts. The diffused β-NAs are possibly stemmed from two forms, namely solid aggregated particles and the dissolving molecules. It must be answered that which kind of β-NAs dominates the diffusion. Carbon nanotubes (CNTs) are insolvable and easily assemble into micrometer-sized aggregates, which are similar to the insoluble β-NA aggregates. Therefore, CNTs can be utilized as tracing elements to reveal the diffusion behavior of the β-NAs in the PP matrix. As shown in Figure 6, the interface between CNTs and pure PP stays discernable and unchanged in the whole heating cycle. On the contrary, for PP/NAs sample, the interface extended gradually. It can be deduced that the solid β-NAs aggregated particles have no moving ability, and the dendritic β-NAs completely resulted from the diffusion of the dissolving β-NAs from the PP carrier. When it happens, the physical state of β-NAs-loaded carrier is transformed from solid-saturated solution to solidunsaturated solution, leading to the further dissolution of the aggregated particles. The dissolving β-NAs are supplied sequentially and diffused into PP matrix, so the nucleation and growth of the dendritic β-NAs proceed simultaneously. As a result, the resulting aggregates exhibit small size. The formation mechanism of fine dendritic β-NAs controlled via diffusion-controlled release is proposed as in Figure 7.
Further, the effect of the PP carriers with different molecular weight was investigated. As shown in Figure 8, when the molecular weight of the PP carrier is higher than that of PP matrix, fine dendritic aggregates of β-NAs are observed, similar to the result of Figure 5; in the case of the carrier with the molecular weight lower than that of PP matrix, only few dendritic aggregates of β-NAs are confined to the interface between the carrier and PP matrix. This variation can be ascribed to the viscosity difference between the PP carrier and matrix. The viscosity of polymer melt is highly related to the molecular weight. High-molecular-weight PP exhibits high viscosity. When the viscosity of the carrier is higher than that of PP matrix, the diffusion of β-NAs happens readily along the decreased-viscosity direction. On the contrary, there is the resistance effect induced by the increased viscosity on the diffusion of β-NA, which can compel the backflow to less viscous carrier in the form of favorable energy. The diffusion of β-NAs out of the carrier is difficult and the resulting dendritic aggregates are few (Figure 8c). This is similar to the dispersion of polyhedral oligomeric silsesquioxane with different substitutes in polystyrene bulk from Misra group, where the retardant dissolution caused the preferential surface aggregation of the diffusing particles. 41 Additionally, comparison between Figure 8a,b demonstrates that when the carrier has the same molecular weight as the matrix, more β-NAs are diffused from the carrier compared to the higher-molecular-weight carrier. This can be attributed to the strong mobility of β-NAs in the less viscous PP carrier. Accordingly, we can come to a conclusion that when the PP carrier and PP matrix share the same molecular weight, the fine dendritic β-NAs and β-crystals can be achieved via diffusion-controlled release technology.
Finally, the relative contents of β-crystals (K β ) in the samples were evaluated by differential scanning calorimetry (DSC) melting curves. The peaks below 155°C in Figure 9 correspond to the melting of β-crystals, whereas the melting peak above 155°C resulted from α-crystal. 42 When β-NAs are blended directly with the PP matrix, K β presents the lowest value, which should be attributed to low specific surface area of large β-NAs featuring more than 300 μm size generated at the low supercooling (Figure 4b2). On the contrary, with the diffusion-released strategy, the fine aggregates are generated, providing more available nuclei for PP crystallization to facilitate the formation of β-NAs; moreover, the size and content of β-NAs can be controlled by regulating the molecular weight of the PP carrier. When the carrier has the same molecular weight as the PP matrix, the diffusion of β-NAs from the carrier becomes more favorable, leading to dense and finer dendritic β-NAs. As a result, the K β reaches 92%, with a drastic increase of ∼82% compared to the directed compounded PP/ β-NAs sample (HPP). Moreover, it seems interesting that there are double peaks of β-crystals in the directed compounded PP/ β-NAs sample. As stated earlier, micrometer-sized β-NAs in the sample have a low specific surface area and thus low nucleating efficiency of β-NAs, resulting in imperfect β-crystals. 43 During the heating process, the less stable β-crystal will be transformed into the stable one, thus two melting peaks of β-crystal are observed at ∼155°C. For the samples prepared via the diffusion strategy, fine β-NAs exhibit a high specific surface area to facilitate sufficient crystallization of PP, generating more perfect β-crystals with single melting peak.
■ CONCLUSIONS
In this study, the experimental phase diagram for the binary system consisting of PP and β-NAs was constructed to reveal the dependence of the concentration and PP molecular weight on the solubility and crystallization of β-NAs in PP melts. The results showed that high supercooling was only observed in the narrow region of the concentration lower than 0.2% and molecular weight less than 2.5 × 10 5 g/mol. Nevertheless, fine dendritic β-crystals cannot be obtained in bulk matrix due to the rapid crystallization of α-form crystal. On the contrary, when β-NAs were selectively distributed in the PP carrier, the growth of dendritic β-NAs was determined by the diffusion of β-NAs out of the carrier, which decreased the available amount of β-NAs during the growth stage. As a result, submicron dendritic NAs and β-crystals were generated. Moreover, the releasing efficiency of β-NAs depended on the viscosity difference between the carrier and matrix. Only if the PP carrier and the matrix had the same molecular weight, the dense and fine β-NAs were formed. The fine dendritic aggregates featuring high specific surface area can provide more available nuclei for PP crystallization to facilitate the formation of β-crystals, which not only drastically increases the fraction of the β-crystals but also promotes the crystallization of more perfect crystal.
■ EXPERIMENTAL SECTION Materials. A series of commercial iPP resins were purchased in this study, and the detailed information is listed in Table 1. β-Nucleating agent (trade mark: TMB-5) was provided by Shanxi Chemical Industry Research Institute (China). Its chemical structure is N,N′-dicyclohexylterephthalamide. 44 Samples Preparation. To achieve a uniform dispersion of β-NAs in the matrix, a simple two-step method was applied to prepare β-NAs-containing PP in this study. First, the master batches of PP/β-NAs mixture with 1 wt % concentration were melting blended by a micro twin-screw extruder to achieve the desired dispersion. The temperature from barrel to die was set from 150 to 185°C, with a screw speed of 25 rpm. After granulating, the master batches were then diluted to the expected concentrations of 0.1, 0.2, 0.3, 0.4, 0.5, and 0.7 wt % by adding pure PPs, and extruded again via the same mixing process. For comparison purpose, PP with 0.1 wt % carbon nanotube (CNT) was also prepared in the same way.
Characterization. Polarized Light Microscope (PLM). The phase behaviors of the samples were directly observed by polarized light microscope (Leica DM2500P) connected to a hot stage (Linkam THMS600, Linkam Scientific Instruments Ltd., U.K.) and a Pixelink camera (PL-A662). Samples were initially heated to the expected temperature at a rate of 30°C/ min and then held for 5 min to realize thermodynamic equilibrium. Afterward, the samples were cooled to 135°C at a rate of 10°C/min, and the self-assembling morphologies of β-NAs were recorded. Figure 10 illustrates the observation process of the specimens used for diffusion behavior: (a) both pure PP and 1 wt % β-NAs-containing PP pellets were first compressed into thin film using two hot glass slides at 180°C. The β-NAs-containing PP film (c-NA) was then cut to the appreciated size and put in the center of the pure one via hot compressing at 180°C. (b) The specimens were heated to 250°C to partly dissolve β-NAs, thus triggering the diffusion from c-NA to PP matrix. (c) After cooling, the diffusion was terminated, and the β-NAs selfassembled to dendritic aggregates in the PP matrix. For convenience, PP x−y was defined, where x presented the kind of the carrier and y corresponded to the kind of the matrix.
In addition, the morphologies of the dendritic β-NA aggregates were first recorded by the Pixelink camera and then the Linkage software provided by Linkam Scientific Instruments was utilized to analyze the size of the dendritic aggregates. Over 100 dendritic crystalline aggregates were recorded and the average size and distribution were calculated.
Rheological Tests. The extruded pellets were first dried in a vacuum oven at 80°C for 4 h and then pressed into a 2 mm thick sheet at 180°C on a hot press. The zero-shear-rate viscosity was then measured by using a rotary rheometer (AR2000, TA instruments) in the steady sweep mode with a shear rate from 0.01 to 10 s −1 . The plate diameter was 25 mm and the gap was 1 mm. All of the samples were in equilibrium at 250°C in the oven for 5 min before start.
Differential Scanning Calorimetry (DSC). The relative fractions of β-crystal (K β ) of the samples were investigated with a Q20 differential scanning calorimeter (TA Instruments), calibrated using indium and zinc standards. For the samples prepared via the diffusion-release technique, the content of β-NAs in the PP carrier was 1 wt %. The diffusion regions of the observed PLM specimens were cut out and then heated from 40 to 200°C at a rate of 10°C/min. The relative fraction of β-crystal (K β ) is calculated according to

K β = X β /(X α + X β ) (2)

where X α and X β are the crystallinities of the α- and β-crystals, respectively; each is obtained from X i = ΔH i /ΔH i o , where ΔH i is the measured fusion enthalpy and ΔH i o is the standard fusion enthalpy (177 J/g for the α-crystal and 168.5 J/g for the β-crystal 13 ).

Figure 10. Schematic illustration of the observation procedure for β-NA diffusion from the PP carrier to the PP matrix: (a) placing the c-NA film in the center of the pure film; (b) heating and holding the temperature to trigger diffusion; (c) cooling to observe the self-assembled morphology of β-NAs in the PP matrix.
For comparison, the homogeneously compounded sample with 0.1 wt % β-NAs was also investigated under the same conditions.
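For concreteness, here is a minimal sketch of the K β arithmetic from eq 2 and the crystallinity relation above; the fusion enthalpies in the example are hypothetical, while the standard values (177 and 168.5 J/g) come from the text.

```python
DH_ALPHA_0 = 177.0  # J/g, standard fusion enthalpy of the alpha-crystal
DH_BETA_0 = 168.5   # J/g, standard fusion enthalpy of the beta-crystal

def crystallinity(dh_measured: float, dh_standard: float) -> float:
    # X_i = dH_i / dH_i0 for each crystal form
    return dh_measured / dh_standard

def k_beta(dh_alpha: float, dh_beta: float) -> float:
    # K_beta = X_beta / (X_alpha + X_beta), eq 2
    x_alpha = crystallinity(dh_alpha, DH_ALPHA_0)
    x_beta = crystallinity(dh_beta, DH_BETA_0)
    return x_beta / (x_alpha + x_beta)

# Hypothetical fusion enthalpies resolved from a DSC melting trace
print(f"K_beta = {k_beta(dh_alpha=40.0, dh_beta=55.0):.3f}")
```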
Dynamics of Religious Moderation: Analytical Study of Islamic Religious Education Learning in Junior High Schools
INTRODUCTION
Multicultural Indonesia is a blessing that few other nations enjoy. Various ethnicities, cultures, and religions are distributed across its regions, and almost every region has its own cultural characteristics. Even within one ethnic group, sub-ethnic groups may have different language accents, religions, and beliefs. With the high mobility and migration of society, interethnic and interreligious interactions have become increasingly intense, and not infrequently these relations give rise to social friction or even violence. This is where educational instruments that teach religious moderation become important. Education, including Islamic religious education, is not merely the accumulation of knowledge and understanding of religious, social, and cultural values, but the implementation of those values in communal life, in society, and as a nation (Azra, 2007; Faozan, 2020). Education plays an important role in the internalization of religious moderation.
The dynamics of religious moderation are a crucial aspect in the formation of people's character and identity. Religious moderation is a concept that promotes being moderate, fair, and taking the middle path in every aspect of life (Muaz & Ruswandi, 2022). It reflects an effort to achieve a balance between religious beliefs and the changing times. In Islamic Religious Education (PAI) learning at school, the dynamics of religious moderation are a critical aspect that cannot be separated from the role of teachers as learning designers. PAI teachers function not only as transmitters of religious information but also play a central role in forming spiritual, moral, and social values in the next generation. This function means that the duties of PAI teachers include shaping students' Islamic character, including forming moderate students (Haniyyah & Indana, 2021). This is the aim of implementing religious moderation with teacher support inside and outside the classroom.
Religious moderation emphasizes balanced understanding, tolerance, and an open attitude towards differences in belief. PAI teachers have a responsibility to facilitate a learning environment that supports students' spiritual and moral development while fostering inclusive attitudes and respect for diversity. According to research by the SETARA Institute, the share of teenagers who have shifted from passive to active intolerance rose from 2.4 percent in 2016 to 5 percent in 2023; likewise, the exposed category increased from 0.3 percent to 0.6 percent (Hukmana, 2023). The shift from passive to active intolerance and the growing exposure to intolerance among students should alert teachers that their efforts have not yet been optimal. This is what prompted the researchers to analyze the dynamics of religious moderation in the context of Islamic Religious Education (PAI) learning in schools.
Research on religious moderation in educational institutions continues to develop, which shows that religious moderation is an object whose development must continue to be observed and monitored. Among the researchers who have studied religious moderation, Hidayat & Rahman (2022) explained that Islamic religious education learning in schools was effective in instilling the values of religious moderation in junior high school students. Putri & Nurmal (2022) showed that the implementation of religious moderation in schools can be carried out through the hidden curriculum, with a process of cultural acculturation through the internalization and institutionalization of habits. Wardati (2023) found that implementing learning based on religious moderation involves a deep understanding of various religions, respect for diversity, an inclusive curriculum, interfaith dialogue, the practice of tolerance, collaboration with parents, and training for teachers. In contrast to previous research, this study focuses on the strengths, weaknesses, opportunities, and threats of, and the strategies for, implementing religious moderation in Islamic Religious Education learning.
This research aims to explain the religious moderation that occurs in Islamic Religious Education learning and the strategies applied in implementing it. The research was carried out at a junior high school in Palembang, whose student body is heterogeneous in religion and ethnicity. This reality drives PAI teachers to keep fostering the value of moderation in their students so that frictions, small or large, do not occur between students.
METHODS
This research is qualitative descriptive research of the field-research type, a type used when the required data are located in the field (Sugiarti et al., 2020). The location of this research is SMP Negeri 3 Palembang. The data were obtained from PAI teachers and students using non-participant observation and semi-structured interviews (Sugiyono, 2020). The data were then tested for validity using time and source triangulation, and analyzed using Miles and Huberman's approach, namely data reduction, data display, and data verification.
SWOT Analysis of Religious Moderation in Islamic Religious Education
To analyze religious moderation in Islamic Religious Education learning at SMPN 3 Palembang, a SWOT analysis was used. This analysis aims to understand the existing strengths and opportunities along with the weaknesses and threats, and it is useful for identifying which aspects teachers must attend to when implementing Islamic Religious Education learning related to religious moderation. SWOT is an abbreviation covering the internal environment (strengths and weaknesses) and the external environment (opportunities and threats) (Endarwita, 2021). SWOT analysis can be an effective instrument for identifying and exploring opportunities to design new strategies or initiate innovative programs (Suci & Suwarta, 2019).
The strengths, weaknesses, opportunities, and threats of religious moderation in Islamic Religious Education learning at SMP Negeri 3 Palembang are presented in the following discussion:
Strengths
The main strength of religious moderation in Islamic Religious Education learning at SMP Negeri 3 Palembang is the existence of a curriculum that can serve as the basis for religious moderation. The school uses the Independent Curriculum, which gives educators the freedom to develop quality learning suited to students' needs and learning environment (Kemendikbud, 2023). In this context, Islamic Religious Education (PAI) teachers take progressive steps by positioning religious moderation as the main approach in their methods. As P stated: "The implementation of the Independent Curriculum provides more space for developing learning materials by integrating religious moderation. In the learning process, students are accustomed to respecting each other, deliberating, practicing non-violence, and so on, which is then implemented using various learning models." The Independent Curriculum, planned to be formally inaugurated in 2024, received initial support from the Minister of Education, who as a proactive step permitted schools to adopt it starting in 2021 (Zuhri & Nasir, 2023). This decision marks the government's commitment to giving schools the freedom to develop learning approaches that are more innovative and in line with local needs.
SMP Negeri 3 Palembang, as one of the pioneers in implementing the Independent Curriculum, has taken concrete steps to ensure its successful implementation. The institution directed all of its teachers to begin the Independent Curriculum learning process, instructing them to carefully study and apply the concepts contained in the curriculum. These instructions strengthen PAI teachers in designing learning that is in line with what students at the institution need.
Another strength is the competence of qualified teachers regarding religious moderation, both in understanding it and in implementing it in Islamic Religious Education learning at SMP Negeri 3 Palembang. Teacher competence comprises four domains: professional, social, personality, and pedagogical competence (Alfath et al., 2022). These four competencies are the main basis for teachers' ability to design learning, deliver it, and model it in their daily lives. As SAV put it: "Religious moderation continues to be socialized by the Ministry of Religion through religion teachers. I attended several seminars related to the implementation of religious moderation, and also read the ministry's religious moderation guidebook. This effort is part of increasing competence to become a professional teacher."
Weaknesses
Limited resources are the main obstacle in implementing a religious moderation approach, in particular limited access to books, teaching materials, and facilities that support a broader and deeper understanding of religious moderation. In developing this approach, relevant literature and adequate libraries are crucial. Learning resources help educators increase educational productivity, make learning more effective and efficient, give students opportunities to develop according to their abilities and potential, plan learning programs more systematically, and strengthen learning (Samsinar, 2019). As P stated: "The provision of resources in the form of books and teaching materials on religious moderation is still very limited at this time. Likewise, training and seminars are limited, and when training exists, it is conducted online. For me, online training has disadvantages, including the difficulty of networking and of understanding material from resource persons." Constraints in learning resources can hamper the development of an understanding of religious moderation, especially in educational environments: a lack of quality books and teaching materials can hinder the learning process (Herin, 2017). In addition, adequate facilities, such as discussion rooms or meeting places, play an important role in creating an environment that supports dialogue and tolerance between religious communities.
Teachers' understanding of religious moderation is limited to the theoretical realm of the material they study; they do not yet understand the reality of religious moderation in a wider scope. This means teachers need training and development in religious moderation. Unfortunately, inadequate training for teachers in integrating religious moderation into Islamic Religious Education (PAI) learning is one of the main challenges in improving the quality of religious education in schools.
Efforts to overcome this have in fact been made, for example training carried out by the Ministry of Religion in collaboration with the TNI and Polri to encourage the strengthening of religious moderation (Asrori, 2023). However, such training has not been provided comprehensively to the teachers whose main focus is instilling a moderate spirit in students; teachers still lack a platform for receiving training in religious moderation.
Opportunities
To this day, the majority of Indonesian society still upholds moderate values by respecting differences in culture, religion, race, and social background. This is a form of actualization of the Indonesian national motto, Bhinneka Tunggal Ika, which embodies a multicultural spirit of respecting every existing difference (Rahman et al., 2020). Awareness of this diversity is the main key to creating an inclusive and harmonious environment.
Not only the general public but also students, as agents of change in society, currently play a role in multicultural awareness. Studies of religious moderation continue to be carried out by students, and quite a few students and student organizations continue to study and advocate for religious moderation, both among students themselves and in society. Higher education institutions also insert religious moderation values into courses, internalize them through the learning process, integrate them through various student activities, and instill them through field practice, community service, and flagship programs, so that religious moderation is already ingrained in students by the time they hold a bachelor's degree (Sutarto, 2022).
The opportunities for implementing religious moderation in Islamic Religious Education learning at SMP Negeri 3 Palembang are also supported by various external parties aligned with this goal, among them the FKUB (Religious Harmony Forum), religious organizations, the Education Office, and the Ministry of Religion, each instilling the value of religious moderation in its own way. As H stated: "Implementing religious moderation provides opportunities to collaborate with external parties. Its implementation needs a lot of support to make it easier to internalize the values of religious moderation in students." With support from external parties, an initiative or project gains additional strength to achieve its goals. Synergy between religious educational institutions, religious organizations, and the government creates an ecosystem that supports and strengthens a shared vision of achieving positive change in society.
Threats
Resistance to the religious moderation approach can arise from various parties with different views. Some groups may not agree with the concept of moderation in a religious context because they consider it a sacrifice of values or principles they regard as essential. The most crucial domain is the realm of attitudes that arise around religious moderation.
Views on the limits of respect for other religions usually differ from one person or group to another. For example, wishing Christians a Merry Christmas in the context of maintaining harmony and relationships is considered permissible by some (Aspandi, 2018), while other views still do not allow it. When not addressed properly, such situations can become material for those who do not agree with the concept of moderation.
Information imbalance also threatens to create a learning environment vulnerable to the influence of extremism. A lack of balanced information can harm the learning process by presenting perspectives that are too biased or insufficiently objective, creating distortions in students' perceptions and understanding of critical issues. The ease of accessing information has given radicalism and terrorism a place in the mass media (Sitorus, 2022). As P stated: "As a religion teacher, I am also worried about the imbalance of information, which will have a bad impact on students. We fellow religion teachers remind each other to read information carefully from various viewpoints and to do tabayyun." This threat arises from the unequal distribution of information online and offline. The availability of poorly verified information sources can open the door to the spread of extreme or biased views; in such situations, students may have difficulty developing a balanced and critical understanding of various issues.
Strategy for Implementing Religious Moderation in Islamic Religious Education learning
Various strengths, weaknesses, opportunities, and threats arise in the implementation of religious moderation in Islamic Religious Education learning at SMP Negeri 3 Palembang. To achieve the goal of implementing a religious moderation approach in this learning, teachers need a series of strategies. The following are some of the steps taken, as P stated:
"The strategy used in implementing religious moderation is attending various trainings on religious moderation, collaborating with various parties such as ministries, and using learning technology."
As SAV stated: "I utilize electronic-based technology to deliver material that is integrated with the value of religious moderation."
Strengthening Teacher Training
Holding regular training for teachers is a very important initiative for improving the quality of Islamic Religious Education (PAI) learning. The main aim of the training is to give teachers an in-depth understanding of the concept of religious moderation, inclusive learning strategies, and the implementation of diversity values in education. In the training, teachers explore the concept of religious moderation in greater depth, which involves a balanced and thoughtful understanding of Islamic religious teachings. Through interactive discussion, they are guided to explore the meaning of religious moderation in everyday contexts and how to apply it in the PAI learning process.
The training can also address inclusive learning strategies that enable every student, regardless of background or difference, to experience the presence and benefits of Islamic Religious Education learning. The expected outcome of such training is an inclusive PAI: an account of Islam that recognizes differences, so that the existence of other religions becomes a broad source of knowledge, unified with religious and other knowledge so that it carries more meaning (Abdullah et al., 2021).
The training also provides teachers with practical guidance on creating a friendly and supportive learning environment for all students, so that the value of religious moderation is reflected in daily classroom interactions. The importance of understanding and applying diversity values is also a main focus of training: teachers reflect on how to enrich PAI learning by including various religious and cultural perspectives. In this way, teachers can create a learning environment that respects diversity and builds tolerance among students.
Collaboration with External Parties
Close collaboration with religious educational institutions, religious organizations, and the government is not an optional extra but an integral foundation of successful implementation. Synergy between these institutions has a significant positive impact, forming a solid basis for achieving educational goals in line with the vision of religious moderation. In this context, support from external parties plays a key role: providing additional resources such as books, equipment, and educational infrastructure not only improves the quality of teaching but also creates an adequate learning environment. Facilities and infrastructure demonstrably affect students' motivation to learn (Jannah & Sontani, 2018). Beyond that, the deep understanding of religious values promoted by religious educational institutions and organizations enriches learning perspectives, creating space for character development and a deeper understanding of cultural diversity.
The practical guidance that religious education institutions and religious organizations provide to teachers is a valuable investment in teaching skills and contextual understanding. Teachers supported by such guidance can more effectively integrate religious values into the curriculum, create holistic learning, and facilitate positive dialogue among students from diverse backgrounds. No less important, collaboration with the government opens up wider access to education programs; it can involve funding allocation, supporting policies, and integration of the curriculum into the national education system. In this way, a framework is formed that supports the vision of religious moderation, strengthening the position of education as a means of character formation and of cementing diversity.
Use of Educational Technology
The use of educational technology has become a major milestone in modernizing the learning process. Technology in education plays an important role, for example by facilitating learning through the planning, development, utilization, management, and evaluation of learning resources. Technology also helps solve learning problems through a cross-disciplinary approach, increases work effectiveness and efficiency, provides alternative solutions for the performance of educational organizations, and creates new innovations in education to overcome existing challenges (Nurillahwaty, 2021).
Teachers now have the opportunity to optimize online platforms that not only support interaction and collaboration but also integrate the value of religious moderation effectively. By utilizing this technology, the learning experience becomes more engaging and dynamic (Miasari et al., 2022). Teachers can create inclusive learning environments, allowing students to engage in in-depth discussions about the values of religious moderation. Diverse cultures and beliefs can be brought together through the features of online platforms, creating space for a deeper understanding of the differences and similarities among students.
Beyond interactivity, educational technology also opens the door to wider access to educational resources. Students are no longer limited by geographic boundaries or local resources, as they can easily access learning materials from sources around the world. This not only enriches learning content but also engages students in lifelong learning and develops skills relevant to the future. Thus, integrating educational technology into the learning process not only has a positive impact on the diversity of religious moderation values but also ushers in an era of inclusive, globally resourced learning. This step provides a solid foundation for the modernization of education, creating a generation that is skilled, open, and ready to face the challenges of an ever-evolving world.
CONCLUSION
An examination of Islamic Religious Education at SMP Negeri 3 Palembang reveals considerable room to improve course quality when religious moderation serves as the overarching strategy. The program's strengths lie in the independence of curriculum design afforded by the Independent Curriculum and in teachers who are knowledgeable about religious moderation. Its weakness is the lack of resources, particularly books and other materials that help people comprehend religious moderation. Problems can also arise from reluctance to adopt this approach or from an imbalance of information that hinders learning. The government, religious education institutions, and teachers can work together to improve student learning through the strategic use of technology in the classroom, better teacher preparation programs, and related measures. Regular professional development allows educators to deepen their knowledge and create inclusive pedagogical practices; collaboration with third parties provides resource support and practical guidance; and classroom technology both improves students' learning experiences and makes high-quality educational materials more accessible. A limitation of this study is that it does not provide practical guidance on how to incorporate moderate religious principles into Islamic religious education lessons. Consequently, it recommends that future research examine how students internalize religious moderation principles.
A superfolding Spinach2 reveals the dynamic nature of trinucleotide repeat RNA
Fluorescent imaging of RNA in living cells is a technically challenging problem in cell biology. One strategy for genetically encoding fluorescent RNAs is to express them as fusions with ‘RNA mimics of GFP’. These are short aptamer tags that exhibit fluorescence upon binding otherwise nonfluorescent fluorophores that resemble those found in GFP. We find that the brightest of these aptamers, Spinach, often exhibits reduced fluorescence after it is fused to RNAs of interest. We show that a combination of thermal instability and a propensity for misfolding account for the low fluorescence of various Spinach-RNA fusions. Using systematic mutagenesis, we identified nucleotides that account for the poor folding of Spinach, and generated Spinach2, which exhibits markedly improved thermal stability and folding in cells. Furthermore, we show that Spinach2 largely retains its fluorescence when fused to various RNAs. Using Spinach2, we detail the cellular dynamics of the CGG trinucleotide-repeat containing “toxic RNA” associated with Fragile-X tremor/ataxia syndrome, and show that these RNAs form nuclear foci with unexpected morphological plasticity that is regulated by the cell cycle and by small molecules. Together, these data demonstrate that Spinach2 exhibits improved versatility for fluorescently labeling RNAs in living cells.
Introduction
RNA localization is dynamically regulated in cells 1,2 . A major goal has been to develop genetically encoded systems analogous to green fluorescent protein (GFP) that enable imaging of tagged RNAs in living cells (see Supplementary Note 1). We developed Spinach, a 98-nt RNA aptamer that binds 3,5-difluoro-4-hydroxybenzylidene imidazolinone (DFHBI), a small molecule mimic of the GFP fluorophore 3 . Spinach and DFHBI are essentially nonfluorescent when separate, but interact to form a brightly fluorescent complex. Cells can be engineered to express RNAs fused to Spinach which can be imaged in live cells. We labeled the 5S RNA with Spinach in mammalian cells and observed changes in localization under stress conditions 3 .
Detection of 5S-Spinach in mammalian cells requires 1 sec exposure times despite its high expression level 3 . In contrast, imaging abundant GFP-tagged proteins in mammalian cells typically requires 10–100 msec exposure times under these imaging conditions. Additionally, the low brightness of 5S-Spinach in cells contrasts with the high brightness of Spinach measured in vitro 3 .
Here we show that Spinach exhibits thermal instability and poor folding, which reduces its brightness. Moreover, Spinach fluorescence is reduced when it is fused to target RNAs. Using systematic mutagenesis guided by brightness, thermostability, and a novel assay to measure folding, we identified mutations that confer thermostability and substantially increase the fraction of properly folded aptamer. The resulting RNA, Spinach2, is a "superfolder" variant of Spinach, which exhibits reduced context-dependence and is markedly brighter than Spinach in living cells. Using Spinach2, we explored the localization and dynamics of toxic CGG-repeat-containing RNAs. Imaging of these RNAs using short exposure times reveals that these RNAs exhibit dynamic localizations, which can be readily altered by cell division and small molecules. These data show that the enhanced folding and thermal stability of Spinach2 make it a versatile tool for imaging RNA in living cells.
Low fluorescence of Spinach-tagged RNAs
We sought to use Spinach to label "toxic RNA" localization. To do this, we expressed an RNA containing 60 CGG repeats that was previously shown to form intranuclear foci that resemble those seen in Fragile-X tremor/ataxia syndrome (FXTAS) patients 4 with a 3′-Spinach tag. However, expression of the CGG 60 -Spinach construct did not result in readily detectable nuclear foci in COS-7 cells in the presence of DFHBI (Fig. 1). Although fluorescence was not detectable, Spinach-tagged RNA formed nuclear foci as measured by FISH ( Supplementary Fig. 1a).
We asked whether the Spinach tag was unstable or degraded from the CGG-repeat RNA. To test this, FISH was carried out with a probe against Spinach. This experiment confirmed that Spinach is present in these foci ( Supplementary Fig. 1a, bottom row). In addition, we tested whether Spinach-tagged CGG-repeat RNA is destabilized using quantitative RT-PCR (qRT-PCR). However, tagged and untagged versions of (CGG) 60 -repeat RNA were equally stable ( Supplementary Fig. 1b). The observation that the Spinach-tagged CGG-repeat RNA was abundant in foci but not fluorescent indicates that Spinach is not fluorescent in the context of the CGG-repeat RNA, and requires modifications to enhance its fluorescence in cells.
Folding and thermostability of Spinach
To understand the lower-than-expected fluorescence of Spinach in mammalian cells, we considered several factors that could affect its brightness. These include low DFHBI cell permeability, low intrinsic brightness, and poor folding in cells. DFHBI permeability is unlikely to account for the low fluorescence since permeability of DFHBI matches that of Hoechst in mammalian cells, with maximal fluorescence achieved in approximately 30 min ( Supplementary Fig. 2). Additionally, in vitro measurements of Spinach-DFHBI fluorescence show that its overall brightness is 80% of GFP and 53% of eGFP 3 , which is bright enough for imaging. We therefore considered the possibility that Spinach misfolds in cells, reducing the number of Spinach-tagged RNAs that can bind and activate the fluorescence of DFHBI.
We first asked if Spinach can fold in cells at 37°C. We determined the melting temperature (T m ) of Spinach by monitoring the fluorescence of the RNA-DFHBI complex in vitro between 20°C and 60°C. These experiments showed that Spinach has a T m of 34 ± 0.6°C (Fig. 3a, Supplementary Table 1), indicating that a substantial fraction of Spinach molecules may be unfolded when imaging at 37°C.
Mutational analysis of Spinach
We next sought to identify mutations that could increase the thermostability of Spinach by correcting bulges and mismatches in the predicted structure of Spinach (Fig. 1b). These results led to the generation of Spinach1.1 and Spinach1.2, which have perfect complementarity in stem 1 and stem 1 and stem loop 3, respectively (see Online Methods and Supplementary Fig. 3). Spinach1.1 showed slightly enhanced thermostability, with a T m of 35 ± 0.5°C and was as bright as Spinach (Fig. 2a, Supplementary Table 1). Spinach1.2 displayed higher thermostability relative to Spinach and Spinach1.1, with T m value of 38 ± 0.3°C (Supplementary Table 1). However, the observed brightness of Spinach1.2 was 16% lower than Spinach (Fig. 2a). Taken together, these data indicate that mutations in stem 1 and stem loop 3 enhance thermostability, but do not improve brightness.
Development of Spinach2
We next sought to understand the basis for the reduced brightness of Spinach1.2. We considered that these mutations either (1) reduce the extinction coefficient or quantum yield of Spinach-DFHBI; or (2) increase the misfolded fraction of Spinach that is unable to bind DFHBI. To determine if these mutations increase the percent of Spinach that is misfolded, we developed an assay to measure the fraction of Spinach that is properly folded (see Online Methods).
Using this assay with buffers that mimic ion concentrations normally found in the cytoplasm, we found that 32 ± 4.2% and 13 ± 2.8% of Spinach is folded at 25 and 37°C, respectively. Spinach1.2 was also largely misfolded, with 27 ± 2.1 and 16 ± 2.3% folded at 25 and 37°C, respectively (Fig. 2b,c). These data indicate that Spinach folds poorly and that the increased thermostability of Spinach1.2 did not correspond to a higher folded fraction at 25°C. We next carried out systematic mutagenesis to identify Spinach mutants that exhibited the enhanced thermostability of Spinach1.2 but also improved folding (see Online Methods). Using this approach, we identified six positions in Spinach at which mutations maintained or enhanced brightness at 25°C while preserving Spinach1.2 thermostability. These mutations were tested alone and in combination (Supplementary Fig. 4, Supplementary Table 2). The winner from this screen contained all six mutations, was 1.8- and 2.8-fold brighter than Spinach in vitro at 25 and 37°C, respectively, and has a T m of 38 ± 0.4°C (Fig. 3a, Supplementary Table 1). We named this mutant Spinach2 (Fig. 1b).
Characterization of Spinach2 fluorescence properties
We next asked whether Spinach2 exhibits improved folding relative to Spinach using the assay described above. These experiments showed that a substantially higher fraction of Spinach2 is folded compared to Spinach, with 58 ± 4.8% and 37 ± 3.3% folded at 25 and 37°C, respectively (Fig. 2c). Thus, the mutations in Spinach2 result in markedly enhanced folding.
The mutations in Spinach2 could affect its ability to activate the fluorescence of DFHBI. To test this, we measured the extinction coefficient and quantum yield of Spinach2. In these experiments, we used excess RNA and 0.1 μM DFHBI, so that we could compare the properties of 0.1 μM Spinach-DFHBI and 0.1 μM Spinach2-DFHBI regardless of any difference in the percent of each RNA that is folded. We found that Spinach and Spinach2 have nearly identical photophysical properties (Supplementary Table 1). Moreover, the excitation and emission spectra, as well as the K D for DFHBI binding, are nearly identical (Fig. 3b,c, Supplementary Table 1). These data suggest that the enhanced brightness of Spinach2 reflects an increase in the folding efficiency of this RNA.
Spinach2 retains fluorescence in diverse contexts
RNA folding can be affected by flanking sequences, which can potentially form interactions with the RNA aptamer. To test whether sequence context affects Spinach and Spinach2 folding, we monitored the fluorescence of Spinach and Spinach2 inserted into different RNAs. First, both Spinach and Spinach2 were synthesized with an additional 50 nt of RNA on both the 5′ and 3′ sides. We then compared the fluorescence of identical concentrations (0.1 μM) of flanked Spinach to Spinach alone. Flanked Spinach was only 20% as bright as Spinach (Fig. 2d). We next asked if Spinach2 is affected by these flanking sequences. Flanked Spinach2 was 90% as bright as Spinach2 alone, and 10-fold brighter than flanked Spinach (Fig. 2d), indicating that Spinach2 is relatively insensitive to flanking sequence.
Spinach fluorescence in vivo is improved by inserting Spinach into the tRNA Lys 3 sequence 3,5 , which acts as a folding scaffold 6 . In the case of Spinach, the folding is increased from 32 ± 4.2 to 50 ± 3.9% at 25°C and 13 ± 2.8 to 24 ± 2.4% at 37°C by the presence of the tRNA (Fig. 2c). In the case of Spinach2, the folding is increased from 58 ± 4.8 to 80 ± 6.1% at 25°C and 37 ± 3.3 to 60 ± 5.4% at 37°C. For this reason, we used tRNA Lys 3 -Spinach and tRNA Lys 3 -Spinach2 in all subsequent tagged constructs and imaging experiments. We next compared the folding of Spinach or Spinach2 fused to 5S. 5S-Spinach2 was 3-fold brighter than 5S-Spinach (Fig. 2d), indicating that Spinach2 folds better than Spinach when fused to this RNA, even when Spinach is present in the context of the tRNA Lys 3 scaffold. 5S-Spinach2 folding was only ∼30% lower than Spinach2 (Fig. 2d). We also examined the folding of Spinach and Spinach2 fused to the 5′ end of the nuclear 7SK RNA 7 . In this case, Spinach2-7SK was 6-fold brighter than Spinach-7SK in vitro (Fig. 2d). Moreover, Spinach2-7SK folding was only ∼25% lower than Spinach2 (Fig. 2d).
Lastly, we examined Spinach2 folding in the context of CGG-repeat-containing RNA. Spinach and Spinach2 were appended to the 3′ end of (CGG) 60 RNA. The in vitro synthesized (CGG) 60 -Spinach was nearly nonfluorescent, while (CGG) 60 -Spinach2 was 80% as bright as Spinach2 alone (Fig. 2d). Together, these data show that Spinach2 retains substantial fluorescence when tagged to diverse RNAs.
Spinach2 exhibits increased fluorescence in E. coli
We next asked whether the increased stability of Spinach2 in vitro would correspond to a brighter signal in E. coli. Spinach2 was 1.4-fold brighter at 25°C and 2.1-fold brighter at 37°C than Spinach (Fig. 3d). Aptamer abundance was normalized to 16S RNA and found to be essentially identical for all samples (Fig. 3e).
We also compared the brightness of Spinach-7SK and Spinach2-7SK in HeLa cells. 7SK localizes to nuclear speckles 8 . Expression of Spinach-7SK showed no detectable signal, but expression of Spinach2-7SK labeled intranuclear foci that colocalize with SC35, a known protein component of nuclear speckles 8,9 , tagged with mCherry (Fig. 4c). These data demonstrate improved RNA imaging in live cells using Spinach2.
Little is known about the dynamics of CGG-repeat-containing RNA localization in the nucleus. Because these RNA complexes are highly G/C rich, it has been proposed that they form highly stable hairpins that may be difficult to disrupt 4,15,18 . Previous studies have shown that the splicing factor Sam68 dynamically associates with CGG-repeat nuclear foci 4,19 . However, these studies do not address whether the RNAs themselves are dynamic or immobile in nuclei.
Thus, we tested whether Spinach2 could be used to image CGG-repeat RNA. Although (CGG) 60 -Spinach was not detected (Fig. 1a, top row), expression of (CGG) 60 -Spinach2 resulted in bright intranuclear foci that were readily detected using wide-field microscopy with 50-100 msec exposure times (Fig. 1a, middle row). These foci colocalized with mCherry-hSam68, a marker of CGG-containing nuclear foci 4 (Fig. 1a). The foci were highly heterogeneous in appearance ( Supplementary Fig. 4b). Thus (CGG) 60 -Spinach2 can be used to study the dynamics of toxic RNA aggregates.
Live-cell imaging of CGG-repeat RNA aggregates
We next monitored the formation of (CGG) 60 -Spinach2 foci in transiently transfected COS-7 cells. Spinach2 fluorescence was detectable as early as 3 h post-transfection. (CGG) 60 -Spinach2 signal was initially diffusely nucleoplasmic, with foci formation evident within 1 h (Fig. 5a, Supplementary Movie 1). Foci number, size, and brightness increased over the course of the experiment. These data indicate that CGG-repeat RNA aggregates rapidly following expression.
We next asked if the aggregated CGG-repeat RNA is highly stable in cells. To examine the stability of (CGG) 60 -Spinach2 RNA, we measured fluorescence after treatment of cells with actinomycin D, a potent transcription inhibitor. Here, we observed that the Spinach2 signal was stable and remained unchanged for up to 8 h (Fig. 5c), at which point actinomycin D-mediated cytotoxicity was observed.
To test the stability of (CGG) 60 -Spinach2 foci over longer time periods, we controlled (CGG) 60 -Spinach2 transcription using the TET-Off system 20 (see Online Methods). Immediately following transcription inhibition, 94 ± 1.7% of transfected cells contained foci. These foci were long-lived, as 88 ± 5.6 and 82 ± 6.5% of cells retained foci after 24 and 48 h, respectively (Fig. 5d,e). These results were supported by qRT-PCR, which demonstrated that (CGG) 60 and (CGG) 60 -Spinach2 RNAs are highly stable. The stability of these RNAs is most likely due to their incorporation into nuclear foci, since (CGG) 30 RNAs, which do not form foci 4 , are markedly less stable ( Supplementary Fig. 1b).
CGG-repeat RNA undergo rearrangements during cell division
Since these RNAs are relatively resistant to degradation and form thermodynamically stable duplexes 18,21 , we sought to test whether they form static foci. To test this idea, we monitored foci morphology in transfected cells. Time-lapse imaging revealed that foci were mobile and can merge to form larger foci (Fig. 5a, Supplementary Movie 1).
This dynamic nature was also apparent in dividing cells (Fig. 5b). Prior to cell division, typical cells contain multiple foci. During division, the foci coalesce to form a large single aggregate that then extends into a long linear structure. This long aggregate is divided between daughter cells. The RNA then appears to become diffusely nucleoplasmic before reaggregating into foci. These results suggest that CGG-repeat RNA foci aggregate and disaggregate during the cell cycle.
A small molecule can disrupt RNA aggregates
We next asked if small molecules can induce disaggregation of CGG-repeat RNA foci. No molecules have been shown to disrupt existing aggregates, although two drugs prevent the formation of CGG-repeat RNA foci in transfected cells. These are tautomycin 4 and 1a, a small molecule that binds CGG-repeat RNA and disrupts its binding to a CGG-binding protein, DGCR8 4,22 . We confirmed that both drugs prevent (CGG) 60 -Spinach2 foci formation (Fig. 6a,b).
To determine whether 1a can disrupt existing foci, COS-7 cells expressing (CGG) 60 -Spinach2 were treated with 1a and imaged every 5 min for 2 h. No change in foci was observed under these conditions (Fig. 6c). To test whether longer treatments were required for 1a to disrupt foci, cells expressing (CGG) 60 -Spinach2 were treated with 1a for 48 h. Immediately after addition of 1a, 94 ± 2.8% of examined nuclei contained foci. After 48 h of 1a treatment, 86 ± 3.5% contained foci, indicating that 1a does not substantially disrupt foci, even after long treatments ( Supplementary Fig. 5a). Furthermore, a 48 h treatment with 1a did not induce the dissociation of Sam68 from (CGG) 60 -Spinach2 foci ( Supplementary Fig. 5b). These results show that 1a can prevent foci formation but does not readily disrupt existing foci.
In contrast, we observed that tautomycin induces disaggregation of foci in as little as 1 h (Fig. 6c, Supplementary Movie 2). The (CGG) 60 -Spinach2 remained as diffuse nucleoplasmic staining in cells (Fig. 6c). Removal of tautomycin after a 2 h treatment was not sufficient to restore foci formation ( Supplementary Fig. 6), suggesting that tautomycin induces cellular changes that prevent reaggregation.
To test if the effect of tautomycin on (CGG) 60 -Spinach2 foci was due to inhibition of its known targets protein phosphatase-1 (PP1) or protein phosphatase-2A (PP2A) 23 , we treated cells with okadaic acid at a concentration that also inhibits both PP1 and PP2A 24 . In this case, no foci disruption was observed over 4 h (Supplementary Fig. 7). These results suggest that the disaggregation effect of tautomycin is likely to be due to a different target than PP1 or PP2A.
Discussion
We found that Spinach exhibits poor thermal stability and folding when tagged to other RNAs. To resolve these issues, we developed Spinach2 by targeted mutagenesis of Spinach followed by screening for enhanced brightness and thermostability. Spinach2 has nearly identical photophysical properties to Spinach, yet displays enhanced folding both alone and in the context of flanking RNA. Moreover, Spinach2 exhibits improved folding at both 25°C and 37°C, yielding significant enhancements during imaging. Our results show that improvements in folding and thermostability enable imaging of RNAs that are otherwise not detectable with Spinach.
Although Spinach2 folds more efficiently than Spinach, the improved folding is more apparent when Spinach2 is fused to other RNAs. For example, Spinach2 retains 80% of its fluorescence when fused to the CGG-repeat RNA, while Spinach is essentially nonfluorescent in this context. Thus, the improved performance of Spinach2 in live cells reflects its improved folding when fused to other RNAs. However, it is possible that other flanking sequences will affect Spinach2 fluorescence. Therefore, the fluorescence of a Spinach2-tagged RNA should first be established by in vitro transcription of the Spinach2-tagged RNA and comparison with the fluorescence of untagged Spinach2. If the Spinach2-tagged RNA lacks fluorescence in vitro, inserting Spinach2 at other sites may restore fluorescence by providing flanking sequences that are more compatible with Spinach2 folding.
Both (CGG) 60 -Spinach2 and Spinach2-7SK form RNA-enriched foci within the cell, which make imaging straightforward. However, imaging RNAs present at lower concentrations may require longer imaging times. Since multimerization of fluorescent proteins has been successfully used to enhance the imaging of low abundance proteins 25 , an analogous strategy could be adapted to label RNAs with Spinach2. Tagging RNAs with multiple Spinach2 sequences may be valuable to enhance the brightness of tagged RNA and aid in imaging lower abundance RNAs.
In order to demonstrate the ability to use Spinach2 in diverse imaging experiments, we imaged and characterized the localizations of CGG repeat-containing RNAs in living cells for the first time. These RNAs were thought to form stable G/C-rich aggregates 18,21 . Our studies show that the RNA component of these foci is highly dynamic in cells and undergoes considerable morphologic rearrangements, especially during cell division. These results suggest that CGG-repeat RNAs bind to preexisting nuclear structures that are normally partitioned during cell division. This idea is supported by previous studies demonstrating colocalization of CGG-repeat RNAs with various intranuclear markers 4 .
By imaging (CGG) 60 -Spinach2 we were able to identify the first compound that can induce disaggregation of toxic RNAs. Previous studies have relied on imaging foci-associated RNA-binding proteins, such as Sam68 4,16 . Direct imaging of toxic RNA provides opportunities to identify small molecules and signaling pathways that affect CGG-repeat RNA localization dynamics in living cells. Assays using (CGG) 60 -Spinach2 may enable the identification of additional compounds that can disrupt foci and potentially serve as therapeutics for FXTAS.
Reagents and equipment
Unless otherwise stated, all reagents were purchased from Sigma-Aldrich. Commercially available reagents were used without further purification. Absorbance spectra were recorded with a Thermo Scientific NanoDrop 2000 spectrophotometer with cuvette capability. Fluorescence excitation and emission spectra were measured with a Perkin Elmer LS-55 fluorescence spectrometer.
Preparation and analysis of Spinach and Spinach mutants
RNAs were created by using the appropriate single-stranded DNA templates (Integrated DNA Technologies) and PCR amplification with primers that included a 5′ T7 promoter sequence to generate double-stranded DNA templates. PCR products were then purified with PCR purification columns (Qiagen) and used as templates for in vitro T7 transcription reactions (Epicentre) as described previously 3 . RNA was purified using ammonium acetate precipitation and quantified using both absorbance values and the Riboquant Assay kit (BD Biosciences). Photophysical characterization of Spinach2 was carried out as previously described 3 .
Thermostability measurements
Spinach or Spinach2 (1 μM) was incubated in 20 mM HEPES pH 7.4, 100 mM KCl, 1 mM MgCl 2 , and 10 μM DFHBI. Fluorescence values were recorded in one degree increments from 20 to 60°C, with a 5 min incubation at each temperature to allow for equilibration. Fluorescence measurements were performed using a Perkin Elmer LS-55 fluorescence spectrometer using the following instrument parameters: excitation wavelength, 460 nm; emission wavelength, 501 nm; slit widths, 10 nm. Curves were fitted using the Boltzmann sigmoidal equation in GraphPad Prism 5 software. Values presented are mean and s.e.m. from three independent measurements.
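Prism's Boltzmann sigmoidal fit can be reproduced in Python as sketched below; the functional form is the standard four-parameter Boltzmann, and the data here are synthetic placeholders rather than measured melting curves.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, top, bottom, Tm, slope):
    # Fluorescence decays from `top` to `bottom` with midpoint Tm
    return bottom + (top - bottom) / (1.0 + np.exp((T - Tm) / slope))

temps = np.arange(20.0, 61.0, 1.0)                  # 20-60 C in 1 C steps
signal = boltzmann(temps, 1.0, 0.05, 34.0, 2.5)     # synthetic melting curve
signal += np.random.default_rng(0).normal(0.0, 0.01, temps.size)

popt, _ = curve_fit(boltzmann, temps, signal, p0=[1.0, 0.0, 35.0, 2.0])
print(f"fitted Tm = {popt[2]:.1f} C")
```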
Folding Assay
Our folding assay involves measuring fluorescence under two conditions: one in which RNA is in excess relative to DFHBI (limiting dye), and one in which DFHBI is in excess relative to RNA (limiting RNA). Since Spinach and DFHBI form a 1:1 stoichiometric complex, the maximum amount of complex that can form is determined by the limiting component. In the limiting-dye condition, fluorescence was determined by incubating 0.1 μM DFHBI with a 100-fold excess (10 μM) of Spinach; this value defines the fluorescence of 0.1 μM Spinach-DFHBI complex. We assume that even if nearly all Spinach is misfolded or unfolded, there will be enough properly folded Spinach to stoichiometrically bind 0.1 μM DFHBI. We confirmed this by measuring fluorescence after doubling the RNA to 20 μM, which caused no increase in fluorescence (data not shown). In the limiting-RNA condition, we measure the fluorescence obtained using 10 μM DFHBI and 0.1 μM Spinach. In theory, up to 0.1 μM Spinach-DFHBI can form if all the Spinach is folded; if a portion of Spinach is unfolded, the fluorescence will be proportionately less than that of 0.1 μM Spinach-DFHBI. Thus, this approach reveals the fraction of Spinach that is folded under diverse conditions. Fluorescence was therefore measured for each RNA under two conditions: (1) 0.1 μM RNA and 10 μM DFHBI (limiting RNA) and (2) 0.1 μM DFHBI and 10 μM RNA (limiting dye). For each condition, the signal from DFHBI without RNA was subtracted. Fluorescence was measured in 20 mM HEPES pH 7.4, 100 mM KCl, 1 mM MgCl 2 at the designated temperature, using a Perkin Elmer LS-55 fluorescence spectrometer with the following instrument parameters: excitation wavelength, 460 nm; emission wavelength, 501 nm; slit widths, 10 nm. The signal from condition 1 (limiting RNA) was divided by the signal from condition 2 (limiting dye) to determine the fraction folded.
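The fraction-folded arithmetic reduces to a ratio of background-subtracted signals, as in this minimal sketch (the fluorimeter readings are hypothetical):

```python
def fraction_folded(f_limiting_rna: float, blank_10um_dfhbi: float,
                    f_limiting_dye: float, blank_01um_dfhbi: float) -> float:
    # Condition 1: 0.1 uM RNA + 10 uM DFHBI (limiting RNA), with its dye-only blank.
    # Condition 2: 0.1 uM DFHBI + 10 uM RNA (limiting dye); this defines the
    # fluorescence of 0.1 uM fully formed RNA-DFHBI complex.
    return ((f_limiting_rna - blank_10um_dfhbi) /
            (f_limiting_dye - blank_01um_dfhbi))

# Hypothetical readings: ~30% of this RNA would be scored as folded
print(f"fraction folded = {fraction_folded(3500.0, 450.0, 10200.0, 60.0):.2f}")
```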
Generation of Spinach1.1 and Spinach1.2
Spinach is predicted to contain four stems (Fig. 1b). Some of the stems contain bulges and mismatches that likely reduce its thermodynamic stability; however, it is not clear whether these features are also necessary for Spinach-induced DFHBI fluorescence. Our previous work mutating Spinach and designing Spinach-based sensors demonstrated that stem 1 and stem loop 3 can tolerate various mutations and insertions 5,7 . Therefore, we considered the possibility that the mismatches in stem 1 adversely affect Spinach thermal stability. To test this idea, we generated a mutant of Spinach with perfect complementarity in stem 1. This mutant, called Spinach1.1, was also mutated to convert the last base pair in stem 1 from U-A to C-G, in an attempt to stabilize stem 1 (Supplementary Fig. 4). Spinach1.1 showed slightly enhanced thermostability, with a T m of 35 ± 0.5°C and was as bright as Spinach (Fig. 2a, Supplementary Table 1).
We next asked if stem loop 3 could be altered to increase Spinach thermostability. We previously found that alterations in stem loop 3 do not substantially reduce Spinach fluorescence 7 . In Spinach, stem loop 3 contains three mismatches and an internal bulge. We generated Spinach1.2 by retaining the mutations in Spinach1.1 and mutating stem loop 3 to eliminate this bulge and introduce perfect complementarity ( Supplementary Fig. 3).
Systematic mutagenesis of Spinach1.2
Because elevated G/C content can lead to stable misfolded structures 8,9 , we reasoned that decreasing the overall G/C content could promote proper folding. We carried out scanning mutagenesis, mutating every guanosine and cytidine to adenosine or uridine, respectively. In regions where G and C residues were predicted to form a base pair, both residues were mutated to A and U in order to maintain complementarity. Each of these 35 mutants was synthesized in vitro, and the fraction folded was measured at 25°C and 42°C. Fluorescence signals equal to or greater than Spinach at 25°C indicated an equal or greater percent folded; a higher percent signal at 42°C indicated improved thermostability relative to Spinach.
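As an illustration of the scanning-mutagenesis enumeration, the sketch below generates single G→A and C→U substitutions over a placeholder sequence. It deliberately omits the joint mutation of paired stem positions described above, and the demo string is not the Spinach sequence.

```python
def scanning_mutants(seq: str):
    """Yield (position, mutant) for each single G->A or C->U substitution."""
    swap = {"G": "A", "C": "U"}
    for i, base in enumerate(seq):
        if base in swap:
            yield i, seq[:i] + swap[base] + seq[i + 1:]

demo = "GGAUCCCGACUGGCGAGAGCC"  # placeholder fragment, not the Spinach sequence
for pos, mutant in list(scanning_mutants(demo))[:3]:
    print(pos, mutant)
```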
Cloning Spinach2 for expression in E. coli
Spinach and Spinach2 were PCR amplified with primers containing EagI restriction sites on both the 5′ and 3′ ends of the Spinach sequence. They were then cloned into a pET28c-based plasmid containing a chimera of the human tRNA Lys 3 scaffold, which we previously used for Spinach and Spinach-based metabolite sensors and which has been shown to stabilize heterologous expression of RNA aptamers in E. coli 6 .
Whole-cell fluorescence measurements of E. coli
BL21 cells were transformed to harbor either pET28c-tRNA-Spinach or pET28c-tRNA-Spinach2, and grown in Luria broth + 100 μg/mL kanamycin to OD 600 0.4 at room temperature. The cells were then induced with addition of 1 mM IPTG for 2 h at room temperature. After induction, cells were normalized for cell density and split into two aliquots. One aliquot per sample was incubated at room temperature, and the other was incubated for 20 min at 37°C. Cells were then measured for total fluorescence using a Tecan SafireII plate reader with 460 ± 10 nm excitation and emission was recorded at 510 ± 10 nm. Data shown represent mean and s.e.m. values for three independent experiments.
qRT-PCR analysis of Spinach and Spinach2 concentration in E. coli
Total RNA samples were collected from E. coli at both 25 and 37 °C using the RNeasy Protect Bacteria Mini Kit (Qiagen). Reverse transcription was carried out on all samples using a reverse primer that bound in the tRNA portion of the tRNA-Spinach transcripts (5′-TGGCGCCCGAACAGGGAC-3′) and a reverse primer against 16S RNA (5′-GTATTACCGCGGCTGCTG-3′) according to the SuperscriptIII reverse transcription kit protocol. qRT-PCR was carried out according to the iQ™ SYBR® Green Supermix (Bio-Rad) protocol with forward (5′-GCCCGGATAGCTCAGTCGGTAG-3′) and reverse (5′-TGGCGCCCGAACAGGGAC-3′) primers against the tRNA portion of either transcript as well as forward (5′-CTCCTACGGGAGGCAGCAG-3′) and reverse (5′-GTATTACCGCGGCTGCTG-3′) primers against 16S RNA. In all cases, Spinach transcript levels were normalized to 16S RNA levels. Data represent mean and s.e.m. values for three independent experiments.
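The normalization arithmetic is not spelled out in the text; assuming the standard 2^−ΔΔCt calculation against 16S RNA, a minimal sketch with hypothetical Ct values looks like this:

```python
def relative_level(ct_target: float, ct_16s: float,
                   ct_target_ref: float, ct_16s_ref: float) -> float:
    # 2^-ddCt: target normalized to 16S, relative to a reference sample
    d_ct = ct_target - ct_16s
    d_ct_ref = ct_target_ref - ct_16s_ref
    return 2.0 ** -(d_ct - d_ct_ref)

# Hypothetical Ct values: a Spinach2 sample versus a Spinach reference sample
print(f"relative transcript level = {relative_level(18.2, 10.1, 18.4, 10.2):.2f}")
```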
Imaging 5S-Spinach and 5S-Spinach2
Imaging of 5S-Spinach and 5S-Spinach2 was carried out as previously described for 5S-Spinach 3 . Cells were imaged for either 100 msec or 1 sec. Background signals from cells expressing pAV-5S incubated with DFHBI were also taken at 100 msec and 1 sec and subtracted from the corresponding images using NIS-Elements software.
For brightness quantification, fluorescence signal was measured for 20 background-subtracted cells per sample and normalized for total area using NIS-Elements AR 3.2 (Nikon). The 5S-Spinach2 signal was normalized to 1.0.
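A minimal sketch of this quantification (background subtraction, per-area normalization, and scaling so that 5S-Spinach2 = 1.0), using invented per-cell values rather than the real measurements:

```python
import numpy as np

# Illustrative per-cell measurements (arbitrary units); 20 cells were quantified per sample.
raw_signal_5S_spinach  = np.array([850., 920., 780.])    # integrated signal per cell
raw_signal_5S_spinach2 = np.array([1900., 2100., 1750.])
background             = 100.0                            # signal from pAV-5S + DFHBI cells
cell_area_spinach      = np.array([400., 430., 380.])     # pixels per cell
cell_area_spinach2     = np.array([410., 450., 390.])

def mean_density(signal, area, bg):
    # Background-subtract, then normalize each cell's signal by its area.
    return np.mean((signal - bg) / area)

spinach  = mean_density(raw_signal_5S_spinach,  cell_area_spinach,  background)
spinach2 = mean_density(raw_signal_5S_spinach2, cell_area_spinach2, background)

# Report brightness with 5S-Spinach2 set to 1.0, as in the text.
print("5S-Spinach relative brightness:", round(spinach / spinach2, 2))
```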
Imaging Spinach-7SK and Spinach2-7SK
HeLa cells (ATCC-CRM-CCL-2) were cultured and passaged in DMEM medium supplemented with 50 units of penicillin and 50 μg of streptomycin per mL. For imaging experiments, cells were cultured on 24-well glass-bottom dishes and cotransfected with 0.3 μg of pLPC-Spinach-7SK or pLPC-Spinach2-7SK and 0.3 μg of pCDNA3.1-SC35-mCherry using FuGeneHD (Roche) per the manufacturer's instructions in DMEM medium lacking penicillin and streptomycin. Cells were imaged 24 h post-transfection. At 30 min prior to imaging, medium was supplemented with 25 mM HEPES, 5 mM MgSO4, and 20 μM DFHBI. Cells were imaged as described below using FITC and Texas Red filter sets.
Cloning of CGG60-Spinach and CGG60-Spinach2
Spinach or Spinach2 in the context of the tRNA-Lys scaffold was amplified by PCR using forward (5′-ATATATATCTAGAGCCCGGATAGCTCAGTCGGTAGAGCAG-3′) and reverse (5′-ATATATGGGCCCTGGCGCCCGAACAGGGACTTGAACCC-3′) primers; the resulting PCR products were digested with XbaI and ApaI and cloned downstream of the 60 CGG repeats and upstream of the BGH polyadenylation sequence in pCDNA-60CGG to generate pCDNA-60CGG-Spinach and pCDNA-60CGG-Spinach2. For TET-Off experiments, the entire transcript cassette from pCDNA-60CGG-Spinach2 (CGG60-Spinach2-BGH polyadenylation signal) was excised using NheI and EcoRV and subcloned into pTRE2-Hyg (Clontech) that was cut with NheI and EcoRV.
COS-7 cells (ATCC-CRL-1651) were cultured and passaged in DMEM medium
supplemented with 50 units of penicillin and 50 μg of streptomycin per mL. For imaging experiments, cells were cultured on 24-well glass-bottom dishes and transfected with 0.6 μg of pCDNA-60CGG-Spinach or pCDNA-60CGG-Spinach2 using FuGeneHD (Roche) per the manufacturer's instructions in DMEM medium lacking penicillin and streptomycin. Cells were imaged in CO2-independent medium (Invitrogen) supplemented with L-glutamine. At 30 min-1 h prior to imaging, medium was supplemented with 25 mM HEPES, 5 mM MgSO4, 1 μg/mL Hoechst 33342 (when appropriate), and 20 μM DFHBI or vehicle. Live fluorescence images were acquired in a temperature-controlled chamber at 35-37°C with a CoolSnap HQ2 CCD camera through a 60X oil objective (Plan Apo 1.4 NA) mounted on a Nikon TE2000 epifluorescence microscope and analyzed with the NIS-Elements software. Spinach was imaged with a filter cube typically used for fluorescein/EGFP, with a sputter-coated excitation filter 470/40, dichroic mirror 495 (long pass), and emission filter 525/50 (Chroma Technology). DsRed-Max and mCherry were imaged using a filter cube typically used for Texas Red, with a sputter-coated excitation filter 560/40 and emission filter 630/75 (Chroma Technology). Background intensity was subtracted from all pixel intensity measurements. Image analyses were completed with NIS-Elements AR 3.2 (Nikon). Drug treatments were carried out as specified in the text. Tautomycin was used at a final concentration of 5 μM in all cases. 1a was used at a final concentration of 20 μM in all cases. DMSO was added to a final concentration of 0.1% for vehicle treatments.
For foci formation experiments, COS-7 cells were transiently transfected with a plasmid expressing (CGG) 60 -Spinach2. After 2 h, the transfection medium was replaced with imaging medium containing DFHBI. After a 1 h incubation in imaging medium, cells were imaged every 20 min for 6 h.
Analysis of DFHBI cell permeability
COS-7 cells were transfected with pCDNA-60CGG-Spinach2. At 24 h post-transfection, the medium was supplemented with 25 mM HEPES, 5 mM MgSO4, 1 μg/mL Hoechst 33342, and 20 μM DFHBI. Images were acquired for Hoechst and Spinach2 signal every 5 min for 1 h for 20 cells. All signals were first normalized to area and then to the highest signal for a given nucleus to determine the time required to reach maximal signal.
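The time-to-maximal-signal calculation described above could be implemented along these lines (illustrative time course only; the real per-cell measurements came from NIS-Elements):

```python
import numpy as np

# Illustrative time course for one nucleus: Spinach2 signal sampled every 5 min for 1 h.
times = np.arange(0, 65, 5)                      # minutes: 0, 5, ..., 60
signal = np.array([0, 120, 260, 400, 520, 600, 650, 680, 690, 695, 698, 700, 700], float)
nucleus_area = 350.0                              # pixels; used for per-area normalization

per_area = signal / nucleus_area                  # normalize to area
normalized = per_area / per_area.max()            # then to the brightest time point
time_to_max = times[np.argmax(normalized)]        # first time the maximum is reached
print(f"time to maximal signal: {time_to_max} min")
```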
FISH of CGG60 RNA
COS-7 cells were grown and transfected as described above on glass coverslips. Cells were fixed and stained as previously described 4. CGG repeats were probed using a (CCG)8x-Texas Red DNA oligonucleotide probe (IDT). Spinach was probed using a 3′ Texas Red-labeled DNA oligonucleotide (5′-GCACTGCCGAAGCAGCCACACCTG-3′) (IDT). DAPI was contained in the mounting solution for DNA staining.
Reverse transcription was carried out on all samples using a reverse primer that bound downstream of the CGG repeats in all constructs (5′-CTAGAGATATCAGGCTGATCAGC-3′) and a reverse primer against GAPDH mRNA (5′-TCCACCACCCTGTTGCTGTA-3′) according to the SuperscriptIII reverse transcription kit protocol. qRT-PCR was carried out according to the iQ™ SYBR® Green Supermix (Bio-Rad) protocol with forward (5′-GTCAGCTGACGCGTGCTAGCG-3′) and reverse (5′-CTAGAGATATCAGGCTGATCAGC-3′) primers against all CGG transcripts as well as forward (5′-ACCACAGTCCATGCCATCAC-3′) and reverse (5′-TCCACCACCCTGTTGCTGTA-3′) primers against GAPDH mRNA. In all cases, CGG transcript levels were normalized to GAPDH mRNA levels. Data represent mean and s.e.m. values for three independent experiments. We also carried out qRT-PCR of sample RNA compared to in vitro transcribed control RNA to determine the approximate number of CGG repeat-containing RNAs per cell. We obtained roughly 0.2 ng of (CGG)60-Spinach2 RNA from 0.2 × 10^6 transfected cells. We estimated the molecular weight of polyadenylated (CGG)60-Spinach2 to be roughly 280 kDa. Using these values, we calculated that each transfected cell contained roughly 2000 copies of (CGG)60-Spinach2. On average, each cell contains 10-15 foci, indicating that each aggregate contains roughly 150-200 RNA molecules. It should be noted that foci vary in size in different cells, and foci that are much smaller than the "average" size are readily detectable in cells. Moreover, we observe some Spinach2 signal in the nucleoplasm that is not in foci, so 150-200 RNA molecules is unlikely to be the limit of detection at 50 ms; determining the precise limit will require more accurate quantification of foci that are closer to the detection threshold. Because foci were typically imaged at 50 ms, it is likely that smaller numbers of RNAs would be detectable at longer imaging times such as 500 ms or 1 sec.
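The copy-number estimate follows directly from the stated mass, molecular weight, and cell number; a quick check of the arithmetic:

```python
AVOGADRO = 6.022e23

rna_mass_g   = 0.2e-9      # ~0.2 ng of (CGG)60-Spinach2 RNA recovered
mw_g_per_mol = 280e3       # estimated MW of the polyadenylated transcript (~280 kDa)
n_cells      = 0.2e6       # number of transfected cells

molecules = rna_mass_g / mw_g_per_mol * AVOGADRO
copies_per_cell = molecules / n_cells
print(f"~{copies_per_cell:.0f} copies per cell")          # ~2150, i.e. roughly 2000

for foci_per_cell in (10, 15):
    print(f"{foci_per_cell} foci/cell -> ~{copies_per_cell / foci_per_cell:.0f} RNAs per focus")
```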
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.

Figure legend (panel c): HeLa cells were transiently transfected to express either Spinach-7SK or Spinach2-7SK under the control of the CMV promoter. Cells were cotransfected with SC35-mCherry, which labels nuclear speckles. Cells were incubated with 20 μM DFHBI and imaged for 200 ms. Green and red fluorescence images are shown along with overlaid images. Scale bar, 10 μm.
SparseByteNN: A Novel Mobile Inference Acceleration Framework Based on Fine-Grained Group Sparsity
To address the challenge of increasing network size, researchers have developed sparse models through network pruning. However, maintaining model accuracy while achieving significant speedups on general computing devices remains an open problem. In this paper, we present a novel mobile inference acceleration framework SparseByteNN, which leverages fine-grained kernel sparsity to achieve real-time execution as well as high accuracy. Our framework consists of two parts: (a) A fine-grained kernel sparsity schema with a sparsity granularity between structured pruning and unstructured pruning. It designs multiple sparse patterns for different operators. Combined with our proposed whole network rearrangement strategy, the schema achieves a high compression rate and high precision at the same time. (b) Inference engine co-optimized with the sparse pattern. The conventional wisdom is that this reduction in theoretical FLOPs does not translate into real-world efficiency gains. We aim to correct this misconception by introducing a family of efficient sparse kernels for ARM and WebAssembly. Equipped with our efficient implementation of sparse primitives, we show that sparse versions of MobileNet-v1 outperform strong dense baselines on the efficiency-accuracy curve. Experimental results on Qualcomm 855 show that for 30% sparse MobileNet-v1, SparseByteNN achieves 1.27x speedup over the dense version and 1.29x speedup over the state-of-the-art sparse inference engine MNN with a slight accuracy drop of 0.224%. The source code of SparseByteNN will be available at https://github.com/lswzjuer/SparseByteNN
Introduction
Deep convolutional neural networks (CNNs) have achieved extraordinary performance in computer vision tasks and have become the fundamental element and core enabler of ubiquitous artificial intelligence. With the fast growth of embedded and mobile applications, executing CNNs on mobile platforms is becoming increasingly attractive, as it improves computing power utilization, enhances data security, and reduces dependence on the network [5] [21] [22]. However, typical state-of-the-art (SOTA) CNN models are computation-intensive and memory-hungry; even mobile devices with advanced CPUs and GPUs are considered resource-constrained when executing them. Thus, achieving efficient inference with real-time performance is still a challenging task.

Figure 1. SparseByteNN overview.
To achieve this goal, extensive efforts have been made in the optimization of algorithms, software, and hardware. Algorithm optimization includes efficient network backbone design and model compression. Early SOTA CNNs [36] [12] [37] usually have a backbone stacked from normal 3×3 convolution (CONV) layers; their computationally prohibitive cost makes real-time deployment on mobile devices almost impossible. [15] [35] [9] [38] use depthwise separable convolution instead of normal CONV layers to build efficient models for mobile and embedded vision applications, which has become the mainstream of mobile network design. In order to further reduce the redundancy of CNNs, model compression techniques, including model pruning [10] [11] [42] [24] [17] [26] [14] [31] and model quantization [8] [3], have been proposed and studied intensively for model storage reduction and computation acceleration. Weight quantization is less well supported on mobile devices, especially mobile GPUs [33]; therefore, this paper leverages model pruning as the primary model compression technique. Recent developments in pruning can be mainly divided into weight pruning [10] [11] [42] and filter pruning [24] [31] [26] [14]. Weight pruning directly removes weight values at any position in the network, which has been demonstrated to achieve an extremely high compression rate with high accuracy. However, weight pruning is not friendly to hardware or software optimization: the compression contributes little to memory access savings and calculation acceleration on general CPU (SIMD) and GPU (SIMT) architectures. In contrast, filter pruning directly removes entire filters in the convolutional neural network, which can generate hardware-efficient regular models but fails to maintain accuracy beyond moderate sparsity ratios. Especially for mobile-oriented lightweight CNNs, such as MobileNet [15], filter pruning encounters severe accuracy loss due to the small redundancy of model parameters.
We notice that the pruning granularities of weight pruning and filter pruning represent two extremes in the design space, and neither balances model accuracy and speedup gains. Moreover, these optimization algorithms are isolated and have not been co-optimized with software and hardware. In this paper, we introduce a new pruning strategy called fine-grained kernel group pruning (FKGP), whose sparsity granularity lies between weight pruning and structured pruning, revealing a previously unexplored point in the design space. In particular, for the core operators of mobile networks, including pointwise convolution (Conv1×1) and depthwise convolution (DwConv3×3), we design diverse sparse patterns that achieve a better trade-off between accuracy and hardware efficiency. Our fine-grained kernel sparsification is implemented in groups: kernels in the same group are kept or removed uniformly, and kernels in a kept group share the same sparse pattern. Compared with single-kernel sparsity, group kernel sparsity has less precision loss and is more friendly to parallel acceleration. Based on this, we propose a whole network rearrangement strategy to derive more influential kernel groups for accuracy improvements. The above fine-grained sparse patterns cannot be directly accelerated by a general inference engine, so we introduce a family of efficient sparse kernels for ARM and WebAssembly to translate the reduction in theoretical FLOPs into hardware efficiency.
In summary, we propose a novel end-to-end mobile acceleration framework named SparseByteNN. Combined with the improved algorithm optimization strategy and sparse engine implementation, SparseByteNN advances the SOTA in model pruning and open-source inference engines. The overall framework of SparseByteNN is shown in Fig. 1. Our contributions can be summarized as follows:
1. We focus on the acceleration of mobile lightweight CNNs and design fine-grained kernel group sparsity strategies for Conv1×1 and DwConv3×3, respectively. The co-optimized sparse patterns achieve an extremely high compression rate together with high accuracy. Moreover, with the high-performance sparse kernel implementations for ARM and WebAssembly, the designed patterns recover the hardware efficiency that would otherwise be lost to fine-grained sparsity. For Conv1×1, we demonstrate a geometric mean speedup of 26.80% over the dense network at 30% sparsity. In particular, we achieve high-performance compression of DwConv3×3, which speeds up by up to 49.6% at 33% sparsity.
2. We propose a whole network rearrangement strategy, which divides kernels with similar importance into a group, improves the accuracy of each group's importance evaluation and derives a more influential kernel group for accuracy improvements.
3. We propose an end-to-end model acceleration framework, SparseByteNN, consisting of three components: a) a compression algorithm component, which provides out-of-the-box pruning capabilities for pre-trained models; b) a model conversion tool, which converts the model IR of the training framework into the model IR of the sparse engine; and c) a sparse inference engine, which provides an efficient CPU-compatible inference implementation for fine-grained kernel group sparsity.
Model Pruning
The improvement of neural network performance is usually accompanied by an increase in resource requirements such as parameters and FLOPs. One popular approach for reducing them at test time is model pruning, which can be categorized into weight pruning and filter pruning. Weight pruning dates back to Optimal Brain Damage [23], which prunes weights based on the Hessian of the loss function. Many recent works [10] [11] [42] have further refined the pruning evaluation criteria and pruning methods. For example, Han et al. [11] proposed a three-step strategy including training, pruning, and fine-tuning to remove unimportant connections and restore accuracy. Michael et al. [42] proposed a gradual pruning technique that can be seamlessly incorporated into the training process; although it is an adaptive in-training pruning strategy, it cannot recover from premature pruning. Lin et al. [25] proposed a dynamic allocation of sparsity patterns and incorporated feedback signals to reactivate prematurely pruned weights. Weight pruning focuses on pruning fine-grained weights of filters, leading to unstructured sparsity in models, which cannot be directly accelerated by general computing libraries. In contrast, filter pruning targets the entire filter, which achieves structured sparsity. [16] proposed to explore sparsity in activations for network pruning. [17] uses the l2-norm to select unimportant filters and explores the sensitivity of layers for filter pruning. [26] introduces sparsity on the scaling parameters of batch normalization (BN) layers to prune the network. [31] proposes a Taylor expansion-based pruning criterion to approximate the change in the cost function induced by pruning. To reduce dependence on pre-trained models and improve model capacity, [13] [14] proposed soft filter pruning, which enables the pruned filters to be updated when training the model after pruning. Although the pruned model obtained by filter pruning can take full advantage of high-efficiency Basic Linear Algebra Subprograms (BLAS) libraries to achieve better acceleration, it fails to maintain accuracy beyond moderate sparsity ratios. The pruning granularities of weight pruning and filter pruning represent two extremes in the design space, causing them to fail to balance accuracy and acceleration gains.
Recently, some works have noticed this problem and proposed pattern-based or block-based weight pruning schemes with compiler-based optimizations [33] [32] [30]. Similar to our work, their pruning granularity lies between weight pruning and filter pruning to balance accuracy and inference speed. [30] describes a 2:4 pattern pruning scheme, and the NVIDIA Ampere architecture introduces Sparse Tensor Cores to provide dedicated acceleration for this sparse mode. Furthermore, PatDNN [33] uses the Alternating Direction Method of Multipliers (ADMM) and a pattern-based weight pruning schema to solve for a fine-grained sparse model and performs compiler optimizations to achieve real-time mobile inference. PatDNN mainly optimizes the performance of Conv3x3, but the principal layers of mobile networks represented by MobileNet-v1 [15] are Conv1×1 and DwConv3×3, which means that it suffers difficulties when generalized to mobile networks. In contrast, SparseByteNN focuses on the optimization of mobile networks, designs customized 4×4 and 16×1 pattern-based sparsity for Conv1×1 and DwConv3×3, respectively, and replaces compilation optimization with expert-level manual optimization, achieving higher performance.
Acceleration Frameworks on Mobile
On-mobile neural network deployment relies on the performance of the inference framework, so on-mobile DNN inference frameworks have attracted more and more attention [27]. Representative DNN acceleration frameworks, such as TensorFlow-Lite [6], PyTorch-Mobile [28], and TVM [2], are designed to support inference acceleration of dense neural networks. Although these frameworks already incorporate several graph optimization and compilation strategies, including layer fusion, constant folding, and auto-tuning, they lack the ability to further accelerate sparse models. Similar to our work, MNN [19] recognizes the potential of sparse speedup and supports block-based sparse acceleration based on expert hand-crafted optimization, with a sparse granularity of N×1. In order to improve optimization efficiency, PatDNN [33] and Auto-PatdNN [41] realize sparse model acceleration based on compiler-based optimization. Although these frameworks support sparse acceleration, they support limited types of sparse operators and suffer difficulties when generalized to DNN layers other than Conv3×3 layers (PatDNN) and Conv1x1 layers (MNN). In Section 4.2, we discuss this issue and compare performance.
Method
In this section, we first introduce the mathematical representation of FKGP in Section 3.1. Then we introduce the sparsity patterns of Conv3×3, Conv1×1, and DwConv3×3 in Section 3.2, and the co-optimized implementation in Section 3.3. In Section 3.4, we describe a whole network rearrangement strategy, which can improve the performance of the sparse model. Finally, we introduce the overall framework of SparseByteNN in Section 3.5.
Preliminaries
For an L-layer pre-trained model, the weights and biases of the i-th layer are denoted by W_i ∈ R^(n_i × c_i × kh_i × kw_i) and b_i, where n_i, c_i, kh_i, and kw_i stand for the number of output channels, the number of input channels, the kernel height, and the kernel width, respectively. The input of the i-th layer is denoted by X_i ∈ R^(c_i × ih_i × iw_i), where ih_i and iw_i stand for the input height and input width. To obtain a sparse model, a general approach is to prune part of W_i, i.e., to set some of its entries to zero. This process can be implemented by applying a binary mask M_i ∈ {0, 1} to the weights, resulting in a sparse weight W̃_i = M_i ⊙ W_i with information loss δ = ||W_i − W̃_i||_1. Pruning without information loss corresponds to W̃_i = W_i, i.e., δ = 0. Thus, the pruning problem can be summarized as minimizing δ at the pruning ratio ρ by choosing the optimal mask:

M_i* = arg max_{M_i} ||M_i ⊙ W_i||_1,  s.t.  ||M_i||_0 = (1 − ρ) · K,

where K is the number of prunable units. For weight pruning, the weights can be removed at arbitrary locations; in this case, the mask tensor M_i has the same shape as W_i and K = n_i · c_i · kh_i · kw_i. For filter pruning, the sparsity granularity is the entire filter; thus, each mask M_i has the shape R^(n_i) and K = n_i. To facilitate the implementation of pattern-based pruning, we reformat W_i as an n_i × c_i array of kernels, where each entry is a kh_i × kw_i kernel. Semantically, a kernel is the connection channel between one input feature map and one output feature map. For our FKGP, we further group the kernels along the input channel c_i and output channel n_i dimensions and treat each group as a whole, which is sparsified or removed simultaneously. Continuing the definition of PatDNN [33], we define the case of fixed-pattern sparsification within kept groups as pattern group pruning and the case of complete group removal as connectivity group pruning, such that

M_i* = arg max_{M_i} ||M_i ⊙ W_i||_1,  s.t.  ||M_i||_0 = (1 − ρ) · (n_i / g_o) · (c_i / g_i),

where g_o and g_i represent the output channel and input channel group sizes, respectively, and ρ is the sparsity rate.
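As a toy illustration of this formulation (not the engine's implementation), the following NumPy sketch builds magnitude-based masks at the two extreme granularities and reports the resulting information loss δ:

```python
import numpy as np

def prune_mask(W, rho, granularity):
    """Binary mask keeping the (1 - rho) highest-l1 units at the given granularity."""
    if granularity == "weight":                       # K = n*c*kh*kw individual weights
        scores = np.abs(W).ravel()
        keep_shape = W.shape
    else:                                             # "filter": K = n whole filters
        scores = np.abs(W).reshape(W.shape[0], -1).sum(axis=1)
        keep_shape = (W.shape[0],)
    k = int(round((1 - rho) * scores.size))
    keep = np.zeros(scores.size, dtype=bool)
    keep[np.argsort(scores)[-k:]] = True              # keep the largest-magnitude units
    keep = keep.reshape(keep_shape)
    if granularity != "weight":
        keep = keep[:, None, None, None]              # broadcast over c, kh, kw
    return keep.astype(W.dtype)

W = np.random.randn(8, 8, 3, 3).astype(np.float32)    # a toy layer
for g in ("weight", "filter"):
    M = prune_mask(W, rho=0.5, granularity=g)
    delta = np.abs(W - M * W).sum()                    # information loss δ
    print(g, "pruning: delta =", round(float(delta), 2))
```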
Fine-grained Kernel Group Sparsity
As shown in Fig. 2, our proposed FKGP strategy adopts customized sparse patterns for Conv3x3, Conv1x1, and DwConv3x3.
Conv3x3 is less computationally efficient than depthwise separable convolution, so it is not the core operator of mobile lightweight CNNs. For example, MobileNet-v1 [15] contains one Conv3x3 layer, and its share of the computation is only 1.91%. Since we focus on the acceleration of mobile lightweight CNNs, the sparse pattern of Conv3x3 is not specially designed; we directly adopt the 5:9 sparse pattern proposed by PatDNN [33]. As shown in Fig. 2a, each kernel is either completely removed, called connectivity pruning, or partially removed with the remaining weights forming specific kernel patterns, called pattern pruning. Each kernel reserves 4 non-zero weights out of the original 3 × 3 kernel, including the central weight. PatDNN [33] elaborates on kernel patterns in more detail. In order to better balance speed and accuracy, we regard g_i × g_o (4 × 4) kernels as a group, and each group is treated as a whole.

Conv1x1 and FC layers are commonly transformed into GEMM, i.e., the multiplication of a weight matrix and an input matrix. Each kernel of these layers contains only one weight, so only connectivity sparsity exists in these layers. As shown in Fig. 2c, we divide the weight tensor into (n_i / g_o) × (c_i / g_i) blocks of equal size (g_i × g_o) and apply connectivity group pruning. The importance of each block is evaluated by its l1-norm, and the (n_i / g_o) × (c_i / g_i) × ρ blocks with the lowest importance are removed. The value of g_i × g_o needs to balance model accuracy and acceleration friendliness: the larger the value, the fewer sparse patterns exist in the model, which is less conducive to maintaining accuracy but more conducive to acceleration. We perform 30% connectivity group pruning on Conv1x1 in MobileNet-v1 (ImageNet) to obtain the accuracy under different group sizes. As shown in Table 1, there is only a slight loss of accuracy when the group size is no larger than 4 × 4; when the group size is further increased to 8 × 8, the accuracy loss increases by 3.72%.

DwConv3x3 is one of the components of depthwise separable convolution and is difficult to compress. Due to the loss of accuracy, previous similar work [33] did not realize pattern-based pruning of the DW layer. In contrast, we propose 3:9 sparse patterns for the DwConv3x3 layer, which achieve near-lossless pattern pruning. As shown in Fig. 2b, each kernel removes 3 weights from the original 3 × 3 kernel; these are taken from the first and third columns and distributed across the three rows, in which case there are 2^3 = 8 potential kernel patterns. We regard g_i × g_o kernels as a group, and each group selects the best kernel pattern by maximizing the l1-norm after sparsification. It should be noted that the input channel of depthwise convolution is equal to 1, so a single kernel is essentially the entire filter, which means that connectivity pruning would degenerate into filter pruning. In order to maintain accuracy, we only perform pattern group pruning for DwConv3x3, resulting in 33% sparsity. For DwConv3x3, the calculation principle determines that g_i is fixed at 1. To determine the best g_o value, we study the impact of different g_o on accuracy when pruning only the DwConv3x3 layers. As shown in Table 2, DwConv3x3 pruning is not sensitive to the group size. Considering calculation friendliness, we set g_o equal to 16.
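A rough NumPy sketch of the two selection rules described above — 4×4 connectivity group pruning for Conv1x1 and 3:9 pattern group pruning for DwConv3x3 by maximizing the retained l1-norm — using random toy weights (group sizes and rates as stated in the text; everything else is illustrative):

```python
import numpy as np

def prune_conv1x1_blocks(W, g=4, rho=0.3):
    """Score each g x g block of the (n_o, c_i) matrix by l1-norm; drop the lowest rho fraction."""
    n, c = W.shape
    blocks = W.reshape(n // g, g, c // g, g)
    scores = np.abs(blocks).sum(axis=(1, 3))
    k = int(round((1 - rho) * scores.size))
    keep = np.zeros(scores.shape, dtype=bool)
    keep[np.unravel_index(np.argsort(scores, axis=None)[-k:], scores.shape)] = True
    return np.repeat(np.repeat(keep, g, 0), g, 1).astype(W.dtype) * W

def prune_dwconv_patterns(W, group=16):
    """3:9 pattern pruning: zero one weight per row (column 0 or 2), choosing per
    16-channel group the pattern with the largest remaining l1-norm."""
    out = W.copy()                                     # W: (channels, 3, 3)
    patterns = [[(r, cols[r]) for r in range(3)]
                for cols in [(a, b, c) for a in (0, 2) for b in (0, 2) for c in (0, 2)]]
    for s in range(0, W.shape[0], group):
        grp = W[s:s + group]
        best_mask, best_norm = None, -1.0
        for pat in patterns:                           # 2^3 = 8 candidate patterns
            m = np.ones((3, 3))
            m[tuple(zip(*pat))] = 0.0                  # remove the 3 chosen weights
            norm = np.abs(grp * m).sum()
            if norm > best_norm:
                best_mask, best_norm = m, norm
        out[s:s + group] *= best_mask
    return out

w1 = prune_conv1x1_blocks(np.random.randn(64, 64).astype(np.float32))
wd = prune_dwconv_patterns(np.random.randn(32, 3, 3).astype(np.float32))
print("Conv1x1 sparsity:", round(float((w1 == 0).mean()), 2),
      "DwConv3x3 sparsity:", round(float((wd == 0).mean()), 2))
```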
Co-design Inference Engine
Unstructured Conv1x1 pruning provides unique accuracy advantages compared with structured sparsity. However, the discontinuous weights pose a problem for vectorized parallel computing and lead to increased cache misses. Random connectivity in both the n and c dimensions results in negligible or even negative performance effects due to irregular memory accesses. To guarantee the effectiveness of random pruning, this paper proposes a half-structured method with block-sparsity units, ensuring a certain degree of continuity and effectiveness in both the n and c dimensions. On one hand, the half-structured method largely avoids the performance reduction caused by completely random weights. On the other hand, randomness at the block-sparsity level can reduce the loss of training accuracy compared to structured pruning.
The ARM instruction sets on mobile devices have a fixed 128-bit vector length. To adapt to this hardware limitation and reduce network training loss, this study selects a 4×4 block as the minimum sparsity unit for Conv1x1, as shown in Fig. 2c. The performance bottleneck on mobile devices generally lies in memory access, especially on low-end devices. Therefore, the computing block size for Conv1x1 needs to minimize the number of memory accesses under the hardware limitations. The tiling sizes for the input wh-dimension and the output n-dimension need to satisfy the register-budget inequality

(M_p × N_p) / 4 + M_p / 4 + N_p ≤ R,

where M = ih × iw, M_p is the block size of the wh-dimension, N_p is the block size of the n-dimension, K is the number of input channels, and R is the number of vector registers of the ARM architecture, which is 32 for armv8 and 16 for armeabi-v7 in general.
M_p and N_p are obtained as 20 and 4, respectively, by solving the inequality. For each cycle, the calculation of every 20 × 4 output results requires an input tile of size 20 × K and a weight tile of size K × 4. In this case, the output, input, and weight occupy 20, 5, and 4 registers, respectively; 29 registers are utilized, which is close to the maximum number supported by the hardware. The computation flow is shown in Algorithm 1.
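A quick check of this register accounting (assuming each 128-bit NEON register holds four fp32 values; this is a sanity check of the stated budget, not the engine kernel itself):

```python
R = 32                                   # architectural vector registers on armv8

def registers_used(Mp, Np):
    out_regs    = Mp * Np // 4           # Mp x Np accumulators
    in_regs     = Mp // 4                # one Mp-long input column per channel step
    weight_regs = Np                     # Np weights for each of the 4 unrolled channels
    return out_regs + in_regs + weight_regs

print(registers_used(20, 4), "registers for Mp=20, Np=4 (budget:", R, ")")   # 29
```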
Depthwise convolution is another essential operator in lightweight networks and carries critical information. Inspired by PatDNN [33], this paper proposes a flexible sparsity method for DwConv3x3 pruning to reduce the loss of training accuracy. The pseudo-code of the depthwise computing process is shown in Algorithm 2. The calculation in this study uses a sliding-window method, and the input data are packed in NHWC16 format to adapt the computation to the sparsity pattern. We regard 16 output channels as a group; output channels that are not a multiple of 16 are grouped as 4 or 8. In addition, to minimize the training loss of depthwise layers, this study further adds a full-1 pattern, namely a dense mode. The number of output channels assigned to the full-1 pattern can be dynamically increased according to the network's training accuracy and still follows the 16-block rule.
In this study, a 2x16 output block is calculated in every computing cycle, where 2 and 16 stand for the feature-map dimension and the output-channel dimension, respectively. Considering register usage, as shown in Fig. 4a, the weight data requires a 6x16 block, which occupies 12 NEON registers; the input data requires a 9x16 block, which occupies 18 NEON registers; and the output data is 2x16, which occupies 4 NEON registers. We reuse two registers for the input data and keep them separate from the others.
For the pruning of the Conv3x3 operator, this study applies the method proposed in PatDNN [33], in which 56 sparsity patterns were implemented. The same sparsity pattern is shared by several adjacent filters, whose number (4, 2, or 1) can be dynamically selected during network training to balance training accuracy and performance. In this paper, we mainly focus on weight pruning for lightweight networks, so the sparsification of the Conv3x3 operator will not be introduced in detail.
Whole network rearrangement
For pattern group pruning and connectivity group pruning, we observed that when the importance of kernels within a group differs greatly, the group-level evaluation becomes inaccurate: relatively important kernels are dragged down by unimportant ones. This observation motivates us to change the layout of the weight tensor before pruning to reduce the importance variance of the kernels within a group. As shown in Fig. 5, we propose the whole network rearrangement strategy to derive more influential blocks for accuracy improvements. When the example matrix (top-left) is pruned by 50% with a group size of 2 × 2, it results in a sparse weight (top-right) with an l1-norm of 57. If we change the order of the input channel and output channel dimensions (bottom-left), the resulting sparse weight (bottom-right) has a total weight magnitude of 70. In order to avoid changing the output of the network, the rearrangement index needs to be propagated throughout the network graph: the filter rearrangement index calculated by a "parent" layer is used as the channel rearrangement index of its "children" layers. Searching for good filter permutations for the target layer is challenging because a layer with n_i filters has n_i! permutations, which is intractable for large n_i. However, in group pruning the number of unique permutations can be reduced to n_i! / ((g_o!)^(n_i/g_o) · (n_i/g_o)!), because neither the order of filters within a group nor the order of groups in the matrix makes any difference to accuracy. Each unique permutation therefore represents (g_o!)^(n_i/g_o) · (n_i/g_o)! permutations, all of which lead to the same sparse-matrix l1-norm. To quickly search and evaluate unique permutations, we define a canonical form: a permutation is unique only if the filters within each of its groups are in sorted order and the groups are sorted with respect to each other (e.g., by the first index value of each group). Then we use the bounded regression [34] method to quickly solve the above problem.
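The paper's search relies on the canonical form and bounded regression; the toy brute-force sketch below only illustrates the objective, namely choosing a filter order that maximizes the l1-norm retained after group pruning:

```python
import numpy as np
from itertools import permutations

def retained_l1(W, order, g=2, rho=0.5):
    """l1-norm kept after pruning rho of the (g x g) groups of the row-permuted matrix."""
    Wp = W[list(order)]
    n, c = Wp.shape
    scores = np.abs(Wp.reshape(n // g, g, c // g, g)).sum(axis=(1, 3)).ravel()
    k = int(round((1 - rho) * scores.size))
    return np.sort(scores)[-k:].sum()                  # keep the k largest groups

W = np.abs(np.random.randn(4, 4))                      # toy layer: 4 filters, 4 channels
orders = list(permutations(range(4)))                  # exhaustive only for this tiny example
best = max(orders, key=lambda o: retained_l1(W, o))
print("identity order keeps l1 =", round(retained_l1(W, range(4)), 2))
print("best order", best, "keeps l1 =", round(retained_l1(W, best), 2))
```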
Overview of SparseByteNN Framework
As shown in Fig. 1, the classic neural network pruning process consists of three steps: training from scratch, pruning, and fine-tuning. Before the pruning step, we first rearrange the entire network to further reduce the impact of pruning. Then, we apply FKGP pruning to obtain the sparse model and use fine-tuning to recover its accuracy. Similar to NNI [29], we encapsulate the above process into an algorithm compression component to provide users with out-of-the-box sparse fine-tuning capabilities. The model conversion tool converts the ONNX model exported by the sparse fine-tuning process into a sparse model IR [39]. Finally, based on the sparse model IR, the sparse inference engine completes the forward pass on the target hardware platform.
Experiments
In this section, we first show that SparseByteNN has a better accuracy-speed trade-off than filter pruning, weight pruning, and other industrial sparse engines through comparisons along different dimensions. Then, we demonstrate the acceleration benefit of DwConv3x3 and Conv1x1 and the accuracy gain brought by the whole network rearrangement through a series of ablation studies. Finally, we extend FKGP to WebAssembly and achieve remarkable performance.
Implementation Settings
In order to make the comparison fair and sufficient, we use the filter pruning and weight pruning algorithms contained in NNI [29] to construct comparable experiments. The sparse rate is the real sparse rate of the entire network, which accounts for inter-layer coupling. All experiments based on ResNet-20 [18] on CIFAR-10 [20] use the same hyperparameters: epochs, batch size, learning rate, and weight decay are set to 250, 128, 1e-2, and 1e-5, respectively, and the optimizer and scheduler are SGD [1] and multi-step, respectively. The other experiments, on ImageNet [4], are based on TIMM [40]. The pre-training and sparse training of MobileNet-v1 use the same hyperparameters: epochs, batch size, learning rate, and weight decay are 300, 128, 0.045, and 1e-5, respectively, and the optimizer and scheduler are rmsproptf [7] and step decay, respectively, with decay-epochs set to 2.4 and decay-rate set to 0.973. The compared filter pruning algorithms include [16], FPGM [14], L1 [17], L2 [17], ActivationMeanRank [31], and ActivationTaylor [31], together with the weight pruning algorithm AGP [42]. Fig. 8 shows that filter pruning suffers the largest performance degradation and that the FKGP strategy surpasses all filter pruning methods. Although the accuracy of weight pruning at the same FLOPs exceeds that of FKGP, the former cannot obtain actual acceleration benefits. We measured the actual latency of the three types of pruning algorithms on mobile CPUs and found that FKGP exhibits a better speed-accuracy trade-off. Specifically, at a classification accuracy of 90.6%, FKGP achieves a 34% (0.91 ms vs 1.34 ms) acceleration compared to FPGM on Qualcomm 855 and 29.6% (6.16 ms vs 8.75 ms) on Qualcomm 625. We then compare accuracy and latency on a lightweight neural network consisting only of Conv1x1 and DwConv3x3. Since there is no obvious performance difference between different structured pruning algorithms, we choose FPGM [14] as a representative. As shown in Table 3, compared with the baseline MobileNet-v1 [15], when the pruning rate is 20%, FKGP speeds up by 13% and the accuracy increases by 0.264%. When the pruning rate is 40%, it speeds up by 29.6% while the accuracy decreases by only 0.78%. Compared with filter pruning, FKGP has an accuracy advantage of 0.878% at a similar latency of about 25.3 ms.
Performance Comparison
Finally, to further illustrate the acceleration advantages of SparseByteNN, taking MobileNet-v1 as the baseline network and Qualcomm 855 as the test platform, we compare the performance of SparseByteNN and the SOTA on-mobile inference framework MNN [19]. Since MNN only supports sparse Conv1x1, for the fairness of the comparison, SparseByteNN turns off the sparse acceleration of DwConv3x3. As shown in Table 4, SparseByteNN is 3.21% faster than MNN for the dense model. As the sparse rate increases, the performance advantage of SparseByteNN is further highlighted; when the sparse rate is 30%, the performance advantage reaches a maximum of 22.30%. Based on an identical experimental configuration, we obtained the accuracy at this sparse rate. The results show that although SparseByteNN has a larger sparse granularity, the accuracy drop is close to that of MNN (0.224% vs 0.213%). The experimental results and its technical documentation show that MNN achieves significant acceleration over the dense model only when the sparse rate exceeds 30%, and it suffers difficulties when generalized to DNN layers other than Conv1x1. We next demonstrate the effectiveness of the fine-grained sparse model through experiments. Conv1x1: As described in Section 3.2 and Section 3.3, we only perform connectivity group pruning with a group size of 4x4 on Conv1x1. Table 3 and Table 4 show that SparseByteNN has performance advantages over SOTA pruning algorithms and sparse inference engines when only Conv1x1 pruning is considered. In order to further illustrate the acceleration performance of the Conv1x1 operator, we conducted a comprehensive benchmark over common input configurations. As shown in Fig. 6, when the sparsity rate is 30%, the speedup of a single operator ranges from 11.50% to 39.70%, with a median of 26.80% and an average of 25.38%. Test results at more sparsity rates can be found in the appendix. In order to derive more influential blocks in group pruning, we propose the whole network rearrangement strategy. Table 6 presents the experimental results, which further explore the impact of rearrangement under different Conv1x1 sparse rates on MobileNet-v1 (ImageNet). From Table 6, we observe that the whole network rearrangement can effectively improve network accuracy. The above experimental results demonstrate the excellent performance of our proposed fine-grained kernel group sparsity on ARM CPUs. To illustrate the generality of this strategy, we implemented efficient sparse kernels for Conv1x1 and DwConv3x3 based on WebAssembly, which can be used to accelerate neural network applications on the Web. As shown in Fig. 9, when the input feature maps are 64x64, 96x96, and 128x128, under the channel configurations commonly used on the Web, 30% sparsity achieves average speedups of 22.3%, 24.15%, and 27.7%, respectively. Test results at more sparsity rates can be found in the appendix.
Conclusion and Future Work
This work proposed a novel mobile inference acceleration framework named SparseByteNN, which provides end-to-end neural network acceleration capabilities, from algorithms to engines, on general CPUs. It contains a fine-grained kernel group sparsity schema and a family of co-optimized efficient sparse kernels. Combined with a customized network rearrangement strategy, SparseByteNN achieves real-time execution as well as high accuracy. Experiments on MobileNets and CPU platforms demonstrate that SparseByteNN has a better speed-accuracy trade-off than current SOTA pruning algorithms and sparse inference engines. In the future, we will further expand the application of pattern-based software-hardware collaborative sparse acceleration to more architectures, including mobile GPUs (OpenCL) and server GPUs (CUDA).
Acknowledgement
Figure 2. Illustration of the implementation form of fine-grained kernel group sparsity on core operators. (a) Connectivity group pruning and 5:9 pattern group pruning for Conv3x3. (b) 3:9 pattern group pruning for DwConv3x3. (c) Connectivity group pruning for Conv1x1.

Figure 3. Calculation flow of block-based pruning for Conv1x1.

Figure 4. Calculation method of pattern-based pruning for DwConv3x3. (a) Weight sparsity for the third column reuses input data twice for every two output pixels. (b) Weight sparsity for the middle column needs more memory access to input data.

Algorithm 1: Block-size Sparsity of Conv1x1
Data: weight W ∈ R^(oc×kh×kw×ic), input feature map I ∈ R^(n×ih×iw×ic), and sparse info SD
Result: output feature O ∈ R^(n×oh×ow×oc)
1: set the block sizes corresponding to the ih*iw dimension and the oc dimension to M_p and N_p
2: for i ← 1 to ih*iw / M_p do
3:   for j ← 1 to oc / N_p do
4:     compute the output block O_{i,j} of size M_p × N_p:
5:     for kIndex ← 1 to ic / 4 do
6:       kStartIndex = SD[kIndex]
7:       accumulate every 4 input channels as a summation factor into the M_p × N_p output block

In our study, the pruned weight in each row of the DwConv3x3 kernel is chosen randomly from the first and third columns. From the perspective of memory access, the input data corresponding to the middle position of the weights can be reused twice for every two output values, as shown in Fig. 4a. However, pruning the middle pixel, as shown in Fig. 4b, cannot reuse the input data, which increases the time spent on memory accesses to cache and DDR. The 8 sparsity patterns in our study reduce memory accesses by 25% compared with a sparsity scheme that prunes the middle pixel.
Figure 6. Acceleration performance of Conv1x1 under different configurations. The experiment is conducted on a Qualcomm 855 CPU with a 30% sparse rate. Best viewed in color.

Figure 7. Acceleration performance of DwConv3x3 under different configurations. The experiment is conducted on a Qualcomm 855 CPU with a 33% sparse rate. Best viewed in color.

Table 1. Sparse model accuracy under different group size configurations; the accuracy of the baseline model is 72.634%.

Table 2. Sparse model accuracy under different group size configurations; the accuracy of the baseline model is 72.634%.

Table 3. Performance comparison on MobileNet-v1 (ImageNet); the accuracy of the baseline model is 72.634%.

Table 4. Comparison of inference time under different sparsity rates. Sp represents the sparse rate.
4.3.1 Effectiveness of Sparse Patterns

One of our main contributions is to design different pattern-based group pruning strategies for Conv1x1 and DwConv3x3, respectively, taking into account both accuracy and speed.
Use of HOS data in Florida.
The Medicare Health Outcomes Survey (HOS) is a longitudinal cohort study that assesses physical and mental functioning of Medicare enrollees in MCPs. Realizing the potential of HOS data to improve health care, the Florida Medicare Quality Improvement Organization (QIO) analyzed HOS scores and shared them with M+COs to assist in evaluating the efficacy of their disease management programs. The QIO also discusses additional uses for HOS data such as cross-linking with a patient satisfaction survey and sharing with health care organizations that collaborate with the QIO.
INTRODUCTION
Process and outcome measures are used to evaluate quality in health care. Because outcome measures require appropriate severity adjustment and involve long wait times before results are available, they are not always valid for health care quality improvement, especially when comparisons are made. Process measures are easier to measure and compare, but their validity is harder to prove. Although mortality has been used as an important outcome measure, life is not simply a count of years between birth and death; quality of life should also be measured. Functional assessment can partially measure quality of life, and the HOS provides a way to describe the general quality of life of Medicare beneficiaries.
HOS uses a set of survey questions known as the SF-36® to measure the physical functioning and mental well-being of a group of Medicare beneficiaries over 2-year periods (Bierman et al., 2001). The survey yields a mental component summary (MCS) and a physical component summary (PCS), which are reliable and valid measures of mental and physical health. These functional assessment scores can be used to evaluate M+CO disease management programs and national quality improvement projects.
HOS is the first Medicare managed care survey to measure functional outcomes over time. Since its inception in 1998, HOS has provided one of the largest cohort studies available studying the Medicare population and managed care. HOS was launched by CMS in collaboration with the National Committee for Quality Assurance (NCQA) under 2003 HEDIS®. HOS measures whether enrollees in a particular M+CO maintained, improved, or declined in physical and mental health. Additional items included in HOS allow for case-mix adjustment and were necessary for reliable M+CO-to-M+CO comparisons of health outcomes.
The HOS sample is taken each year from approximately 1,000 Medicare enrollees from each M+CO throughout the United States. Enrollees remaining in the same plan are resampled after 2 years and measured for changes in their perceived health outcomes. The outcomes measured over the 2-year period are described as change scores. The first group of enrollees (Cohort I) was sampled in 1998. Change scores for Cohort I were obtained by resampling Cohort I enrollees in 2000. The sample size makes HOS an extremely large longitudinal cohort study that can be useful in assessing the quality and performance of M+COs. The Florida Cohort IV sample (2001 data) comprises 18,505 randomly selected Medicare beneficiaries from 16 M+COs, and 19 market areas with a total of 9,513 completed surveys and a valid response rate of 51.4 percent. Results can be compared nationally and with other State M+COs.
A major function of M+COs is to promote high quality health care. Disease management programs and health care quality improvement projects have been a major effort in achieving this quality. Proper evaluation of these programs and projects with sharing of best practices will help M+COs to maintain and consolidate their achievements.
QIOs are organizations of health care professionals dedicated to monitoring and improving the quality of health care. Florida Medical Quality Assurance, Inc. (FMQAI) is the Medicare QIO in Florida under contract with CMS to monitor, assess, and improve quality in all settings using data from a variety of sources.
CMS has been collecting HOS and CAHPS® (Agency for Healthcare Research and Quality, 2003), a patient satisfaction survey, to evaluate overall trends for the M+COs since 1998 (Centers for Medicare & Medicaid Services, 2003). QIOs are encouraged to analyze these data sets to identify opportunities to improve care in the managed care setting.
M+COs in the State were educated about HOS data and were given examples demonstrating how to measure and trend the effectiveness of their disease management programs. The MCS, PCS, and change scores for diabetes and congestive heart failure (CHF) were trended and linked with M+COs who submitted information about their diabetes and CHF programs. Plan-level HOS scores were also matched with plan-level CAHPS® scores. In addition, demographics (age, race, sex) and comorbidity data were trended aggregately as an aside to demonstrate the different kinds of data available to the M+COs and other health care organizations. This article will summarize how the QIO analyzes and uses HOS scores for evaluating M+CO disease management programs, and will also discuss additional uses of the data.
HOS DATA AND M+CO DISEASE MANAGEMENT PROGRAMS
The number of M+COs eligible for HOS participation has changed dramatically in Florida from 29 in 1998 to 16 in 2001. Medicare enrollees for the 16 M+COs in the most recent sample ranged from approximately 4,000 to 240,000 per plan. Even with fewer M+COs participating, Florida still had a large sample of enrollees with chronic diseases to examine for outcomes (Table 1). With this in mind, the QIO felt this rich data set could be used by M+COs to evaluate the outcome that their disease management programs had on enrollees.
FMQAI introduced the study during its monthly teleconferences with the M+COs. Discussions about HOS were periodically introduced at these teleconferences to the M+CO participants. Enthusiasm and spirited discussions ensued among participants about finding ways in which HOS data could be utilized within their organizations. As a result, a series of HOS presentations were developed to stimulate ideas and bring about discussion. At the initial teleconference, overall HOS scores for the State including general demographic data and MCS and PCS scores were also presented. A previous depression project that had been conducted at FMQAI was reviewed as an example of how the HOS scores could be used to measure and improve outcomes. Subsequent presentations eventually led to a discussion of disease management programs and the importance of evaluating the efficacy of these programs. The participants were interested in using HOS scores to determine the effect a disease management program had on enrollees with a particular disease by analyzing their MCS and PCS scores. M+COs primarily rely on HEDIS ® data as a means to measure the quality of the products they deliver to their enrollees. M+COs were interested in utilizing HOS as another reliable data set to measure their performance. M+COs involved in the teleconferences agreed that they would need the assistance of the QIO in order to complete the process of evaluating their disease management programs using HOS data. The QIO requested that interested M+COs submit information about all disease management programs they offered. In addition, start dates and dates of any significant changes made to the programs were requested. Although it was difficult for some M+COs to determine an exact implementation date for their programs, most agreed that programs had been enhanced over time. Nine M+COs chose to participate and mailed the requested information to the QIO. Because the two most common programs were CHF and diabetes, FMQAI focused its analyses on these two comorbidities. CHF and diabetes programs were also selected due to M+CO participation in the national CHF and diabetes projects.
There were nine M+COs that submitted disease management program information. All nine had CHF programs and eight had diabetes programs. M+COs that were eligible to participate, but did not submit disease management program information were still invited to learn about how HOS data could be utilized to improve their programs.
FINDINGS
Analyses were performed for enrollees with CHF and diabetes for all 16 M+COs regardless of participation in the study. Individual and aggregate M+CO scores were trended over time from 1998 to 2001 (Cohorts I-IV). Results were variable and showed that only PCS scores for CHF had improvement over time. MCS scores for all M+COs declined for both CHF as well as diabetes. Improvement in PCS scores was not correlated with improvement in MCS scores. M+COs with the highest CHF MCS scores did not necessarily have the highest PCS scores and vice versa. The same results were true for diabetes.
Cohort II change scores (1999 baseline and 2001 remeasurement) were analyzed in order to study overall changes in outcomes over time for all M+CO enrollees regardless of having comorbidities. Change scores were evaluated for individual M+COs and compared with each other (Table 2). All M+COs had negative change scores for Cohort II over the 2-year period. When comparing M+CO change scores to each other, it was noted that one particular M+CO had the largest decline in PCS scores, but the smallest decline in its MCS scores. Overall PCS change scores ranged from -0.5 to -3.2. The MCS change scores ranged from -0.3 to -2.7. CHF and diabetes change scores were variable when compared with other comorbidities; however, MCS scores declined more for CHF than for diabetes.
Cohort IV diabetes scores from the eight M+COs with diabetes programs were compared with the one M+CO without a diabetes program. Results found no significant differences in MCS or PCS scores. This finding raised the question of the effectiveness of diabetes management programs. Since all M+COs had CHF management programs and/or participated in the CHF national project, only overall trends for CHF were analyzed. As previously stated, improvement was only noted in PCS scores. One particular M+CO with good documentation of its CHF and diabetes program, start dates, and specific intervention dates was analyzed separately. MCS and PCS scores were examined over time for this M+CO, which was identified as M+CO-I. A significant improvement was seen in CHF PCS scores over time for M+CO-I (Figure 1), as noted with most of the M+COs. CHF MCS scores for M+CO-I showed no significant improvement, nor did the CHF MCS scores for most of the other M+COs. Diabetes scores for M+CO-I showed a slight improvement in PCS, but no improvement for MCS (Figure 2).
DISCUSSION
The results were shared with M+COs and attributed to several different factors. Variations in MCS and PCS scores for M+COs with diabetes management programs could be correlated with the implementation of the national diabetes project conducted in 1999. During this time, M+COs also had the choice of substituting their own measures in place of the national measures without CMS pre-approval; however, M+COs that chose to use their own diabetes measures did not benefit from participation in a national standardized measurement system. Another factor contributing to these variations may lie in patient compliance with the disease management programs. Repeating the national diabetes project for M+COs should standardize diabetes measures, reduce variation, and improve MCS and PCS diabetes scores over time.

Figure 1. Congestive Heart Failure (CHF) Mental Component Summary (MCS) and Physical Component Summary (PCS) Scores for Florida's Medicare+Choice Organization-I: 1998-2001
FMQAI examined specific interventions that were implemented by each of the M+COs and discussed these interventions with the M+COs in relation to their HOS scores. For example, M+CO-I, which began its diabetes management program in January 1999 along with the national diabetes project, initiated educational classes for both clinical staff and enrollees with diabetes. It also distributed a diabetic flow sheet developed for its providers and educational materials for its diabetic enrollees. Then in 2000, it hired a diabetic educator. As a result, Figure 2 shows that M+CO-I diabetic PCS scores improved slightly over time, so the program may have been somewhat effective in improving physical outcomes for its enrollees with diabetes. The diabetes MCS scores for M+CO-I remained the same. Based on the lack of improvement over time in mental outcomes for its enrollees, FMQAI shared with M+CO-I the benefits of incorporating a mechanism to address the mental health status of its diabetic enrollees, such as adopting a depression-screening tool.

Figure 2. Diabetes Mental Component Summary (MCS) and Physical Component Summary (PCS) Scores for Florida's Medicare+Choice Organization-I: 1998-2001
In looking at CHF for M+CO-I, HOS scores can be used as an example of measurable outcomes that can be tracked according to the timing of the interventions implemented. Its CHF program began in February 1998 with provider education. In January 1999, it began educating its CHF enrollees and held monthly support group meetings. Then in 2000, M+CO-I began providing two home follow-up visits to enrollees who were discharged from the hospital with CHF. Then in January 2001, the national CHF project began. M+CO-I continued to work on improving CHF outcomes for its enrollees, and in March 2002 it contracted with a company to provide electronic scales for placement in the homes of its CHF enrollees to assist them in monitoring any worsening of CHF symptoms.
When explaining the overall improvements in CHF PCS scores to the M+COs, several external factors were reviewed that could have accounted for the improvements over time. Because CHF has been the most prevalent DRG claim according to the Medicare inpatient claims data in Florida, it has been a long-time focus of quality improvement efforts, possibly leading to standardization in treatment. HOS data show that when CHF is combined with other heart diseases, it ranks third after hypertension and arthritis. M+COs had focused much effort in improving CHF management for their enrollees prior to the implementation of the 2001 CHF national project.
FMQAI also had discussions with the M+COs about the low MCS scores for CHF enrollees, which contrasted with the improvement efforts directed towards physical outcomes for CHF enrollees. The CHF management program information sent to FMQAI from the nine M+COs noted a clear lack of focus on depression screening, which may have accounted for lower MCS scores. In an effort to improve mental health outcomes, FMQAI again encouraged M+COs to implement depression screening and treatment for these enrollees.
M+COs can also benefit from information provided by other rich data sources that can assist in identifying areas for improvement and insights into how well an M+CO is doing with respect to the enrollee perceptions. For example, when HOS data is linked with CAHPS ® data, it not only provides valuable information about the enrollees' physical and mental outcomes, but also about their perception of care. QIOs can educate M+COs about measuring their disease management program effectiveness along with enrollee satisfaction by examining HOS and CAHPS ® scores linked together.
FMQAI analyzed HOS data linked with CAHPS ® data at the plan level (Table 2) in order to analyze both the effectiveness of disease management programs and the overall ratings of the M+CO. CAHPS ® scores were categorized by three areas of general satisfaction: plan/personal doctor/all doctors, access to care, and quality of care. Results were analyzed using Pearson correlation to examine relationships between general satisfaction, access to care, and quality of service. Correlations between MCS and PCS scores and CAHPS ® scores were calculated and shared with the M+COs. Generally, combining HOS and CAHPS ® revealed that, after adjusting for comorbidities and demographic characteristics, higher scores for quality of service and access to care were correlated with improved enrollee outcomes.
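As a rough, illustrative sketch of this kind of plan-level analysis, the Python snippet below links hypothetical HOS summary scores with CAHPS composite scores and computes Pearson correlations across plans. The column names and toy values are assumptions for illustration only; the actual linked data layout and the case-mix adjustment for comorbidities and demographics described above are not reproduced here.

```python
# Minimal sketch of a plan-level HOS/CAHPS correlation analysis.
# Column names and the toy data are hypothetical; the real linked file layout
# and case-mix adjustment are not specified in the text.
import pandas as pd
from scipy.stats import pearsonr

# One row per M+CO plan: HOS summary scores and CAHPS satisfaction composites.
plans = pd.DataFrame({
    "plan_id":       ["A", "B", "C", "D", "E", "F", "G", "H", "I"],
    "hos_pcs":       [38.1, 40.2, 36.5, 41.0, 39.3, 37.8, 42.1, 40.7, 38.9],
    "hos_mcs":       [50.2, 52.1, 49.8, 53.0, 51.4, 50.9, 54.2, 52.8, 51.0],
    "cahps_quality": [8.1, 8.4, 7.9, 8.6, 8.2, 8.0, 8.8, 8.5, 8.3],
    "cahps_access":  [7.8, 8.1, 7.5, 8.3, 7.9, 7.7, 8.5, 8.2, 8.0],
})

# Correlate each CAHPS composite with each HOS summary score across plans.
for hos_col in ["hos_pcs", "hos_mcs"]:
    for cahps_col in ["cahps_quality", "cahps_access"]:
        r, p = pearsonr(plans[hos_col], plans[cahps_col])
        print(f"{hos_col} vs {cahps_col}: r = {r:.2f}, p = {p:.3f}")
```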
FMQAI will continue to analyze HOS data for M+COs to monitor changes over time as their disease management program offerings change. FMQAI will continue to offer ongoing feedback on HOS and CAHPS ® analyses to individual M+COs as a measure of efficacy for the programs they offer, and to support their quality improvement efforts.
Other Uses for HOS Data
Most national organizations devoted to chronic diseases provide excellent statistics and demographics about persons with those diseases. However, it is difficult for these organizations to provide statistics about quality of life and functional outcomes. By using HOS data, these organizations can examine statistics on their senior patients' functional outcomes and quality of life and study whether certain interventions improved these outcomes.
Arthritis is the second-ranked comorbidity for the Florida HOS, with 5,378 (52 percent) of enrollees responding "yes" to having arthritis. For health care providers interested in arthritis, MCS and PCS scores for these enrollees could yield valuable insights into their clients' quality of life over time when compared to enrollees without arthritis. The QIO shares this information with the Florida Arthritis Partnership and Department of Health (DOH) on request. If future national projects are devoted to arthritis, HOS scores can provide valuable outcome information as a way of measuring the effectiveness of arthritis interventions.
Because of the large available sample in Florida, similar information on diabetes, CHF, acute myocardial infarction (AMI), and other comorbidities was shared with organizations devoted to these chronic diseases. For example, there were 2,162 enrollees who responded "yes" to having diabetes, resulting in a large sample of diabetics in the State to examine for various outcomes such as disparities between black and white beneficiaries. The Florida DOH has an active diabetes program of which FMQAI is a stakeholder. HOS data have been shared with the DOH on request as a data source available to measure outcomes for diabetics and to validate the estimated number of elderly diabetics in the State.
Another example of HOS use concerns AMI. Florida had 1,229 enrollees who responded "yes" to having an AMI. This sizable sample could be studied for differences in male versus female outcomes. FMQAI is also a stakeholder with the Florida DOH Cardiovascular Steering Committee, and has shared AMI and CHF HOS data.
The effect of multiple comorbidities on MCS and PCS scores is also an area of interest for M+COs. HOS participants had a large number of comorbidities: 89 percent had one or more, 19 percent had five or more, 13 percent had four, 17 percent had three, 20 percent had two, and 20 percent had one. Both scores declined as the number of comorbidities increased, and PCS scores showed greater declines than MCS scores (Table 1).
Comorbidities and cancers were linked for their impact on MCS and PCS scores compared with scores from enrollees without these conditions (Table 1). In all cases, having a comorbidity resulted in lower scores than for persons without the comorbidity. The lowest scores were seen in the PCS of CHF enrollees. This shows there is much more opportunity for improvement related to the quality of life for the CHF population.
General demographic data on enrollees can be of interest to many organizations, including the M+COs. Age, race, and sex data (Table 3) were linked with MCS and PCS scores to examine outcome trends, which showed a decline with age. Although males generally had higher scores than females, age was a contributing factor since there were more females in the older groups. Scores by race, distributed among white, black, and other enrollees, showed that MCS and PCS scores were highest in white enrollees. Further analysis of MCS and PCS scores for specific comorbidities linked with race or sex could examine correlations within areas known to have disparities in health care, such as black enrollees with diabetes. In general, larger variations were seen among PCS than MCS scores. This could be a result of comorbidities affecting physical abilities more directly than mental well-being.
Of particular interest to stakeholders devoted to cancer were the outcome data on lung, prostate, breast, and colon cancer (Table 4). HOS cancer scores were shared with the Florida Chapter of the American Cancer Society (ACS), which is a stakeholder at FMQAI and is interested in outcome data for enrollees with cancer. Change scores showed significant MCS decline (N=767, -1.7) and PCS decline (N=762, -2.5) for these four cancers. Although enrollees with prostate cancer initially had higher MCS and PCS scores than those with breast cancer, over a 2-year period they had lower PCS scores. Enrollees with lung cancers had the lowest MCS, PCS, and change scores with colon
CONCLUSIONS
HOS data is currently underutilized by M+COs. QIOs can take the lead in introducing and educating M+COs about the value of HOS data and how HOS scores can measure disease management program effectiveness and enrollee outcomes. If HOS scores were incorporated as a measure of outcome for M+CO projects, HOS would be utilized further and awareness about its merits would increase.
When analyzing both HOS and CAHPS ® scores for associations, a more complete assessment of the status of care for enrollees with a specific disease such as CHF or diabetes is at hand. Together, this would give a full perspective of the care processes and the effectiveness of disease management programs in improving the enrollees' quality of life.
As previously mentioned, the physical functioning for enrollees with CHF has improved since 1998, and the variation between programs and care offered by M+COs has been reduced. Indeed, reduction in the variation of disease management programs offered by M+COs will lead to standardized and improved care (Deming Electronic Network, 2003).
The M+COs that participated in this study all had CHF disease management programs and participated in the national CHF project. This unified participation resulted in an overall improvement in physical functioning and satisfaction for CHF enrollees as portrayed by their HOS and CAHPS ® scores. Diabetes management programs did not show as much improvement in MCS and PCS scores as CHF. Over time, a large variation in PCS scores was observed among M+CO enrollees. Lack of standardization may have led to this large variation. Due to the prevalence of diabetes and the great opportunity for improvement, diabetes is the M+CO national project topic for year 2004. With this increased emphasis on M+CO diabetes through a unified national improvement project in 2004, HOS scores can be used to measure whether any positive changes result from these standardized M+CO efforts.
If public reporting of HOS scores becomes available, it could assist Medicare enrollees in choosing an M+CO based on its outcomes. If HOS provided information in a user-friendly format, enrollees with specific comorbidities could potentially research which M+CO had the best scores for persons with their disease. Public reporting would also encourage M+COs to promote activities toward improving their enrollees' physical and mental status and to improve their disease management programs.
Lastly, HOS scores are currently being used to evaluate M+COs, but could also be used in the FFS arena. As previously mentioned, if certain process measures are proven effective in improving HOS results, then these same process measures could be replicated in the FFS area with similar results.
The Response to and Impact of the Ebola Epidemic: Towards an Agenda for Interdisciplinary Research
Background: The 2013-2016 Ebola virus disease (EVD) epidemic in West Africa was the largest in history and resulted in a huge public health burden and significant social and economic impact in those countries most affected. Its size, duration and geographical spread present important opportunities for research that might help national and global health and social care systems to better prepare for and respond to future outbreaks. This paper examines research needs and research priorities from the perspective of those who directly experienced the EVD epidemic in Guinea. Methods: The paper reports the findings from a research scoping exercise conducted in Guinea in 2017. This exercise explored the need for health and social care research, and identified research gaps, from the perspectives of different groups. Interviews were carried out with key stakeholders such as representatives of the Ministry of Health, non-governmental organizations (NGOs), academic and health service researchers and members of research ethics committees (N=15); health practitioners (N=12) and community representatives (N=11). Discussion groups were conducted with male and female EVD survivors (N=24) from two distinct communities. Results: This research scoping exercise identified seven key questions for further research. An important research priority that emerged during this study was the need to carry out a comprehensive analysis of the wider social, economic and political impact of the epidemic on the country, communities and survivors. The social and cultural dynamics of the epidemic and the local, national and international response to it need to be better understood. Many survivors and their relatives continue to experience stigma and social isolation and have a number of complex unmet needs. It is important to understand what sort of support they need, and how that might best be provided. A better understanding of the virus and the long-term health and social implications for survivors and non-infected survivors is also needed. Conclusion: This study identified a need and priority for interdisciplinary research focusing on the long-term sociocultural, economic and health impact of the EVD epidemic. Experiences of survivors and other non-infected members of the community still need to be explored but in this broader context.
Implications for policy makers
• The recent Ebola virus disease (EVD) epidemic in West Africa presents an important opportunity for research that will help to inform efforts to strengthen health systems, and enhance disease preparedness and control measures in the future.
• Some of the key priorities for research are to understand the long-term socio-cultural, economic and health impact of the EVD epidemic on Guinea, and to relate that to the local, national and international responses to the outbreak.
• Interdisciplinary research is required to ascertain the best ways of supporting and/or treating the survivors of EVD, and of minimising risks of future outbreaks.
• Research combining epidemiological and biological studies with a sociological analysis of community members' beliefs and behaviours may help to develop better policies and practice for future disease containment.
Implications for the public
The development of intervention programmes aimed at mitigating the impact of disease epidemics needs to be based on evidence derived from the direct experiences of the local population. This research scoping exercise, carried out in Guinea in relation to the recent Ebola epidemic, identified seven research questions for further research. Each of these research questions was identified by key stakeholders and infected and non-infected members of the community, and each has important implications for future disease prevention and health protection programmes. Engaging key groups in research at an early stage can help to shape the research agenda so that it is more meaningful and useful to these groups, resulting in research with greater impact.
Background
In 2013, Guinea was the first country in West Africa to experience the recent outbreak of the Ebola virus disease (EVD), which as a whole resulted in over 28 000 cases and 11 000 deaths in 10 countries, making it the largest Ebola outbreak ever recorded. 1 The epidemic took considerable time to contain, despite the extensive mobilisation of personnel, equipment and resources by national and international agencies. 2,3 Viral, health and epidemiological factors alone do not appear to account for this difficulty in controlling the outbreak. 4 It has been suggested that some of the social conditions that contributed to the size, extent and spread of the epidemic in Guinea and surrounding countries included war, population growth, poverty and a poor health infrastructure. 5 These social conditions might be reflected in the relatively low life expectancy rates in Guinea, which stood at 59 years in 2015. 6 Certainly, the capacity of the health system in Guinea appeared to be weak at the time of the outbreak, with several essential functions not performing well. 7,8 It was reported that there were inadequate numbers of qualified health workers; infrastructure, logistics, health information, surveillance, governance and drug supply systems were weak; the organisation and management of health services was sub-optimal; and government health expenditure was low whereas private expenditure (mostly in the form of direct out-of-pocket payments for health services) was relatively high. 9 In addition to health system weaknesses, one of the major barriers to controlling the disease appeared to be community resistance to the Ebola response. 10 For example, the World Health Organization (WHO) reported, in a 6-month retrospective analysis of the first cases of the outbreak, that response teams were sometimes met with violence from a fearful population. 11 The communities' fear appeared to be in response to the way intervention programs had been introduced. 12 It also appeared to be due, in part, to the nature of the disease itself, which, as with other infectious diseases, disrupted the traditional cultural customs and behavioural practices for caring for the sick and dealing with the dying, or the death of, a relative, friend or member of the community. 5 The scale of the emergency in West Africa was such that the international response progressed through three phases. 1 Activities included tracing contacts; establishing and maintaining safe triage and health facilities; building multi-disciplinary rapid response teams at regional and zone levels; providing incentives for individuals and communities to comply with public health measures; engaging in community-owned local response activities; improving Ebola survivor engagement and support; and ending human-to-human transmission of EVD in the populations and communities of the affected countries. This emergency response, particularly the last phase, was complemented by the joint West African government-led Ebola Recovery Assessment programme, which aimed to lay the foundation for short, medium and long-term recovery. The focus in this programme was on four areas: health, nutrition and water, sanitation and hygiene; governance, peace building and social cohesion; infrastructure and basic services; and socio-economic revitalisation. 13 It has been recognised that outbreaks of emerging infectious diseases are sources of instability, uncertainty and sometimes crises. 14,15
There has been some sociological and political analysis of the way the Ebola epidemic was constructed as a problem or crisis outside Africa in high income countries, 3 and how it became a global political as well as a health event. 2 This analysis has tended to emphasise the importance of the influence of the international agencies in shaping the response but also the moral discourse or panic associated with this response. 3 The role played by the global media has been highlighted, for instance, in enhancing the stigmatisation of those directly or indirectly linked with the outbreak. 3 However, much of this research has been carried out 'at a distance' and there is limited detailed research evidence about the local and national responses to the EVD epidemic, with consequent missed opportunities to improve policy and practice responses in the future. 10,16 There is also increasing recognition of the need for interdisciplinary research to examine the social dimensions of the epidemic, the policy response to it, the communities' reactions to the response and how these factors intersected with the biological transmission of the virus, physical containment measures and community medical treatment. 2,17 This paper addresses the lack of detailed analytical research to date on the perceptions and needs of those with direct experience of the Ebola epidemic in Guinea. It presents evidence from a study exploring research needs from the perspective of a number of key groups, including members of local communities. The original aim of the exercise was to identify priorities for health and social care research with and for survivors of EVD in Guinea. Survivors' experiences have been the subject of limited previous research in Guinea, which highlighted the stigma associated with Ebola and the consequences of social isolation for the mental health of survivors. 18,19 The aim was to see if this was still a priority from the perspective of survivors and/or if there are other research questions that might need to be explored, particularly in relation to the long-term experiences of survivors and their families and communities. The objectives were: to explore survivors' experiences of their various interactions with health, care and associated services delivered by local, national and international providers and agencies, including non-governmental organizations (NGOs); and to explore and discuss the need for health and social care research, and identify research gaps and priorities, from the perspectives of different groups: men, women, EVD survivors, community leaders, health practitioners, traditional healers, and local and national government stakeholders.
Methods
The study followed a structured, participatory, inclusive approach guided by the principles and values of the Essential National Health Research (ENHR) strategy for priority setting. 20 These principles include placing country priorities first; working towards equity in health; and linking research to action for development. The ENHR strategy, developed by the Commission on Health Research for Development, advocated the use of a systematized approach to priority setting that involved all stakeholders. The Council on Health Research for Development (COHRED) -established to assist with the implementation of this strategy -recommended a three stage approach (planning the priority-setting process, setting the priorities, and implementing the priorities) to increase the effectiveness of the priority-setting process. 21 Since then, several WHO committees, 22,23 and the Global Forum for Health Research, 24,25 amongst others, have further elaborated methods, tools and frameworks for research priority setting, that are underpinned by the principles and values of the ENHR. The study reported in this paper was a preliminary rapid assessment, rather than a full research prioritisation exercise. Due to time and resource constraints, it required a pragmatic approach guided by established conceptual frameworks for compiling information relevant for investigating health research priorities. The Combined Approach Matrix in particular guided us to explore not just the public health dimension (in terms of the magnitude of the problem, determinants and present level of knowledge), but also the institutional dimension (including the individual, household and community, health sector and sectors other than health), and the equity dimension (in particular gender, poverty and survivor status). 25 The starting point for this work was that, in the specific area of EVD research, whilst investment was (at least initially and understandably) prioritised towards biomedical scientific research aimed at treating and preventing infection, it is likely that there are a number of areas where research and development could make an important difference to global health, but which are currently not recognised or not receiving appropriate attention (and resources). The preparatory work for this study included the identification of key stakeholders, the collation and analysis of background information, and discussions with a range of interdisciplinary experts in health systems and policy research in Guinea, and in EVD research. The field visit included public engagement activities that enabled us to progress four elements of the ENHR process: getting to know the stakeholders; situation analysis/stocktaking; identification of research priority areas; and discussion and ranking of identified research priority areas. The goals of the public engagement activities were to become better informed about a range of people's views and concerns about EVD research, to hear different perspectives and insights, and to become more sensitive to the social and ethical issues that relate to it. The aim was also to develop collaboration with stakeholders in Guinea, where research questions could be developed and explored in partnership with the public.
Data Collection and Sampling
Data collection consisted of face-to-face interviews and focus group discussions. The purposive sample of key stakeholders (N = 15) selected for interviews included representatives from the Ministry of Health (N = 5), NGOs (N = 4), academic and health service researchers (N = 4) and members of ethics and research committees (N = 2). These data were complemented by interviews with health practitioners (N = 12, of which 2 were traditional healers) and community representatives (N = 11) and focus group discussions with male (N = 12) and female (N = 12) EVD survivors from two distinct communities in Guinea. Both communities were small townships. Site one was approximately 50 km from the capital (Conakry), and was affected towards the end of the epidemic. Site two was in the more remote, forest region of Guinea, and was within the prefecture where the first cases were identified in 2013. Questions posed in interviews and discussions varied according to the participants and the information gathered during the field visit. However, they included questions to elicit information on: health status and social position (eg, information on the main health and health-related/social needs of people who have survived EVD, how these needs have changed over time, and the extent to which these needs are understood by others); health and social care systems (eg, the services available for local people, particularly in relation to needs expressed); health and social care research programmes (eg, awareness of and involvement in research for or with EVD survivors); and needs and values of survivors and other key stakeholders (eg, most important issues related to life after the EVD outbreak, now and in the future).
Fieldwork and Analysis
The analysis made pragmatic compromises between timeliness and resource requirements on the one hand and scientific rigour and validity on the other. It drew on the technique of rapid appraisal, seeking to gain community perspectives of local health and social needs and to translate these findings into action. [26][27][28][29] Data collected from one source were validated or rejected by checking against data from at least two other sources or methods of data collection. The majority of the interviews and discussion groups were recorded and notes were taken on the content and conduct of discussions. The interviews with key stakeholders were mainly carried out in French, and translated to English during the course of the discussion. The discussions with survivors at the two sites were conducted in two groups (one male, one female) and facilitated by a French speaker and a helper from the local community who spoke the local language. All were experienced facilitators and all participants contributed to the discussion. They used the same discussion guide for both groups. Both groups lasted for approximately one and a half hours. The analysis was conducted iteratively within the research group (which constitutes the authors of this paper), through reviewing and summarising audio files and field notes, by identifying and sorting key themes, and by comparing and contrasting different perspectives. The researchers took particular note of, and sought to explore further, issues associated with equity in health. The analysis was limited by the multiple languages used within the data. A more complete analysis, involving the full translation and thematic coding of transcripts in a single language, would likely uncover further depth and nuance within the data. This paper is based on an initial descriptive analysis of salient themes which emerged from the interviews and discussions based on the field notes and summaries. It does not, therefore, contain direct quotes. The field work as a whole was carried out in Guinea in January 2017.
Key Stakeholders' Ranking of Research Priorities
The final phase of the research scoping exercise involved a presentation and discussion of findings to a meeting of the key stakeholders in Conakry, and (separately) to a meeting of key stakeholders in the more remote site two. The group of key stakeholders in Conakry did not include representatives from survivors' groups, but did include participants with an in-depth understanding of the issues faced by survivors in Guinea. In site two, the group of key stakeholders included leaders of survivors' groups. The research team proposed and explained the key themes that arose from the scoping exercise, emphasising the links between research and action for development. The stakeholders discussed these themes within the group setting, and were then asked independently to rank the topics in order of priority, according to their own perspectives and interests. No attempt was made at this stage to develop consensus within the group, and no temporal or financial parameters were defined. This allowed the research team to see how priorities of different stakeholders varied, and to rank the questions in order of averaged priority. It is important to acknowledge that priorities will change over time, and that research priorities can sometimes be individual. In this exercise, explicit criteria for the ranking exercise were not set.
Results
This section describes the themes that arose in the interviews and discussion groups. They provide the basis for the research agenda set out in the final discussion section.
The Ebola Virus Disease Survivors
The initial focus of this research scoping exercise was on the survivors' experiences. In group discussions, the survivors described the ways in which they and their lives had been affected by EVD. The issues that arose, clustered into key themes, are summarised in the Table. The discussions showed that the social and economic implications of experiencing the virus were as important as the implications for physical health. Some of the concerns already noted in the literature about survivors' experience were reinforced. For example, being stigmatised and excluded from the family and the community, and feeling lonely and isolated due to family break-up, were common sentiments expressed by both men and women. There were also major economic implications such as losing jobs and accommodation and generally suffering a serious reduction in income. Experiencing this illness and its consequences, perhaps not surprisingly, also had serious implications for well-being, happiness and mental health. There was a clear indication that these psychological needs were not being met (Table).
There was a great deal of consistency in the issues raised by men and women, and within the two very different communities. However, in site two, more stories were heard about very large numbers of people dying within single families and villages than in site one. This is likely because the outbreak went undetected here for a while before infection control and containment measures were put in place. Both the men and women in site two identified the circulation of false rumours as a particular issue. Survivors in all groups were worried about their health and what the virus is still doing to them. There was no clear understanding of virus persistence in their body, and anxiety and confusion about the lingering effects of the initial infection. Survivors reported still experiencing a number of symptoms -many bordering between mental and physical health. For example, whilst many reported experiencing fatigue, it is not known whether this was as a result of ongoing effects of the virus, or as a result of depression or post-traumatic stress. The support provided for physical health symptoms varied. Participants in certain research studies (eg, the 'Postebogui' study 18 ) were able to get free healthcare. However, many of the survivors in the discussion groups expressed the need for help with their medical charges. In addition, informants told us that there was no formal support provided for mental healthcare, even for those involved in the research studies. This appears to be very important as many survivors in the discussion groups faced considerable difficulty obtaining informal support from the community as a result of the stigmatisation that they faced. Isolation and mental ill-health were sometimes extreme -leading to two recent suicides amongst the survivors in site two. Survivors' groups organised by the local authorities were able to provide informal peer support to a limited degree, but this was hindered by difficulties with geographical spread and communication, and a lack of financial support to maintain the network. The wider social and financial needs that survivors faced were met to a certain extent following discharge from the treatment centre. They received food donations from the World Food Programme, financial assistance from National Coordination Ebola response, international agencies and local authorities and NGOs, and free healthcare and other material support for short periods. However, some survivors expressed concern that not all the donations were reaching the survivors and the communities as intended. The support, whilst appreciated and beneficial, was short-lived compared with the ongoing needs of the survivors. Participants emphasised their desire to live and work independently rather than rely on other, external, hand-outs. They talked about their need for education and capacity building, both to become more literate, and to open opportunities for employment. Most of the survivors in the discussion groups were aware of, or active participants in, some form of research on EVD. Many of them had signed up to give blood and/or semen samples for scientific analysis. Some had conducted health questionnaires. However, participants explained that they had not been involved in any research that asked them about their experiences either being infected or of the treatment they received, or the impact, for them, of having survived the virus.
Health Practitioners and Community Representatives
There were many common themes expressed by both health practitioners and community representatives in both site one and site two. All the informants talked about the experiences and needs of survivors in much the same way as the survivors themselves -indeed many of the informants were themselves survivors. In addition, the informants described the ways in which their communities had been affected, and the ways in which their communities responded, both to the outbreak itself, and to the authorities' response to the outbreak. It was clear that community representatives saw whole communities as having been profoundly affected by the outbreak, and the notion of communities surviving the experience, in addition to individuals surviving the virus, began to emerge. The key themes are outlined below, in no order of priority:
Misunderstandings/Trust
There were many rumours surrounding Ebola, particularly with regards to where it came from and how it was spread. Negative reactions from some communities to the authorities' (including government, local/international agencies and NGOs) response were triggered by a lack of understanding which seemed to emerge both from the initial message from authorities that Ebola cannot be cured, and from the practices of those engaged in the response (eg, spraying of areas, secure burials). For example, the communities' perception of the ineffectiveness of treatment was reinforced by the fact that health practitioners -including both traditional healers and those practising scientific medicine -contracted the disease, and sometimes died from it. This was particularly the case in site two, since communities in this region were affected much earlier than elsewhere in the country. False rumours have been pervasive, damaging and lasting. These misunderstandings have contributed to a lack of trust not just with the authorities, but also with medical practitioners. Because of this, traditional healers played an important role (particularly in the early stages of the outbreak), where they received people who did not have trust/confidence in professional healers. The lack of trust that emerged as a result of misunderstandings extended to neighbours and communities and has persisted, resulting in the survivors facing stigma and discrimination. Sometimes, the misunderstanding and fear were such that individuals and groups acted in extreme ways towards each other, for example, by burning the house and possessions of survivors. Several health practitioners who themselves survived the virus reported difficulties in re-integrating at work, due to a lack of trust from their colleagues and patients.
Fractured Communities
The impact of the epidemic and the response to it appeared to have fractured communities. Whilst this was due in part to misunderstandings, it partly arose as a consequence of the need to break cultural traditions and social norms (such as caring for relatives who are sick or visiting friends when sick) in order to break the chain of transmission. This enforced separation and created discord within families, villages and larger community groups. Sometimes, family separation was caused by the social and practical necessities of caring for children who had lost one or both parents to the disease. This itself was complicated by extreme economic hardship, which sometimes forced difficult decisions to be made -for example, where a surviving parent lost their income and felt no longer able to look after their child(ren). The social cohesion that was affected during the outbreak appears to be taking time to be rebuilt. One informant explained how his mother, who refused entry to her house of a sick neighbour at the height of the outbreak, is still shunned by that neighbour's friends and family.
Needs of Communities
Survivors returning from the treatment centres often struggled to reintegrate within these fractured families/communities. Whole communities felt the effects of the outbreak in a number of ways, with reduced opportunities for income generation, and consequent lack of ability to support families (including orphans), and pay for food, education and health services. Community leaders emphasised that whilst some additional resources had been provided, they were insufficient to meet the ongoing needs of the survivors and the community as a whole. Both the community leaders and health practitioners talked a great deal about the specific ongoing needs of survivors. These were entirely consistent with the discussions within the survivors' groups.
Capacity to Meet Needs
Informants reflected that the authorities and national and international NGOs had provided various ways of meeting the needs of survivors, including financial support, food donations and healthcare. However, communities continue to have unmet needs, such as employment, financial stability, mental health services and social support services (such as with looking after children, orphans, and other dependents).
In some ways, the ability to provide good quality health services is stronger now, with improved knowledge, better sanitation, improved supplies and better surveillance/reporting systems. However, there are signs that some of these improvements are not being, or will not be, sustained. Participants explained how there had been problems with the ongoing supply of drugs and sanitation materials, and how the initially improved sanitation practices (such as handwashing) were not being maintained by either health workers or communities. In some ways, capacity is weaker, for instance, with rejection of health workers, affected relationships (lack of confidence) between communities and health workers, and fewer patients with the ability to pay, leading to reduced income for hospitals.
Perspectives of Key National and International Stakeholders
The Needs of Survivors
Key stakeholders in general emphasised that there is a continued need for research that focuses on EVD survivors. A greater understanding of the virus itself is still required, including the risks of reinfection/transmission and the long-term health and social implications for survivors. It is clear that some survivors continue to experience stigma and discrimination leading to social isolation and loss of employment; and such exclusion can have consequences for mental health. Stakeholders responsible for large scale response and relief efforts (including Government, WHO, and UNICEF) recognised the wider, longer-term impact of the virus, but did not have the information required to understand the extent of need faced by the survivors. In addition to the potential for vulnerability to mental health problems, stakeholder participants understood that survivors may have a number of complex unmet needs, including health, psychological and social needs, and the need for assistance with community re-integration. Policy makers and providers described the importance of identifying these and other ongoing long-term needs of survivors so that they know where to focus their support now, and to be better informed for any future outbreaks. Moreover, survivors have been given a range of short-term support from the Government, as well as local and international agencies, and stakeholders felt it was important to know how that has been received and the impact that it has had.
Social and Economic Impact
Stakeholders also highlighted the need for research beyond the survivors, in order to more fully understand both the national and international response to the EVD epidemic in Guinea, and its wider social, economic and political impact. They described the ways in which the responses of, for example, neighbouring national governments and the international media sometimes had profound consequences for the people of Guinea. It was also recognised that certain aspects of the response (such as closing ports and shipping routes) could act to hinder the country's ability to contain the outbreak (for instance, when equipment and supplies cannot easily be brought in). Discussions confirmed that it is important to make an analytical distinction between the impact of the epidemic, and the impact of the (micro, meso and macro level) responses to it, even though they are interrelated. There was clearly sometimes an element of conflict between national priorities (to contain the outbreak whilst at the same time limiting its impact on the country's economy), and international priorities (to ensure the virus did not cross country borders). Informants identified the need to focus on the micro, meso and macro level responses to the epidemic and gave two reasons for this. First, there was considerable variation between areas in Guinea in disease incidence, virus transmission and the time it took to achieve containment. The reasons for this variation are not understood, yet they may hold some important lessons for improving responses to future outbreaks. Secondly, it is clear that in some communities there was resistance to the national and international response, including case management, contact tracing, sanitation practices and burial of the dead. This had important consequences for trust, community engagement and ultimately for the ability to locally contain the disease. The authorities' initial response to the outbreak seemed to influence the ways in which the local communities reacted, affecting both disease containment and the subsequent community/family reintegration of survivors. The consideration of trust raises a general question, articulated by one informant from an academic background: whether trust relations between communities and authorities had already been eroded and were at a low level prior to the epidemic, with the Ebola virus outbreak exacerbating or bringing to a head the tension between the different groups, or whether trust relations were specifically damaged by the authorities' response to the outbreak. However, the key stakeholders, particularly those representing the NGOs, also emphasised that there was little evidence available about the full, wider impact of the epidemic on the country, the communities and the survivors. Key stakeholders informed us that there are a number of partial analyses of the impact of the EVD epidemic which focus on discrete areas (eg, health services, impact on economic activity) but we were not given access to these, and there appears to be no comprehensive socio-cultural, economic and policy analysis of the impact as a whole. There are also areas of impact that seem not to have been explored in the academic literature, such as: the closure of transportation routes and trade links, community cohesion, impact on religious practices and the restrictions on travel, Guinea's capacity for research and emergency response, and the role of the global media.
The Need for International Comparative Research
Finally, participant stakeholders suggested that evidence from comparative research would aid the understanding of how distinctive both the response to the epidemic and its long-term impact were in Guinea. There appears to be some research collaboration between low-income countries such as Guinea, Sierra Leone and Liberia, which were most affected by the outbreak, but the focus appears to be mainly on biomedical/clinical research. There is limited evidence of comparative social science research investigating, for example, variations in policy response and impact.
Towards an Agenda for Research
The research team identified, through a thematic analysis, seven broad areas for further research that emerged from the scoping exercise. These were phrased as research questions, and discussed during a debriefing meeting with key stakeholders, who were then asked to rank them in order of priority. The priority ranking given to each key question varied considerably amongst the stakeholders, with the result that the average rankings were closely clustered. The questions, in order of average priority ranking score, were:
1. What is the long-term socio-cultural, economic and health impact of the EVD epidemic on the country of Guinea?
2. What is the nature and impact of social stigma associated with EVD, and what are the factors that have contributed to the stigmatisation of survivors?
3. What can we learn from the local, national and international responses to the EVD outbreak about the nature of communication required for effective community engagement?
4. Why was the response to and effect of the Ebola virus so variable between different communities?
5. What is the impact of the EVD outbreak on noninfected community members as compared to infected survivors?
6. Are the neurological symptoms experienced by EVD survivors a consequence of direct effects of the virus, or the unmet mental health needs associated with the experience the survivors went through?
7. How did the response to and impact of the EVD outbreak vary between different countries in the region?
Discussion
The aim of this study was to explore ideas and priorities for further health and social care research related to the EVD outbreak in Guinea from the perspective of members of the local population. A list of seven broad research questions was identified from this scoping study. However, before each of their implications is discussed, it is important to recognise the limitations of this rapid assessment. Due to time and resource constraints, it was not possible to conduct a full research prioritisation exercise. Rather, this exercise should be seen as a pre-cursor to such a study, and results interpreted accordingly. Informants were not selected randomly but purposefully; that is, a range of people who were in an appropriate position to understand the issues were asked to participate. The sample of key stakeholders was limited in terms of whether it fully represented the key voices in the national and local populations. It was also overwhelmingly male (all bar 2). Whilst this reflects the much smaller number of women in senior positions, it might have been possible to identify and include additional female stakeholders in a more extensive field study. The discussion groups cannot be taken to represent the beliefs of the survivors as a whole. For example, they might have been more likely to have higher levels of literacy than the general population, which is reported to have relatively low levels of literacy compared with neighbouring countries. Data collection and analysis conducted using rapid appraisal techniques may carry a risk of researcher bias. This was minimised by using local 'communicators' (community liaisons) for interviewing, as they might more easily tap into the private accounts which people would be reluctant to release to foreigners/strangers. Interviewers were given brief training by a member of the research team who is experienced in rapid appraisal methods. Analysis was conducted by an interdisciplinary team (the authors) so as to take on board a range of professional training, ethnicity, gender and theoretical perspectives. There was evidence that some of the participants were involved in other research projects, which raises the question of whether survivors' groups might become 'over-researched.' However, the participants reported that whilst many had undergone repeated biological testing, this had been their first opportunity to share their thoughts, experiences and beliefs. This study began by focusing on the experiences of survivors. However, it became evident during the course of this scoping exercise that other topic areas were identified by key stakeholders. This illustrates the flexibility and iterative nature of the methodological approach adopted, but also raises the question of the extent to which the study was able to explore wider research questions when much of the focus was on survivors' experiences. Thus, it is necessary to be cautious about the interpretation of these data, particularly in terms of their transferability to other contexts. The top priority for research for informants tended to vary according to the interests of the stakeholders. Many of the key stakeholders saw the need to assess the long-term impact of the EVD outbreak whereas, perhaps unsurprisingly, the survivors identified the question related to social stigma as being more important than did the other stakeholders. Several studies have been carried out evaluating the short-term impact of the EVD outbreak on different aspects, such as the economy and the health infrastructure. 8
However, there has not been any comprehensive analysis of the long-term impact on Guinea as a whole, evaluating both positive and negative aspects. The field work suggested that whilst many regions of Guinea were severely affected by economic and personal loss, there are also some ways in which country capacity is now stronger, for instance for health protection and scientific research. Therefore, a systems analysis of the response and its impact could be important, utilising similar approaches and methods to those used for infectious disease preparedness or strategic planning. 30,31 Such an analysis might probe deeper into the nature of contributory factors both for the (non) containment of the virus, and the scale of the repercussions at individual, community, country and international levels.
A wide-ranging and comprehensive analysis might begin by carrying out a review of the available evidence. The gaps identified in this review could then be explored through further interdisciplinary research. This analysis could provide evidence to inform policy options if there are any further epidemics of Ebola or outbreaks of similar diseases in Guinea and comparable low-income countries.
There was some consensus that survivors' experiences need to be further investigated, although the clinical and psychiatric experience of survivors is being explored in current research carried out in the POSTEBOGUI study. 18 More information is required about the nature and impact of social stigma, including its impact on the personal, social and economic lives of the survivors and their families. Qualitative methods might be appropriate for eliciting in-depth information about felt, enacted and courtesy stigma. 32 This could build on and be compared with the considerable sociological research literature related to stigma and chronic illness, for example in relation to HIV/AIDS. 33 Evidence from such research could be used to inform the development of policies aimed at enhancing the social integration of survivors, as well as national and international responses to any future epidemics. The majority of EVD survivor studies are specifically focused on previously infected individuals. However, many non-infected members of the community have been similarly impacted, for example through financial loss, bereavement, trauma, isolation and the disruption of family and social networks. It may be beneficial to broaden the definition of 'survivor' to include both disease survivors and non-infected survivors. By better understanding the needs of all survivors, it may be possible to identify strategies for reintegration, and for strengthening resilience within local communities.
The field work illustrated a significant number of neurological issues of unknown origin. These include symptoms such as headaches, chronic pain, fatigue, vision impairment and tremors. In addition, it seemed that survivors were suffering from a range of mental health issues that could include depression and post-traumatic stress disorder. 34 It is not clear if these neurological issues are a direct result of viral infection or are a consequence of mental health problems associated with the outbreak. By investigating the biological persistence of the virus in the central nervous system, in conjunction with a detailed mental health assessment, it may be possible to ascertain the best ways of supporting and/or treating the survivors. Trust, or the lack of it, appeared to be a key issue associated with dialogue and engagement, and more generally with relations between communities and the health and political authorities. 35 Communications between populations and local and national government, NGOs, health professionals and others played a vital role in the response to the outbreak and in disease containment. The explanations for the resistance of some sections of the community have received some attention from anthropologists. 10,12,36 There is some research evidence about why some sections of the community were resistant to the Ebola emergency response, although this research needs to be more extensive. 10 The ways in which messages were framed and communicated, for instance through the local, national and international media and through the country's community networks, are likely to have had an important influence on community response. 3 The focus of further research might be on how the nature of communication affected trust relations within communities and between communities and health and political authorities. 36 Trust covers both confidence in competence (doing a good job), and trust in intentions (working in the interests of the client/public). 37 An improved understanding of the relationship between communication and trust might identify strategies for building or repairing trust relations, which could inform policy recommendations for achieving effective community engagement in healthcare programmes. However, it has been argued that these relatively low levels of trust relations are more deep-seated, such as at the level of governance, suggesting that more extensive strategies might need to be considered for restoring trust in institutions. 38,39 This echoes the suggestion from the United Nations Development Programme that 'Trust in public institutions could be strengthened through inclusive dialogue, efforts to enhance accountability, and equitable and harmonized service delivery' (p15). 13 A related question is associated with the considerable variation in the transmission of the virus between and within communities. 40,41 In addition, different communities responded in different ways to the disease outbreak and to the authorities involved in disease containment. There is epidemiological data that has mapped the spread of the virus during the course of the outbreak. 1,4,40,41 In addition, there are ongoing biological surveys investigating community-to-community differences in survivor responses to the virus. In order to fully understand these community differences, it would be necessary to combine the ongoing epidemiological and biological studies with a sociological analysis of community members' attitudes, beliefs and behaviours.
By understanding the reasons for community variation in EVD, it may be possible to develop better policies and practice for future disease containment. Certainly the role of communities has been identified as crucial to the success of containment and recovery programmes. 13 Finally, the outbreak had significantly different impacts on Guinea, Sierra Leone and Liberia, and also affected many other countries within the region that are not included in any ongoing analysis. International comparative research would attempt to explain why there may be differences and similarities across countries. 42 This would provide opportunities for policy learning that could be used to enhance resilience, infrastructure and response for future emergencies.
In conclusion, despite the limitations, it is clear that this scoping exercise has generated some important research questions that warrant further exploration. It identified an expressed need for research focusing on survivors. It also emphasised the importance of research which analyses the social response to, and impact of, outbreaks of epidemics such as Ebola, and which explores whether the Ebola epidemic was distinctive among epidemics, both in the way it was responded to and in its impact. More generally, it highlighted the need for this research to be interdisciplinary, and emphasised the importance of the contribution of the social sciences to it.
MicroRNA-224 down-regulates Glycine N-methyltransferase gene expression in Hepatocellular Carcinoma
Glycine N-methyltransferase (GNMT) is a tumor suppressor for HCC. It is down-regulated in HCC, but the mechanism is not fully understood. MicroRNA-224 (miR-224) acts as an onco-miR in HCC. This study is the first to investigate miR-224 targeting of the coding region of the GNMT transcript. The GNMT-MT plasmid, containing a silent mutation of the miR-224 binding site in the GNMT coding sequence, can escape the suppression of miR-224 in HEK293T cells. Expression of both exogenous and endogenous GNMT was suppressed by miR-224, while a miR-224 inhibitor enhanced GNMT expression. miR-224 counteracts the effects of GNMT on the reduction of cell proliferation and tumor growth. The levels of miR-224 and GNMT mRNA showed a significant inverse relationship in tumor specimens from HCC patients. Utilizing CCl4-treated hepatoma cells and mice as models of inflammatory cell damage and liver injury, we observed that decreased expression of GNMT was accompanied by elevated expression of miR-224 in hepatoma cells and mouse liver. Finally, hepatic AAV-mediated GNMT also reduced CCl4-induced miR-224 expression and liver fibrosis. These results indicate that AAV-mediated GNMT has potential liver-protective activity. miR-224 can target the GNMT mRNA coding sequence and plays an important role in GNMT suppression during liver tumorigenesis.
Supporting Information
Reverse-transcription and real-time polymerase chain reaction (RT-qPCR). Total RNA was extracted using Trizol reagent (Invitrogen, Carlsbad, CA). RNA was reverse-transcribed using the Tetro cDNA Synthesis Kit (Bioline, Taunton, MA) according to the manufacturer's instructions. KAPA SYBR FAST qPCR Kits (Kapa Biosystems, Woburn, MA) were used for real-time PCR applications. PCR conditions were as follows: 5 min at 95°C, followed by 40 cycles of 95°C for 10 sec, 60°C for 30 sec and 72°C for 30 sec.
Primer sequences are shown in Table S4. For the detection of hsa-miR-224, 20 ng of total RNA was reverse-transcribed into complementary DNA using the TaqMan MicroRNA Assay hsa-miR-224 reverse-transcription primer and the TaqMan miRNA reverse-transcriptase kit according to the instructions provided by the manufacturer (Invitrogen, Carlsbad, CA). miRNA expression was normalized to the level of RNU48 RNA.
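The normalization to RNU48 implies a standard relative-quantification step; the following minimal sketch of the 2^-ΔΔCt calculation is illustrative only (the Ct values and the choice of a control sample are assumptions, not data from this study).

```python
# Hypothetical 2^-ddCt sketch: hsa-miR-224 Ct values are normalized to the
# RNU48 reference RNA and expressed relative to a control sample.
# All Ct values below are illustrative placeholders, not study data.
def relative_expression(ct_mir224, ct_rnu48, ct_mir224_ctrl, ct_rnu48_ctrl):
    """Fold change of miR-224 versus the control sample (2^-ddCt)."""
    delta_ct_sample = ct_mir224 - ct_rnu48            # normalize to RNU48
    delta_ct_control = ct_mir224_ctrl - ct_rnu48_ctrl
    return 2 ** -(delta_ct_sample - delta_ct_control)

# Example: miR-224 in a tumor sample relative to adjacent non-tumor tissue
fold = relative_expression(ct_mir224=24.1, ct_rnu48=22.0,
                           ct_mir224_ctrl=27.3, ct_rnu48_ctrl=22.2)
print(f"miR-224 fold change vs. control: {fold:.2f}")
```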
Construction of pGNMT-MT plasmid and 3' UTR reporter assay
To construct the plasmid pGNMT-MT, containing the cytomegalovirus (CMV) promoter, a FLAG fragment and a miR-224 binding site silent mutation (MT) of the full-length GNMT coding sequence for FLAG-tagged GNMT, we used pFLAG-CMV-5 (Sigma) as the vector and the pGNMT (wild-type GNMT cDNA) plasmid 1 as the PCR template for generating the insert. Mutations of the GNMT cDNA were made by PCR with the QuikChange II (Stratagene) site-directed mutagenesis kit. The presence of the correct mutations was confirmed by DNA sequencing. The wild-type (WT) and silent-mutant (MT) cDNA fragments were subcloned downstream of the Renilla luciferase gene in the vector psiCHECK2 (Promega). The detailed procedure for the construction of the plasmids is illustrated in Figure S6 and primer sequences are shown in Table S4. HEK293T cells were transiently co-transfected with plasmid DNA from psiCHECK2-GNMT-WT (psi-WT) or the binding-site mutant plasmid psiCHECK2-GNMT-MT (psi-MT) along with miRIDIAN mimic-negative control (NC) or hsa-miR-224 mimic (224-mimic) (Dharmacon) using TurboFect (Fermentas) and Trans IT-TKO (Mirus). After 48 hours, luciferase activity was measured using the Dual-Luciferase Reporter Assay System Kit (Promega) and an Infinite 200 reader (TECAN) following the manufacturers' instructions. In the assay, Renilla luciferase activities were normalized to firefly luciferase activities.
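As a hedged illustration of this normalization step (not the authors' analysis script), the sketch below computes per-well Renilla/firefly ratios and expresses the miR-224 mimic condition relative to the negative control; all well readings are invented placeholders.

```python
# Hypothetical dual-luciferase readout: Renilla activity is normalized to
# firefly activity per well, then the mimic condition is compared to NC.

def normalized_activity(renilla, firefly):
    """Per-well Renilla/firefly ratio (transfection-efficiency normalization)."""
    return renilla / firefly

def relative_to_nc(mimic_wells, nc_wells):
    """Mean normalized activity with the miR-224 mimic, relative to NC."""
    mimic = sum(normalized_activity(r, f) for r, f in mimic_wells) / len(mimic_wells)
    nc = sum(normalized_activity(r, f) for r, f in nc_wells) / len(nc_wells)
    return mimic / nc

# (Renilla, firefly) readings for triplicate wells -- placeholder values
psi_wt_mimic = [(5200, 41000), (5100, 39000), (5350, 42000)]
psi_wt_nc = [(11800, 40000), (12100, 41500), (11500, 39500)]
print(f"psi-WT, 224-mimic vs. NC: {relative_to_nc(psi_wt_mimic, psi_wt_nc):.2f}")
```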
Cell proliferation
Cell proliferation was determined using the alamarBlue assay (Invitrogen, Carlsbad, CA). Ten thousand cells were seeded in triplicate in 24-well plates and assayed on days 1, 3, 5, 7 and 9. At each time point, 100 μl of alamarBlue solution (final concentration of 10% in medium) was added and the cells were further incubated for 4 h at 37°C. After incubation, 100 μl of the alamarBlue solution from each well of the assay plates was transferred to a new well in a 96-well plate, and fluorescence was measured at 530/590 nm. Proliferation was expressed relative to that of the day 1 group.
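A minimal sketch of this readout, assuming the mean fluorescence of triplicate wells is simply expressed relative to day 1; the fluorescence values below are invented placeholders.

```python
# Hypothetical alamarBlue analysis: mean 530/590 nm fluorescence per day,
# expressed as fold change relative to day 1. Values are placeholders.
readings = {  # day -> triplicate fluorescence readings (arbitrary units)
    1: [1020, 980, 1005],
    3: [2100, 2240, 2180],
    5: [4300, 4150, 4480],
    7: [7900, 8100, 7750],
    9: [11200, 11600, 10900],
}

baseline = sum(readings[1]) / len(readings[1])
for day in sorted(readings):
    mean_fluorescence = sum(readings[day]) / len(readings[day])
    print(f"day {day}: {mean_fluorescence / baseline:.2f}-fold relative to day 1")
```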
Mice
Liver tissues were obtained from Professor Tsai's group at National Yang-Ming University. The liver tissues were collected separately from 1.5-, 6-, 12- and 16-month-old mice in two groups: HBx transgenic and wild-type males (n = 3–6). Total RNA was isolated from mouse liver using Trizol Reagent (Invitrogen, Carlsbad, CA).
Masson's Trichrome stain
The collagen deposition in liver tissue was evaluated by Masson's trichrome staining (HT15-1KT, Sigma-Aldrich). Briefly, tissue sections were deparaffinized and hydrated to deionized water, stained in working Weigert's Iron Hematoxylin solution for 1 minute, and washed in running tap water for 5 minutes.
Figure S5
A scheme of the mechanism of GNMT down-regulation in the inflammatory response induced by chronic hepatitis virus infection and toxic exposure, and of miR-224 over-expression, in liver cirrhosis and HCC.
Longitudinal [18F]GE-180 PET Imaging Facilitates In Vivo Monitoring of TSPO Expression in the GL261 Glioblastoma Mouse Model
The 18 kDa translocator protein (TSPO) is increasingly recognized as an interesting target for the imaging of glioblastoma (GBM). Here, we investigated TSPO PET imaging and autoradiography in the frequently used GL261 glioblastoma mouse model and aimed to generate insights into the temporal evolution of TSPO radioligand uptake in glioblastoma in a preclinical setting. We performed a longitudinal [18F]GE-180 PET imaging study from day 4 to 14 post inoculation in the orthotopic syngeneic GL261 GBM mouse model (n = 21 GBM mice, n = 3 sham mice). Contrast-enhanced computed tomography (CT) was performed at the day of the final PET scan (±1 day). [18F]GE-180 autoradiography was performed on day 7, 11 and 14 (ex vivo: n = 13 GBM mice, n = 1 sham mouse; in vitro: n = 21 GBM mice; n = 2 sham mice). Brain sections were also used for hematoxylin and eosin (H&E) staining and TSPO immunohistochemistry. [18F]GE-180 uptake in PET was elevated at the site of inoculation in GBM mice as compared to sham mice at day 11 and later (at day 14, TBRmax +27% compared to sham mice, p = 0.001). In GBM mice, [18F]GE-180 uptake continuously increased over time, e.g., at day 11, mean TBRmax +16% compared to day 4, p = 0.011. [18F]GE-180 uptake as depicted by PET was in all mice co-localized with contrast-enhancement in CT and tissue-based findings. [18F]GE-180 ex vivo and in vitro autoradiography showed highly congruent tracer distribution (r = 0.99, n = 13, p < 0.001). In conclusion, [18F]GE-180 PET imaging facilitates non-invasive in vivo monitoring of TSPO expression in the GL261 GBM mouse model. [18F]GE-180 in vitro autoradiography is a convenient surrogate for ex vivo autoradiography, allowing for straightforward identification of suitable models and scan time-points on previously generated tissue sections.
Introduction
The 18 kDa translocator protein (TSPO) is increasingly recognized as an interesting target for the study of glioblastoma (GBM), the most common and aggressive primary malignant brain tumor in adults, with a five-year survival rate of only 7.2% [1]. Effective treatment options remain limited; redefined tumor classification and improved chemotherapy regimens have resulted in a median overall survival of up to 4 years depending on the molecular subgroup, but most glioblastoma patients die earlier [2,3]. Originally mainly a subject of research in neuroinflammation [4][5][6], TSPO appears to assume a pivotal role in resistance to apoptosis, invasiveness, and proliferation in GBM [7]. Ultimately, TSPO, also known as the peripheral-type benzodiazepine receptor (PBR), may be an important functional stakeholder in tumorigenesis and treatment resistance of GBM.
Positron emission tomography (PET) has gained recognition in neuro-oncology as a valuable molecular imaging tool, especially using the radiolabeled amino acid analog O-(2-[18F]fluoroethyl)-L-tyrosine ([18F]FET) [8][9][10]. In the wake of its recognition as a functionally relevant target in GBM, increasing experience has recently been obtained with glioma imaging directed against TSPO, including the use of dedicated TSPO PET tracers [7,11,12]. While a few preliminary studies have pointed to a potential clinical benefit of TSPO PET in glioblastoma patients, attributed to additional information as compared to established imaging modalities [13,14], the potential of TSPO imaging in preclinical studies is currently a thriving and promising field for better understanding the glioma microenvironment [15]. Although most of the recent human data on TSPO PET imaging in GBM have been obtained using the tracer [18F]GE-180, until now it has not seen as much use in preclinical GBM PET imaging studies [13,14,[16][17][18][19][20][21].
Here, we investigated the feasibility of TSPO PET imaging using the high-affinity TSPO ligand [18F]GE-180 for the first time in GBM in the preclinical setting, and we hypothesized that longitudinal PET imaging facilitates in vivo monitoring of TSPO expression over time. To this end, we performed a longitudinal PET study in the syngeneic GL261 GBM mouse model, being one of the most frequently used GBM mouse models. TSPO PET imaging was correlated with contrast-enhanced computed tomography (CT), and in vivo findings were verified by in vitro methods including [18F]GE-180 autoradiography and TSPO immunohistochemistry (IHC). Furthermore, we performed a head-to-head comparison of ex vivo and in vitro [18F]GE-180 autoradiography with regard to regional tracer distribution in the brain of GL261-bearing mice.
Study Design
All experiments were performed in compliance with the National Guidelines for Animal Protection in Germany with approval of the local care committee of the Government of Oberbayern (Regierung von Oberbayern) and overseen by a veterinarian. In total, 24 female C57BL/6 mice, 10-12 weeks old, were delivered by Charles River (Sulzfeld, Germany) and acclimated for one week. Animals were housed in a temperature- and humidity-controlled environment (25 °C and 65% relative humidity, respectively) with a 12 h light-dark cycle, with free access to food and water. At day 0, mice were orthotopically inoculated either with GL261 (GBM mice) or with saline for control (sham mice).
One cohort of mice (n = 9 GBM mice, n = 3 sham mice) was scanned longitudinally at up to four time-points post inoculation (day 4, 7, 11 and 14; n = 40 TSPO PET scans), and single mice were sacrificed for tissue-based analyses earlier than day 14. To increase the sample size for tissue-based analysis, another cohort (n = 12 GBM mice) was scanned cross-sectionally at different single time-points post inoculation (day 7, 11 or 14; n = 12 TSPO PET scans). Contrast-enhanced computed tomography (CT) was performed on the day of the final PET scan (±1 day). Subsequent to the PET scans, mice received intracardiac perfusion with 4% paraformaldehyde (PFA) to fix the brain tissue for ex vivo analysis. Ex vivo TSPO autoradiography (n = 13 GBM mice; n = 1 sham mouse) and in vitro TSPO autoradiography (n = 21 GBM mice; n = 2 sham mice) were performed on day 7, 11 and 14. Brain sections were also used for hematoxylin and eosin (H&E) staining (n = 17 GBM mice; n = 2 sham mice) and TSPO IHC (n = 8 GBM mice, n = 1 sham mouse) afterwards.
First, TSPO radioligand uptake in PET was quantified at different time points comparing GBM mice and sham mice. Second, uptake changes over time were analyzed. Eventually, the location and extent of uptake in PET were correlated to CT and in vitro findings. Further, ex vivo and in vitro autoradiography performed on same brain slices were directly compared with regard to tracer uptake patterns.
An overview of the study is provided in Figure 1A.
Animal Model
Intracranial implantation of tumor cells and intracranial saline injection were performed as previously published, with slight modifications [22]. In brief, mice received a pre-medication of 200 µg/g body weight metamizole (WDT, Garbsen, Germany) 2 h prior to surgery. Anesthesia consisted of intraperitoneal injection of 100 µg/g ketamine and 10 µg/g xylazine (both from WDT, Garbsen, Germany). The mouse head was fixed on a stereotaxic frame heated to 37 °C (David Kopf Instruments, Tujunga, CA, USA). After skin incision, the burr hole was placed 1.5 mm lateral (right) and 1 mm anterior to the bregma (23G/21G microlances, BD Biosciences, Heidelberg, Germany). Either 1 × 10⁴ GL261 tumor cells in 1 µL of saline or 1 µL of saline alone were injected into the right striatum at 1.5-3.0 mm depth using a stereotactically guided glass syringe (22G, Hamilton Bonaduz, Bonaduz, Switzerland). After skin closure with Ethibond Excel 5-0 suture material (Ethicon, Norderstedt, Germany), the mice were kept under surveillance on a heated pad until full recovery.
TSPO PET
All mice received [18F]GE-180 PET scans. Each mouse received a bolus injection of 12.5 ± 2.2 MBq of [18F]GE-180 in 150 µL of saline into the tail vein [23]. Radiosynthesis of [18F]GE-180 was performed as previously described [24] with slight modifications [25], eventually resulting in a radiochemical purity >98% and a specific activity of 1400 ± 500 GBq/µmol. Anesthesia was performed with isoflurane 2% delivered via mask at 3.5 L/min in oxygen. Four mice were placed simultaneously in the tomograph (Siemens Inveon PET, Siemens Healthineers, Erlangen, Germany). Emission recording was performed for the interval 60-90 min post injection (p.i.) followed by a 15 min transmission scan using a rotating [57Co] point source as previously described [25]. The PET images were reconstructed as previously published [26]: a 3D ordered-subset expectation maximization (OSEM) with 4 iterations and 12 subsets was performed and succeeded by a maximum a posteriori (MAP) algorithm with 32 iterations [26]. An attenuation correction was performed using the transmission scan obtained with the rotating 57Co point source. The scattering contributions were estimated using the transmission data set and simulations for a limited number of scattering points in the object. We applied a decay correction for 18F. The zoom factor was 1.0, and the matrix was 256 × 256 × 159. The final voxel dimension was 0.78 mm × 0.78 mm × 0.80 mm [27].
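The decay correction mentioned above can be illustrated with a small, hedged sketch using the physical half-life of 18F (approximately 109.77 min); the measured value and time point are illustrative, and this is not the scanner vendor's implementation.

```python
# Minimal sketch of an exponential decay correction for 18F: a value measured
# t minutes after the reference time is corrected back to that reference time.
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of 18F

def decay_correct(measured_value, minutes_since_reference):
    """Correct a measured activity back to the reference time point."""
    decay_constant = math.log(2) / F18_HALF_LIFE_MIN
    return measured_value * math.exp(decay_constant * minutes_since_reference)

# Example: mid-frame of the 60-90 min window, i.e., 75 min after injection
print(f"Correction factor for 75 min of 18F decay: {decay_correct(1.0, 75):.2f}")
```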
Myocardial tracer uptake was used as a normalization procedure, as previously established [26].
For quantitative analysis, a manually generated uniform tumor VOI with a volume of 36 mm³ was created based on an average image of all scans performed. It was double-checked for each mouse individually that the visually increased uptake at the site of inoculation was comprised by the uniform tumor VOI. Another manually generated VOI with a volume of 53 mm³ in the tumor-free contralateral hemisphere was set as background. For VOI definition, see Figure 1B.
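To make the VOI-based readouts concrete, the following minimal numpy sketch (not the authors' pipeline) derives SUVmean for the tumor and background VOIs and tumor-to-background ratios, assuming TBRmax is taken as the hottest tumor voxel divided by the mean background uptake; the image, masks and values are synthetic placeholders.

```python
# Hedged sketch: VOI-based SUV and TBR readouts from a reconstructed 3D image.
import numpy as np

def voi_metrics(image, tumor_mask, background_mask):
    """Return (tumor SUVmean, background SUVmean, TBRmean, TBRmax)."""
    tumor = image[tumor_mask]
    background_mean = image[background_mask].mean()
    return (tumor.mean(), background_mean,
            tumor.mean() / background_mean, tumor.max() / background_mean)

# Toy image and masks standing in for the 36 mm^3 tumor VOI and the 53 mm^3
# contralateral background VOI described above (placeholder values).
rng = np.random.default_rng(0)
image = rng.normal(0.20, 0.02, size=(64, 64, 64))   # background-like uptake
image[30:34, 30:34, 30:34] += 0.15                   # elevated tumor signal

tumor_mask = np.zeros_like(image, dtype=bool)
tumor_mask[29:35, 29:35, 29:35] = True
background_mask = np.zeros_like(image, dtype=bool)
background_mask[5:15, 5:15, 5:15] = True

suv_t, suv_bg, tbr_mean, tbr_max = voi_metrics(image, tumor_mask, background_mask)
print(f"SUVmean(tumor)={suv_t:.2f}, SUVmean(bg)={suv_bg:.2f}, "
      f"TBRmean={tbr_mean:.2f}, TBRmax={tbr_max:.2f}")
```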
TSPO Autoradiography
Ex vivo autoradiography was performed directly after the final [18F]GE-180 PET scan, as shown before [27]: after intracardiac perfusion with PBS and subsequently with 4% PFA (circulation stop at 105 min p.i.), the brains were cooled down for a maximum of 5 min at −80 °C. After ca. 10 more min at −20 °C, brains were cut horizontally into 16 µm sections for autoradiography or into 3 µm sections for immunohistochemistry using a Leica CM1510 cryostat (Leica Microsystems, Nussloch, Germany).
In vitro autoradiography was performed on 16 µm horizontal brain cryosections: after pre-incubation with binding buffer (Tris-HCl 50 mM, pH 7.4) and drying, the sections were incubated for 60 min with 0.06 MBq/mL of [18F]GE-180 in binding buffer at room temperature. After incubation, sections were washed twice by immersion in ice-cold buffer solution, dried and placed on imaging plates for 24 h [28]. The obtained data were analyzed with AIDA image analysis software (version 4.50; Elysia-raytest GmbH, Straubenhardt, Germany). A manually drawn region of interest (ROI) was placed in the contralateral tumor-free background and sections were scaled to mean background activity; prior to quantification, the photo plate background was subtracted (in analogy to [29]). Due to single processing artifacts (e.g., freezing damage), an automated target-region segmentation for all sections was discarded, and target ROIs were created via a manually adjusted hot-seed function [27]. Tissue area and mean radioactivity concentration per tissue area were measured. The ROIs were used for volumetric approximation according to the Cavalieri method [30] on every 24th section.
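As a rough illustration of the Cavalieri approximation described above (16 µm sections, every 24th section analyzed), the sketch below sums the ROI areas and multiplies by the sampling interval; the ROI areas are invented placeholders and this is not the AIDA workflow itself.

```python
# Hedged sketch of a Cavalieri-style volume estimate: the tumor ROI area on
# each systematically sampled section is summed and multiplied by the spacing
# between sampled sections. Areas below are illustrative placeholders.

SECTION_THICKNESS_MM = 0.016   # 16 um cryosections
SAMPLING_INTERVAL = 24         # every 24th section analyzed

def cavalieri_volume(roi_areas_mm2):
    """Approximate volume (mm^3) from ROI areas on sampled sections."""
    section_spacing_mm = SECTION_THICKNESS_MM * SAMPLING_INTERVAL  # 0.384 mm
    return sum(roi_areas_mm2) * section_spacing_mm

# Example: tumor ROI areas (mm^2) on consecutive sampled sections
areas = [0.9, 2.1, 3.4, 3.0, 1.6, 0.5]
print(f"Estimated tumor volume: {cavalieri_volume(areas):.2f} mm^3")
```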
TSPO Immunohistochemistry and Hematoxylin and Eosin (H&E) Staining
Cryo-conserved brain tissue was sampled from sham or tumor-cell-inoculated mice for immunohistochemical staining. TSPO staining was performed following a standard protocol [31]. After cutting, a positive control was deparaffinized, and after short thawing, the sample slices as well as the positive control underwent Tris/EDTA buffer antigen retrieval and blocking. Slices were then incubated with the primary antibody for 1 h. As primary antibody, we used the anti-PBR antibody [EPR5384] (from Abcam, Berlin, Germany) at 1:100 dilution. The EnVision TM + Dual Link System-HRP (Dako by Agilent Technologies, Santa Clara, CA, USA) was utilized for detection of antibody binding according to the manufacturer's protocol (Kit K4065, https://www.agilent.com/cs/library/packageinsert/ public/PD04048EN_02.pdf, accessed on 8 March 2022). Slices were then counterstained with hematoxylin, dehydrated and coverslipped.
A further subset of the prepared brain sections was processed by hematoxylin and eosin (H&E) staining for histopathological analysis, after being temporarily stored at −80 °C. Photographs of the tumors were taken with a Primo Star/Axiocam 105 color microscope (Zeiss, Jena, Germany). ROIs were created with ImageJ (National Institutes of Health, Bethesda, MD, USA) and VOIs were estimated as described above.
Autoradiographies and consecutive H&E staining were conducted on the exact same slices, while immediately adjacent slices were used for immunohistochemistry.
Contrast-Enhanced Computed Tomography (CE-CT)
For the purpose of morphological correlation, a subgroup of mice underwent CE-CT scans as previously described [22]. Mice were anaesthetized with isoflurane 2% delivered via mask at 3.5 L/min in oxygen and received an intravenous bolus injection of 300 µL imeron-300 (equivalent to 90 mg iodine, Bracco Imaging, Konstanz, Germany) 3 min prior to CT acquisition for contrast enhancement. The CT scan was performed using a small animal radiation research platform (SARRP, Xstrahl, Camberley, UK) for the first cohort of mice and a Molecubes X-Cube (Molecubes, Belgium) for the second cohort of mice.
Statistical Analysis
Statistical analysis was performed using IBM SPSS Statistics (version 25; SPSS, IBM, Armonk, NY, USA). Group comparisons of VOI-based PET results between pooled sham mice and GBM mice and at different days post inoculation were tested using a one-way-ANOVA with subsequent Tukey post hoc test for multiple comparisons. Similarity of volumes in ex vivo and in vitro autoradiography was expressed with Pearson's correlation coefficient. A threshold of p < 0.05 was considered to be significant for rejection of the null hypothesis.
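The authors performed these tests in SPSS; as a hedged, open-source illustration of the same analyses (one-way ANOVA with Tukey post hoc across imaging days, and Pearson's correlation between ex vivo and in vitro autoradiography volumes), the sketch below uses scipy and statsmodels with invented placeholder values.

```python
# Hedged sketch of the statistical workflow described above, not the authors'
# SPSS analysis. All numbers are illustrative placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# TBRmax values per imaging day (placeholder data)
day4, day11, day14 = [1.02, 1.05, 1.01], [1.15, 1.19, 1.22], [1.28, 1.31, 1.35]
f_stat, p_anova = stats.f_oneway(day4, day11, day14)
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

values = np.concatenate([day4, day11, day14])
groups = ["day4"] * 3 + ["day11"] * 3 + ["day14"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey post hoc test

# Pearson correlation between ex vivo and in vitro autoradiography volumes (mm^3)
ex_vivo = [1.1, 2.9, 3.5, 8.2, 15.7]
in_vitro = [1.3, 3.1, 3.3, 8.0, 16.1]
r, p_corr = stats.pearsonr(ex_vivo, in_vitro)
print(f"Pearson r={r:.2f}, p={p_corr:.4f}")
```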
TSPO PET
A total of 52 [18F]GE-180 PET scans were carried out. All GBM mice presented increased [18F]GE-180 uptake at the inoculation site. The increased uptake in PET was co-localized with contrast enhancement in CT at the site of inoculation (see Figure 1B). The [18F]GE-180 uptake in PET visually increased over time both in signal intensity and in extent. [18F]GE-180 uptake was also visually observed in sham mice along the inoculation scar. In contrast to GBM mice, the extent of [18F]GE-180 uptake in sham mice did not increase over time (see Figure 2A).
Using a uniform tumor VOI for quantitative assessment of tracer uptake at the inoculation site, [18F]GE-180 uptake steadily increased over time in GBM mice, finally reaching a mean SUVmean of 0.36 ± 0.05 at day 14 post inoculation (p < 0.001 compared to day 4, see Figure 2).
Interestingly, while the background [18F]GE-180 uptake at early time points was comparable between GBM mice and sham mice, it slightly but significantly increased over time in GBM mice (mean SUVmean +8% from day 4 to day 14, p = 0.025) and was 10% higher in GBM than in sham animals on day 14 post inoculation (p = 0.02).
All TBR and SUVmean values for all mice included in the study are displayed in Figure 2.
Tumor target regions as depicted by [18F]GE-180 PET were in all mice co-localized with contrast enhancement in CT. Tumoral extent and growth were visually confirmed by CE-CT.
TSPO Autoradiography
All GBM mice showed highly elevated [18F]GE-180 uptake at the inoculation site. Ex vivo and in vitro autoradiography were compared on the exact same slices (n = 13 GBM mice, n = 196 slices) and visually showed a highly congruent [18F]GE-180 uptake pattern and intensity (see Figure 3A), as previously reported in a single case [16]. A quantitative comparison of volumes in ex vivo and in vitro autoradiography confirmed a high congruency between both methods, resulting in a high Pearson's correlation coefficient of r = 0.99 (n = 13, p < 0.001; see Figure 3B). Furthermore, autoradiography volumes were congruent with TSPO and H&E staining results (see Figure 4). Mean tumor volume, estimated by in vitro autoradiography, was 1.3 ± 0.7 mm³ on day 7 post inoculation, 3.3 ± 0.6 mm³ on day 11 post inoculation and 16.1 ± 8.7 mm³ on day 14 post inoculation. Sham mice showed signal enhancement consistent with the inoculation scar; however, the enhancement was mostly too small for reliable volumetry.
TSPO Immunohistochemistry and Histology
Immunohistochemical staining confirmed high TSPO expression in the right frontal lobe at the site of inoculation in GBM mice. The extent of TSPO expression in immunohistochemistry was visually congruent with the extent of [18F]GE-180 uptake in autoradiography as well as with the tumor borders in H&E staining (see Figure 4), with accentuated [18F]GE-180 uptake and TSPO expression at the tumor margin.
Moreover, TSPO expression was proven in several brain structures apart from the GL261 tumor such as CA1-CA3 neurons and dentate gyrus in hippocampus, ependyma, cerebellar Purkinje cells, and glial scar of the inoculation process (see Figure 5), which also mirrors PET and autoradiography findings in sham mice (e.g., see Figure 2A).
H&E staining showed spherical tumor growth in the right frontal lobe in all GBM cases. The histologically estimated tumor volume was 0.5 ± 0.1 mm³ on day 7 post inoculation and increased up to 16.2 ± 9.9 mm³ on day 14 post inoculation.
Discussion
In this multimodal longitudinal [18F]GE-180 PET study, we corroborate the translocator protein TSPO as an interesting target for in vivo imaging of glioblastoma, as confirmed by ex vivo autoradiography, in vitro autoradiography and TSPO staining in a preclinical setting.
[ 18 F]GE-180 PET and [ 18 F]GE-180 autoradiography showed high tracer uptake at the site of inoculation as compared to the contralateral brain hemisphere in the GL261 glioblastoma mouse model, and [ 18 F]GE-180 uptake in PET increased over time in GBM mice. Both reader-dependent visual evaluation and objective quantitative analysis provided a reliable differentiation between tumor mice and healthy sham mice from at least day 11 after inoculation. Given the rapidly growing tumor model, the comparatively modest SUV progression of tumor mice may appear to be somewhat underestimated. This may be due in part to the use of a uniform tumor VOI, which in principle includes an excessive amount of non-tumor tissue in the case of small tumors, but also in part to the role of various origins of the TSPO signal, as discussed in more detail below. The in vivo imaging results were supported by ex vivo findings, which showed a high spatial overlap between TSPO staining and [ 18 F]GE-180 uptake in high resolution autoradiography. On this occasion, we noted in a large data set of n = 196 brain slices, that in vitro autoradiography and ex vivo autoradiography show equal [ 18 F]GE-180 uptake patterns and intensity (Pearson's r = 0.99, p < 0.001, n = 13 GBM mice), which eventually allows for post hoc correlation of TSPO radioligand uptake via in vitro autoradiography with other targets on the very same brain slice (e.g., see Figure 3). This might especially facilitate regional ex vivo colocalization of TSPO expression in the frame of double tracer PET studies, which gain increasing importance also for the understanding of the glioblastoma metabolism and microenvironment [19,32,33].
Sham mice also showed a signal enhancement in the inoculation area, probably due to neuroinflammation, as further elucidated below [33,34]. However, unlike this relatively low uptake related to the traumatic inoculation procedure, the higher tumor-associated [ 18 F]GE-180 uptake clearly increased over time both in extent and intensity (see Figure 2A). Quantitative [ 18 F]GE-180 PET analysis substantiated significant differences between GBM mice and healthy sham mice from day 11 post inoculation forward using SUV measurements and tumor-to-background ratios (see Figure 2B). Compared to [ 18 F]FET in a relatable study setting [27], [ 18 F]GE-180 provides a higher tracer uptake at the site of inoculation and excellent tumor-to-background ratios in the early stages of the disease as described in human data as well [19]. As recently shown, longitudinal TSPO PET imaging in glioblastoma in vivo models is an efficient tool to monitor treatment response and delivers different information compared to established tracers [32]. The current study aimed to investigate [ 18 F]GE-180 PET imaging in one of the most common syngeneic mouse glioblastoma models, GL261. However, the inclusion of various additional tumor models in in vivo imaging studies will be of interest, as different molecularly defined subtypes of GBM may show different levels of TSPO expression, and TSPO-targeted PET imaging might therefore be useful to non-invasively assess the latter [35,36]. Although [ 18 F]GE-180 PET is widely been used in other disease entities such as neuroinflammatory diseases [37], so far, TSPO PET imaging is overall of limited value for extra-cerebral oncologic diseases and has only sporadically been evaluated in other cancer entities than GBM, such as malignant pancreatic lesions [38].
Although [ 18 F]GE-180 provides a high tumor-to-background contrast which facilitates intra-cerebral tumor delineation, in vivo tumor volumetry in TSPO PET is hampered by physiological tracer uptake and perfusion-related increased signal in adjacent structures such as Harderian glands, the olfactory epithelium, the skull base and the pituitary gland. We decided to use a reference region in the contralateral hemisphere (see Figure 1A) to consider quantitative variability through individual differences in blood flow and other mouse specific confounders [39,40]. At later time points, we found a 10% higher radiotracer uptake in the contralateral hemisphere in tumor-bearing mice compared to sham mice (p = 0.02). This finding might indicate an affection of the hemisphere contralateral to the macroscopic tumor, either being of neuroinflammatory nature, or in the scope of long-range signaling within multicellular networks in glioma, as suggested in recent discoveries on the neuroscience of gliomas [41,42]. Thus, the increased background TSPO upregulation over time in the GL261 model would support the conception of glioblastoma being a disease of the entire brain [43]. However, this finding is limited by a low number of sham cases at later time points and therefore should not be overemphasized.
It is still a matter of debate to what proportion TSPO expression in glioma is related to tumor cells and to inflammatory cells such as glioma-associated microglia/macrophages (GAM). Since both cell types have been shown to overexpress TSPO, the high [ 18 F]GE-180 uptake into GL261 tumors in the present study might represent a symbiosis of both [33,44,45]. Further cell types such as activated astrocytes and endothelial cells express TSPO in the context of brain diseases and thus are likely to contribute to a certain degree to the TSPO PET signal in brain tumors [46,47]. Currently ongoing in vivo and in vitro studies, both in humans and rodents, need to provide further clarity on the origin of TSPO radioligand uptake in brain tumors. Here, e.g., longitudinal double tracer PET studies including both [ 18 F]FET and [ 18 F]GE-180 in glioblastoma mouse models would be helpful to gain further in vivo insights on the inflammatory contribution to the TSPO PET signal. Beyond tumorassociated inflammation, the slightly increased [ 18 F]GE-180 uptake at the site of inoculation in sham-operated mice in the current study supports the assumption of an additional inflammatory component solely related to traumatic brain injury, which is in line with previous TSPO PET studies on brain injury mainly in rats [48,49]. A first study has used TSPO PET imaging in glioma using a TSPO knock out (KO) mouse strain, which might also be a promising tool to further decipher the origin of TSPO-related molecular processes in glioblastoma [11]. A major advantage of this approach is that TSPO KO mice should not express endogenous TSPO and therefore represent a "null-background host". Thus, after tumor cell inoculation, the level of tumor cell-related TSPO expression can be monitored without interfering signal from endogenously TSPO-expressing macrophages, microglia, activated astrocytes or other cells of the host. Specifically, the authors compared the TSPO radioligand uptake in PET from wildtype GL261 glioma in TSPO KO mice with the uptake of wildtype GL261 glioma in wildtype host tissue (TSPO+/+). They found that in TSPO+/+ the signal extended beyond the tumor, whereas in TSPO KO the signal was lower and restricted to the tumor, indicating that in the wildtype situation-as in our study-the TSPO PET signal in the GL261 model is indeed of diverse cellular origin [11]. The authors used magnetic resonance imaging (MRI) as a reference modality for tumor extent. However, [ 18 F]FET PET is known to depict vital brain tumor tissue beyond tumor extent in MRI [50]; therefore, it would also be highly interesting to include the above-mentioned TSPO KO model in future dual tracer PET studies with [ 18 F]GE-180 and [ 18 F]FET, all the more when investigating TSPO expression in brain tumors under therapeutic circumstances, e.g., after radiotherapy [17]. Pharmacological microglia depletion is another valuable approach for modulation of host TSPO levels, yet to be applied in experimental brain tumor models [51].
As in several comparable preliminary TSPO PET studies in GBM, a limitation of our study is the lack of resolving the contribution of specific cell types to the TSPO signal in direct correlation to in vivo imaging findings. Yet, we were able to show that [ 18 F]GE-180 uptake in autoradiography and positive TSPO staining were congruent and co-localized with [ 18 F]GE-180 uptake in PET. However, in order to clarify the source of TSPO signal at a cellular level, additional experimental methods such as immunohistochemical co-staining or innovative approaches of radiolabeled cell sorting [52] would be helpful, and such investigations are underway in preclinical brain tumor models. Some studies already aimed to quantify in vitro the respective contribution of different cell types to the overall TSPO signal, albeit with conflicting results: While some studies attribute most of the signal to neoplastic cells [45,53,54], others highlight a mutual contribution of several cell types [33] or rather claim a major role of GAMs in contributing to the overall TSPO signal in brain tumors [32]. This may in part be due to different experimental setups used in those studies. However, even within distinct studies, the range for the amount of TSPO-positive GAMs contributing to the overall TSPO cell population was high (e.g., ranging between 4.2% and 55%) [54]. This rather suggests that, on balance, the interplay of tumor and host cells in generating the TSPO signal in brain tumors is not yet sufficiently understood. In sum, TSPO PET imaging in glioma mouse models remains an exciting field harboring the perspective of better understanding the glioblastoma microenvironment at a molecular level.
Conclusions
[18F]GE-180 PET imaging facilitates non-invasive in vivo monitoring of TSPO expression in the GL261 GBM mouse model. [18F]GE-180 in vitro autoradiography is a convenient surrogate for ex vivo autoradiography, due to highly congruent tracer distribution in both methods, allowing for straightforward identification of suitable models and scan time-points on previously generated tissue sections for the design of future studies.
Use, Purpose, and Function—Letting the Artifacts Speak
Archaeologists have likely collected, as a conservative estimate, billions of artifacts over the course of the history of fieldwork. We have classified chronologies and typologies of these, based on various formal and physical characteristics or ethno-historically known analogues, to give structure to our interpretations of the people that used them. The simple truth, nonetheless, is that we do not actually know how they were used or their intended purpose. We only make inferences (i.e., educated guesses based on the available evidence as we understand it) regarding their functions in the past and the historical behaviors they reflect. Since those inferences are so fundamental to the interpretations of archaeological materials, and the archaeological project as a whole, the way we understand materiality can significantly bias the stories we construct of the past. Recent work demonstrated seemingly contradictory evidence between attributed purpose or function versus confirmed use, however, which suggested that a basic premise of those inferences did not empirically hold to be true. In each case, the apparent contradiction was resolved by reassessing what use, purpose, and function truly mean and whether certain long-established functional categories of artifacts were in fact classifying by function. The resulting triangulation, presented here, narrows the scope on such implicit biases by addressing both empirical and conceptual aspects of artifacts. In anchoring each aspect of evaluation to an empirical body of data, we back ourselves away from our assumptions and interpretations so as to let the artifacts speak for themselves.
Introduction
Every day, without pause, we select certain items, with (or perhaps without) preferred qualities, to perform some task. Whether or not the selected item was the ideal choice, the proper design for the task, selected for expedience, or even the right tool for the job, in the end the object was used to perform the task at hand. The logic behind this act is known to us, our intentions describable, and merely one among the numerous choices made that day. Our personal rationale for why the coffee cup on the desk is holding pens and not coffee is known and explainable. We might acknowledge, if pressed, that the true purpose of a coffee mug is to hold coffee and not pens-thus violating that intended purpose-but no one is likely to raise any substantial objections or be mystified by our actions. The function of that mug (regardless of original purpose) is now to hold pens and not coffee, and that is how it will be perceived by those around us as well. In our daily lives, we see no contradictions to such behavior and so books frequently become doorstops and paperweights, while mugs proliferate across the desk holding pens.
Why then, as archaeologists, would we look at an artifact and accept that its type determines its use? . . . that its form dictates its function? . . . that its decoration necessarily imparts specific meaning or purpose to the object itself? If human nature, need, adaptability, and innovation were as dynamic as they are now-since people have always been people-then interpreting artifacts and assemblages is a multifaceted question of purposes, uses, and functions beyond formal descriptions and typologies. Archaeologically, we have a complicated task describing the actual life of artifacts. Granted, this is hardly a novel observation. Archaeologists have been discussing the contextual interpretations of artifacts, and the behaviors they reflect, since the outset.
In recent years, there has been a renewed interest and theoretical trend toward recognizing the interplay between the materiality of objects and the contextual perceptions-both past and present-that give rise to meaningful interpretations (e.g., [1][2][3][4][5][6][7]). The goal of those interpretations is to present our best understanding of what people were doing in the past and why they did so. Most importantly, we want to ensure that we are not imposing our "why" onto or in place of theirs, while diligently and rigorously supporting any assertions we make. Despite the discipline's reflexive concerns over ideologically decolonizing the archaeological project, there has been something of a shortage of pragmatic methodologies for bridging conceptual gaps between culturally grounded intentions of people in the past and the intentions of those in the present. We cannot, ultimately, ask ancient people why their mug was holding a pen.
Theoretical lenses such as "symmetry" (e.g., [8][9][10][11]), "entanglement" (e.g., [12]), or "material engagement" (e.g., [13,14]) may serve to assuage our (justifiable) concern that archaeologists are carelessly tromping through the homes of other people's ancestors, but none proposes how to give an emic indigenous voice to those ancestors. Instead, we theorize a methodological distinction between the analytical evaluation of artifacts and the qualitative inferences of interpretations and meanings. We can only observe that mugs seem to hold coffee or pens more or less often than not, then extend that as an inference to the intentions regarding both purpose and function for objects of that type or form.
Unfortunately, that approach easily conflates uses, purposes, and functions as relatively synonymous in the interpretations of artifacts. This is particularly evident in functional typologies for artifact classification. Function is generally attributed through a combination of physical characteristics and analogous forms, which is then imputed to both an artifact's purpose and its use. Likewise, if texts, images, or ethnographic parallels can be found seeming to give explicit indications of any of the above (appearing to make those intangible intentions clear), what archaeologist would not take that as conclusive?
But. . . should we? We would likely still refer to our hypothetical container of pens as a "coffee mug" and not a "pen mug" regardless of the change in use. The mug may easily revert to holding coffee, or something else altogether, without significantly contradicting its prior use or its initial purpose. This is not true for all objects, however. For some, the purpose and function is held relatively sacrosanct irrespective of any other possible uses. For the former (i.e., our mug), purpose is insufficiently determining to wholly constrain its use or function. For the latter, perhaps an item of ritual or ideological significance, purpose overrides any practical range of functions and thereby determines its possible uses. This suggests a certain independence between intentionality and pragmatics that is not adequately appreciated in current approaches to interpreting artifacts and the materiality of behavior.
The problem is that use, purpose, and function are not truly synonymous. Use relates to actions, purpose to intentions, and function to technical capabilities. From our hypothetical examples, we can see that neither purpose nor use can necessarily be inferred directly from the other, while function is dependent on the perceptions of both. Instead, each aspect appears to be simultaneously determined by the combination of the others. This implies that use, purpose, and function are a system of independent normative dimensions whose interactions describe the scope of an artifact's materiality. By recognizing the distinctions and interactions between these dimensions, we will outline a better methodology for reconciling intentionality and pragmatics. To do so, we will begin with describing how common practices in artifact typology can exacerbate the difficulties of materiality and present two curious case studies that illustrate the problem. We will then look at some of the rationale behind theories of materiality, followed by a description of our model and its application.
In other words, archaeologists might not need to travel in time to ask why the mug was holding pens if use, purpose, and function are interrelated characteristics intrinsic to the object. The answer, if we can understand it, has already been given.
Functional Typology and the Lure Of Analogy
Our hypothetical mug is, of course, only one object. Archaeologists often collect hundreds or thousands of artifacts from a relatively small excavation. Conservation labs and museum collections can amass well into the millions of individual objects. Moreover, the vast majority of cultural materials collected from archaeological excavations are merely fragments, and it is not always obvious what the original artifact might have been. As a simple matter of practicality, we cannot always address every object as an individual artifact. Instead, we have to find ways to classify and categorize artifacts by generalized characteristics, aggregating like with like, as an assigned object type.
The most obvious systems of artifact classification, and perhaps the oldest, consist of classifying by physical and formal characteristics. These are, typically, further subdivided by stylistic or decorative commonalities that are in turn commonly identified by geographic regions and/or period of time. As a practical matter of data management this is both necessary and productive, but there is a hidden catch. It becomes very easy to implicitly ascribe, rather than derive from the things themselves, their uses and functions and purposes based on our contemporary perceptions of the analogous categories. The map easily becomes mistaken for the territory.
The use (and abuse) of inference by analogy has a long and storied history of debates in archaeology (see [15][16][17][18][19][20]), particularly when it comes to experimental or ethnographic analogy of artifact function. We do not intend to rehash those debates here, except to raise a specific exception with common approaches to functional typology. Our reasons for this will become clearer from our discussions below, but the essence of our objection is that the underlying premise of functional typology flattens much of the potential diversity in material behaviors more often than not. The problem begins with the conflation of formal typology with functional typology.
All systems of classification are inherently reductionist, since the generalization of traits is just that-finding an optimal reduction of within-class variation for a specific selection of traits that can then be cleanly separated into categories. Inasmuch as those traits are directly observable (e.g., size, shape, material, manufacture, surface treatment), such formal classification is immanently necessary and useful. Functional traits, however, are not observable. Function entails the evaluation and selection of the potential utility of an object's formal characteristics by the designer and/or user of that object. Since evaluation and selection involve some aspect of intention (i.e., an unobservable trait), function cannot be directly ascribed from form alone. Although we might infer a range of potential function from the utility of its formal observable traits, the specific function of the object is a matter of interpretation rather than observation.
Where this becomes even more problematic is in the incorporation of analogy into functional classification, whether by ethnographic and ethno-historical comparison or by common or implicit perceptions related to a specific artifact form. Analogy is, of course, a very useful tool for inference but can also easily mislead. Analogy only gives an illusory sense of observation. By this, we mean that the observer sees an activity "X" being performed with an object similar to object "Y", which implies that "X" could be done with "Y". While this may be useful for exploring the range of possible functions for an artifact, it does not intrinsically constrain function to that analogous use. In short, the use of analogy can only suggest, support, or refute inferences and help generate hypotheses. Much like first impressions, the illusion of observation can be (and often is) stubbornly persistent. Such analogies can quickly become attached to an entire class of artifact, solely by virtue of this appearance of "proxy" empiricism, and become "common" knowledge. They remain, however, inferences at best and assumptions at worst-neither of which is an especially solid foundation for typology.
The two examples below illustrate the interpretive consequences of conflating form and function, privileging perception over observation, and confusing map for landscape. In the first case, the standing functional interpretation for a specific class of vessel was found to be grounded more on impressions and expectations than on evidence [7]. For the second example, what now appears to be its own class of artifact had long gone relatively under-appreciated by being subsumed under a broad classification [21]. Both are situations in which the interpretations for the broader intersections of object and behavior surrounding an artifact had been obscured by the practices of archaeology itself. The voices of the archaeologists did not leave room for the object to get a word in.
2.1. The Curious Case of "The Missing Chocolate" The first case study involves a well-known type of elite-owned Classic Period Mayan cylinders (AD 550-900), commonly referred to as "chocolate" vases or pots ( Figure 1). These cylindrical vessels are typically tall, thin-walled, straight-sided containers ranging in size from a hand-held cup to ones large enough to hold two or more liters of liquid. These particular vessels bear hieroglyphic texts stating something like "his/her drinking vessel/instrument for cacao". These same vessels are often decorated with beautiful scenes of ritual events or palace life. The vessel form is broadly cup-shaped, tends to bear decoration that has to do with food and ritual, and often mentions a known foodstuff (typically maize or cacao) in the glyphic text found along the rim known as either the Primary Standard Sequence (PSS) or the dedicatory sequence (see [22][23][24]). In the 1980s, the interpretation of these vessels as fancy drink-ware for chocolate was corroborated with the positive identification of the biomarkers for cacao (i.e., alkaloids Theobromine and Theophylline) by Hurst et al. [25] in the Rio Azul vessel. Although the Rio Azul is a lidded storage vessel rather than one of the cylindrical vases, it was the first identification of a text-bearing vessel with matching residues, having kakaw-"cacao"-written twice in hieroglyphics. With that evidence of text naming vessel content, it seemed reasonable to conclude that the vases also likely held what the texts identified and were therefore used for drinking just as the glyphs seemed to say. Truthfully, if our modern minds did not immediately put together the likely scenario of someone drinking the named substance out of that vessel, it would be odd.
The problem emerged when no subsequent residue study chemically identified the biomarkers of cacao in any of the hieroglyphically labeled cylinders. Other unlabeled cup-shaped vessels had tested positive for cacao's alkaloid traces, but no labeled ones tested positive aside from the Rio Azul. The chemical methods of identifying the relevant alkaloids employed in the analyses were well-established and consistent with proven techniques, but numerous factors can affect the preservation or contamination of organic residues. Although considerations of organic preservation and deterioration cannot be fully discussed here (consider [26][27][28]), they were thoroughly addressed in the research agendas. Ancient American pottery was not fired in a kiln, and therefore remains rather porous. This means that, in some cases at least, we should be able to find viable residues absorbed into the ceramic fibers within the vessel walls even if surface residues are not found.
In fact, residues were being found in various vessel forms and the variety of residues that could be detected was expanding as the techniques were refined and the methods were applied more widely. Even still, there were no further examples of cacao in a labeled vessel other than the Rio Azul.
Not surprisingly, few of the labeled cylindrical vessels had been tested prior to 2007, in part because the Rio Azul vessel had made the question somewhat moot-the vases' text said they were for the drinking of cacao, and the Rio Azul had seemed to confirm it. The persistent lack of finds, however, made three possibilities clear: (1) not a large enough sample of cylinders had been examined; (2) the cylinders held something else; and/or (3) these vessels never held anything.
At this point, well over a hundred of the labeled "chocolate" vases have been tested, and none have found any of the biomarkers consistent with cacao. The lack of cacao residues was not, however, the only inconsistency suggesting there may be a problem with the standard interpretation of the "chocolate" vases. Squaring the lack of chemical evidence with the common interpretation started a cascade of other questions. There was no staining of the porous interiors that would result from evaporation or absorption of a liquid, but instead some showed areas of vertical abrasion and pocking as though they held something dry rather than liquid. The diverse sizes and shapes range from small cups to very large containers (e.g., more than one liter), suggesting drinking from many of these vessels would at best be an awkward proposition. Other vessels had been repaired during ancient times in such a way (e.g., crack-lacing) that would render them unusable for holding liquids. Even without such repairs, the interior surface treatment (e.g., burnish or slip) combined with the high porosity of the vessel walls would have caused a significant amount of liquid to be absorbed [7].
We are left, then, with a collection of vessels whose purpose was "for drinking" but are often in sizes that are not necessarily ideal to drink from, or apparently ideal for holding a liquid for any great duration. We have texts on those vessels that say they are for cacao, and yet they show no traces at all of the cacao beverages they ostensibly contained. The contemporary interpretation as elite drinking vessels presumes both to be true, but the empirical data on use contradict both the purported purpose (i.e., cacao vessels) and function (i.e., drinking vessels). We can, however, assume that the ancient Maya knew exactly what they meant by the text, used the vessels in just the manner they intended, and that the vases functioned in precisely that manner. It is only the interpretation of text, vessel, and intentions that are conflicted. Instead, we need to consider what viable interpretations are possible for all of the above while accounting for all of the empirical data. What else might "for drinking of cacao" mean if it is not referring to the act of drinking cacao from the vessel itself? If there is a logical way to answer the question that is not self-contradictory, then that answer makes for the stronger interpretation.
When Is a Flask Not a Flask? When It Is a "House"
Small objects, even those impeccably crafted, tend to garner less notice than large and ornate artifacts or monuments. Curiously, these small objects can be deceptively complex in their broader importance. One such class of smallish artifact is the Classic period flask container. These flasks have commonly been referred to as poison bottles, pilgrim's flasks, veneneras, pigment bottles, medicine bottles, or snuff bottles (see, for example [29][30][31][32][33]). Some of these names seem to imply distinct uses, while others borrow by analogy from specific ethnographic traditions. Present in many collections, most were described in catalogs and reports in passing-curious and beautiful, but not overly interesting.
One such small flask in particular showed that maybe not all "poison bottles" were the same. In 2012, one of these small flasks (Figure 2) was found to hold the chemical traces of processed tobacco (primarily nicotine, see [34]). More remarkably, there was glyphic text on the flask that read y-otoot 'u-mahy-"the home of his/her tobacco." In what is effectively the opposite of the previous case study, this was a case of content and text matching when it was not necessarily expected, and a class of vessels for which the purpose, use, and function had been under-specified rather than presumptively ascribed [21]. Finding nicotine residue matching up with 'u-mahy "tobacco" would have been interesting enough, but it also became quickly apparent that there was something curious about the use of y-otoot-"home of"-in the text. A number of similar flasks have inscriptions suggesting that at least some were also the 'otoot-"home"-of something ([35], p. 8), while other flask-like vessels were actually designed to look like thatched-roof houses. Boot [36] later noted that 'otoot carries the specific connotation of a home or dwelling, rather than a particular structure (i.e., a house), which suggests a certain intention of purpose to its use. Once that pattern started becoming apparent, it turned out that quite a few of the various flask-like containers were likely related, despite their formal dissimilarities. Whether small sculpted flasks labeled as 'otoot, paneled flasks of similar volume bearing house-like elements, or animal or anthropomorphic effigy flasks appearing in similar contexts, there appeared to be a functional connection as something's dwelling place. Even more curiously, it appeared that the small under-appreciated flasks may have been hiding in plain sight within the Classic period artwork all along. Along with two well-known examples of small olla-shaped flasks in use (Kerr rollout images K1377 and K3460), other images were found depicting flask-like items strung and hanging from around the neck or waist, or inserted into headdresses and back-racks (see, for several examples, [21]). It would seem that many 'otoot had been depicted within the images, unnoticed until there was reason to look.
The next logical question, of course, becomes one of whether all flasks were considered an 'otoot, and if not, when is something an 'otoot and when is it not? The term does not seem to refer to a particular form or configuration of vessel; in fact, a number of objects bear texts indicating them as the 'otoot of something, and these are neither flasks nor even necessarily pottery ([36], pp. 169-170). It is not used to describe every container, and seems to be associated primarily with small or portable containers. There is still insufficient information to determine whether only certain substances needed a "home" to be kept in, but tobacco certainly appears to have a strong association.
Isolating Use, Purpose, and Function for Artifacts
Even accepting that use, purpose, and function are interrelated but distinct concepts, discriminating and defining the boundaries between them is surprisingly complicated. Each seems intrinsically, almost irreducibly, related to the others. Artifacts, by definition, are artifacts because they were intentionally made and/or used [1][2][3][6]. Even in the case of expedient tools, the transformation from "object" to "artifact" involves a mental state of intention (i.e., "I will use this to accomplish that"). Intention appears to clearly initiate purpose and is therefore a predicate to use, with function seemingly dependent on the intersection of the two.
The above scenario describes a hierarchical relationship-intent to purpose, goal to function, and action to use-in a specific order of causation (Figure 3). Going back (again) to our proverbial coffee/pen mug: by putting pens in the mug rather than coffee, the goal and action were supplanted, and the purpose of the object was thereby redefined. Importantly, the purpose and function of that specific mug changed with the use, not the purpose or function of the broader type of object. Part of the complication is that there is no coherent way to discuss the function of any artifact without reference to the intent behind it (see [2,37,38]). Typically, though, there are multiple intentions involved. Purpose, function, and use each may have some predicate intention that may or may not be identical or even originate from the same intentional agent, and so it makes little sense to conflate them. In the hierarchical relationship described above, intention initiates a chain of causes and effects, with each subsequent step (and its associated intention) dependent on the preceding one. This explanation is problematic, however, as we have already demonstrated that use, function, or purpose can each be altered independently of the others, unconstrained by any a priori classifications. That very independence may provide clues towards a preferable model.
If we consider use, purpose, and function as distinct and separate entities (Figure 4), each with its own associated intention-i.e., what is done with the artifact (use), why that artifact is designed or chosen (purpose), and how that artifact is employed (function)-the relationships between them would depend on their operating (in this case, behavioral) intersections. Likewise, each pair of intersections-purpose and function, function and use, purpose and use-similarly represents an independent relationship. If we consider the behavioral domains that are described by each of those pairs, some familiar patterns start to emerge. The natures of the behavioral dynamics related to each intersection resolve to commonly referenced normative social structures. Consider that (1) a purposive why involves the intentionality of how an artifact should be used, and (2) a functional how entails the practicalities of what could be done with it. Therefore, use and purpose (what and why) equate to pragmatics or practices, while use and function (what and how) equate to the actual performance of the associated behavior, and finally purpose and function (why and how) directly express the normative ideals guiding that performance and practice (Figure 5). These three concepts (i.e., practice, performance, and norm) cleanly provide the core attributes for interpreting the interaction of human behavior and material things. In short, they define the interpretive domains for materiality.
Intentions, Materiality, and Artifacts
Our goal, as archaeologists, is to build the strongest (empirically supportable) chain of inference (e.g., [19,[39][40][41][42][43]) between the material evidence and the behaviors (i.e., the intentional actions). While there is a substantial body of literature exploring those inferences in a macroscopic sense (e.g., site formation processes, regional interactions, assemblage analysis, or ethnographic analogies), the behavioral interpretation of artifacts themselves has received surprisingly little in-depth theoretical attention. This is not to suggest that the association of artifacts and behavior has not been explored (e.g., [1][2][3][4][5][7][38]), but the specific narrow interface by which intention translates to implemental action is still murky in archaeological theories of materiality.
Most work on the intentional nature of artifacts deals with the concept of function. Artifact function has traditionally been problematic for a couple of reasons. Firstly, the term function itself is ontologically ambiguous in the sense that the concept is used to describe both the implemental aspects of physical utility and the intentional or social aspects by which that utility is systemically situated. Secondly, common approaches to functional explanation itself (see [44]) often fail to specify which of those aspects of function are being addressed.
In the case of material artifacts, the function of an artifact relates not only to how the object was actually used, but also to a combination of social contexts and intentions involving both the maker and the user of the object. In other words, a functional description for artifacts entails a particular set of causal relationships between intention, form, and utility. Houkes and Meijers [2] (p. 119) note that this comes, in part, from a duality of connotation in the term function itself, stating: ... artefacts have a 'dual nature' ... technical artefacts, that is, the products designed by engineers for practical purposes, are both physical bodies that have geometrical, physical, and chemical characteristics, and functional objects that have an intrinsic relation to mental states and intentional actions. This thesis can be developed in different directions, for example, conceptually, by connecting the two 'natures' in a coherent conceptualization (Kroes 2006), or epistemically, by arguing that functional knowledge cannot be reduced to knowledge of physical characteristics (Houkes 2006).
Similarly, the theoretical framework underlying the current "symmetrical" approaches (e.g., [8,9,45]) recognizes this same duality between the utilitarian and symbolic aspects of material culture. Olsen ([9], p. 586) describes this interrelationship of the material and the social, saying: However far back we go into prehistory humans have extended their social relations to non-humans with whom they have swapped properties and formed collectives (Serres 1987, p. 209; Latour 1999, p. 198). If there is one historical trajectory running all the way down from Olduvai Gorge to Post-Modernia, it must be one of increasing materiality: more and more tasks are delegated to non-human actors, more and more actions mediated by things (Olsen 2003).
Advocates of the symmetrical approach cite the counter-productivity of untangling material versus social interpretations, since the material is (in that view) innately social. Strongly influenced by Bruno Latour's actor-network theory, with various other influences such as the assemblage theory of Deleuze and Guattari, this approach to material objects prioritizes the entangled system of relationships between things and people.
Whether framed in terms of the duality of intentions (i.e., technical versus functional) or as an intermediary between intention and action, the function of an artifact described by these approaches is related directly to intentions. Intention begets function, which in turn determines the role of the artifact in its implementation. Either formulation, however, conflates the causes and effects. If intention and action are both satisfied by the artifact, which is both the product and implement of the intended action, it becomes both cause and effect simultaneously. The artifact's utility becomes conflated with its purpose. However, throughout its lifetime of uses, an artifact can be subject to numerous intentions as discussed above.
The duality between artifact as designed object and artifact as a tool of implemental action is especially problematic in archaeological contexts. The necessary predicate intentions of both the maker of an object and its subsequent user(s) are unknowns, so conceptualizing function along these lines is archaeologically impractical. The term function therefore remains ambiguous, where dualistic concepts of function can equally refer to why an object was created, how it was used, the social context of its use, the behavioral activities denoted by the object's presence or use, or some amalgamation of the above. Similarly, archaeologically derived concepts of materiality cannot rely on a symmetrical social embedding of object and context, as if they were equivalent or indistinguishable entities sharing some abstract form of agency, for much the same reason that duality is problematic.
Our assertion is that this ambiguity occurs because there is an additional dimension to the set of relationships that these views of materiality have conflated. Instead, our archaeological goal is a simultaneous understanding of an artifact's utility and purpose as discrete elements, best assessed through multiple empirical and inferential lines of evidence (e.g., [39]). Our reorganization of the common concepts surrounding function into a multi-faceted system-i.e., use, purpose, and function being the material attributes forming the edges with the three vertices of norm, practice, and performance representing the behavioral expressions-shares some commonalities with Preucel's pragmatics (e.g., [46,47]) and Alexander's theory of social performance (e.g., [48,49]). The point that the current conceptualizations and usage of the term function (as it pertains to materiality) are not reducible solely to physical form or characteristics, however, remains especially pertinent to archaeological interpretation.
An artifact's design and physical characteristics impose constraints on its use and potential range of uses, but nothing in the design or physical aspects intrinsically ensures a use that was intended by its designer or guarantees an expected function by common social perceptions. That being the case, physical characteristics cannot be viewed as adequate or sufficient empirical data to ascribe all three aspects (i.e., function, use, and purpose) reliably. Artifact function can only describe a range of possible uses based on those physical capabilities. Since we are never in a position to witness direct use, and we cannot rely solely on an artifact's morphological characteristics to necessarily specify that actual use, we need a method to deduce compound materiality from data that is archaeologically accessible.
Use-Purpose-Function Model for Materiality
The model that we are proposing separates each of the aspects of materiality from any implicit hierarchical structure between intention and action. This limits the connotations of the terms use, purpose, and function to much more discrete definitions. By doing so we avoid the intermingling of intentions with actions. Where each of the aspects in this triad intersect, we define here as their behavioral expressions norm, practice, and performance. Our purpose is to delineate an archaeological ontology for materiality in which each term is as discretely bounded as possible and to identify the empirical basis we seek, without the conflation and overlap of concepts.
Use, Purpose, and Function
The first, and perhaps most straightforward, to address is the nature of artifact use. Use refers only to the specific manner in which a particular object was employed and not the possible range of uses for which such an object could be employed. Use is the consequence of some conscious intention by an active agent. The use of an object (e.g., putting pens in a mug), but not the object itself (i.e., the mug), is the manifest implementation of that intention. In other words, the mental state of intention is an attribute of the active agent, but does not attach to the artifact itself. Note that use may be either a direct or indirect implementation of intention and action. A specific artifact may be implemental as one component in an assemblage or process (i.e., as an indirect catalyst or facilitator). Similarly, the use of an object may be a symbolic, representative, or communicative action instead.
Purpose is certainly the most abstract and ephemeral of the triad, but is in many ways the most concretely bounded. Purpose is, quite simply, the intentions of active and sentient agency that initiate some implemental action. In other words, purpose is an intention to do something. As we have discussed above, there are often multiple intentions associated with an artifact-minimally those of the maker, the user, or witnesses. The distinction we are making by separating purpose as a discrete element or dimension of materiality is that purpose, as an intention, is not entirely determinate. Purpose alone does not initiate action. Purpose is the intention to bring about the consequence of an action. This subtle, but critical, distinction allows the attachment of multiple purposes to an object. Since intentions are not strictly determinate, there is no primacy of intention over function or use and therefore no hierarchical presumption, which is otherwise implicit in most theories of materiality.
The concept of function has received the most attention and, as seen in the preceding sections, has proven to be one of the more difficult to address coherently. The problem is, in part, that the term function is used in such a diverse array of contexts that it is difficult not to attach other parallel but extraneous connotations when discussing it. Perhaps a more apt term to use here would be functionality rather than function, since the operating definition (for our purposes) is related to the physical and technical capabilities of an object. In the use, purpose, and function triad we are proposing here for artifact materiality, function is segregated from intentions inasmuch as the functional dimension of an object describes only how an object could be employed to fulfill some intended use.
Performance, Practice, and Norms
Although use, purpose, and function describe the dimensions of materiality, the characteristics of the intersections between those dimensions-i.e., performance, practice, and norms-are what become visible as the materiality of behavior. Each of these characteristic aspects of behavioral materiality are comprised of the interactions of a unique pair of the use, purpose, and function dimensions.
1. The combination of use and function, or what action is done with an artifact in conjunction with how that action is accomplished, comprises the behavioral performance of fulfilling an implemental action.
2. The use and purpose associated with that artifact, or what is done and why it is done in that manner, entail the social practices that guide the context and content of an action with an artifact.
3. The intersection of function and purpose delineates the social norm or norms describing how a material artifact should be employed and in what contexts such an implementation is appropriate.
We previously described use, purpose, and function as the "what, why, and how" of an artifact, but it is through performance, practice, and norms that those mental states of material behavior become visible. The performance of a material behavior, in the sense of some action by an intentional agent implemented through some material object, refers to the manner and method in which that behavior is accomplished. Performance is the "what" and "how" described by the interaction of use and function. Use captures the method of the implemental action itself, and function captures the manner in which the action is conducted. Performance is the physical manifestation and visible enactment of behavior. Since neither use nor function specifically indicates the mental state of intentionality or the rationale behind an action, performance is the most observable (and therefore empirically confirmable) dimension of artifact materiality. Performance combines the physical capacity of an object to be used to carry out some action with the act of actually doing so.
The practice of a material behavior, where artifact use intersects with an agent's purpose, involves the set of intentions for which a particular object and an implemental action are considered suitable. Practice, as we define the term here, is what some references to social function typically intend to describe. The conversion of intended outcome into expressed behavior (i.e., from purpose to use) entails a purposive intention and its socially contextualized practical implementation. That conversion requires the knowledge of what object to use and why in order to use it for that purpose. Unlike performance, which is a matter of purely logistical potential for use of a material object, practice incorporates the applicable social knowledge concerning both the action and object. Practices are not directly observable, since they involve contextual knowledge of mental states, but the patterns of use that they prescribe may be.
Whereas performance and practice are related to the specific and actual use of an artifact, the norms of materiality pertain only to potentials and purposes. Norms, in this narrow and restrictive sense of the term, refer to the knowledge of why certain criteria are required to satisfy an intended purpose and how an object may be utilized to do so. Norms, as we are defining them here, specifically refer to the social information related to material behavior. Similar to practices, norms are not directly observable and can only be inferred indirectly through their latent influences on other patterns. Material norms are always indeterminate, however, since purpose is indeterminate and function only entails the potential of an artifact's capabilities.
Materiality and Its Archaeological Interpretation
Archaeologists are still left with a substantial quandary when it comes to interpreting materiality. Specifically, there is no direct link between mental states, such as intention, and their empirical expression in the archaeological record. Instead, we have to build our interpretations by weaving together multiple threads of inference in order to link the observable with the unobservable. In the case of materiality, if norm, practice, and performance represent the intersections of use, purpose, and function then we need to find an empirical expression of those in terms of the artifacts and their contexts.
The goal is to identify which empirical attributes of artifacts can be used to indicate an underlying structure to their associations, and to isolate observation of those attributes from bias introduced by the interpretive implications of those observations ([43], p. 61). In archaeology, spanning the empirical gap between the archaeological record in the present and activities in the past is well established as a set of analogical inferences (e.g., [15,[17][18][19][20],[50][51][52][53]). In practical terms, the empirical support for archaeological analysis (particularly for any sort of quantitative analysis) depends on prioritizing the present, material aspect of the observable archaeological record in order to inform the interpretation of unobservable past activities and places. Ultimately, it comes down to making a clear distinction between what can be observed and what cannot.
We obviously cannot observe past intentions, so purpose and its associations (i.e., norms and practices) must always be inferred from other information. Although the norms (how and why) of the behavior of interest certainly affect the form and content of the archaeological record, their primary link to observable archaeological resources is by structuring the patterns of activity areas and associated artifacts, thus structuring the practice and contexts of performance. Determining normative constraints is most often the objective of archaeological interpretation rather than the data. Understandably, behaviors and their associated artifacts, as practices in the sense of what and why, are empirically problematic in that they require some prescriptive rationale (i.e., a purposive why) that is inherently unobservable except through its patterning effects.
By the set of associations we have described, the full materiality of an artifact is delineated across three domains of information (i.e., use, purpose, and function). Consequently, the materiality of the behaviors with which those artifacts are associated can be expressed by the intersection of each pair of domains as practice, norms, and performance. Therefore, solving for an empirical correlation between any two elements within each of those triads should necessarily indicate or constrain the range of possibilities for the third. The only archaeological path towards discerning the intentional normative dimensions of the problem is through the triangulation of the material patterns of performance, use, and function. This sort of triangulation is nothing new to archaeological practice. Archaeological interpretations have always been inferential and inductive, given the nature of our data. Accordingly, our methods rely on empirical support if those inferences are to have solid foundations ([43], p. 60). Basically, we need to work backwards from effects to causes.
The triangulation of unobservable materialities in archaeological artifacts is not substantially different. Instead of a temporal gap between present archaeological resources and past activities, the interpretation of use, purpose, and function entails bridging a conceptual or ideational gap. To do so, the material artifacts and their various networks of associations (whether by spatial organization or corresponding assemblages) need to provide sufficient empirical data with which to identify at least two of those three elements of materiality described above. Since we have no direct and viable empirical links for purpose, norm, or practice, we need to be able to find use, function, and performance from the available archaeological data.
Triangulating Artifact Materiality from Empirical Data
Since our goal is to find empirically supportable ways to interpret artifact materiality from the archaeological record, we want to work (as much as possible) from directly observable sources of data. We have identified a few of these in the preceding discussions-i.e., use, function, performance, and possibly the patterns (if not the content) of practices-from which to build a supportable chain of inference. If we can identify the material patterns and range of performance for an object (i.e., its use and function), we should then also be able to determine at least the limits of a range of practices (i.e., use and purpose). If we can at least determine the limits of the range of possible practices, especially if we can archaeologically identify some of the material patterns of those practices, it would then be reasonably supportable to demarcate a relative domain of purposive intentions associated with an artifact.
What we have been describing throughout is merely a conceptual form of triangulation, by which we work from the relationships between two or more points of reference to extrapolate some unknown point. In this case, having empirical data on what and how, we are trying to find a reasonable means to extrapolate why. Function, inasmuch as it only refers to how an object could be employed (contrary to the technical or social connotations of the term described above), may be reasonably discerned from the technical capabilities of the object. We also have empirical methods that can reasonably describe use or uses (e.g., use-wear or residue analyses). Knowing, or having reasonable belief regarding, what and how (use and function) independently is not enough, though. We also have to have access to the linking relationship between those dimensions to be able to make reasonable inferences about the remaining unknowns. Performance, the intersection between use and function, is not directly observable (having occurred in the past), but its effects can (or should) be detectable archaeologically.
In short, if we can determine use, function, and performance with any confidence from the data, there should be sufficient information from which to make reasonable inferences regarding practices, even if the specific purpose remains unknown. Building the chain of inference further, the possible range of purposes shrinks in proportion to the strength of the data available for performance and practice, given that function and use can be reasonably established. Once the range of purpose can be narrowed down, we have effectively completed our triangle of material dimensions (use, purpose, and function), joined by two of their "vertices" of relationships (i.e., performance and practice), which then completes the set of intersecting relationships for the materiality of both the artifact and the associated behavior. Finding a reasonable interpretation of the material norms associated with the object then becomes a matter of summing up the set-the norm must be the remaining normative association of function and purpose.
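Although the argument here stays at the conceptual level, the bounding logic can be expressed compactly. The following Python sketch is our own toy formalization, with hypothetical evidence categories rather than anything drawn from the case studies: it treats function as the set of physically possible uses, evidenced use as a subset of it, and derives bounds on the unobservable range of practice.

```python
# A deliberately toy formalization of the triangulation described above.
# The artifact, evidence categories, and helper names are hypothetical
# illustrations, not data from the case studies.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    function: set                            # uses the physical properties permit
    use: set = field(default_factory=set)    # uses actually evidenced

def performance(a: Artifact) -> set:
    """Performance: evidenced use intersected with physical capability."""
    return a.use & a.function

def practice_bounds(a: Artifact) -> tuple:
    """Practice (use + purpose) is unobservable, but it is bounded below
    by the evidenced performance and above by the functional range."""
    return performance(a), a.function

mug = Artifact(
    function={"hold liquid", "hold small objects", "hold plant", "paperweight"},
    use={"hold liquid", "hold small objects"},  # coffee and pen residues
)
low, high = practice_bounds(mug)
print("performance:", performance(mug))
print("practice at least:", low, "and at most:", high)
```

The sketch only illustrates that two known dimensions bound the third; the real inferential work lies in assembling the evidence that populates those sets.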
A Toy Example
What, then, would all of these abstractions about intentions and dimensions and intersections mean, in practical terms, for our hypothetical mug if we were to find it archaeologically? Our motivation has been to draw a set of empirically grounded inferences from which the more ephemeral, and more anthropologically interesting, aspects of artifact materiality can be given their proper voice. The rationale and framework for interpreting materiality that we have outlined above should, with minimal assumptions, allow us to form a reasonable approximation from observable characteristics for each edge (use, purpose, and function) and vertex (performance, practice, and norm) making up that metaphorical triangle of materiality. By using the observable characteristics of our mug, what it might have held, and what we can reasonably surmise that it did hold, we should be able to infer something of its innate story. So what would our mug have to say for itself? Imagine some archaeologist in the future excavating our office space and finding our rather simply decorated "metaphor mug" (intact, for simplicity's sake) with a couple of pens still in it. Intuition may say "pen holder" but, being a diligent and conscientious researcher, they send it back to the lab for further analysis. The analyst (who conveniently happens to be an archaeo-chemist with very thorough empiricist predilections) finds traces of ink, but not just from the pens found with it. Not satisfied with the obvious conclusions, however, they also run organic analyses and find that the mug contains small traces of caffeine and various other organic compounds that are definitely not from ink. Informed by the model we have described here, what should the future researcher conclude about our mug?
The object itself has the physical capability to hold all manner of things (e.g., hot or cold liquids, small objects): it could also be used as a candy dish, to weigh papers down, to hold a potted plant, or for any number of other perfectly reasonable possibilities. The range of function is relatively broad by virtue of the mug's physical properties. There is empirical evidence, though, for a relatively narrow subset of those possible functions, consisting of only two specific (but unrelated) uses for this particular object (i.e., pens and coffee). In performance, then, the mug would be both a container for coffee and a container for pens, but not simply a generic container, owing to the relative difference between the range of function and the range of use.
Since performance is subsequently limited to a relatively narrow range, and function is clearly not the limiting factor, we can infer that the restricting dimension must be the intentions attached to the practice (i.e., purpose). Conversely, given that the range of uses was not strictly limited to coffee, coffee-like substances, or even consumable liquids more generally, it is also possible to infer that the restricting purpose was not particularly narrow, either. We do not know those purposes specifically, but we know that they both kept our uses to a narrow subset of functions while not being so narrowly defined as to preclude diversity. Purpose and use (i.e., practice) were more closely aligned than either was with function (Figure 6). This means that, by triangulation, we can assign a relative weight not only to the dimensions of use, purpose, and function, but also to the relative influence of performance, practice, and norm. We can describe the overall materiality of the mug with a fair degree of specificity, but still can only speak to the materiality of that one mug. Again, being the diligent researcher, our future archaeologist excavates a series of such offices and finds numerous very similar metaphor mugs. Since they are formally similar, the range of function stays relatively fixed. If they all were shown to have only held coffee and/or pens (i.e., within the same set of identified performances), we would then have a consistent pattern of practice. What if, instead, the range of uses expanded (i.e., use and function tended towards parity), or if only a very small portion of the mugs ever held pens or anything but coffee (i.e., use and purpose were aligned)? What if the mugs were widely distributed, but never showed any sign of being used for anything? What if decorated mugs with pithy sayings are only used for coffee, and plain ones for coffee or pens?
For each scenario, the balance and influence shifts between use, purpose, and function as well as their combinations, giving different strength to the practical or intentional aspects of the behaviors. By specifying and triangulating from these dimensions to evaluate these relative weights, and deriving patterns from them, it is not difficult to imagine the interpretive implications of various possible configurations of the dimensions. For example, utilitarian or expedient tools would prioritize parity between function and use, while function would more likely be closely aligned with purpose for specialized tools.
The Language of Things
Iconography, hieroglyphs, or texts offer archaeologists what tantalizingly seems to be the most direct possible link to the behavior of the peoples in question. They provide an apparent means of assessing some aspect of thought and intention. The myriad archaeological applications of linguistic analysis to text and image in ancient contexts and on artifacts are well beyond the scope of this discussion. Instead, our present interest is only in the presence of text and/or image on an artifact. Imagine our mug again, long forgotten among the collapsed remains of the office. The coffee mug, remaining mainly intact or at least with all pieces present, is discovered in excavation, carefully bagged, identified, and eventually re-assembled by the patient lab staff. Much to their surprise, the coffee mug bears text (assuming it can be read) stating, "World's Best Mom" along with a big "#1" and a cartoon trophy. Should we conclude that it may have actually belonged to a renowned and highly celebrated matriarch? Should we further infer that the whole world had held some competition or search for the best and most worthy mother to be so honored and celebrated? Was this an annual competition? Conversely, it would be presumptuous to simply assume that the statement must be mere hyperbole.
The mug has been used to present text, but function and purpose are under-specified. Now consider that the mug was chipped at some point prior to its deposition, the interior heavily stained from many uses for coffee, and (presumably) later re-purposed as a pen holder. Where would the archaeologist begin? Which is more important towards the interpretation-the cup, the condition, the text, the context, or some other feature?
If we started our analyses with the text on our mug, we would want to know (1) what it is saying (i.e., its lexicon), (2) how it is saying it (its syntax), and finally (3) the statement's meaning (its semantics). Notice, though, that we arrive again at the basic triad discussed above (i.e., what, how, and why), which brings us back to use, function, and purpose. More importantly, if we consider those same intersections-paralleling those of use-purpose, function-use, and purpose-function-we also find linguistic equivalents in practice, performance, and norm. In this case, the intersections are of lexicon with semantics (practice), lexicon with syntax (performance), and syntax with semantics (norms). The underlying structure of the linguistic and textual analyses can be expressed equivalently to that derived above with respect to artifacts. In other words, materiality can equivalently be viewed as the "language" of things.
Whether an artifact is the medium for communication or (symbolically) the communication itself, the dynamic of this triad and its intersections remains the objective of interpretation. Ultimately, each aspect of information informs and constrains the others. Both the sides and the angles of the triangle are subject to the relative weights and proportions of the whole. Even if the text is a direct reflection of the thoughts and intentions of the people in question, it would remain necessary to evaluate and consider the text as an artifact as well as the text's relations to the artifact. Otherwise use, purpose, or function may easily be overlooked. The very presence of text or image is itself a use of that artifact-one of obvious intentionality. The choice of applying text, image, or decoration is a conscious action. If we conflate any aspect of the use-purpose-function triad, we are unlikely to identify the others appropriately.
Discussion
What has been presented here is an elaboration on various more informal themes and approaches applied to previous work. The three-part concept of use, purpose, and function was previously applied to the two case studies described at the beginning (the "missing chocolate" in [7]; and "flask as house" in [21]). In both of those studies, empirical evidence showed that classification practices based on form and presumptive function had greatly obscured pertinent aspects of the behavior surrounding those artifacts. In both cases, use and function had been conflated and had therefore misidentified the purpose. This article formalizes aspects of those applications of use, purpose, and function into a fuller model, explicitly defining those three dimensions and introducing the material implications of their intersections.
We began and ended our theoretical discussions with our hypothetical mug. The true question behind the mug, however, was really one of how we, as archaeologists in the present, can honestly and accurately describe the intentions of people in the past. Specifically, are we describing the artifacts that they used in a way that they would see as the recognizable and mundane objects of daily life? If not, then are we not really just imposing our own perceptions of the things of life? To do so would not allow much room for the voices of either ancient peoples or their descendants, and would instead only project our contemporary view of the world onto the past. Instead, our goal as archaeologists is to get out of the way of those past voices as much as possible, and we have argued here that the best way to do so is by grounding our interpretations as firmly as we can on the empirical. The model we have proposed, with respect to the materiality of behavior, prioritizes building chains of inference from the empirical attributes of artifacts towards the unobservable. Our intent is specifically to minimize the imposition of our own voices, and let what was already written into the artifact's stories come from the voices of their authors.
Author Contributions: Both authors contributed equally to the conceptualization and writing (original draft preparation, review, and editing) of this manuscript. All authors have read and agreed to the published version of the manuscript.
Atmospheric disturbance on the gas explosion in closed fire zone
In order to avoid serious safety accidents caused by a closed fire zone, and based on continuous monitoring of atmospheric pressure at different monitoring points in multiple mines, an atmospheric pressure fluctuation model and an air leakage model were established and analyzed. The variation with time of the oxygen and gas concentrations in the fire zone under atmospheric disturbance was obtained for different pressure differences, fire-zone volumes and sizes, wind resistances, gas emission rates, and sealing moments, so as to evaluate the explosion risk of a closed fire zone. The research shows that the mine atmosphere fluctuates with the atmosphere at the surface, and that the pressure difference between the inner and outer sides of the enclosed fire zone is affected by the periodic fluctuation of the atmosphere, showing roughly 16 h of cosine fluctuation followed by approximately 8 h at a nearly fixed value. Compared with a fire zone of poor sealing quality, a well-sealed fire zone resists atmospheric disturbance better. In a well-sealed fire zone, the reduction of the inner oxygen concentration depends mainly on dilution by methane, which accumulates easily and rises rapidly. A fire zone with poor sealing quality is easily disturbed: its inner oxygen and gas concentrations are strongly affected by the absolute gas emission and by air leakage into the fire zone, an effect that is especially obvious in fire zones with small wind resistance and small volume. An explosion is most likely at the initial stage after the fire zone is closed. The duration of the explosion danger varies under different conditions, and atmospheric disturbance may lead to repeated explosion hazards in some cases. Measures to avoid explosions should be taken according to the real-time situation, closure time, oxygen concentration, and gas concentration of the fire zone.
Introduction
The atmospheric pressure on the ground changes with the seasons, with temperature, and between morning and evening. These changes in surface atmospheric pressure are quickly transmitted into the coal mine, causing the atmospheric pressure at various points underground to fluctuate with them. This phenomenon is called the mine breathing effect. Relevant research has shown that the mine breathing effect has a significant impact on the underground air pressure (Zhou 2002). Some experts and scholars have carried out corresponding studies. Starting from the concept of the breathing phenomenon in the goaf, Li (2012a) analyzed the causes and general rules of this phenomenon and, on this basis, proposed technical measures to prevent it. Guo (2016) studied the relationship between atmospheric pressure changes and gas anomalies caused by seasonal changes and the alternation of day and night, and adopted ventilation control of the roadway to resist abnormal gas emission. Through tracer gas detection, Li (2012b) found a close relationship between the breathing phenomenon in the goaf and the atmospheric pressure change on the ground, which easily causes spontaneous combustion of the residual coal in the goaf. Zhou (2018) established an experimental platform to monitor and study the combustion state in the closed fire zone of a coal mine, and inferred the danger after the fire zone was closed. Therefore, if the pressure fluctuation caused by the breathing effect is not considered, or is not considered carefully, it will bring hidden dangers to the safe production of the mine, possibly leading to major accidents such as gas accumulation and gas explosion in the closed fire zone, and to hidden dangers in the sealing and unsealing work of the fire zone (Zhu et al. 2008; Niu et al. 2013; Zhai and Lai 2016; Shi et al. 2017; Zhang et al. 1999). Moreover, different sealing sequences for the fire zone of a coal mine will cause different airflow and gas accumulation in the fire zone, resulting in different hazards (Niu et al. 2016). Safety issues during the closure of coal mines must therefore be addressed through sound management measures (Adam and Alicja 2018; Arif et al. 2015). Formulating a set of safety management measures for closed fire zones of a mine based on the law of ground atmospheric change is thus very important.
2 Analysis on the influence of surface atmospheric pressure on underground pressure in different locations

Based on a comprehensive mine ventilation parameter detector, the barometer method is used to measure the difference in absolute static pressure between different points in the mine and a base point on the ground. The variation law of ground and underground atmospheric pressure obtained through round-the-clock monitoring can be used as an important basic parameter for closing and unsealing the fire zone and for determining its gas explosion risk. Figure 1 shows the variation of atmospheric pressure at three measuring points. It can be seen that the atmospheric pressure changes at the three monitoring points are similar and that the underground pressure is obviously affected by the ground pressure, but with a slight time lag.
Ground pressure changes occurred about 30 min before those at the underground station and about 40 min before those at the fire sealing zone, with phase differences of approximately 7.5° and 10°, respectively, while the phase difference between the enclosed fire zone and the outside is only 2.5°. In general, the wave is transmitted from top to bottom and from near to far. Therefore, it can be approximately considered that the atmospheric pressure outside the closed fire zone changes in a cosine period with the change of the ground atmosphere, which can be expressed as:

$$P_{out} = P_{average} + \Delta P_{out}\cos\left(\frac{2\pi t}{\Delta t}\right) \qquad (1)$$

where $P_{out}$ is the air pressure outside the enclosed zone, $P_{average}$ is the average pressure, $t$ is the time, $\Delta t$ is the cycle period, and $\Delta P_{out}$ is the amplitude of the pressure change. Based on Eq. (1), the periodic variation law of pressure with time at the three different monitoring points shown in Fig. 2 is calculated. Comparing Fig. 1 with Fig. 2, it can be seen that the two have a high degree of agreement. In Fig. 2, the cosine fluctuation runs from 8:00 a.m. to 24:00 p.m., with a period of about 16 h, which is close to the curve from 8:00 a.m. to 24:00 p.m. in Fig. 1 in both trend and value. The atmospheric pressure change between 24:00 p.m. and 8:00 a.m. is approximately a linear relationship with small fluctuations, which can be expressed as:

$$P_{out} = P_{out,initial} + kt \qquad (2)$$

where $P_{out,initial}$ is the atmospheric pressure at the end of the cosine fluctuation and $k$ is a constant coefficient. When the value of $k$ is small, the slope during the linear phase is negligible, so $P_{out} \approx P_{out,initial}$. All of the above shows that changes in ground atmospheric pressure have a great impact on underground pressure. Therefore, it is necessary to study how atmospheric pressure changes lead to temporal and spatial changes of the gas flow and gas composition in the closed fire zone, so as to analyze the gas explosion risk in the fire zone under the corresponding state and propose corresponding measures.
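As a quick illustration of Eqs. (1) and (2), the following Python sketch evaluates the piecewise pressure model over a day. The average pressure, amplitude, and slope k used here are illustrative placeholders, not measured values.

```python
import math

def outside_pressure(t_hours, p_avg=101_325.0, dp_out=300.0,
                     period=16.0, k=0.0):
    """P_out over one day: a cosine fluctuation of amplitude dp_out for
    `period` hours (Eq. 1), then a near-flat linear phase (Eq. 2) for the
    remaining hours. Pressures in Pa, times in hours."""
    t = t_hours % 24.0
    if t < period:  # cosine phase, Eq. (1)
        return p_avg + dp_out * math.cos(2.0 * math.pi * t / period)
    # linear phase, Eq. (2); continuous with the cosine since cos(2*pi) = 1
    p_initial = p_avg + dp_out
    return p_initial + k * (t - period)

# Sample the modeled curve every four hours
for h in range(0, 25, 4):
    print(f"t = {h:2d} h -> P_out = {outside_pressure(h):.0f} Pa")
```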
3 Analysis on the influence of ground atmospheric pressure change on air leakage in the closed fire zone

Although there are different degrees of air leakage in the closed fire zone of a mine for various reasons, the air leakage is generally not substantial. Therefore, it can be assumed that the leakage flow through the closed fire zone is laminar and that the direction of air leakage is into the fire zone. Under this assumption, the leakage is driven by the pressure difference across the sealing:

$$q_{air\ leakage} = \frac{P_{out} - P_{closed}}{R} \qquad (3)$$
where $q_{air\ leakage}$ is the air leakage volume, $P_{closed}$ is the air pressure in the enclosed zone, $P_{out}$ is the air pressure outside the closed zone, and $R$ is the total air resistance of the air leakage passage and firewall. The change of the air pressure in the closed fire zone from the initial value $P_{closed,initial}$ to $P_{closed}$ after time $t$ is expressed by Eq. (4). With the initial condition $t = 0$, $P_{closed} = P_{closed,initial}$, Eq. (4) can be solved to give Eq. (5), and substituting Eq. (5) into Eq. (3) yields Eq. (6), where $P_{closed,initial}$ is the initial air pressure in the closed zone and $t$ is the time. From Eq. (6), it can be seen that the air leakage volume $q_{air\ leakage}$ is affected by the air pressure $P_{out}$ outside the closed zone, the air pressure $P_{closed}$ inside the closed zone, and the total wind resistance $R$ of the air leakage channel and the firewall. The air pressure $P_{out}$ outside the closed zone has been analyzed above and is a periodic variable. The total wind resistance $R$ of the air leakage passage and the firewall represents the shape parameter of the air leakage passage in the closed wall; for a fixed closed fire zone under study, it can be assumed constant. The air pressure $P_{closed}$ in the enclosed zone is affected by both $P_{out}$ and $R$. According to Eq. (6), $\Delta t$ can be inferred from measured and empirical data, and the air leakage $q_{air\ leakage}$ can be calculated from the real-time air pressure $P_{out}$ outside the closed zone, the total wind resistance $R$ of the air leakage channel, and the initial air pressure $P_{closed,initial}$ in the closed zone. When the outside pressure is greater than the inside pressure, airflow leaks into the fire zone from outside; conversely, the airflow in the closed fire zone flows out from the return air side.
It is assumed that the combined effects of combustion, gas generation, oxygen consumption, and adsorption inside the fire zone do not lead to a significant change of air pressure in the fire zone. The pressure change in the fire zone is mainly caused by external air leakage in and internal gas outflow, both driven by the pressure difference between inside and outside. Therefore, the initial pressure inside the fire zone can be taken to equal the pressure outside at the moment of complete closure. However, owing to factors such as coal seam cracks and uneven sealing quality, a true separation between the inside and outside of the fire zone cannot be realized. The pressure changes inside and outside are related, but with a certain phase difference, and their fluctuation ranges differ (when the sealing quality is good and the wind resistance is large, the pressure difference between inside and outside is large; conversely, when the sealing quality is poor and the wind resistance is small, the linkage between inside and outside is obvious and the pressure difference is small). According to the quality of the closed wall in the fire zone, the total wind resistance $R$ of the air leakage channel is set to 0.5, 1, 2, 3, 4, and 5 × 10^4 N·s/m^5, respectively. At the same time, according to Figs. 1 and 2 as well as relevant survey data, and allowing for measurement error, it is assumed that the pressure difference across the closed fire zone maintains an amplitude of about 600 Pa and shows cosine fluctuation (Zhou 2002; Guo 2016; Zhou 2018). Combining this with Eq. (6), the relation between air leakage volume and time over the cosine cycle is shown in Fig. 3. In Fig. 3, under the different wind resistance settings, the gas exchange between the inner and outer sides of the fire zone follows the cosine change of the external atmospheric pressure through the resulting inside-outside pressure difference. When the wind resistance is 0.5 × 10^4 N·s/m^5, the maximum air leakage $q_{max}$ = 4.8 m³/min occurs at the two ends of the cosine curve, while the air leakage in the middle part is small and symmetric. No fresh air flows into the fire zone from 13:00 p.m. to 19:00 p.m.; instead, part of the gas inside the fire zone flows out through the return air side. The better the airtightness, the larger the wind resistance: when the wind resistance is 5 × 10^4 N·s/m^5, the maximum air leakage is only $q_{max}$ = 0.48 m³/min. In summary, the better the quality of the closed wall, the greater the wind resistance and the smaller the total air leakage into the fire zone; the increase of wind resistance has a significant impact on the leakage air volume.
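A short numerical check of the linear leakage law in Eq. (3) is sketched below. The 400 Pa effective peak pressure difference is back-solved from the quoted q_max values and is purely illustrative: because Eq. (6) lets the inside pressure respond, the instantaneous difference across the sealing is smaller than the 600 Pa boundary amplitude.

```python
def leakage_m3_per_min(dp_pa, R):
    """Laminar leakage through the sealing, q = dP / R (Eq. 3).
    dp_pa in Pa, R in N*s/m^5; returns m^3/min."""
    return 60.0 * dp_pa / R

# The quoted peaks (4.8 and 0.48 m^3/min at R = 0.5e4 and 5e4 N*s/m^5)
# scale inversely with R, as the linear law requires; an effective peak
# difference of ~400 Pa reproduces both (an assumed, back-solved value).
for R in (0.5e4, 1e4, 2e4, 3e4, 4e4, 5e4):
    q = leakage_m3_per_min(400.0, R)
    print(f"R = {R:.1e} N*s/m^5 -> q_max ~ {q:.2f} m^3/min")
```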
According to Figs. 1, 2 and 3 and the above analyses, the periods during which air leaks from the outside of the fire zone into the inside run from about 8:00 a.m. to 13:00 p.m. and from about 19:00 p.m. to 24:00 p.m., each lasting about 6 h. An approximately linear phase lasts about 8 h, from about 24:00 p.m. to 8:00 a.m. From about 13:00 p.m. to 19:00 p.m., since the outside atmospheric pressure is lower than the inside pressure, almost no fresh airflow enters, and the inner gas escapes from the return air side.
4 Analysis on explosion risk in the closed fire zone with periodic fluctuation of air pressure

An explosion in a closed fire zone must meet the necessary and sufficient conditions for explosion: the oxygen concentration is above 12%, the gas concentration is between 5% and 15%, and the two conditions coincide in time and space. For this reason, it is assumed that oxygen and gas in the closed fire zone occupy the same space at every moment. This paper starts from the trends over time of the oxygen and gas concentrations in the closed space, establishes a mathematical model for analysis and calculation, and examines the magnitudes of the two concentrations and the degree to which their dangerous ranges overlap in time, so as to determine the gas explosion risk in the closed fire zone. Before solving, it is assumed that all gases in the air of the closed fire zone are uniformly mixed and that there are no chemical reactions, gas adsorption, or absorption phenomena. The change in concentration of each gas component in the air of the enclosed fire zone can be expressed as:

$$V\frac{dC}{dt} = q_i - q_e C \qquad (7)$$

where $C$ is the concentration of an air component in the closed zone, $V$ is the volume of the closed fire zone, $q_i$ is the flow of that component into the fire zone (affected by the real-time air leakage volume), and $q_e$ is the mixed gas flow out of the closed fire zone. Assuming the boundary condition $t = 0$, $C = C_0$, the above differential equation can be solved:

$$C = \frac{q_i}{q_e} + \left(C_0 - \frac{q_i}{q_e}\right)\exp\left(-\frac{q_e t}{V}\right) \qquad (8)$$

The oxygen concentration in the fire zone is reduced both by dilution with other gases and by oxidation, adsorption, and absorption; which of these dominates depends on the type of mine fire. After the air supply is reduced, the fire zone spreads and the flow of gas dilutes the air in the fire zone. For small coal mine fires with a small ignition belt and a large air volume, dilution has a greater impact on the oxygen concentration than oxygen consumption. Comparing oxygen-consumption data with dilution data shows that the influence of oxygen consumption on oxygen concentration can be ignored in most fires with small ignition belts and large air volumes (Zhou 2002). Therefore, if only the double dilution effect of the gas emission in the fire zone and the air leakage driven by the atmosphere outside the fire zone on the methane and oxygen concentrations is considered, then $q_i = q_a C_a$ can be substituted into Eq. (7) to get:

$$V\frac{dC}{dt} = q_a C_a - q_e C \qquad (9)$$

where $q_a$ is the air flow into the fire zone (taking different values in different situations) and $C_a$ is the oxygen concentration in $q_a$, which is 21%. With the boundary condition $t = 0$, $C = C_{a0}$, the equation is solved, and the methane or oxygen concentration at a time $t$ after the fire zone is closed can be expressed as:

$$C = \frac{q_a C_a}{q_e} + \left(C_{a0} - \frac{q_a C_a}{q_e}\right)\exp\left(-\frac{q_e t}{V}\right) \qquad (10)$$

If the sealing quality is good and there are no cracks in the fire zone, no air can penetrate the fire zone, $q_a = 0$, and the methane or oxygen concentration is $C = C_{a0}\exp\left(-\frac{q_e t}{V}\right)$. Relevant parameters should be set before calculating the methane and oxygen concentrations inside the enclosed fire zone. Combined with the actual situation, it is assumed that $C_{a0} = 0.5\%$ (initial gas concentration when calculating the gas concentration), $C_{a0} = 21\%$ (initial oxygen concentration when calculating the oxygen concentration), $C_a = 21\%$, and $q_e = q_{CH_4} + q_a$, where $q_a$ is calculated from Eq. (6) and $q_{CH_4}$ is the absolute gas emission, taken as 10 m³/min and 1 m³/min for high and low gas mines, respectively.
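A minimal numerical sketch of Eq. (10) follows. It assumes a constant q_a for clarity, whereas the analysis above drives q_a from the time-varying leakage of Eq. (6), so the numbers are indicative only.

```python
import math

def concentration(t_min, V, q_a, q_ch4, C_0, C_a=0.21):
    """Well-mixed dilution model, Eq. (10):
    C(t) = (q_a*C_a)/q_e + (C_0 - (q_a*C_a)/q_e) * exp(-q_e*t/V),
    with q_e = q_ch4 + q_a. Volumes in m^3, flows in m^3/min,
    concentrations as fractions."""
    q_e = q_ch4 + q_a
    steady = (q_a * C_a) / q_e
    return steady + (C_0 - steady) * math.exp(-q_e * t_min / V)

# Oxygen in a tightly sealed 500 m^3 zone (q_a = 0) with a high absolute
# gas emission of 10 m^3/min falls to the 12% limit in about 28 min,
# consistent with the value quoted later for the closure-at-13:00 case.
for t in (0, 10, 20, 28, 60):
    o2 = 100.0 * concentration(t, 500, 0.0, 10.0, 0.21)
    print(f"t = {t:3d} min -> O2 = {o2:.1f}%")
```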
4.1 Analysis on the influence of air pressure fluctuation on oxygen concentration in the closed fire zone
According to Eq. (10), the oxygen concentration inside the closed fire zone can be calculated at different times for different zone volumes (in this paper V = 500, 1000, 2000, 5000, 10,000 and 50,000 m³). Based on Figs. 1, 2 and 3, 13:00, 19:00 and 24:00 are selected as the sealing completion times for analysis (Zhou 2002; Deng et al. 2004; Zhou et al. 2013, 2015; Wang et al. 2014; Jiao et al. 2012; Duan et al. 2010; Wang et al. 2003). Figure 4 shows the change of oxygen concentration in the fire zone from 24:00 at midnight to 13:00 the next day. In this interval, the pressure difference across the sealing gradually decreases and the air leakage decreases with it. Figure 4a shows that when the absolute gas emission in the fire zone is large, the oxygen concentration in a closed fire zone of relatively small volume drops rapidly, while that in a larger fire zone drops more slowly. The 500 m³ fire zone needs about 33 min, the 1000 m³ zone about 65 min, the 2000 m³ zone about 130 min, the 5000 m³ zone about 240 min, the 10,000 m³ zone about 380 min, and the 50,000 m³ zone about 680 min to reduce the oxygen concentration to the relatively safe limit of 12%. Except for the 50,000 m³ super-large fire zone, the oxygen concentration decreases to nearly 0, and in all of those cases it falls to 2% within about 12 h.
Figure 4b shows that when the absolute gas emission in the fire zone is small, the oxygen concentration follows a trend similar to the large-emission case. However, in all of these cases the oxygen concentration cannot be reduced to below 12% within a short time. Even accounting for oxidation and oxygen consumption in the fire zone, it would take 360 min or more to bring the oxygen concentration down to a safe limit; for safety, inert gas injection is recommended. Figure 5 shows the change of oxygen concentration inside the fire zone under the assumption that it is completely sealed at 19:00, followed through 8:00 the next morning. In this interval, the pressure difference between the inside and outside of the closed fire zone gradually increases over time, and the air leakage increases with it. As seen from Fig. 5a, when the absolute gas emission in the fire zone is large, the oxygen concentration in a relatively small closed fire zone decreases rapidly while that in a large fire zone decreases slowly: the 500 m³ fire zone needs about 25 min, the 1000 m³ zone about 55 min, the 2000 m³ zone about 100 min, the 5000 m³ zone about 230 min, the 10,000 m³ zone about 350 min, and the 50,000 m³ zone about 660 min to reduce the oxygen concentration to the relatively safe limit of 12%. In none of these cases does the oxygen concentration fall below 2%.
Figure 5b shows that when the absolute gas emission in the fire zone is small, the oxygen concentration follows a trend similar to the large-emission case, only over different time scales. Moreover, in none of the cases in the figure does the oxygen concentration fall below 12%. For the oxygen concentration to reach a safe limit within a short time, the volume of the fire zone is estimated to have to be below about 200 m³, whereas in practice fire zones are considerably larger. Because oxidation consumes oxygen to varying degrees, the 500 m³ case in Fig. 8b could fall below 12% in about 180 min, the 1000 m³ case in about 360 min, and cases below 2000 m³ after about 480 min, while the remaining larger fire zones would need more than 600 min; for the 50,000 m³ fire zone it is not achievable at all. The behaviour shown in Fig. 8b therefore applies only to small fire zones. If the oxygen concentration inside the fire zone must be brought entirely below 12%, the usual safeguard is to inject inert gas into the zone. Figure 6 shows the change of oxygen concentration inside the fire zone after complete sealing at 13:00, followed through 8:00 the next morning. As can be seen from Fig. 6a, when the absolute gas emission in the fire zone is very large, the oxygen concentration in the 50,000 m³ super-large fire zone decreases to a certain level and then remains stable, whereas in the other fire zones it first decreases and then recovers to a certain level and stabilizes. The oxygen concentration in the 500 m³ fire zone takes only about 28 min to fall to 12% and is diluted to 0% within 180 min. The 1000 m³ zone takes about 55 min to reach 12% and is completely diluted within 360 min; the 2000 m³ zone takes about 110 min to reach 12% and 420 min for complete dilution; the 5000 m³ zone takes about 220 min to reach 12% and 540 min to dilute to 2.4%; the 10,000 m³ zone takes about 320 min to reach 12% and 600 min to dilute to 4.7%. The oxygen concentration in these fire zones eventually stabilizes temporarily at around 6.8%. The 50,000 m³ super-large fire zone needs about 680 min to fall to about 12% and stabilizes temporarily at about 7.9%. In the smaller fire zones the oxygen concentration can fall below 2% within 3 to 6 h. Figure 6b shows that when the absolute gas emission in the fire zone is very small, the oxygen concentration first decreases to below 12% and then quickly rises back above 12%, so the dangerous range is crossed twice. The oxygen concentration in the 500 m³ fire zone dropped to 12% within about 220 min and then fell as low as 3.7%; after about 6 h it returned to above 12% and stabilized temporarily at around 17.4%.
The oxygen concentration in the 1000 m³ fire zone decreased to 12% within about 320 min and thereafter fell as low as 4.2%; after about 4 h it returned to above 12% and temporarily stabilized at around 17.4%. The oxygen concentration in the 2000 m³ fire zone decreased to 12% within about 560 min and then fell to a minimum of 4.4%; after about 1.5 h it returned to above 12% and temporarily stabilized at around 17.4%. The oxygen concentration in the 5000 m³ fire zone decreased to 12% within about 570 min and thereafter fell as low as 6.8%; after about 1 h it returned to above 12% and temporarily stabilized at around 17.4%. The oxygen concentration in the 10,000 m³ fire zone decreased to 12% within about 580 min and then fell to a minimum of 10.8%; after about 1 h it returned to above 12% and temporarily stabilized at around 17.4%. The oxygen concentration in the 50,000 m³ fire zone stabilized temporarily after falling to 17.7%.
In the same way, three different sealing times were used to study how the oxygen concentration changes in the initial stage of fire zone closure when the wind resistance is 5 × 10⁴ N s/m⁵, as shown in Figs. 7, 8 and 9. Comparative analysis leads to the following three conclusions: (1) For the same sealing effect (the same wind resistance), the oxygen concentration in a fire zone with a large amount of gas emission resists external atmospheric disturbance better than that in a fire zone with a small amount of gas emission: its fluctuation range is smaller, and the oxygen concentration falls into a lower range more quickly.
(2) For fire zones sealed at the same time, the oxygen concentration in a fire zone with a large amount of gas emission is more resistant to disturbance and more stable than that in a fire zone with a small amount of gas emission. When the gas emission in the fire zone is small, the external atmosphere strongly disturbs the internal oxygen concentration, which can cause the 12% oxygen limit to be exceeded repeatedly. When the gas emission is large, the oxygen concentration is also affected by external atmospheric fluctuations, but the fluctuation range is relatively small and there is essentially no possibility of rising back above 12%. This is especially true for sealing completed at 13:00. (3) For the same gas emission, the oxygen concentration in the fire zone falls to a lower value, and rebounds less, when the wind resistance is large than when it is small.

Fig. 6 Oxygen volume concentration in the closed fire zone (%) over time for different fire zone volumes (500 to 50,000 m³) and gas emissions at small wind resistance, after sealing is completed at 13:00

4.2 Analysis on the influence of pressure fluctuation on gas concentration in the closed fire zone

Figure 10 shows the change of gas concentration inside the fire zone for a period of time after the fire zone is completely sealed at 24:00 at midnight. It can be seen from Fig. 10a that when the absolute gas emission in the fire zone is large, the gas concentration in a closed fire zone of relatively small volume rises rapidly, while that in a larger fire zone rises more slowly. The 500, 1000, 2000, 5000, 10,000 and 50,000 m³ fire zones reach the lower explosion limit of 5% in about 2.5, 5, 10, 25, 45 and 180 min, respectively; they reach the upper explosion concentration of 15% in about 8, 16, 32, 80, 110 and 580 min, respectively; and the explosion-risk concentration lasts about 5.5, 11, 22, 55, 65 and 400 min, respectively. In Fig. 10b, the 500, 1000, 2000, 5000, 10,000 and 50,000 m³ fire zones reach the lower explosion limit of 5% in about 30, 55, 100, 210, 480 and 720 min, respectively; they reach the upper explosion concentration of 15% in about 120, 280, 560, 630, 700 and 1440 min, respectively; and the explosion-risk concentration lasts about 90, 225, 450, 420, 320 and 720 min, respectively. It can be seen that the smaller the volume of the closed fire zone, the earlier the gas explosion concentration limit is reached, but the shorter its duration. In a fire zone with a large amount of gas emission, the gas concentration rises much faster than in one with a small amount of gas emission; the explosion limit is reached sooner, the explosive period is shorter, and the danger is correspondingly smaller. Figure 11 shows the change of gas concentration in the fire zone for a period of time after complete sealing at 19:00. It can be seen from Fig. 11a that the 500, 1000, 2000, 5000, 10,000 and 50,000 m³ fire zones reach the lower explosion limit of 5% in about 2.5, 4.5, 9.5, 23, 45 and 140 min, respectively.
The corresponding times to reach the upper explosion concentration of 15% were about 7.5, 15.5, 31.5, 78.5, 100 and 280 min, and the explosion-risk concentration lasted about 5, 11, 22, 55, 55 and 140 min, respectively. In Fig. 11b, the 500, 1000, 2000, 5000, 10,000 and 50,000 m³ fire zones reach the lower explosion limit of 5% in about 23, 48, 95, 130, 210 and 870 min, respectively; they reach the upper explosion concentration of 15% in about 80, 115, 200, 660, 920 and 1520 min, respectively; and the explosion-risk concentration lasts about 57, 67, 105, 450, 710 and 650 min, respectively. After about 30 h, the concentration in all cases fluctuates back to 17%–18%; if the airtight wall is damaged through poor construction quality or for other reasons, leakage of fresh air may bring the mixture back to the upper explosion limit.

Fig. 10 Methane volume concentration in the enclosed fire zone (%) over time for different gas emissions and fire zone volumes (500 to 50,000 m³) at small wind resistance, after sealing is completed at 24:00 at midnight

Figure 12 shows the variation of gas concentration inside the fire zone for a period of time after the fire zone is completely sealed at 13:00. It can be seen from Fig. 12a that the 500, 1000, 2000, 5000, 10,000 and 50,000 m³ fire zones reach the lower explosion limit of 5% in about 2.5, 4.5, 9.5, 23, 45 and 180 min, respectively; they reach the upper explosion concentration of 15% in about 8, 16, 31.5, 78.5, 150 and 390 min, respectively; and the explosion-risk concentration lasts about 5.5, 11.5, 22, 55.5, 105 and 210 min, respectively. In Fig. 12b, the 500, 1000, 2000, 5000, 10,000 and 50,000 m³ fire zones reach the lower explosion limit of 5% at about 25, 50, 100, 190, 280 and 430 min, respectively; they reach the upper explosion concentration of 15% at about 82, 110, 260, 390, 440 and 450 min, respectively; and the explosion-risk concentration lasts about 57, 60, 160, 200, 160 and 20 min, respectively. Similarly, three different sealing times were used to study how the gas concentration varies in the initial stage of closure when the fire zone wind resistance is 5 × 10⁴ N s/m⁵, as shown in Figs. 13, 14 and 15. Comparative analysis shows that a fire zone with good sealing quality (large wind resistance) resists the disturbance of atmospheric pressure fluctuation better than one with poor sealing quality (small wind resistance): the fluctuation range is relatively small (within 30%) and remains in the non-explosive region. The rise of the gas concentration inside a well-sealed fire zone depends mainly on the accumulation of the emitted gas. The gas concentration in a small fire zone rises rapidly into the explosion limit range, passes through it quickly and reaches the non-explosive region, whereas the gas concentration in a large fire zone reaches the explosive range only after a long time and then remains there for a long time.
Comparatively speaking, the gas concentration rises much faster in a fire zone with a large amount of gas emission than in one with a small amount: the explosion limit is reached earlier, the explosive period is shorter, and the danger is correspondingly smaller.
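The gas-concentration behaviour described above can be approximated with the same simplified model. The sketch below estimates, for each zone volume, when the methane concentration enters the 5%–15% explosive range and how long it stays there; the constant leakage q_a = 1 m³/min and the pure-CH4 emission inflow are the same illustrative assumptions as before, not the time-varying values used for the figures.

```python
import numpy as np

def explosion_window(V, q_CH4, q_a, C0=0.005, t_max=2880.0, dt=0.5):
    """Return (minutes to reach 5% CH4, minutes to reach 15% CH4,
    minutes spent inside the 5%-15% explosive range)."""
    t = np.arange(0.0, t_max, dt)
    q_e = q_CH4 + q_a
    steady = q_CH4 / q_e                      # emitted gas treated as pure CH4
    ch4 = steady + (C0 - steady) * np.exp(-q_e * t / V)
    t_lo = t[ch4 >= 0.05][0]                  # lower explosion limit reached
    t_hi = t[ch4 >= 0.15][0]                  # upper explosion limit reached
    inside = ((ch4 >= 0.05) & (ch4 <= 0.15)).sum() * dt
    return t_lo, t_hi, inside

for V in (500, 1000, 2000, 5000, 10000, 50000):
    lo, hi, dur = explosion_window(V, q_CH4=10.0, q_a=1.0)
    print(f"V={V:>6} m^3: 5% at ~{lo:.0f} min, 15% at ~{hi:.0f} min, window ~{dur:.0f} min")
```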
4.3 Analysis on the influence of pressure fluctuation on gas explosion risk in the closed fire zone

As can be seen from Figs. 4, 5 and 6, in the initial stage of sealing, when the sealing quality of the closed wall is good and the absolute gas emission is high, the oxygen concentration rapidly decreases to below 12%; although it rebounds, it does not exceed that limit again. When the absolute gas emission is low, the oxygen concentration rebounds back above 12%. When the sealing quality of the closed wall is poor, the oxygen concentration stays above 12% regardless of whether the absolute gas emission is high or low. The gas and oxygen concentrations were analyzed with the cross-analysis method for the three different sealing times, as shown in Fig. 16. Figure 16 is a cross-analysis diagram of oxygen concentration and gas concentration based on the above analyses; the maximum and minimum values represent the two extremes of each case, and the remaining cases fall somewhere in between. The figure shows that the explosion-risk intervals corresponding to Fig. 16a (small wind resistance and large absolute gas emission) and Fig. 16c (large wind resistance and large absolute gas emission) last from several minutes to 2 or 3 h and are little affected by atmospheric fluctuations. The cases of small wind resistance and small absolute gas emission in Fig. 16b, and of large wind resistance and small absolute gas emission in Fig. 16d, correspond to relatively long gas explosion-risk intervals, lasting more than 10 h and relatively strongly affected by atmospheric fluctuations. The case in Fig. 16b is particularly sensitive to fluctuations, mainly because the gas concentration fluctuates with the atmosphere in the range of roughly 8%–17%, very close to the upper explosion limit; if external conditions change abruptly, an explosion can occur almost instantaneously. Zhou (2002) suggested that after a fire zone is sealed, the internal oxygen concentration gradually decreases and the gas concentration gradually increases, so that after a certain time the two form an explosive-risk region inside the fire zone. The difference from the present results lies in the size of the enclosed fire zone and the sealing quality considered, which change both the time required to reach the explosive region and its duration. Niu et al. (2016) showed that different sealing sequences have a certain effect on the gas concentration inside the fire zone and likewise create explosive danger regions inside it. Zhai and Lai (2016) also found that a danger zone leading to explosion appears after the fire area is sealed; the difference from the present results is that their gas and oxygen concentrations rise and fall much faster, meaning that the conditions for a gas explosion appear much earlier, but the potential explosion window is relatively short.
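A simple numerical version of this cross analysis is sketched below: it flags the interval during which both explosion conditions (oxygen above 12% and methane between 5% and 15%) hold simultaneously, under the same simplified constant-leakage model with illustrative parameters. The actual boundaries in Fig. 16 come from the time-varying leakage of Eq. (6) combined with Eq. (10).

```python
import numpy as np

def risk_interval(V, q_CH4, q_a, t_max=2880.0, dt=0.5):
    """First/last time (min) and total duration for which both explosion
    conditions hold: O2 > 12% and 5% <= CH4 <= 15%."""
    t = np.arange(0.0, t_max, dt)
    q_e = q_CH4 + q_a

    def conc(q_i, c0):
        s = q_i / q_e
        return s + (c0 - s) * np.exp(-q_e * t / V)

    o2 = conc(q_a * 0.21, 0.21)          # oxygen: leakage air holds 21% O2
    ch4 = conc(q_CH4, 0.005)             # methane: treated as pure-CH4 inflow
    risky = (o2 > 0.12) & (ch4 >= 0.05) & (ch4 <= 0.15)
    if not risky.any():
        return None
    return t[risky][0], t[risky][-1], risky.sum() * dt

for q in (10.0, 1.0):                    # high- and low-gas mines
    res = risk_interval(V=2000, q_CH4=q, q_a=1.0)
    if res is None:
        print(f"q_CH4={q}: explosion conditions never overlap")
    else:
        start, end, dur = res
        print(f"q_CH4={q}: risk from ~{start:.0f} to ~{end:.0f} min ({dur:.0f} min)")
```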
Analysis and verification
At the same time, on-site verification was carried out in combination with engineering projects. The engineering background comprises two mines. The first is the mine whose air pressure measurements are shown in Fig. 1. Its absolute gas emission is 12.8 m³/min, the volume of the fire zone is about 2500 m³, and sealing was completed at about 24:00. The oxygen concentration of the fire zone was measured for about 40 h after sealing was completed.
The results show that the oxygen concentration fluctuated between 3% and 7%, which is very close to the calculated results of this study.
The other is a mine of a large coal industry group in northern China with an approved production capacity of 1.5 million tons/year. The absolute gas emission of the mine is 103.8 m³/min and the relative gas emission is 21.9 m³/t, so it is a high-gas mine. The roadways have a rectangular section with a net width of 5.0 m, a net height of 3.6 m and a length of about 715 m. The mining face volume, including the goaf, is 12,870 m³; since the fire area volume exceeds the goaf volume, this corresponds to the large fire zone case. After the coal seam self-ignited, the relevant technical department used polyurethane foaming materials to seal the fire zone, completing the closure at 12:00 on a certain day. After sealing was completed, the original monitoring sensors, beam tubes and other means were used for gas sampling and chromatographic analysis at different positions inside the fire zone. The results showed that the polyurethane sealing quality was good and the air leakage into the fire zone was very small, so it can be regarded as a high wind resistance case. On-site observation showed that about 8 h after the closure was implemented, a gas explosion occurred in the closed fire zone. This indicates that after 8 h the oxygen and gas concentrations inside the fire zone met the necessary and sufficient conditions for an explosion, and a high-temperature region then triggered the gas explosion. Since this is a large fire zone sealed at noon, comparison of gas concentration 1 and oxygen concentration 1 in Fig. 16c shows that in the 8 h interval after sealing is completed the oxygen concentration is above 12% and the gas concentration is between 5% and 15%, so an explosion is entirely possible. This indirectly confirms the field value of the research results. Therefore, the sealing time, the sealing quality and related factors are critical to the steady state reached after the fire zone is closed.
All of the above shows that the results of this paper are, to a certain extent, consistent with the results of previous studies and have been verified in engineering projects, providing important guidance for the safe implementation of sealing operations in coal mine fire zones.
Conclusion
(1) The fluctuations in the atmospheric pressure of the mine, the quality of the closed wall in the fire zone, and the amount of gas in the fire zone have an important influence on the gas concentration and oxygen concentration inside the closed fire zone.
(2) Compared with a fire zone with poor airtight quality (low wind resistance), a fire zone with good airtight quality (high wind resistance) has better resistance to the disturbance of atmospheric pressure fluctuation, with a relatively small fluctuation range, and remains in the non-explosive region. The increase of gas concentration and the decrease of oxygen concentration inside a well-sealed fire zone depend mainly on the accumulation of emitted gas.
(3) The gas concentration inside a fire zone with poor airtight quality (small wind resistance) is easily disturbed by the external atmosphere and fluctuates relatively strongly; in some situations the conditions for explosion are repeatedly reached or approached.
(4) The time at which sealing of the fire zone is completed has an important influence on the variation and fluctuation of the internal gas concentration after closure. Particular attention should be paid to fire zones with poor airtight quality.
(5) Comparison shows that the results of this paper agree closely with those of related studies, and site verification demonstrated good practical value, so the results can effectively guide the safe conduct of sealing work in coal mine fire areas.
ck2017zkyb001) and Open Cooperative Innovation Fund of Xi'an Institute of Modern Chemistry (No. 204-J-2019-0387). Thanks.
Pemphigus vulgaris aggravated by obsessive-compulsive behavior: the importance of adjuvant topical occlusive dressing
Dear Editor,
Pemphigus vulgaris (PV) can be a difficult clinical diagnosis when mucosal involvement is absent. IgG4 anti-Dsg1 autoantibodies are associated with the pathogenesis of skin lesions and anti-Dsg3 autoantibodies with mucosal lesions. Serologically, the predominantly cutaneous presentation shows circulating anti-Dsg1 and anti-Dsg3 autoantibodies, with a tendency toward higher titers of anti-Dsg1 than anti-Dsg3, and corresponds to a rare clinical phenotype of pemphigus vulgaris. 1 This is a case report of a 64-year-old male patient with a history of depression, type 2 diabetes mellitus, alcoholism, and liver cirrhosis. He was referred with a previous diagnosis of PV because of difficulties in therapeutic management, with a suggestion for rituximab therapy. He had numerous ulcerated lesions covered by hemorrhagic and honey-colored (meliceric) crusts, predominantly on the face, pinna and cervical region (Fig. 1). No mucosal lesions were observed. Because of the exuberance and atypical clinical presentation of the condition, new biopsies were performed, which confirmed the diagnosis of PV by histopathology and direct immunofluorescence. The clinical and laboratory investigation corroborated the aforementioned comorbidities. Serologies for hepatitis and HIV infection were negative. The patient had been using prednisone 0.85 mg/kg for two years without improvement.
During the hospitalization, a compulsive daytime pattern of lesion manipulation was identified, which exacerbated the pre-existing lesions and led to crust formation over them, apparently explaining the lack of response to treatment. After psychiatric evaluation, sertraline 50 mg/day was started together with psychotherapy, and dressings with polyhexamethylene biguanide (PHMB) gel, rayon, and occlusion were applied (Fig. 2). It was also decided to add azathioprine 150 mg/day and to maintain the prednisone dose. There was an immediate, visible improvement after two days of the established therapy and a significant improvement after 40 days (Fig. 3).
A follow-up study of patients with pemphigus (in the broad sense) found an incidence of depression 1.98 times higher than in the control group, and 2.42 times higher when pemphigus was associated with low income. This higher frequency of depression is attributed to the chronic, relapsing, stigmatizing and debilitating course of the disease. 2 Obsessive-compulsive disorders (OCD), in this case a compulsive and repetitive excoriation (skin-picking) impulse control disorder, often begin after a previous dermatological condition, with the face being the area most often involved. 3 Associated with the diagnosis of OCD, a high frequency of anxiety disorders was observed in 79.6% of cases and substance abuse disorders in 38.6% of patients. 4 In the reported case, psychotherapeutic and pharmacological support, local care, and the use of an occlusive dressing that prevented local manipulation were essential to the success of the treatment. The dressing was changed daily, with topical care and use of PHMB, under medical and nursing supervision.
Corticosteroid therapy is the first choice for pemphigus, frequently supplemented by adjuvant corticosteroid-sparing therapies. 5 Recently, rituximab has been suggested as a first-line drug for severe or recalcitrant cases. 5 This report aims to highlight the importance of identifying psychological disorders associated with dermatological diseases, to emphasize comprehensive care of the patient, and to call attention to the importance of complementary topical care, which is often undervalued in clinical practice.
Financial support
None declared.
The landscape of somatic copy-number alteration across human cancers
A powerful way to discover key genes playing causal roles in oncogenesis is to identify genomic regions that undergo frequent alteration in human cancers. Here, we report high-resolution analyses of somatic copy-number alterations (SCNAs) from 3131 cancer specimens, belonging largely to 26 histological types. We identify 158 regions of focal SCNA that are altered at significant frequency across multiple cancer types, of which 122 cannot be explained by the presence of a known cancer target gene located within these regions. Several gene families are enriched among these regions of focal SCNA, including the BCL2 family of apoptosis regulators and the NF-κB pathway. We show that cancer cells harboring amplifications surrounding the MCL1 and BCL2L1 anti-apoptotic genes depend upon expression of these genes for survival. Finally, we demonstrate that a large majority of SCNAs identified in individual cancer types are present in multiple cancer types.
a) The number of focal amplification (red) and deletion (blue) peaks identified using GISTIC on random subsets of the data. Crosses represent individual randomizations; lines represent averages over all randomizations for a given sample size. b) Robustness of focal SCNA analysis to removal of each of the five most represented tumor types (Lung NSC, acute lymphoblastic leukemia, breast, myeloproliferative disorder, and colorectal) or all cell lines. The fraction of the 76 amplification peaks (red) and 82 deletion peaks (blue) still identified as peak regions when each tumor type is removed is plotted. c) Frequency of significant arm-level (large circles) and focal (small dots) amplifications (red) and deletions (blue), sorted by increasing frequency. d) Method for determining confidence regions likely to include the true target of focal SCNAs. Local maxima in the G-score (G ) correspond to a "minimal common region" of overlap and generally reflect the presence of nearby "target genes" whose alteration plays a role in driving cancer growth. However, the presence of technical and biological noise ("passenger SCNAs") may displace G from the true target. DG represents the maximum local variation expected in 95% of cases due to such noise. Subtracting DG from G allows us to determine a confidence region at least 95% likely to contain the gene target (details in Mermel et al, manuscript in preparation). e) Increasing sample size leads to better resolution of likely gene targets. For each of the random tumor subsets in a), we ranked peaks by q-value and computed the median number of genes in each group of 20 peaks, starting with the most significant. High-level MCL1 amplification (blue) was defined as MCL1 copy number greater than 3x that of the chromosome 1 centromere; focal, low-level MCL1 amplification (red) was defined as MCL1 copy numbers less than this but exceeding the centromere; and polysomy of 1q (green) was defined as equal copy numbers of both MCL1 and the chromosome 1 centromere but exceeding the number of copies of the chromosome 11 centromere. c) Efficacy of doxycycline-inducible MCL1 knock-down. Western blot analysis of MCL1 protein levels in the 7 cell lines tested in Figure 3c before and after induction of inducible anti-MCL1 shRNA or non-targeting control. GAPDH was used as a protein loading control. For H2110 (MCL1 amplified) and H1792 (MCL1 unamplified), cleaved PARP levels were also determined before and after induced expression of anti-MCL1 and non-targeting shRNAs. d) siRNA knock-down efficacy for MCL1 and neighboring genes. Quantitative RT-PCR was used to measure mRNA transcript expression before and after introduction of siRNAs against the 7 non-provisional genes in the MCL1 peak in H2110 cells (as shown in Figure 3d). The expression of each transcript after knock-down is graphed as a fraction of the expression in mock-treated cell lines. No expression of CTSK was detected in mock-transfected H2110 cells. e) Comparison of the effects of multiple anti-MCL1 shRNAs and siRNAs in H2110 cells. H2110 cells were infected with three independent shRNA constructs against MCL1, and treated with an anti-MCL1 Dharmacon siRNA SMART pool and a single siRNA sequence from that pool. For each treatment, the change in cell number (proliferation rate) over 48 hours (as measured by CellTiterGlo, Promega), relative to non-targeting control, is shown. Figure 7. Clustering of tumor types by arm-level and focal SCNAs. 
a) Specific arm-level SCNAs can reach high frequencies among individual cancer types. Copy-number profiles (only arm-level SCNAs were included in this view) are displayed for samples selected among five tumor types (arranged across the x-axis) across all autosomes (positions indicated along the y-axis). Red and blue represent gains and losses, respectively. b) Arm-level SCNAs distribute across cancer types by developmental lineage. For each of the 26 cancer types studied, each chromosome arm was assigned an excess amplification score representing the frequency of arm-level gain minus the frequency of arm-level loss. Positive and negative scores are displayed in red and blue, respectively. Tumor types are arranged along the x-axis according to the results of unsupervised hierarchical clustering (see Supplementary Methods) of these scores (dendrogram is on the bottom). Developmental lineage reflects the ICD-O classification scheme except for melanoma, which we designated as of neural lineage due to its derivation from the neural crest. c) All 158 significant focal events (arranged on y-axis according to significance of amplification, followed by significance of deletion) across the 26 cancer types studied in part b), arranged along the x-axis according to the results of unsupervised hierarchical clustering of excess amplification scores (dendrogram is on the bottom). d) Excess amplification scores are displayed for the 10 most significant focal amplifications (upper panel) and deletions (lower panel), ranked top to bottom and denoted by putative target genes from each region. The ordering of the tumor types along the x-axis is the same as in part c). However, there is a tendency for telomeric regions to be focally deleted. As a result, telomeric deletions have to rise less above the average level to attain significance.
DNA isolation and hybridization to arrays
Previously published SNP array datasets were generated as described (Barretina, in review) 1,2,3,4,5,6,7,8,9,10,11,12,13 . For unpublished data, DNA was obtained from cell line pellets or tumors frozen at the time of surgical dissection and maintained at -80 °C until use, with the exception of 11 gliomas from which sufficiently high-quality DNA could be obtained from paraffin-embedded samples 14 . The majority of tumors were obtained at primary surgery, with the exception of 27 prostate tumors obtained through rapid autopsy programs at the Universities of Washington 15 and Michigan 16 . Each sample was genotyped using the Sty I chip of the 500K Human Mapping Array Set (Affymetrix), containing probes to 238,270 SNP loci, according to the manufacturer's instructions. In brief, 250 ng of genomic DNA was digested with the StyI restriction enzyme (New England Biolabs), ligated to an adaptor with T4 ligase (New England Biolabs), and PCR-amplified using a 9700 Thermal Cycler I (Applied Biosystems) and Titanium Taq (Clontech) to achieve fragments ranging from 200-1100 bp. These fragments were pooled, concentrated, processed through a clean-up step, and further fragmented with DNaseI (Affymetrix) before being labeled, denatured, and hybridized to arrays. Arrays were then scanned using the GeneChip Scanner 3000 7G (Affymetrix). Samples were processed in batches of 96 on a single plate using a Biomek FX robot with dual 96 and span-8 heads (Beckman Coulter) and a GeneChip Fluidics Station FS450 (Affymetrix) and tracked using 2D barcode racks and single tube readers (ABGene). Raw data are available at www.broad.mit.edu/tumorscape.
Generation of segmented data
Probe-level signal intensities were normalized to a common reference array using quantile normalization 17 and combined to form SNP-level signal intensities using the model-based expression (PM/MM) method 18 . For each tumor, genome-wide copy number estimates were obtained using tangent normalization, in which tumor signal intensities are divided by signal intensities from the linear combination of all normal samples that is most similar to the tumor (to be described in greater detail in Getz et al, in preparation). This linear combination of normal samples tends to match the noise profile of the tumor better than any set of individual normal samples, thereby reducing the contribution of noise to the final copy-number profile. However, similar results were also obtained using other previously described methods 19 (data not shown). Normal samples used in this process were confirmed to lack contamination with tumor cells by visual inspection of their copy-number profiles. Copy number profiles were segmented using the Gain and Loss Analysis of DNA (GLAD) algorithm 20 with default parameters. Results were robust to modification of these parameters or use of Circular Binary Segmentation 21 (data not shown). SNP markers within previously mapped CNVs 22 were removed, as were the 10,000 SNPs with the highest absolute G-scores (see below) in our panel. Segments containing fewer than 6 SNPs were removed.
Determination of SCNA lengths and amplitudes
Copy-number profiles were deconstructed into individual SCNAs as shown in Supplementary Figure 1a. The method (to be described in greater detail in Mermel et al, in preparation) determines the minimum number of SCNAs required to reconstruct the copy-number profile. Initially, amplifications are only allowed to overlap amplifications and vice versa for deletions, providing a unique solution to the lengths and amplitudes of these SCNAs. In reality, however, amplifications may overlap deletions, leading to many possible SCNA combinations that could produce a given profile. We applied an iterative optimization algorithm to determine which of these solutions was most likely. Here, the distributions of lengths and amplitudes for SCNAs determined in one iteration were then used to score the likelihood of different possible SCNA combinations in the next iteration. To reduce computation time, the number of possible SCNA combinations was limited by allowing only two SCNAs per chromosome to form basal copy-number levels with which both amplification and deletion SCNAs might overlap. These basal SCNAs were separated by a single breakpoint that might reside anywhere in the chromosome.
Length and amplitude thresholds
The length of each SCNA was converted into chromosome-arm units by calculating the fraction of each chromosome arm covered by the SCNA; for SCNAs that cross the centromere, the length is expressed as the sum of the fractions of each chromosome arm covered by the SCNA. This normalization allowed for the comparison of events occurring on chromosome arms of different length and results in length values ranging between 0 and 2. Five chromosomes (13, 14, 15, 21, and 22) have fewer than 8 probes mapping to the short (p) arm; for these chromosomes, only the q-arm is counted, resulting in a maximal SCNA length of 1. Removal of these chromosomes does not substantially affect the distribution of SCNA lengths as shown in Figure 1a or Supplementary Figure 2, nor does it explain the excess of single-arm length SCNAs relative to focal SCNAs of nearly the same size (data not shown).
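A minimal sketch of this arm-unit normalization is shown below. The arm coordinates and the example SCNA are hypothetical; the actual analysis uses the SNP marker positions and the hg18 arm boundaries.

```python
def scna_arm_length(scna_start, scna_end, arms):
    """arms: list of (arm_start, arm_end) tuples for the chromosome
    (a single tuple for acrocentric chromosomes where only q is counted).
    Returns the SCNA length in chromosome-arm units, in [0, 2]."""
    total = 0.0
    for arm_start, arm_end in arms:
        overlap = min(scna_end, arm_end) - max(scna_start, arm_start)
        if overlap > 0:
            total += overlap / (arm_end - arm_start)
    return total

# Hypothetical example: an SCNA spanning the centromere of a chromosome
# whose p arm covers 0-60 Mb and q arm 60-160 Mb.
print(scna_arm_length(30e6, 110e6, [(0, 60e6), (60e6, 160e6)]))   # 0.5 + 0.5 = 1.0
```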
SCNAs with lengths > 0.98 chromosome arms were used for arm-level analyses and SCNAs with lengths < 0.5 chromosome arms were used for focal analyses. The results of these focal analyses were not significantly different when the focal length threshold was varied from 0.3 to 0.98 (data not shown).
Only SCNAs with copy number changes >0.1 or <-0.1 inferred copies were included in subsequent analyses. These thresholds were achieved in 0.35% and <0.1% of amplifications and deletions in normal samples (representing rare germline CNVs and occasional analytic artifact).
Assessing the significance and tissue distribution of arm-level SCNAs
Across the entire dataset, we noted that the frequency with which chromosomal arms are measured to undergo gain or loss is negatively correlated with the size of that arm (Supplementary Figure 6). Two potential explanations for this trend are that longer chromosome arms have a lower background rate of copy number change, or that copy changes affecting larger chromosome arms are subject to a greater degree of negative selection. In either case, deviations from this trend suggest the presence of additional selective pressures. Chromosome arm-level SCNAs which are observed less frequently than predicted likely undergo additional negative selective pressure. Conversely, arm-level SCNAs that are observed more frequently than predicted are likely to be affected by either positive selection or a relative absence of negative selection.
To determine which arms were significantly enriched/depleted among copy gains and losses, and therefore suggesting the presence of additional selective pressures, we compared the expected frequency of gain and loss for each arm, determined by linear regression (average alteration frequency vs. # genes on chromosome arm), with the actual frequency observed over the entire dataset. Since samples with gain of a chromosome arm cannot have loss of the same arm, we computed the frequency of gains and loss among the undeleted and unamplified samples, respectively. By decoupling the gains and losses in this way, the frequency metric follows a binomial distribution; z-scores for each arm were calculated using the normal approximation to the binomial (Figure 1b), and the resulting p-values were corrected for multiple hypothesis testing using the Benjamini-Hochberg FDR method 23 .
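A minimal sketch of this arm-level test is shown below, assuming Python with NumPy and SciPy; the counts and the expected frequency are hypothetical stand-ins for the values obtained from the regression described above.

```python
import numpy as np
from scipy import stats

def arm_zscore(n_altered, n_eligible, expected_freq):
    """z-score for the frequency of arm-level gain (or loss) among eligible
    samples, against the expected frequency, via the normal approximation
    to the binomial; returns (z, two-sided p-value)."""
    observed = n_altered / n_eligible
    se = np.sqrt(expected_freq * (1 - expected_freq) / n_eligible)
    z = (observed - expected_freq) / se
    return z, 2 * stats.norm.sf(abs(z))

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment of a vector of p-values."""
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order] * m / (np.arange(m) + 1)
    qvals = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty_like(qvals)
    out[order] = np.clip(qvals, 0.0, 1.0)
    return out

# Hypothetical example: 420 of 2800 eligible samples gained an arm whose
# expected gain frequency from the regression is 10%.
z, p = arm_zscore(420, 2800, 0.10)
print(z, p, benjamini_hochberg([p, 0.2, 0.03]))
```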
To assess how these tissue specific arm-level patterns compared across tumor types, we computed the frequency of arm-level gain minus the frequency of arm-level loss for each arm within each tumor type for which we had greater than 20 samples (see Supplementary Figure 7b). Hierarchical clustering of the resulting values was performed using the Pearson correlation distance metric and complete linkage. Replicate clustering with multiple distance metrics and filtering criteria gave broadly similar results (data not shown). To identify the arm-level changes that most significantly differentiated between the resulting major tissue clusters, we utilized the Comparative Marker Selection Tool 24 available in the GenePattern Software Suite 25 (http://www.broad.mit.edu/cancer/software/genepattern/), using the signal-to-noise test statistic (Supplementary Table 6).
Identification of Recurrent Focal SCNAs
Significantly recurrent focal SCNAs were identified using the GISTIC methodology 19 , with three improvements described below (and to be described in greater detail in Mermel et al, in preparation). The motivation behind GISTIC is to identify regions where SCNAs are observed significantly more frequently than the background rate. In the absence of independent estimates of the background rate, the previous version of GISTIC used the overall frequency of SCNAs across the genome, taking in account the amplitude of copy number change. In part, the improvements described below make use of the large number of segments available in this dataset to refine our estimates of the background rate of SCNA to more accurately reflect its dependence on both amplitude and length. It should be noted that the existence of widespread positive or negative selective pressure may lead to inaccurately high or low estimates of this background rate. Indeed, as described in the main text, the finding that deletions tend to preferentially avoid gene-dense regions (Figure 2b) is consistent with the presence of widespread negative selective pressures that may lead us to underestimate the background rate of deletion.
A. Scoring of the Genome
Optimally, each marker should be scored (GISTIC uses the "G-score") by the probability of undergoing all the events observed at that marker-either by multiplying the probabilities of each event, or (as is the procedure in GISTIC) adding the logs of those probabilities. With the large dataset available in the current study, we have been able to revise the scoring scheme to reflect these probabilities more accurately. The probability of a marker undergoing a focal SCNA appeared to be approximately equal for SCNAs of all lengths up to the level of a chromosome arm because the frequency of longer SCNAs was inversely proportional to their length ( Figure 1a). Therefore, we did not include a length term in the G-score, other than to separate arm-level SCNAs. We found both amplifications and deletions to be exponentially less frequent with increasing amplitude (measured as number of copies); therefore we scored each SCNA proportional to its amplitude. We also found focal deletions (not amplifications) were less frequent in regions with arm-level deletions in the same sample; these were therefore scored with more weight.
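A much simplified sketch of the marker-level scoring idea follows: each focal SCNA contributes to every marker it covers in proportion to its amplitude. The inputs are hypothetical, and the length-, arm-context- and probability-based weightings described above are omitted.

```python
import numpy as np

def g_scores(n_markers, scnas):
    """scnas: iterable of (first_marker, last_marker, amplitude) tuples for
    focal amplifications (run separately for deletions with |amplitude|)."""
    g = np.zeros(n_markers)
    for first, last, amplitude in scnas:
        g[first:last + 1] += amplitude      # amplitude-weighted frequency
    return g

g = g_scores(1000, [(100, 180, 1.2), (150, 160, 3.5), (400, 420, 0.8)])
print(g.argmax(), g.max())                  # peak marker and its G-score
```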
Another possible factor determining the background rate of SCNAs is the presence of repeat sequences or segmental duplications. Recombination of homologous DNA sequences such as segmental duplications has been posited to be a mechanism by which focal SCNAs are generated 26 . Although we did observe a statistically significant enrichment of breakpoints in regions of segmental duplication (see Main Text), the effects on the distribution of SCNAs across the genome are not clear. One expectation might be that more SCNAs would be observed near centromeres and telomeres, which are heavily enriched for repetitive sequences. We evaluated this by rescaling each chromosome arm to a single size and summing copy-number profiles across all samples and arms (Supplementary Figure 8). There was little bias toward telomeric or centromeric amplifications. Some excess of telomeric deletions were observed (at approximately 1/3 of the level required to attain significance), but we did not observe excess centromeric deletions. Due to the small magnitude of these effects and the uncertainty as to their source, we did not account for them in our model of the background rate.
An additional modification was implemented for the deletions analysis to account for the fact that deletions affecting any part of a gene are likely to have similar functional consequences. In this new approach, termed 'Gene-GISTIC', each gene (rather than SNP marker) is given a single G-score reflecting the maximal level of deletion seen anywhere in that gene, summed over all samples. One complication is that genes with more SNPs are more likely to score higher by chance alone. Gene-GISTIC corrects for this by using G-scores generated from similar-sized windows in permuted data as the null distribution when calculating significance values (to be described in detail in Mermel, et al, in preparation).
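A minimal sketch of the per-gene deletion score used by Gene-GISTIC is given below, with hypothetical toy inputs; the size-matched permutation null used for significance is omitted.

```python
import numpy as np

def gene_deletion_scores(copy_loss, gene_markers):
    """copy_loss: samples x markers array of deletion amplitudes (>= 0).
    gene_markers: dict mapping gene -> (first_marker, last_marker).
    Each gene receives the maximal deletion level observed anywhere within
    it in each sample, summed over samples."""
    scores = {}
    for gene, (first, last) in gene_markers.items():
        scores[gene] = copy_loss[:, first:last + 1].max(axis=1).sum()
    return scores

rng = np.random.default_rng(0)
loss = rng.random((50, 500)) * (rng.random((50, 500)) < 0.05)   # sparse toy deletions
print(gene_deletion_scores(loss, {"GENE_A": (10, 40), "GENE_B": (200, 210)}))
```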
The Gene-GISTIC approach provides a more accurate weighting of the significance of genes subject to frequent but non-overlapping deletions and an increase in overall power due to a reduction in the number of independent hypotheses tested (from the total number of markers on the array to the number of genes in the genome). A direct comparison of the results of Gene-GISTIC and the traditional SNP-based GISTIC deletions analysis found 82 peaks by Gene-GISTIC compared to 64 by SNP-GISTIC; 62 peaks overlapped (Supplementary Table 7). Known tumor suppressor genes tended to rank higher in the Gene GISTIC results (not shown). One potential drawback to the Gene-GISTIC approach is that regions without known genes (RefSeq genes and miRNAs were included in this study) will not be scored and potentially significant deletions may be missed. Indeed, 11 peaks were more significant according to SNP-GISTIC than Gene-GISTIC (Supplementary Table 7), likely due to the underweighting of deletions occurring outside of known genes.
B. Peak Region Identification
To identify independently significant regions in a single chromosome, GISTIC employed a greedy "peel-off" algorithm approach that identifies the most significant peak, removes all SCNAs spanning that peak, and then rescores the chromosome to identify additional significant peaks. We have modified the algorithm to increase the sensitivity for additional peaks. SCNAs are allowed to contribute to secondary peaks with a weighting proportional to the evidence that the secondary peak represents a separate event from the primary peak. In brief, after removing the SCNAs overlapping the primary peak, the next highest-scoring peak is identified. "Disjoint G-scores" for both the primary and secondary peaks are calculated based only on SCNAs that overlap one or the other peak but not both. SCNAs that overlap both peaks are then allowed to contribute to each peak with a weighting proportional to the disjoint G-score of that peak divided by the sum of disjoint G-scores over both peaks, and the significance of each peak is redetermined. The procedure is performed iteratively until no further significant peaks are identified. The modification improves the sensitivity of the method for identifying known cancer genes without substantially decreasing its specificity (to be described in detail in Mermel et al, in preparation).
C. Peak Region Boundary Determination
We have also modified the method employed by GISTIC to define the boundaries of each peak region, to add an explicit accounting for the likelihood passenger events or other sources of noise have displaced the local G-score peak from the gene target (Supplementary Figure 3d). The variations in G-scores across the genome in permuted data are tabulated to determine the likelihood of observing any given change in G-score (ΔG) over any given distance. We set the boundaries of each peak region such that the decrease in the G-score from peak to boundary had a likelihood of 5% or less, representing the 95% confidence interval for inclusion of the gene target.
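The sketch below illustrates the idea with hypothetical inputs: the peak is widened until the drop in G-score exceeds the 95th percentile of drops seen in permuted data. The real procedure tabulates the null drop as a function of marker distance; a single null distribution is used here for brevity.

```python
import numpy as np

def confidence_region(g, peak_idx, null_drops, alpha=0.05):
    """Return (left, right) marker indices of the (1 - alpha) confidence
    region around the G-score peak, given drops observed in permuted data."""
    max_noise_drop = np.quantile(null_drops, 1 - alpha)
    threshold = g[peak_idx] - max_noise_drop
    left, right = peak_idx, peak_idx
    while left > 0 and g[left - 1] >= threshold:
        left -= 1
    while right < len(g) - 1 and g[right + 1] >= threshold:
        right += 1
    return left, right

g = np.array([1, 2, 5, 9, 12, 9, 7, 3, 1], dtype=float)
print(confidence_region(g, peak_idx=4, null_drops=np.random.exponential(2.0, 10000)))
```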
The effect of gene size and density on observed SCNA frequency
To determine whether large genes are associated with peak regions of amplification or deletion, we ranked genes according to the genomic footprint of their coding sequence, defined as the largest difference between transcription start and stop sites over all annotated transcripts in genome build hg18. We computed the local gene density around each gene by counting the number of annotated genes residing within a 4 Mb window centered around the midpoint of the gene and dividing by the average number of genes in the 4 Mb window around all genes in the genome.
To determine the relationship between SCNA frequency and gene density, we first discretized each copy number profile based on the following 7 copy number ranges: < 1, 1-1.5, 1.5-1.75, 1.75-2.3, 2.3-3, 3-4, and > 4. The gene density within each of these copy number ranges was calculated by dividing the total number of genes residing within each copy number bin across all samples by the number of SNP markers covered by those regions; these density values (in genes per SNP; similar values were obtained using genes per Mb) were normalized against the average gene density across the genome in Figure 2b. We computed the significance of deviations from the average gene density by comparing the gene density for each copy number bin to the distribution of gene densities in 1e6 random permutations of identically sized regions across the genome. The green lines in Figure 2b correspond to the gene densities giving Bonferroni-corrected p-values of 0.01. These lines spread outward at extreme copy numbers because the number of segments residing within these bins is smaller.
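A minimal sketch of this permutation comparison is given below with hypothetical inputs. For brevity, the marker count is held fixed for the permuted regions, whereas the actual analysis counts the SNPs covered by each placement.

```python
import numpy as np

def gene_density(regions, gene_starts, n_markers):
    """Genes per marker within a set of (start, end) genomic regions."""
    n_genes = sum(int(np.sum((gene_starts >= s) & (gene_starts < e))) for s, e in regions)
    return n_genes / n_markers

def density_p_value(regions, gene_starts, n_markers, genome_size,
                    n_perm=1000, seed=0):
    """Two-sided permutation p-value for the gene density of `regions`
    against identically sized regions placed at random in the genome."""
    rng = np.random.default_rng(seed)
    observed = gene_density(regions, gene_starts, n_markers)
    lengths = [e - s for s, e in regions]
    null = np.empty(n_perm)
    for i in range(n_perm):
        starts = rng.integers(0, genome_size - max(lengths), size=len(lengths))
        null[i] = gene_density([(s, s + l) for s, l in zip(starts, lengths)],
                               gene_starts, n_markers)
    tail = min((null <= observed).mean(), (null >= observed).mean())
    return observed, min(2 * tail, 1.0)
```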
GRAIL Analysis
To compare the functional relatedness of the genes identified by our focal SCNA analysis, we utilized the GRAIL algorithm 27 (full methods and algorithm available at www.broad.mit.edu/mpg/grail) on amplification and deletion peak regions separately, using the default parameters. In brief, GRAIL determines the relatedness between any two genes in different peak regions based upon the frequency with which the same terms are found in PubMed abstracts citing each gene (all PubMed abstracts until December 2006 are used). Each gene is scored by its level of relatedness to all genes in all other peak regions, and assigned a p-value reflecting the likelihood of achieving such a score by chance. Each peak region is assigned the p-value of its most significant gene with a multiple hypothesis correction to reflect the number of genes in the peak. The literature terms most associated with the top genes in each peak region are noted. To confirm that the p-values assigned to the peak regions were not overestimates of significance, we compared them to similar p-values generated using 1000 permutations of the locations of the peak regions ("permuted controls" in Figure 2c).
GO Term Analysis
The latest Gene Ontology annotations were downloaded from The Gene Ontology website (http://www.geneontology.org/GO.downloads.ontology.shtml). We associated each GO term with all genes that are annotated with that term or any of its descendent terms in the GO hierarchy. We assessed enrichment of each GO term by comparing the number of genes associated with that term and present in our amplification and deletion peak regions to the number expected if these genes were distributed at random across the genome. Peak regions with greater than 25 genes were eliminated from the analysis to maximize power, and at most 2 genes from each peak region were allowed to count towards the enrichment score to eliminate confounding due to genomic clustering of close homologues. GO terms with fewer than 10 associated genes were excluded from the analysis to avoid significant enrichments based only on very small numbers of genes. The significance of the enrichment for each peak was calculated using the G-test, with an FDR correction to account for the number of hypotheses being tested.
Peak Region Overlap
To quantify the degree of overlap among peak regions identified in different datasets, we counted two peaks as being the same if their 95% confidence intervals overlap. P-values, representing the likelihood of obtaining the observed levels of overlap if peak regions were randomly distributed, were determined by permuting the locations of the peak regions in each dataset 1,000 times and determining the fraction of peaks that overlap in each permutation.
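A minimal sketch of this overlap test follows, with peaks represented as (start, end) confidence intervals; all inputs are hypothetical, and for simplicity permuted peaks are placed uniformly on a single coordinate axis rather than per chromosome.

```python
import numpy as np

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def overlap_fraction(peaks_a, peaks_b):
    """Fraction of peaks in peaks_a whose interval overlaps any peak in peaks_b."""
    return float(np.mean([any(overlaps(p, q) for q in peaks_b) for p in peaks_a]))

def overlap_p_value(peaks_a, peaks_b, genome_size, n_perm=1000, seed=0):
    """Permutation p-value: how often randomly placed peaks of the same
    sizes overlap peaks_b at least as much as the observed peaks_a do."""
    rng = np.random.default_rng(seed)
    observed = overlap_fraction(peaks_a, peaks_b)
    null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = []
        for start, end in peaks_a:
            s = int(rng.integers(0, genome_size - (end - start)))
            shuffled.append((s, s + (end - start)))
        null[i] = overlap_fraction(shuffled, peaks_b)
    return observed, float((null >= observed).mean())
```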
To count the total number of non-overlapping peak regions identified across all cancer sets, we first removed peaks that overlapped with any of the 158 peaks in the pooled analysis. The remaining peak regions were sorted by size (smallest to largest); proceeding in that order, we examined each peak for overlap with any smaller, retained peak. Whenever overlap was observed, the larger of the two peaks was removed.
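This de-duplication rule reduces to a few lines of code, sketched below with hypothetical peak coordinates:

```python
# Minimal sketch: keep the smallest peak in each group of overlapping peaks.
def count_nonoverlapping(peaks):
    # peaks: list of (chrom, start, end), assumed already filtered against
    # the pooled-analysis peaks
    peaks = sorted(peaks, key=lambda p: p[2] - p[1])    # smallest first
    kept = []
    for peak in peaks:                                  # every kept peak is smaller
        if not any(k[0] == peak[0] and k[1] <= peak[2] and peak[1] <= k[2]
                   for k in kept):
            kept.append(peak)                           # drop if a smaller peak overlaps
    return kept

peaks = [("chr9", 21_800_000, 22_100_000),   # smallest, retained
         ("chr9", 21_500_000, 23_000_000),   # larger and overlapping -> dropped
         ("chr13", 48_000_000, 49_500_000)]
print(count_nonoverlapping(peaks))           # two non-overlapping peaks survive
```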
Fluorescence in-situ hybridization (FISH)
Four-micron tissue microarray (TMA) sections were mounted on standard glass slides and baked at 60°C for at least two hours, then de-paraffinized and digested using methods described previously 28 .
TMA sections and probes were co-denatured, hybridized at least 16 hrs at 37°C in a darkened humid chamber, washed in 2X SSC at 70°C for 10 min, rinsed in room temperature 2X SSC, and counterstained with DAPI (4',6-diamidino-2-phenylindole, Abbott Molecular/Vysis, Inc.). Slides were imaged using an Olympus BX51 fluorescence microscope. Individual images were captured using an Applied Imaging system running CytoVision Genus version 3.9.
Quantitative PCR
Quantitative real-time PCR was performed with an ABI 7900 HT Sequence Detection System (Applied Biosystems) using the QuantiTect SYBR Green kit (Qiagen). Copy numbers were quantified relative to the repetitive sequence element Line-1 as previously described 30 . For MCL1, the forward and reverse primer sequences were CTTCCAAGGTAAGGGGGTTC and ACTGACTCGTTTCGGTTTCC, respectively; for BCL2L1 the forward and reverse primer sequences were CCTCTCCCGACCTGTGATAC and CTTCCTCGGAAAGTCACTCC, respectively.
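For illustration, relative copy number can be derived from Ct values by a standard 2^-ddCt calculation; the Ct values below are invented, and the calculation assumes near-100% primer efficiency and a diploid reference sample, neither of which is stated above.

```python
# Minimal sketch: relative copy number from qPCR Ct values, normalized to
# Line-1 and to a reference (normal) sample.
def relative_copy_number(ct_target_tumor, ct_line1_tumor,
                         ct_target_normal, ct_line1_normal):
    d_ct_tumor = ct_target_tumor - ct_line1_tumor
    d_ct_normal = ct_target_normal - ct_line1_normal
    ddct = d_ct_tumor - d_ct_normal
    return 2.0 * 2 ** (-ddct)   # scaled so a diploid locus reads ~2 copies

# illustrative Ct values for an amplified locus
print(relative_copy_number(22.1, 18.0, 25.3, 19.2))   # ~8 copies
```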
RNAi and cDNA expression
Inducible shRNA vectors were generated as previously described 31 using sequences targeted against MCL1 (GCATTGGCATCTTTGGATTTC) and scrambled control (GTGGACTCTTGAAAGTACTAT) 32 . Stable shRNA vectors were provided by The RNAi Consortium 33 and sequences were inserted to target MCL1 (GCTAAACACTTGAAGACCATA, GGATTGTGACTCTCATTTCTT, and GCAGGATTGTGACTCTCATTT), and BCL2L1 (CGTGTCTGTATTTATGTGTGA, CCACCAGGAGAACCACTACAT, and TGGCCTCAGAATTGATCATTT), as well as luciferase and LacZ (CGCGATCGTAATCACCCGAGT and CTCTGGCTAACGGTACGCGTA) controls. Lentiviruses were made by transfection of 293T packaging cells with a three plasmid system 34,35 . Target cells were incubated with lentivirus for one hour in the presence of 8 µg/ml polybrene. Infections leading to >30% decreases in proliferation due to viral toxicity were repeated at lower titer. Cells were selected using puromycin at 2 mg/ml over 2-3 days or until all of the non-infected cells died.
Retroviral vectors were used to introduce specific genes into immortalized lung epithelial cells 37 . MCL1 and BCL2L1 cDNAs were each introduced into pWZL-BLAST; MYC cDNA was introduced into pBABE-Puro.
Xenografts
Female nu/nu mice maintained in pathogen-free facilities were implanted subcutaneously with 5e6 cells infected with inducible shRNA vectors against MCL1 or scrambled control. Tumor size was assessed by calipers twice weekly. When tumors reached 100 mm³ (11 days post-implant), eight mice in each group were fed doxycycline 25 mg/kg po qd and eight additional control mice were fed D5W for an additional 11 days.
Flow cytometry
Adherent and floating cells were harvested after incubation overnight and stained with Annexin V-FITC (Sigma) and propidium iodide (BioVision). Flow cytometric analysis was performed on 3e4 cells using the BD LSR II flow cytometer (BD Biosciences).
Supplementary Note 1: Background and Terminology

a) Somatic vs. Germline Copy Number Changes
Throughout this paper, we use the term somatic copy number alteration (SCNA) to refer to somatic changes in the number of copies of a DNA sequence that arise during the process of cancer development. SCNA should not be confused with two similar terms, copy number variation (CNV) and copy number polymorphism (CNP), which refer to copy number changes in DNA segments present in an individual's germline DNA.
Definitions of these terms, as used throughout the manuscript, are as follows:
Somatic Copy Number Alteration (SCNA):
A sequence that is found at different copy numbers in an individual's germline DNA and in the DNA of a clonal sub-population of cells.
Copy Number Variation (CNV):
A DNA sequence that is found at different copy numbers in the germline DNA of two different individuals.
Copy Number Polymorphism (CNP):
A locus that exhibits CNV above some specified frequency (typically 1-5%) among individuals within a population.
Because not all of the cancer DNA specimens in our dataset are matched to normal DNA specimens, we cannot be entirely confident that any given copy number change observed in a cancer DNA sample was not present in the germline of the patient. To avoid confounding our analysis of somatic CNAs with germline CNVs, we have masked from our dataset all markers covering previously annotated CNPs 22 , as well as those markers found to be altered in at least 1% of the normal samples in our dataset (see Supplementary Methods, above).
The amplitude of copy number change
In the cytogenetics literature, "gains" has traditionally referred to increases of one or a small number of copies of a DNA segment, typically spanning a large genomic region. In contrast, "amplifications" has referred to more focal events that can reach much higher copy numbers. A similar distinction has been made between "losses" and "deletions". Current analytical methods do not allow the determination of absolute copy number from array-based platforms, rendering these distinctions less clear. For consistency, we refer to arm-level events as "gains" or "losses" because of their large genomic extent and tendency to involve limited copy number changes, and focal events as "amplifications" and "deletions" due to their more limited extent and propensity to reach higher copy numbers.
b) Background Rates and Selection of SCNAs
Oncogenesis is an evolutionary process 38 . DNA alterations are acquired at random according to a rate of generation that is determined by the competing processes of mutation and repair, and which may vary according to the type of aberration and the genetic and cellular context. Once acquired, these alterations may be neutral, or may be subject to positive selection (if they promote oncogenesis) or negative selection (if they have deleterious effects on the cell). In the absence of selective pressure, an alteration will be observed at a "background rate" equal to its generation rate times the number of cell divisions. The frequency with which an alteration is observed in cancer specimens is determined by both this background rate and the degree of selective advantage or disadvantage it confers.
Alterations that promote oncogenesis (often referred to as "driver events"), in particular, should be present at above the background rate. Alterations that do not contribute to the cancer phenotype (often referred to as "passenger events") may nevertheless be observed in the bulk of a cancer sample if a subsequent beneficial alteration (driver event) provides the cell a net fitness advantage. This process is often referred to as "hitch-hiking" 39 . Indeed, even somewhat deleterious alterations may achieve fixation through hitch-hiking if the subsequent driver events confer a net fitness advantage to the cell. The process by which a cell is able to reach fixation through a less fit intermediate has been described as "stochastic tunneling" 40 . The result of hitch-hiking and stochastic tunneling is that many alterations observed in cancer genomes do not promote oncogenesis.
Systematic efforts to discover all oncogenic somatic genetic alterations therefore require both an accurate model of the background rate and a sufficiently large collection of cancer samples to provide sufficient power to detect alterations occurring above this frequency 19,41,42,43 . For point mutations, reasonable estimates of the background rates are provided by the synonymous and intergenic mutation frequencies, which are believed to be selectively neutral 44 . By contrast, no clear distinction has been defined between selected and neutral SCNAs, making precise estimation of the background rates difficult. A common approach to making these estimates is to assume the background rates are similar to the overall rate of SCNA within each chromosome 45,46 or across the entire genome 19,47 .
While this approach of estimating the background mutation rate from the observed data is statistically unbiased, the fact that the observed data has already been subjected to a selective process in vivo makes it impossible to precisely distinguish variation due to differences in mutation rates from variation due to differences in the level of selective advantage or disadvantage conferred by each mutation. An additional complication is that the background rate estimated from the data will also include false positive events (due to technical noise from the measuring platform) and false negative events (that occur below the detection limit of the measuring platform). Therefore, somatic alterations may appear to occur at a significantly elevated frequency across samples for at least four reasons: (i) they are generated in that region at a rate significantly above the genome-wide average, (ii) they occur in a region subject to significantly less negative selection than the typical genomic region, (iii) they give a selective advantage to cells harboring them (i.e. they are driver alterations), or (iv) they represent systematic artifact. While the statistical background rate estimated from the observed data is useful in the identification of regions altered at statistically significant frequencies -potentially suggesting the presence of positive selection -one should not simply equate this rate with the biological background mutation rate, or assume that all mutations occurring at an elevated frequency are drivers. Conversely, one should not assume that all mutations occurring at rates equal to or lower than the estimated background rate are passengers.
The interpretation of the significance of a frequent mutation therefore depends on our understanding of its particular background rate. This rate may vary according to specific features of the mutation, such as the type of base pair substitution for point mutations or the length, magnitude, and surrounding sequence for copy number alterations. Naïve analyses which do not account for these features -by assuming, for example, that SCNAs are equally likely to occur anywhere in the genome or to be of any size -will be biased towards regions with high background mutation rates and away from regions with low background mutation rates. For example, it is known that point mutation rates vary significantly according to the type of substitution (e.g. transition vs. transversion) and sequence context (e.g. CpG vs. non CpG); various statistical methods for the analysis of point mutations take this variation into account to avoid biasing the results towards genes or regions with many mutable bases 41,42,43 . The background mutation rate may also be underestimated if many mutations confer negative selective pressure and therefore are observed less commonly than they occur. In this case, a neutral mutation observed at the true background rate may appear to be significantly enriched in cancer.
One of the goals in the analysis of SCNAs is to identify features that correlate with the frequency with which these SCNAs are observed. Whether these features influence SCNA frequencies through mechanistic effects on background mutation rates, through selective pressure, or through association with technical artifact should be determined by appropriate validation experiments.
Supplementary Note 2: The impact of sample size on focal SCNA analysis
In this paper, we have utilized the large sample collection generated by analyzing DNA specimens across multiple cancer types to increase our power to identify and resolve the targets of significant regions of focal SCNA. To understand the effects of sample size on the ability to discover targets of focal SCNA, we must separately consider the two critical steps in our focal SCNA analysis: 1) identifying that a region is undergoing SCNA significantly above the background rate, and therefore is likely to be subject to positive selection; and 2) given that a region of SCNA is undergoing selection, resolving the genomic region most likely to contain the target gene(s).
Step 1: Identifying that a region of SCNA is undergoing positive selection

The GISTIC G-score at each marker locus is constructed to estimate the probability of observing the set of SCNAs covering that locus by chance, taking into account both the frequency and mean amplitude of SCNA (see Supplementary Methods). To compute the significance of each region, the G-score is compared to the distribution of G-scores expected if the SCNAs in the region were all random events generated at the background rate. GISTIC estimates this background rate using the overall rate of focal SCNA across the genome.
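A toy version of this scoring-and-significance scheme on synthetic data is sketched below. The amplitude matrix, sample counts, and planted event are invented, and GISTIC itself computes the null distribution semi-analytically rather than by explicit permutation, so this illustrates the logic only.

```python
# Minimal sketch: a GISTIC-style G-score with a marker-permutation null.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_markers = 200, 10_000
# sparse random "passenger" amplitudes, plus a planted recurrent event
amps = (rng.random((n_samples, n_markers)) < 0.02) * \
       rng.exponential(0.5, (n_samples, n_markers))
amps[:, 4_000] += (rng.random(n_samples) < 0.15) * 1.0

g_scores = amps.sum(axis=0)   # proportional to frequency x mean amplitude

# null: permute each sample's profile across markers, preserving the
# genome-wide SCNA rate while randomizing locations
null_max = np.empty(100)
for i in range(100):
    perm = np.array([rng.permutation(row) for row in amps])
    null_max[i] = perm.sum(axis=0).max()

p_planted = (null_max >= g_scores[4_000]).mean()
print(f"G-score at planted locus = {g_scores[4_000]:.1f}, "
      f"p < {max(p_planted, 1 / len(null_max)):.2f}")
```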
For a focal SCNA occurring at a fixed frequency and average amplitude, the power to detect that region generally increases with sample size. However, the relationship between detection power and sample size is complicated by several additional issues. For one thing, combining heterogeneous sample sets can reduce the power to detect SCNAs that are primarily enriched in a single subset (by reducing the frequency of the region of interest in the combined dataset). That is, mixing cancer specimens across tissue types will diminish the power to detect true lineage-restricted SCNAs. Mixing samples with different background rates of alteration can similarly affect the statistical power of the combined analysis in ways that obscure the effect of sample size alone.
Across the 17 individual cancer types studied in our dataset, there is a weak but significant association between sample size and the number of significant focal SCNAs detected (r = 0.51, p = 0.04; data not shown). Of course, because the total number of 'true' driver SCNAs in each cancer type is unknown, the number of significant SCNAs identified in any given cancer type is not a direct measure of statistical power. A more informative measure of the relationship between sample size and statistical power is demonstrated by an analysis of randomly selected subsets of the entire dataset (Supplementary Figure 3a). Since each subset is drawn from the same total dataset, the expected frequency and background rate of each subset is, on average, the same. As can be seen, increasing the number of samples increased the number of peaks identified over all subset sizes, indicating that our increased sample size led to increased power overall. However, it is also clear that the number of peaks appears to be saturating by 3131 samples, suggesting that adding additional samples will not greatly increase our power to detect novel SCNA targets (at least for a similarly composed dataset).
Step 2: Resolving the genomic region most likely to contain the target gene(s)
Once a region of significant focal SCNA has been identified, the next step is to define the genomic boundaries likely to contain the target gene(s) of that SCNA. Most approaches to resolving this region directly or indirectly compute the minimal common region (MCR) of overlap among the SCNAs covering the significant locus, as this is the region most likely to contain a targeted gene. However, due to both technical and biological noise (e.g. segmentation artifacts or random "passenger" SCNAs that confer no selective advantage to the cell), the MCR may be displaced from the actual location of the gene targets. We have developed a statistically based approach (see above and Mermel et al., manuscript in preparation) that models the expected variations in the G-score using the observed level of noise across the genome to determine a wider region than the MCR for which we are 95% confident contains the true target gene.
The two major determinants to how narrowly a significant region of SCNA can be refined are the size of the MCR due to SCNAs overlapping the target gene (here called "driver SCNAs") and the noise level contributed by SCNAs that do not necessarily overlap the target gene (here called "passenger SCNAs"; these may represent real SCNAs or analytic artifact).
By definition, the size of the MCR can never increase with the addition of samples containing driver SCNAs, and will more typically decrease. Indeed, under the simplifying assumptions that driver SCNA breakpoints are random with a uniform distribution between 0 and some maximal distance L units away from a target gene, the minimum distance to a breakpoint will scale as 1/(n+1) 48 , where n represents the number of samples with driver SCNAs. This reduces to 1/n when n is large, implying that the expected size of the MCR is inversely proportional to the number of samples harboring the driver SCNA. In reality, the assumptions behind this derivation do not hold exactly, as there is a minimal observed SCNA length that depends on the resolution of the measuring platform, and SCNA breakpoints are likely to be scattered non-uniformly across the genome. Nonetheless, for the vast majority of focal peak regions, the model does a reasonably good job of approximating actual MCR sizes in random subsets of the dataset (data not shown), suggesting that number of samples remains the major factor limiting the resolution of most focal peaks. The fact that the MCR resolution scales inversely with the absolute number of driver SCNAs, rather than the overall frequency of aberration, implies that once a region of significant SCNA has been detected, the addition of extra samples (even if they contain a low frequency of alteration at a given locus) will only help to resolve the target gene. In particular, doubling the number of samples with the driver SCNA will halve the expected size of the MCR.
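The 1/(n+1) scaling is easy to verify by simulation under the stated uniform-breakpoint assumptions; in the sketch below, the MCR around a point target is the sum of the minima of the left and right breakpoint distances over n driver SCNAs.

```python
# Minimal sketch: expected MCR size vs number of driver SCNAs.
import numpy as np

rng = np.random.default_rng(2)
L = 1.0   # maximal breakpoint distance from the target (arbitrary units)
for n in [5, 10, 20, 40, 80]:
    # min of n uniforms on [0, L] has mean L/(n+1); MCR is left min + right min
    left = rng.uniform(0, L, size=(10_000, n)).min(axis=1)
    right = rng.uniform(0, L, size=(10_000, n)).min(axis=1)
    mcr = (left + right).mean()
    print(f"n = {n:3d}: simulated mean MCR = {mcr:.4f}  "
          f"(theory 2L/(n+1) = {2 * L / (n + 1):.4f})")
```

Doubling n from 40 to 80 roughly halves the mean MCR, as the text states.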
The relationship between the noise level due to passenger SCNAs and sample size is difficult to model as it depends on the particular mix of samples in the dataset as well as the underlying error model of the measuring platform and analytical methods. Insofar as the noise around a given locus is unbiased, the errors from additional samples with passenger SCNAs will tend to cancel, whereas the signal contributed by samples with driver SCNAs will tend to add. Overall, this will result in more confident boundary estimation with greater numbers of samples. In fact, according to the central limit theorem, the error in boundary estimation will decrease as 1/√N, where N represents the total number of samples (including those with driver and passenger SCNAs). This result, like the one above, suggests that increasing numbers of samples will tend to provide more precise estimates of the location of the target gene.
Empirically, we observe that for all but the most frequent regions of SCNA (where we are likely saturating the resolution limit of the array), our ability to resolve the target region is roughly inversely proportional to the size of a randomly chosen subset (see Supplementary Figure 3e), as predicted by the models above. The median number of genes per peak region roughly halves when increasing the sample size from 1600 to 3131, suggesting that further improvements in resolution could be achieved with further increases in the sample size.
Supplementary Note 4: A Pooled analysis of arm-level SCNAs
Several previous studies have analyzed arm-level SCNAs in large numbers of cancer samples characterized by low-resolution array or cytogenetic technologies 49,50 . These studies have identified arm-level SCNAs observed frequently both within and across cancer subtypes. Moreover, these arm-level SCNAs have been shown to segregate by cancer type, with cancers of similar developmental origin showing similar patterns of SCNA.
In parallel to our approach to focal SCNAs, we compared frequencies of arm-level SCNA to estimates of their background rates. In many ways, this analysis serves to highlight certain broad similarities between arm-level and focal SCNAs.
As with our focal SCNA analysis, our analysis of arm-level SCNAs began with a systematic evaluation of the observed rate of these events across the genome. We observed that arm-level alterations are more common in short rather than long chromosome arms (Supplementary Figure 6). The correlation is stronger when the length of the chromosome arm is measured by number of genes rather than megabases (p = 0.0005). This trend is observed in separate analyses of 25 of the 26 cancer types most represented in our dataset. The sole exception is hepatocellular carcinoma, which shows no trend in either direction, in part due to a very high frequency of amplification of the longest chromosome arm, 1q. In 13 of these 26 cancer types, including examples from all developmental lineages, this trend reached statistical significance within a single type (data not shown). Although both focal and arm-level SCNAs exhibit decreasing frequency with length, the strength of the trend differs in the two cases. Several possibilities may account for this, including differences in the mechanisms by which these SCNAs are generated, the effects of selection, and experimental artifact.
A caveat to this analysis is that we do not distinguish between whole-chromosome and single-arm-level SCNAs, although the mechanisms and rates between these may differ. Indeed, in separate analyses of these two types of SCNA, both trend towards fewer events in SCNAs covering more genes. However, this trend was significant only for whole-chromosome SCNAs (p = 0.003), not single-arm-level events (p = 0.28) (data not shown). This may be due to the ambiguities inherent in attempting to separate these two types of SCNA: namely, any whole-chromosome SCNA is equivalent to concordant SCNAs in both of its arms. Single-arm-level SCNAs can only be detected when the two arms are discordant (as is frequently observed with deletion of 8p and amplification of 8q). As a result, fewer single-arm-level SCNAs will be detected, reducing the power available to identify significant trends. Moreover, any SCNA of an acrocentric chromosome (chromosomes 13, 14, 15, 21, and 22) is inherently ambiguous, as it is simultaneously a whole-chromosome and single-arm SCNA. For these reasons, we present a unified analysis of arm-level SCNAs that includes whole-chromosome SCNAs.
The prevalence of specific arm-level SCNAs, however, is not fully explained by the number of genes present in each of these arms. Indeed, the high frequency of specific arm-level gains and losses suggests enrichment due to selective pressure, as has been noted in many prior publications 50,51,52 . To our knowledge, however, none of these prior publications has determined the statistical significance of arm-level SCNA by explicitly comparing the frequencies of arm-level SCNAs to the expected rate given their gene number (see Supplementary Methods, above). Across all cancers, 11 of the 39 autosomal chromosome arms exhibit copy number gains and 17 exhibit copy number losses significantly more often than predicted by the number of genes they contain (Figure 1b; see Supplementary Methods). The vast majority of these are strikingly significant, with the most prominent being amplifications of 1q, 20q, and 7p (p < 1e-85 in each case), and deletions of 17p, 9p, and 13q (p < 1e-33 for each). Interestingly, the most significantly deleted arms contain some of the most frequently mutated tumor suppressor genes, including TP53 (17p), CDKN2A/B (9p), and RB1 (13q), suggesting that the striking enrichment of loss of these arms may be due largely to these genes (Supplementary Table 8). Only nine of the 39 chromosome arms are neither significantly gained nor lost. Despite the finding that most chromosome arms exhibit significant gains or losses, only one (14q) shows both (p = 0.003).
Indeed, the striking significance of these arm-level SCNAs across cancer reflects a directional consistency across many different cancer types. In particular, we analyzed arm-level SCNAs separately in each of the 17 cancer types represented by greater than 40 samples (Supplementary Table 8). The 11 significantly gained chromosome arms identified in the pooled analysis were found to be independently gained in a median of 8 cancer types (range 2-11); these same arms were only rarely found to undergo significant loss in any cancer type (median 0, range 0-2 types). Similarly, the 17 significantly deleted arms in the pooled analysis were found to be independently lost in a median of 4 cancer types (range 2-9), and were only sporadically gained in specific subtypes (median 1, range 0-2 types; note that these gains were predominantly seen in hematopoietic cancers). Chromosome 14q, the only arm found to be both gained and lost in the pooled analysis, was significantly gained in 4 cancer types (acute lymphoblastic leukemia, non-small cell lung carcinoma, small cell lung cancer, and prostate carcinoma) and lost in 3 cancer types (GIST, melanoma, and renal carcinoma). The mutually exclusive gains or losses observed for nearly all chromosome arms across large numbers of cancer types suggest that the selective pressures that shape these events operate in tissues throughout the body rather than being confined to limited, tissue-specific microenvironments.
We were also interested in the extent to which the significant arm-level SCNAs are shared across tissue boundaries. Prior studies have shown many arm-level SCNAs to be prevalent in multiple cancer types 50,51,52 . We compared the arm-level SCNAs identified as significant in each of the 17 well-represented cancer types to those identified in their "complement" (i.e. the entire dataset excluding the cancer type in question). Similar to focal SCNAs, we observed that the large majority (median of 87%) of the arm-level SCNAs identified in any cancer type were also significant in the complement (versus 37% overlap expected by chance). Across all the cancer types, we identified 26 'lineage-restricted' events not found in the complementary pooled analysis (19 arm-level gains and 7 arm-level losses), for an average of 1.6 new arm-level SCNAs per tissue type (range 0-7). Nine of these arm-level gains are identified exclusively among hematopoietic cancers. These lineage-restricted arm-level SCNAs may reflect important lineage-specific biology. An interesting example is 13q, which is frequently lost across most cancer types, but is gained in 50% of colorectal cancers, possibly due to the oncogenic effects of CDK8 and the unique requirement for intact RB1 (both on 13q) observed in colorectal cancer 28,53 . Chromosome 2 is the only chromosome not significantly altered in at least one cancer type.
Supplementary Note 5: Comparison of focal peak regions to 18 prior publications
To compare our focal peak regions to the results of prior high-resolution cancer copy-number analyses, we compared these regions to a set of 18 publications which reported copy-number regions of interest determined through the use of oligonucleotide arrays on at least 40 samples within any of the 17 major cancer types in our dataset 1,4,6,7,11,19,54,55,56,57,58,59,60,61,62,63,64,65 .
Among the 76 peak regions of amplification reported here, 18 had not been identified in any of the prior publications (Supplementary Table 3). For each region of interest, most of these publications reported the minimal common region of overlap across their sample set; here we report a more conservative peak region that is much wider than the minimal common region of overlap to account for the effects of biological and technical noise. Nevertheless, of the 58 amplified regions identified in both this study and at least one of the prior 18 publications, 33 were found to be narrower (and therefore better-resolved) in the present analysis. The size of these regions was a median of 30% of the minimum size of the overlapping regions of interest in any of these prior 18 publications. For example, the peak region including GRB2 was identified in one of these 18 publications, but is only 2% of the size of the region in that publication, greatly improving the ability to focus on GRB2 as a possible target. Indeed, GRB2 is a member of the molecular adaptor family of genes, which we find to be highly enriched among the peak regions of amplification (see Main Text) and, although not known to be an oncogene, is known to play a central role in cancer cell cycle and motility 66 .
Among the 82 peak regions of deletion reported here, 18 had also not been identified in any of the prior publications. Our deletion analysis was performed at gene-level resolution to achieve greater power in detecting non-overlapping deletions affecting large genes (see Supplementary Methods), whereas all the prior publications extended to marker-level resolution. Nevertheless, among the 64 regions identified in both this study and at least one of the prior publications, 21 were found to be narrower in the present analysis, with a median size of 10% of the minimum size from the prior publications. A more comparable marker-level analysis of our data (SNP-GISTIC, see Supplementary Methods) exhibited peak regions narrower than the previously reported regions in 73% of cases (data not shown).
Supplementary Note 6: Tissue-type clustering of arm-level and focal SCNAs
We were interested in examining how the SCNAs identified in the pooled analysis vary across individual cancer types, focusing on the 26 cancer types represented by at least 20 samples in our collection. Some of the arm-level SCNAs occur at very high frequencies within individual subtypes (Supplementary Figure 7a). Indeed, 13 of the 26 cancer types exhibited at least one arm-level SCNA that was present in the majority of samples of that tumor type. By contrast, focal SCNAs were rarely present in the majority of samples of a given cancer type, with only 6 of 26 types exhibiting a focal SCNA present in a majority of samples.
We were also interested in quantifying the extent to which arm-level and focal SCNAs are shared between cancers of similar developmental lineage. Prior studies have demonstrated a tendency for cancers of similar developmental lineage to cluster together on the basis of overall copy number 67 , but did not separate out the contributions of these two types of events. Therefore, for each cancer type, we generated an aggregate SCNA profile by subtracting the frequency of loss from the frequency of gain for each significant arm-level and focal SCNA. We then clustered the resulting "consensus" SCNA profiles for each cancer type.
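A compact sketch of this consensus-profile clustering is given below, with invented gain/loss frequencies for a handful of loci and cancer types; the locus set, values, and clustering parameters are illustrative only.

```python
# Minimal sketch: cluster cancer types on freq(gain) - freq(loss) profiles.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

types = ["colorectal", "gastric", "melanoma", "AML"]
# toy frequencies at three loci (e.g. 8q, 20q, 7p), rows = cancer types
freq_gain = np.array([[0.6, 0.5, 0.1], [0.5, 0.5, 0.2],
                      [0.2, 0.1, 0.6], [0.1, 0.1, 0.1]])
freq_loss = np.array([[0.1, 0.1, 0.1], [0.1, 0.2, 0.1],
                      [0.1, 0.1, 0.1], [0.4, 0.3, 0.2]])

consensus = freq_gain - freq_loss   # net balance of gains and losses per locus
Z = linkage(consensus, method="average", metric="correlation")
tree = dendrogram(Z, labels=types, no_plot=True)   # or plot with matplotlib
print(tree["ivl"])   # leaf order after clustering
```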
This particular clustering metric attempts to capture the net balance of arm-level changes rather than their absolute frequency; for example, a tumor type with 50% gains and 50% losses of a particular locus would receive the same score as a tumor type with no gains or losses of that locus. However, the clustering results were largely robust to the use of alternative clustering metrics, including scoring each cancer type according to the absolute frequency of gain and loss at each locus, and different clustering parameters such as complete vs. average linkage and Euclidean vs. Correlation Distance metrics. Also, the high degree of variability within cancer types suggests that this analysis will be influenced by the particular sampling of cancers within each type. For this reason we restricted the analysis to cancer types with >20 tumors (most were represented by >45 tumors) and looked for general features driving the major clusters rather than the specific placement of each cancer type.
Hierarchical clustering of cancer types based on arm-level SCNA profiles (Supplementary Figure 7b) revealed a pattern that closely mimicked the developmental lineage of the tissue types. Three major sub-clusters are readily apparent: a major division between hematopoietic cancers and all other cancer types, followed by a division between epithelial and non-epithelial solid tumors. Within these latter two groups, there are distinct sub-clusters of related tumors, including gastrointestinal (gastric, esophageal adenocarcinoma, and colorectal), gynecologic (ovarian, breast), sarcomas (plus renal cancer), and neural tumors (plus non-Hodgkin's lymphoma). The segregation of cancer types by developmental lineage is highly non-random (p < 1e-5; see Supplementary Methods), and more consistent than the previous attempts using overall SCNA profiles 49 . Specific arm-level SCNAs that distinguish these major subclusters, such as gain of chromosome arm 8q and loss of 17p in epithelial tumors, were identified through comparative marker selection analysis 24 and are listed in Supplementary Table 6.
In contrast, hierarchical clustering of cancer types based on focal SCNAs does not recapitulate developmental lineage as closely (Supplementary Figure 7c). Although there was a tendency for tumors of similar lineages to cluster together (p = 0.01), all three major clusters contained several representatives of each lineage. Consistent with this observation, the ten most significant amplified regions are shown in Supplementary Figure 7d. The finding that arm-level CNAs, but not focal CNAs, appear to cluster predominantly on the basis of developmental lineage suggests that developmentally encoded selective pressures shape the pattern of these events within specific cancer types. By contrast, such pressures appear to be less important in shaping the pattern of focal CNAs observed within and between individual cancer types.
Supplementary Note 7: How to use the cancer copy number web portal
The cancer copy number portal accompanying this paper (www.broadinstitute.org/tumorscape) was designed to facilitate interpretation of this copy number dataset for the general research community. In addition to allowing download and visualization of both the raw and segmented copy number data, we have integrated a web service that allows for rapid querying of pre-processed analyses of the copy number data for all the well-represented cancer subtypes in the dataset, as well as several defined aggregated datasets (such as all cancers, all epithelial cancers and all sarcoma cancers).
At present, there are two primary modes for querying these analyses: by gene and by cancer type. Below, we summarize the typical use case for each of these modes and present an outline for how to approach and interpret the portal data.
1) By Gene Analysis: The "By Gene" analysis mode is designed to quickly summarize the evidence that any given gene is the target of SCNA within a given cancer subtype. It is based on GISTIC analyses of 17 individual cancer types and an additional 6 aggregated datasets, as described in the Supplementary Methods above.
To access, first click on the 'Analyses' tab on the navigation bar on the left side of the portal, then click on the 'by Gene' sub-tab. Enter the HUGO gene symbol (e.g. KRAS, MYC, CDK4) of any Refseq gene, then hit "Search". After a few seconds, the results from your gene should be loaded. You will see three tabs ("Summary", "Amplifications", and "Deletions"), followed by the gene symbol you queried and its genomic coordinates (in genome build hg18).
Below that, you will see two paragraphs separately summarizing the evidence for that gene being a target of amplifications (first paragraph) and deletions (second paragraph). The first sentence of this summary paragraph states whether or not the gene is significantly amplified or deleted across the entire cancer copy number dataset, and whether or not the gene is present within a peak region of amplification or deletion in the entire dataset. A gene may be significantly altered but fail to reside within the peak region of alteration; although we cannot rule out the possibility that the gene is targeted by focal SCNAs, the fact that it is not in the peak region means that there is greater evidence for at least one other region on the same chromosome. Conversely, a gene may reside in a peak region of alteration but be insignificantly altered; this is usually due to an inability to confidently resolve the peak region and provides very little evidence that the gene is an actual target of SCNA. For genes that lie within a peak region of alteration, the number of additional genes in that peak are also listed; the fewer the genes in the peak, the more likely it is that that gene is the actual target.
After the summary for the entire cancer dataset, we provide a summary of the results across the different independent cancer subtypes. In particular, we list the number of independent subtypes in which that gene was significantly altered and the number of subtypes in which the gene was located in a peak region of alteration. Because looking across many different datasets increases the likelihood that a gene will be in a peak region by chance alone, care must be taken before interpreting the significance of these numbers. For comparison, we list the fraction of genes in the genome which are significantly altered or located in a peak region of alteration in at least as many subtypes as the current gene of interest. This allows some estimation of the likelihood that the gene in question is a false positive arising due to the number of hypotheses being tested.
To see more detailed information on the Amplifications or Deletions affecting this gene, click on the "Amplifications" or "Deletions" tab above the summary statements. This will load a table of the GISTIC results, where each row corresponds to one of the analyzed subsets. The rows are color-coded to quickly summarize the significance of SCNA for that gene and whether it is located in a peak region. For each row, we list the coordinates of the nearest peak region in that subtype (this will include the gene if it is located within a peak region) along with the number of genes in the peak and the false-discovery rate (FDR) q-value for the queried gene. The smaller the number of genes and the smaller the q-value, the more likely it is that the given gene is actually the target of SCNA in that cancer type. Note that when there are no peak regions identified in the chromosome in question in a cancer type, no peak region is listed and the number of genes is set to 0 by default.
We also list three different measures of the frequency of SCNA for the gene in each cancer type. Overall frequency measures the fraction of cancers that exhibit any SCNA at that gene. Focal frequency measures the fraction of cancers that exhibit SCNAs spanning less than half a chromosome arm in length. High-level frequency measures the fraction of cancers that exhibit SCNAs of greater than 1 copy. All these numbers are likely to be underestimates due to the effects of contaminating normal cells in many of the cancer samples and the limited resolution of the copy number platform.
There are several additional navigation features that can be unveiled by clicking on various parts of the table. Clicking on any underlined cancer subtype name will take you to the "By Cancer Type" analysis page for that subtype (see below). Clicking on the underlined coordinates for any peak region will open the copy number data in that region for that cancer type in the integrated genome viewer (IGV) (Robinson et al, in preparation). Finally, clicking anywhere else in any row with at least 1 gene in the nearest peak region will cause the gene symbols for all genes in that peak to be listed in the sidebar to the right of the table. Clicking on any gene in this sidebar will load the "By Gene" analysis page for that gene.
2) By Cancer Type Analysis
The "By Cancer Type" analysis mode is designed to quickly summarize the significant regions of focal CNA within each cancer subtype. It is based on the same GISTIC analyses of 17 individual cancer types and an additional 6 aggregated datasets, as described in the Supplementary Methods and "By Gene" analysis section above.
To access, first click on the "Analyses" tab on the navigation bar on the left side of the portal, then click on the "By Cancer Type" sub-tab. By default, the "all_cancers" subtype (representing all 3,131 cancer DNA samples present in our dataset) is selected first. By convention, aggregated tumor subsets are denoted by the prefix "all_" to distinguish them from individual cancer subtypes. To select a new cancer subtype, simply click the down arrow next to the name of the cancer type, select the cancer type of interest from the drop-down list, and hit "Search". After a few seconds, the data from that cancer type should be loaded.
The first tab you will see is the "Summary" tab, which contains a summary of the samples comprising the selected subset. In particular, we list the total number of DNA samples and cell lines for each subtype contained within that subset; for aggregated datasets, we also list the total number of samples and subtypes contained within the subset. Finally, we list the number of peak regions of focal SCNA identified in the dataset.
To view the regions of SCNA in more detail, click on the "Amplifications" or "Deletions" tab. This will load a table of the GISTIC results for that subset, sorted from most to least significant according to the FDR q-value. For each significant region of SCNA (represented by a single row in the table), we list the genomic coordinates of the peak region boundaries, the number of genes contained in the peak, the residual q-value for that peak (a measure of the likelihood that the peak was falsely discovered), and three different measures of the frequency of that event (as in the "By Gene" analysis described above). Note that the residual q-value for a peak will tend to differ from the overall q-value for genes in that peak, for two reasons: 1) the peak region may extend over genes with varied q-values, and 2) unlike the overall q-value, the residual q-value accounts for the possibility that a single SCNA may extend across more than one peak region by penalizing each of those peak regions (see Supplementary Methods).
As with the "By Gene" tables, clicking on any row with more than one gene in the peak will result in a list of the genes in that peak region appearing in the right-hand sidebar. Clicking on one of these genes will load the corresponding "By Gene" Analysis page. Clicking on the underlined peak region will load the copy number data for that region in the selected cancer subtype in the integrated genome viewer (IGV).
Sex-specific role of education on the associations of socioeconomic status indicators with obesity risk: A population-based study in South Korea

Background: No study of obesity risk for people in developed countries has conducted a multi-dimensional analysis of the association of socioeconomic status with obesity. In this paper, we investigated whether education functions as a confounder or an effect modifier in the association of another socioeconomic status indicator with obesity.

Methods: This cross-sectional study analyzed data of an adult population sample (10,905 men and 14,580 women) from the Korea National Health and Nutrition Examination Survey (2010-2014). The study performed multivariate logistic regression analyses for three education levels and four indicators of socioeconomic status (i.e., marital status, residential area, occupation, and income).

Results: The overall prevalence of obesity was 38.1% in men and 29.1% in women (p < 0.001). In men, while education functioned as an effect modifier in the association between marital status and obesity (p for interaction = 0.006), it functioned as both a confounder (p < 0.001) and an effect modifier (p for interaction < 0.001) in the association between residential area and obesity. In contrast, in women, education functioned as a confounder in the association of residential area with obesity (p = 0.010). However, it functioned as both a confounder (p < 0.001) and an effect modifier (p for interaction = 0.012) in the association between income and obesity. A prediction showed that, unlike in women, education was positively associated with obesity risk in some socioeconomic indicator groups in men; for example, among rural residents, a higher level of education increased the probability of being obese by 19.7%.

Conclusions: The present study suggests the need for sex-specific studies of the role of education in the association between other socioeconomic status indicators and obesity. This should be considered in planning education policies to reduce the risk of obesity.
Background No study of obesity risk for people in developed countries has conducted a multi-dimensional analysis of the association of socioeconomic status with obesity. In this paper, we investigated if education functions as either a confounder or an effect modifier in the association of another socioeconomic status indicator with obesity. Methods This cross-sectional study analyzed data of an adult population sample (10,905 men and 14,580 women) from the Korea National Health and Nutrition Examination Survey (2010–2014). The study performed multivariate logistic regression analyses for three education levels and four indicators of socioeconomic status (i.e., marital status, residential area, occupation, and income). Results The overall prevalence of obesity was 38.1% in men and 29.1% in women (p < 0.001). In men, while education functioned as an effect modifier in the association between marital status and obesity (p for interaction = 0.006), it functioned as both a confounder (p < 0.001) and an effect modifier (p for interaction < 0.001) in the association between residential area and obesity. In contrast, in women, education functioned as a confounder in the association of residential area with obesity (p = 0.010). However, it functioned as both a confounder (p < 0.001) and an effect modifier (p for interaction = 0.012) in the association between income and obesity. A prediction showed that unlike in women, education was positively associated with obesity risk for some socioeconomic indicator groups in men; for example, in a rural resident group, a higher level of education increased the probability of being obese by 19.7%. Conclusions The present study suggests the need to examine sex-specific studies regarding the role of education on the association between other socioeconomic status indicators and obesity. This should be considered in planning education policies to reduce the risk of obesity.
Introduction

Worldwide, obesity has become an important public health problem. Obesity can cause various diseases and a diminished quality of life for individuals [1], and it can impose a heavy economic burden on society by increasing medical expenditures, decreasing manpower, and thereby reducing labor productivity [2,3].
Regardless of whether they were for academic curiosity or policy development, numerous studies have examined factors associated with obesity risk. Among these factors, the association between socioeconomic status and obesity risk has attracted much attention across many disciplines. Generally, the consensus has been that in developed countries, a higher socioeconomic status is associated with a lower risk of obesity in both men and women [4][5][6].
Meanwhile, recent studies from developed countries point to a more complex association between socioeconomic status indicators and obesity risk, calling for further and better research on the association. For example, studies from Canada [7], France [8], Luxembourg [9], the United States [10], and South Korea [11] suggest that the association between a particular socioeconomic status indicator and obesity risk may be positive in one sex but negative in the other.
Unfortunately, despite the considerable attention paid to the associations between socioeconomic status indicators and obesity risk, no multi-dimensional analysis of these associations has been performed for people in developed countries. This gap may leave researchers without adequate information for developing theories and designing efficient public health policies aimed at reducing obesity risk in specific groups of people.
Therefore, the aim of the present study was to employ a multi-dimensional analysis and examine the role of education in the association between other socioeconomic status indicators (such as marital status, residential area, occupation, and income) and obesity. We elected to focus on the role of education among the various socioeconomic status indicators because education level is established during early adulthood and generally remains unchanged thereafter, unlike the other socioeconomic indicators, which are more susceptible to change.
In this study, we sought to identify whether education acts as a confounder, an effect modifier, or both in the association between another socioeconomic status indicator and obesity. In addition, after considering the role of education in the association between another socioeconomic status indicator and obesity, the study aimed to investigate whether a higher level of education was associated with a reduced risk of obesity in both men and women. To fulfill these aims, we analyzed a sample adult population aged ≥25 years from nationally representative data in South Korea; this population was selected because the country is one of the largest developed countries in the world [12], and people in this age group were thought to have most likely completed their education.
Data source and study sample
We used data from the Fifth and Sixth Korea National Health and Nutrition Examination Survey (KNHANES V and VI, 2010-2014), performed by the Korea Centers for Disease Control and Prevention. The sampling design for the KNHANES was a stratified, multistage probability survey of the non-institutionalized general population of South Korea. This survey included a health interview, health examination, and a nutrition survey that were conducted at participants' homes, as well as a physical examination that was conducted by physicians at designated examination centers.
For KNHANES V and VI, 41,102 individuals participated in the interviews (8,958 in 2010; 8,518 in 2011; 8,058 in 2012; 8,018 in 2013; and 7,550 in 2014). From the total number of participants in the 2010-2014 survey, this study initially included only those aged ≥25 years (n = 29,752) to ensure they had completed their education [13]. Because the bodyweight of pregnant or breast-feeding women (n = 486) is affected by childbearing, these women were excluded, leaving 29,266 participants.
Finally, the study analyzed the findings from the 25,485 (87.1%) participants (10,905 men and 14,580 women) with complete information. The χ² tests showed no significant differences in participant characteristics before and after the exclusion of participants with incomplete information (for age, p-values were 0.485 in men and 0.185 in women; for residential area, p-values were 0.507 in men and 0.271 in women).
All KNHANES participants provided written consent to participate in the survey and for their personal data to be used. This study used publicly available data, and ethical approval was obtained from the institutional review board of Yonsei University Graduate School of Public Health (IRB No. 2-1040939-AB-N-01-2016-157).
Measures and variables
The obesity status of each participant was determined anthropometrically using data from the physical examination. Following the Asian criteria suggested by the World Health Organization, general obesity was defined as a body mass index of ≥25 kg/m² [14].
This study examined five socioeconomic status indicators: education, marital status, residential area, occupation, and income. Education, defined as the highest level of formal education completed at the time of interview, was divided into the following three levels: middle school or less, high school, and college or higher. Marital status was denoted as married or non-married (i.e., never married, separated, widowed, or divorced). Residential area was denoted as urban or rural. Occupation was defined according to the following three groups: office worker, manual worker, and no job (i.e., those with no job in the labor market). For income, this study calculated an equivalized monthly household income for each household to adjust for household size ([monthly overall household income] × [household size]^(−0.5)) [15,16], which divided participants into four quartiles. Nine variables were used in this study as potential confounders: sex (men and women), smoking status (smoking and non-smoking), risk from alcohol intake (no or low risk and medium or higher risk), routine walk exercise activity (active and inactive), daily sleep duration (short sleep and long sleep), daily energy intake (under-reported and not under-reported), self-perceived stress level (stressed and not stressed), chronic disease (yes and no), and survey year; except for survey year, each variable was grouped into the two categories indicated.
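For concreteness, the equivalization and quartile assignment might look like the sketch below; the income values and household sizes are illustrative, not survey data.

```python
# Minimal sketch: equivalized household income and income quartiles.
import pandas as pd

df = pd.DataFrame({"income": [3000, 4500, 1200, 6000, 2500],   # monthly household income
                   "hh_size": [4, 2, 1, 3, 5]})                # household size
# income divided by the square root of household size
df["eq_income"] = df["income"] * df["hh_size"] ** -0.5
df["income_q"] = pd.qcut(df["eq_income"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
print(df)
```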
In detail, risk from alcohol intake was based on the sex-specific guidelines of the World Health Organization [17]. Routine walk exercise activity was categorized as "active" if a participant walked for at least 30 minutes per day on ≥5 days per week [18]. Daily sleep duration was denoted as "short sleep" if a participant slept for 6 hours or less per day [19]. Daily energy intake, obtained from a 24-hour dietary recall, was defined as "under-reported" if the participant consumed less energy than the participant's estimated energy requirement (EER). The Institute of Medicine developed the EER predictive equations, in which an individual's EER is the dietary energy intake required to maintain energy balance given the individual's age, sex, weight, height, and level of physical activity [20]. Chronic disease was defined as "yes" if a participant had at least one of the following diseases at the time of the survey: hypertension, dyslipidemia, or diabetes mellitus.
In a preliminary analysis, this study included age as a discrete variable and housing tenure as a proxy for wealth. However, because of a lack of significance and a high level of multicollinearity, this study modeled age as a continuous variable and removed the housing tenure variable. For each multivariate model that focused on the role of education level in the association between a socioeconomic status indicator and obesity, the other socioeconomic status indicators were added to the above-mentioned potential confounders.
Statistical analysis
We first tested differences in the distributions of variables between men and women using the t-test for the continuous age variable and the χ² test for categorical variables. Second, the prevalence of obesity for each group of all socioeconomic status indicators was estimated and compared according to education level in men and women using χ² tests. Third, we carried out a Wald test for the significance of the three-way interaction term among sex, education, and each socioeconomic status indicator in a logistic regression model containing (1) the three main-effect terms of sex, education, and the socioeconomic status indicator, (2) the two-way interaction term between education and the socioeconomic status indicator, and (3) the three-way interaction term among sex, education, and the socioeconomic status indicator. Because the three-way interaction term was highly significant for every socioeconomic status indicator (p for interaction < 0.0001), we stratified the remaining analyses by sex.
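As an illustration of this third step, the sketch below fits a fully interacted logistic model on a hypothetical data frame and jointly Wald-tests the three-way interaction coefficients; the variable names and simulated data are invented, and the survey weighting used in the actual analysis is omitted.

```python
# Minimal sketch: joint Wald test of sex x education x indicator interactions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "obese": rng.integers(0, 2, n),                       # toy binary outcome
    "sex": rng.choice(["M", "F"], n),
    "edu": rng.choice(["low", "mid", "high"], n),
    "marital": rng.choice(["married", "non_married"], n),
})
m = smf.logit("obese ~ C(sex) * C(edu) * C(marital)", data=df).fit(disp=0)

# three-way interaction parameter names contain two ':' separators
three_way = [name for name in m.params.index if name.count(":") == 2]
R = np.zeros((len(three_way), len(m.params)))             # restriction matrix
for i, name in enumerate(three_way):
    R[i, m.params.index.get_loc(name)] = 1.0
print(m.wald_test(R, scalar=True))                        # joint H0: all = 0
```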
Fourth, to examine the role of education in the association between each socioeconomic status indicator and obesity, we employed three different methods as follows.
Method 1: To examine the possibility that education level modifies the association between each socioeconomic status indicator and obesity, we estimated the adjusted odds ratios (ORs) of obesity (and their 95% confidence intervals, CIs) for each socioeconomic status indicator, with and without stratification by education level, for each sex; these were obtained from logistic regression models adjusted for all the studied confounders. According to statistical rules of thumb for distinguishing a confounder from an effect modifier, if the ORs of obesity for a socioeconomic status indicator without stratification by education level fell outside the range of the stratum-specific ORs of obesity for that indicator, we considered education very likely to be a confounder. Meanwhile, if the unstratified ORs fell inside the range of the stratum-specific ORs and the stratum-specific ORs were very different from one another, we considered education very likely to be an effect modifier.
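This rule of thumb can be made mechanical, as in the sketch below; note that the heterogeneity threshold (ratio of extreme stratum ORs above 1.5) is our own illustrative choice and not a value stated in the text.

```python
# Minimal sketch: classify education's role from crude vs stratified ORs.
def classify_role(or_crude, stratum_ors, heterogeneity_ratio=1.5):
    inside = min(stratum_ors) <= or_crude <= max(stratum_ors)
    heterogeneous = max(stratum_ors) / min(stratum_ors) > heterogeneity_ratio
    if not inside:
        return "likely confounder"
    if heterogeneous:
        return "likely effect modifier"
    return "neither (by this rule of thumb)"

# crude OR outside the stratum-specific range -> confounding pattern
print(classify_role(1.8, [1.1, 1.2, 1.3]))   # likely confounder
# crude OR inside a widely spread stratum range -> effect-modification pattern
print(classify_role(1.2, [0.8, 1.2, 1.9]))   # likely effect modifier
```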
Method 2: To examine the role of education in the association between each socioeconomic status indicator and obesity, we obtained a first set of unadjusted ORs of obesity for the socioeconomic status indicator in a logistic regression model with only that indicator as an independent variable (Model 1). We then obtained a second set of unadjusted ORs for the indicator in another logistic regression model (Model 2), after adding the main-effect term of education as well as an interaction-effect term between the indicator and education to Model 1. If the first set of unadjusted ORs, obtained from Model 1, was significantly different from the second set, obtained from Model 2, based on the seemingly unrelated estimation method and Wald test [21], we considered education a confounder, because adding it significantly changed the association between the socioeconomic status indicator and obesity.
In addition, if the interaction-effect terms from combinations of the socioeconomic status indicator groups and education levels in relation to obesity in Model 2 were jointly significant, we considered education an effect modifier, because the association between the socioeconomic status indicator and obesity changed significantly across education levels. For marital status, for example, two combinations of marital status groups and education levels construct the interaction-effect terms in relation to obesity: one is non-married with high school, and the other is non-married with college or higher. For the test of the joint significance of the interaction-effect terms from these two combinations, the null hypothesis states that the coefficients of the interaction-effect terms from both combinations are zero, whereas the alternative hypothesis states that at least one of these coefficients is non-zero.
Method 3: To examine whether the role of education in the association between each socioeconomic status indicator and obesity changes from the previous, unadjusted models to adjusted models, we conducted additional analyses of the models in Method 2 using all studied confounders and the other socioeconomic status indicators. For these adjusted models, this study found no evidence of lack of goodness-of-fit in any model; p-values based on the Hosmer-Lemeshow statistic were ≥0.178.
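For readers unfamiliar with the goodness-of-fit check, a self-contained sketch of the Hosmer-Lemeshow statistic is given below; statsmodels does not ship one, so it is written out explicitly. The inputs (an observed 0/1 obesity vector and the fitted probabilities from one of the adjusted models) are assumed to exist.

```python
# Sketch of a Hosmer-Lemeshow goodness-of-fit test (decile-of-risk version).
import numpy as np
import pandas as pd
from scipy import stats

def hosmer_lemeshow(outcome, predicted, groups=10):
    """Return the Hosmer-Lemeshow chi-square statistic and its p-value."""
    d = pd.DataFrame({"y": np.asarray(outcome), "p": np.asarray(predicted)})
    d["g"] = pd.qcut(d["p"], groups, labels=False, duplicates="drop")
    tab = d.groupby("g").agg(obs=("y", "sum"), exp=("p", "sum"), n=("y", "size"))
    chi2 = (((tab["obs"] - tab["exp"]) ** 2)
            / (tab["exp"] * (1 - tab["exp"] / tab["n"]))).sum()
    dof = tab.shape[0] - 2
    return chi2, stats.chi2.sf(chi2, dof)

# Example: hosmer_lemeshow(men["obese"], adjusted_model.predict())
```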
Finally, to examine whether obesity risk decreases with education after considering the role of education in the association between each socioeconomic status indicator and obesity, we estimated the change in an individual's predicted probability of being obese (and its 95% CI) if an individual belonging to a socioeconomic status indicator group were to increase his or her level of education from the lowest level (middle school or less) to a higher level (either high school, or college or higher), with all the other factors held constant at the individual's own values.
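The counterfactual calculation can be illustrated with a short sketch for one group (the lowest income group in men); the variable names and category labels are assumptions, and the confidence intervals reported in the paper would additionally require the delta method or bootstrapping, which is omitted here.

```python
# Sketch: average change (percentage points) in the predicted probability of
# obesity when education is counterfactually raised from 'middle school or less'
# to 'college or higher', all other covariates held at observed values.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("knhanes_analysis_file.csv")
men = df[df["sex"] == "male"]

fit = smf.logit("obese ~ C(income) * C(edu) + age", data=men).fit(disp=False)

group = men[(men["income"] == "lowest") & (men["edu"] == "middle_or_less")]
counterfactual = group.copy()
counterfactual["edu"] = "college_or_higher"   # hypothetical category label

delta = fit.predict(counterfactual) - fit.predict(group)
print(round(100 * delta.mean(), 1), "percentage points")
```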
This study used Stata version 13 (StataCorp, College Station, TX, USA) and conducted all analyses and tests accounting for the complex survey design, that is, using the weighted sample. However, for convenience, the descriptive statistics in Table 1 are shown unweighted; p-values < 0.05 were regarded as statistically significant.
Results
Table 1 shows the participant characteristics, which differed significantly across education levels for each sex, with the exception of sleep duration (p = 0.295), energy intake (p = 0.435), and survey year (p = 0.335) in men and survey year (p = 0.054) in women.
The percentage of obesity was estimated at 38.1% (standard error, 0.6) in men and 29.1% (standard error, 0.5) in women, differing significantly between sexes (p < 0.001). The percentage of obesity in each socioeconomic status indicator group according to educational level and sex is shown in Table 2. The obesity rate for each group of socioeconomic status indicators varied significantly across education levels except for office worker (p = 0.449), the second lowest income group (p = 0.122), the third lowest income group (p = 0.699), and the highest income group (p = 0.132) in men. This suggests that each education level may play a differentiated role in the association between a socioeconomic status indicator and obesity in either men or women. Table 3 displays the adjusted odds ratios (ORs) of obesity (and their 95% CIs) that were obtained from the logistic regression models adjusted for all the studied confounders for each socioeconomic status indicator with and without being stratified by education level for each sex. According to the statistical rules of thumb that help distinguish a confounder from an effect modifier described as Method 1 in the statistical analysis section, in men, education was very likely an effect modifier in the association between each socioeconomic status indicator and obesity. In contrast, in women, education was very likely to play a different role in the association between a socioeconomic status indicator and obesity, depending on which socioeconomic status indicator was associated with obesity: a confounder for both marital status and residential area; an effect modifier for income; and both a confounder and an effect modifier for occupation. Accordingly, the results from the statistical rules of thumb suggest that education may be either a confounder or an effect modifier or both in the association between a socioeconomic status indicator and obesity, being different between sexes.
According to Method 2 described in the statistical analysis section, unadjusted results of the main and interaction effects of each socioeconomic status indicator and education on obesity in men and in women, respectively, are given in Tables 4 and 5. In men, among all studied socioeconomic status indicators, only residential area showed significant differences in ORs of obesity (p < 0.001) from the model including only the socioeconomic status indicator (Model 1) to the model considering the main effect and the interaction effect of the socioeconomic status indicator and education (Model 2). As for interaction-effect terms between groups of each socioeconomic status indicator and education levels in regards to obesity in Model 2, those between groups of marital status and education levels (p for interaction = 0.005) and those between groups of residential area and education levels (p for interaction < 0.001) were jointly significant, respectively. Meanwhile, in women, residential area (p < 0.001), occupation (p = 0.003) and income (p < 0.001) showed significant differences in ORs of obesity from Model 1 to Model 2. Only the interaction-effect terms between groups of income and education levels in Model 2 were jointly significant (p for interaction = 0.027). These results suggest that in men, education may play a role as an effect modifier in the association between marital status and obesity, while functioning as both a confounder and an effect modifier in the association between residential area and obesity. In contrast, in women, education may work as a confounder in the association of each of residential area and occupation with obesity, while functioning as both a confounder and an effect modifier in the association between income and obesity.
Notes to Tables 4 and 5: All analyses were conducted considering the complex survey design. P-values for interaction were obtained by the Wald test. *P-values were obtained by the Wald test to examine if estimates of odds ratios of all socioeconomic status indicator groups differ jointly between Models 1 and 2 on the basis of the seemingly unrelated estimation method. Model 1 included each socioeconomic status indicator only. Model 2 included two main-effect terms of each socioeconomic status indicator and education as well as the interaction-effect term of the two variables. All estimates were obtained from logistic regression models. OR, odds ratio; CI, confidence interval; Obesity, body mass index ≥25; Ref, reference group.
After adjustments for all studied confounders, based on Method 3 described in the statistical analysis section, the results of the main and interaction effects of each socioeconomic status indicator and education on obesity are presented in Table 6 for men and Table 7 for women. In Table 6, the differences in the associations between each socioeconomic status indicator and obesity between Model 1 and Model 2 were significant for residential area (p < 0.001), and the interaction-effect terms between groups of marital status and education levels with regard to obesity (p for interaction = 0.006) and those between residential area and education levels (p for interaction < 0.001) in Model 2 were jointly significant.
Meanwhile, in women, residential area (p = 0.010) and income (p < 0.001) showed significant differences in ORs of obesity between Model 1 and Model 2. Similar to the unadjusted model in Table 5, only the interaction-effect terms between groups of income and education levels in Model 2 were jointly significant (p for interaction = 0.012). According to these adjusted results, in men, education may work as an effect modifier in the association between marital status and obesity and as both a confounder and an effect modifier in the association between residential area and obesity. In contrast, in women, education may work as a confounder in the association of residential area with obesity and as both a confounder and an effect modifier in the association between income and obesity.
Fig 1 shows the change in an individual's predicted probability of being obese (in percentage points) if an individual belonging to a socioeconomic status indicator group were to increase his or her education from a lower level (middle school or less) to a higher level (either high school, or college or higher), with all the other factors held constant at each individual's own values. In men, for most socioeconomic status indicator groups, an increase in education seemed to show no significant change in obesity risk. However, for the following socioeconomic status indicator groups, obesity risk appeared to increase significantly owing to an increase in their education level: 1) in the married group, the predicted probability of being obese significantly increased by 7.7% from middle school or less to college or higher (p = 0.045); 2) the rural resident group demonstrated an increase of 15.7% to high school (p < 0.001) and 19.7% to college or higher (p < 0.001); 3) for the group who had no job, an 11.4% increase to high school (p = 0.028); and 4) for the lowest income group, an 11.5% increase to high school (p = 0.025) and a 14.7% increase to college or higher (p = 0.025) were observed.
Meanwhile, in women, education seemed to have a negative association with obesity risk for most of the socioeconomic status indicator groups, i.e., the predicted probability of being obese decreased significantly from the lowest level (middle school or less) to a higher level. For example, for the married group, the predicted probability of being obese showed an 11.1% decrease to college or higher (p < 0.001), and for the highest income group, a 12.1% decrease to college or higher (p < 0.001). However, for the following socioeconomic status indicator groups, an increase in education from middle school or less to a higher level showed no significant association with obesity risk: 1) for the rural resident group, the predicted probability of being obese had no significant change when their education level increased to high school; 2) for the office worker group, when their education level increased both to high school and to college or higher; 3) for the lowest income group, when their education level increased both to high school and to college or higher; and 4) for the second lowest income group, when their education level increased to high school. These results suggest that the role of education on the association between a socioeconomic status indicator and obesity may differ depending on both the type of the socioeconomic status indicator and sex under investigation.
Notes to Tables 6 and 7: All analyses were conducted considering the complex survey design. P-values for interaction were obtained by the Wald test. *P-values were obtained by the Wald test to examine if estimates of odds ratios of all socioeconomic status indicator groups differ jointly between Models 1 and 2 on the basis of the seemingly unrelated estimation method. Model 1 included each socioeconomic status indicator only. Model 2 included two main-effect terms of each socioeconomic status indicator and education as well as the interaction-effect term of the two variables. All estimates were obtained from logistic regression models, adjusted for age, smoking, alcohol intake, walk exercise, sleep duration, daily energy intake, …
Discussion
In this study, we investigated the role of education on the association of each socioeconomic status indicator with obesity. From the results obtained from all methods mentioned previously, we discovered that education might or might not play a role on the association of each socioeconomic status indicator with obesity, depending on the socioeconomic status indicator and sex under consideration. For example, the results from logistic regression models after adjustments for all the studied confounders provide interesting suggestions: in men, education may be neither a confounder nor an effect modifier in the associations between occupation and obesity as well as between income and obesity, whereas education may be an effect modifier in the association between marital status and obesity and function as both a confounder and an effect modifier in the association between residential area and obesity. In contrast, in women, education may be neither a confounder nor an effect modifier in the association between marital status and obesity as well as between occupation and obesity, whereas education may be a confounder in the association between residential area and obesity and both a confounder and an effect modifier in the association between income and obesity. This study also suggests that because the role of education on the association between each socioeconomic status indicator and obesity differs according to the socioeconomic status indicator and sex under investigation, education may be either negatively or positively associated with obesity risk according to the socioeconomic status indicator and sex under investigation. This study found that enhanced education might be associated with a higher risk of obesity in men for the married group, the rural resident group, the unemployed group, and the lowest income group; in sharp contrast, in women, education may have a negative association with obesity in most groups of all socioeconomic status indicators.
With regard to the relationship between socioeconomic status indicators and obesity in developed countries, previous studies without considering interaction effects of such indicators reported a so-called "inverse association between socioeconomic status and obesity risk," stating that socioeconomic status is negatively associated with obesity risk in both men and women [4][5][6]22]. However, after considering the role of education on the associations of other socioeconomic status indicators with obesity risk, this study found no evidence of a negative association of education with obesity risk in men.
Some recent studies of developed countries shed doubt on the perceived inverse association. These studies suggest that the direction of the associations between socioeconomic status indicators and obesity risk may differ by sex and that the associations may not be significant for a specific sex. Further, the type of socioeconomic status indicator associated with obesity risk may also vary by sex. In Canada, the association between income and obesity risk was significant both in men and in women, but the direction of the association was sharply contrasted by sex; obesity risk was higher in rich men, but obesity risk was higher in poor women [7]. In France, obesity risk was associated with occupation in men, whereas it was associated with educational level and frequency of holiday trips in women [8]. In Luxembourg, education was significantly associated with obesity in women, but not in men [9]. In the United States, education had no significant association with obesity in men, but in women, those with college degrees had a higher likelihood of being obese than their less educated counterparts [10]. In South Korea, both income and education showed no association with obesity in men, whereas education, not income, was inversely associated with obesity in women [11].
To date, no study of obesity risk for people in developed countries has assessed, in detail, the role of education on the associations between other socioeconomic status indicators and obesity. It is surprising that in developing countries, the role of education as a socioeconomic status indicator linked to obesity has been examined in some studies, although they were only for women of limited age without considering different socioeconomic status indicators. In Egypt, for women of reproductive age, education reduced obesity risk in its interplay with wealth [23]. In China, education interacted with occupation in regards to abdominal obesity of women at least 60 years old. In women with no education, individuals with a sedentary occupation were more likely to be obese than those with an agricultural occupation. However, there was no difference in the likelihood between occupational groups in women with any education [24].
Meanwhile, we will discuss plausible mechanisms that may explain the two important findings in this study. The first important finding is that education may play a role as either a confounder or an effect modifier in the association of another socioeconomic status indicator with obesity risk. The reasons for this may be partly attributed to how education and another socioeconomic status indicator determine each other, thereby influencing obesity risk in a combined manner. Particularly, in the case of this study, which analyzed an adult population aged ≥25 years that had most likely completed their education, education is more likely to influence another socioeconomic status indicator, rather than the socioeconomic status indicator influencing education level. Many studies from different disciplines document significant evidence regarding the effects of education level on marital status, residential area, occupation, and income [25][26][27][28][29][30].
The second important finding in this study is that the role of education on the associations between another socioeconomic status indicator and obesity risk differs by sex. These differences by sex may result from sex differences in knowledge on certain choice of diets, nutrition and nutritional beliefs through education, as implied in a study of college students in the United States [31]. In addition, as shown in previous studies examining the main effects of socioeconomic status indicators on obesity risk [7][8][9][10][11], the association of each socioeconomic status indicator including education with obesity risk may differ by sex, and the effect of education on other socioeconomic status indicators may also differ by sex [25][26][27][28][29][30]. In addition, there may be different socioeconomic circumstances experienced by men and women, although it may be difficult to sufficiently control for these circumstances in most empirical models. For example, even in developed countries, in order to marry a desirable partner, a woman with a college degree and working in an office may take greater efforts to avoid appearing obese than her male counterpart. Various literature documents existing sex differences involving an obesity penalty in employment settings, in health-care settings, in educational settings, in interpersonal relationships and in marriage settings [32][33][34][35][36][37]. In relation to this, it needs to be noted that although South Korea ranked 11 th in the size of national economy in 2015 according to the World Bank [12], ironically, it ranked 115 th in the global gender gap index according to the World Economic Forum [38]. Therefore, it is not difficult to find evidence of pronounced sex discrimination against women in South Korea [39][40][41][42], hence highly educated women in South Korea seem to be at a higher risk towards the obesity penalty than that of their counterparts in other developed countries.
As far as we know, this is the first multi-dimensional study to investigate the role of education on the associations between other socioeconomic status indicators and obesity in a developed country. Though caution must be exercised in drawing policy suggestions from cross-sectional data, this study suggests that, depending on sex, increased education may raise obesity risk or have no effect on it through its interplay with other socioeconomic characteristics. These results could raise the question of whether an enhanced education is an efficient policy tool to achieve a goal of health attainment for a certain population group, such as the reduction of obesity risk for adult men in South Korea, as discussed in previous studies [43][44][45].
This study analyzed data from the most recent sample of nationally representative adults in South Korea that included rich information about anthropometric measures, demographic characteristics, socioeconomic status, health behaviors, dietary intake, psychological characteristics, and diagnosed diseases. Most advantageously, this study explored the role of education on the associations between various other indicators of socioeconomic status and obesity in developed countries.
This study has several limitations. First, because this was a cross-sectional study, we could not draw a causal relationship between education, other socioeconomic status indicators and obesity. If we had obtained cohort data, we could have included time-varying covariates in our statistical analysis. Second, self-reporting methods for some information may have caused recall bias and measurement error. Third, other potential covariates, such as quality of education, genetics, peer effects, diet quality, and parental obesity, could not be considered because of lack of information. Fourth, this study examined the interactions on the multiplicative scale because it did not aim to examine if the interactions were either on an additive scale or on a multiplicative scale. As shown in most published epidemiological studies, interactions have been reported on the multiplicative scale [46,47]. However, we would like to note that the presence and direction of interaction on the additive scale is important for public health relevance [48]. Fifth, we failed to draw directed acyclic graphs (DAGs) in this study, because it was very difficult to draw them from the models with all the studied, socioeconomic status indicators, as well as all the studied, potential confounders. We fully understand that DAGs would be very useful when researchers try to explain confounding and effect modification between exposure and outcome, as a respected, anonymous reviewer has commented [49]. Finally, unobserved factors, such as time preference and risk aversion, may have influenced both socioeconomic status and obesity [50,51].
Conclusions
The results of this study suggest that the role of education in the association between other socioeconomic status indicators and obesity risk is influenced by sex. This should be considered in policy efforts to reduce obesity risk in South Korea. Future research is needed to examine whether these results are valid in other settings in terms of either socio-cultural context or economic development.
Bottom trawl fishery discards on the Black Sea coast of Turkey
The purpose of this study is to determine the amount of bycatch and discards of fish caught by bottom trawlers operating along the Black Sea coasts of Turkey, and how the discards change depending on depth. The study was conducted during the September 2009–April 2010 fishing season. Twenty-one bottom trawler operations were sampled and the catch composition was determined. A total of 26 species were caught, which included 22 species of fish, 2 species of arthropods, 1 gastropod and 1 bivalve. Two of these were target species (Mullus barbatus, Merlangius merlangus), while 25 species were discarded, including trash fish and specimens below the legal size. A total of 2142.76 kg of biomass was caught during the operations, of which 53.99% was bycatch. The weighted discard rate was determined as 42.06% and two different groups were identified in the discards (T1: 10-57 m, T2: 72-118 m) based on depth. Significant differences were identified between these depth groups (p < 0.05). It was determined that the biomass (kg h-1), the evenness index (J) (p < 0.05), the average species number and species richness (D) (p < 0.01) of the discards showed significant differences, but that the difference in species diversity (H) was negligible. No difference was found (p > 0.05) between the ecological parameters of landings.
Introduction
By-catch has been a serious problem in worldwide fisheries (Zhou, 2008) and recently observed for the fishing industry in Turkey.Discards first attracted attention in the 1960s with the accidental death of dolphins caused by tuna fishing (Hall & Mainprize, 2005;Harrington et al., 2005).In the study conducted by Kelleher (2005), the amount of discard worldwide was estimated at 7.3 million tons.In addition to the effects on fish stocks, commercial fishing also affects other marine organisms.One of the ecological effects of commercial fishing is the incidentally capture of non-target species.The increase in the amount of bycatch within the target catch affects not only the fishing industry but also other marine organisms (Alverson et al., 1994;Hall, 1996;Hall et al., 2000;Sanchez et al., 2004).The marine ecosystem has been exposed to direct and indirect impacts by trawl fishing since trawl fishing collects huge amount of organisms and causes their death (Kumar & Deepthi, 2006).To determine these adverse effects and to maintain a sustainable fishing industry based on the ecosystem, it is essential that discard rates are estimated.Even though many countries and organizations have taken a number of measures to decrease the discard rate, political, fishery management, technical and economic problems were encountered with regards to their implementation (Hall, 1994;Allain et al., 2003;Kelleher, 2005;Zollett, 2009).Shrimp and demersal finfish trawl fisheries account for over 50 percent of total estimated discards while representing approximately 22 percent of total landings (Kelleher, 2005).Numerous studies have been conducted to develop new fishing gears that decrease bycatch and increase the chances of survival of discarded fish in bottom trawl fishing (Probert et al., 1997;Hall et al., 2000;Hannah & Jones, 2000;Stratoudakis et al., 2001;Stobutzki et al., 2001;Diamond, 2004;Beutel et al., 2006;Zeeberg et al., 2006;Chen et al., 2007;Costa et al., 2008).The high discard rate (0.5% -83%) determined in studies conducted on bottom trawl fishing in various regions is an indication of this issue (Kelleher, 2005).
The Black Sea is one of the world's largest semienclosed seas.In Turkey, annual yield obtained by fishing activities is 477,658 tons and the Black Sea provides 77.92% of this yield (TUİK, 2012).While a large portion of this yield is obtained from seine nets and midwater trawls (anchovies, sprat, mackerel, bonito), the yield obtained from bottom trawling is far from negligible.
The Black Sea coasts of Turkey have a considerably narrow continental shelf, so some parts of this region are not suitable for bottom trawl fishing.In addition, the anoxic layer commences at depths of 150-200 m (Zaitsev & Mamaev, 1997;Badescu, 2007;Petrov et al., 2011).These conditions limit both habitat and bottom trawl fishery along the Black Sea coasts.In some regions (West and Middle Coasts of the Southern Black Sea), bottom trawl fishing is performed throughout the fishing season.
In these regions, there are 470 fishing boats that are capable of bottom trawl fishing during a season (TUİK, 2012).Vessels can operate multiple fishing activities according to the Turkish licensing system.During some periods such as, lower income from bottom trawl fishing or in the seasons when Atlantic bonito is fished, fisherman can use other fishing gears (e.g.purse seine, drift net, mid-water trawl, hydraulic dredge).Therefore, it is not easy to estimate the fishing effort of trawl fisheries in this region.The bycatch and discard rate for the gears used for fishing in these regions is unknown.At the same time, no studies have been conducted to reduce bycatch (e.g.selection, modification of fishing gears).
The aim of this study was to determine the bycatch and discard rate of the traditional bottom trawls used in the Black Sea, their changes based on depth and also to serve as a reference for future studies to be conducted in this region.
Sampling procedures
Samplings were performed in two-month periods on commercial fishing boats along the south-western coasts of the Black Sea between the beginning of the fishing season (September 2009) and the end of the fishing season (April 2010) (Fig. 1).The existence of the anoxic layer at depths greater than 150-200 m in the Black Sea causes organisms to condense at depths less than 150 m (Zaitsev & Mamaev, 1997;Badescu, 2007;Petrov et al., 2011).Consequently, bottom trawl fishing is carried out in these regions.
Data regarding the operations (tow duration, depth, and time) were recorded.During the study, 21 fishing operations were performed at depths of 10 to 118 m, where fishing activities are the most intense.The study was conducted on three commercial trawl boats with different capacities (engine power and tonnage).The lengths of the vessels were 12, 14 and 21 m, respectively.The tow duration ranged between 0.45 and 2 h with tow speed ranging between 2.2-2.6 knots.There was no interference with the fishing activities of the crewmen.
All samplings were performed with the local fishermen's nets, which were not fitted with any equipment allowing for selectivity and their cod end diamond mesh size was 36 mm.Although no time limitations were effective in the study area, the operations began at dawn and continued throughout the day.The length of undersized species was measured according to the regulation of the Ministry of Food, Agriculture and Livestock of the Republic of Turkey.Following the selection of commercial species, the catch composition was determined, and the commercial and discarded catches were weighed on deck.It was not possible to use a digital balance because of lurching.Thus, weighing was carried out at the harbour.Species taxonomy was performed at the Recep Tayyip Erdoğan University Faculty of Fisheries laboratories.The composition of the discards and commercial catch by species used in the analysis was standardized as kg -1 superscript format.Trawl catch composition and definition of the terms used in the text are listed below.
Target catch: Catch of a species that is primarily sought by fishermen.Target species in some operations could be commercial bycatch in the other operations.
Bycatch: Total catch of non-target (discard and commercially valuable non-target species) animals.
Discards: Non-commercial species and commercial fish thrown back into the sea due to legal regulations (specimens below the minimum landing size and endangered species).
Data analysis
The amount of discard in total yield and the rate of discard by weight were calculated according to the formulas below (Kelleher, 2005).
Components of the total catch: D = C − L, where C is the total catch, L the landings and D the discards. The weighted discard rate (%) was calculated as ΣD / (ΣD + ΣL) × 100. Similarity analysis of the discard species composition and amount obtained by the hauls was performed using the PRIMER 5 software package. Square-root transformation linked with group-average fusion was used for clustering the hauls. Multidimensional Scaling (MDS) analysis was performed according to the Bray-Curtis similarity matrix (Kruskal & Wish, 1978). Depth was used as a factor in both the cluster and MDS analyses in order to categorize the hauls in terms of the amount and species composition of discards. The ANOSIM test was performed on the hierarchical agglomerative clustering formed by the similarity matrix. To determine the contribution of each species to the dissimilarity rate (cut-off percentage = 90) observed between groups, Similarity Percentages (SIMPER) analysis was used (Clarke, 1993). To determine the effective use of the total biomass caught in the depth groups, the EUE (Ecological Use Efficiency) of each haul and the average for each depth group were calculated (Alverson & Hughes, 1996).
EUE = ΣL / (ΣD + ΣL)
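The discard bookkeeping above can be sketched in Python for a hypothetical per-haul table; the column names are assumptions rather than the authors' data file, and EUE is taken here as the landed fraction of the total catch, which is consistent with the values reported in the Results.

```python
# Sketch: total catch, weighted discard rate and EUE per depth group.
import pandas as pd

hauls = pd.read_csv("hauls.csv")   # assumed columns: haul, depth_group, landed_kg, discarded_kg

hauls["catch_kg"] = hauls["landed_kg"] + hauls["discarded_kg"]        # C = L + D

groups = hauls.groupby("depth_group")[["landed_kg", "discarded_kg", "catch_kg"]].sum()
groups["weighted_discard_rate_pct"] = 100 * groups["discarded_kg"] / groups["catch_kg"]
groups["EUE"] = groups["landed_kg"] / groups["catch_kg"]
print(groups)
```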
The univariate indices of species richness (Margalef's D), Shannon's index of diversity (H) and Pielou's measure of evenness (J), the total number of species and the biomass were calculated for each haul in the depth groups. These parameters were calculated separately for each haul corresponding to the landed and the discarded catch. Differences between the groups were determined with the Mann-Whitney test.
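A short sketch of these indices for a single haul is given below; the paper standardizes quantities to kg h-1, and whether counts or biomass entered the indices is not stated, so the input is left generic, and the numbers in the group comparison are illustrative only.

```python
# Sketch: Margalef's D, Shannon's H and Pielou's J for one haul, plus a
# Mann-Whitney comparison of an index between the two depth groups.
import numpy as np
from scipy import stats

def diversity_indices(quantities):
    """Return Margalef's D, Shannon's H (natural log) and Pielou's J."""
    x = np.asarray(quantities, dtype=float)
    x = x[x > 0]
    s, n = len(x), x.sum()
    p = x / n
    h = -(p * np.log(p)).sum()
    d = (s - 1) / np.log(n) if n > 1 else 0.0
    j = h / np.log(s) if s > 1 else 0.0
    return d, h, j

# Illustrative per-haul evenness values for the two depth groups, then the test.
j_t1 = [0.81, 0.75, 0.88, 0.79]
j_t2 = [0.55, 0.62, 0.48, 0.66, 0.59]
print(stats.mannwhitneyu(j_t1, j_t2, alternative="two-sided"))
```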
Results
During the samplings, 26 species, including 22 species of fish, 2 species of arthropods, 1 gastropod and 1 bivalve, were caught. Only two species, Mullus barbatus and Merlangius merlangus, were targeted, and these were the two species caught in greatest abundance. Twenty-five species were identified as discards (Table 2). Following the selection of commercial catch, it was observed that nearly all discards thrown back to sea died and that most were consumed by sea birds. Psetta maxima was the only commercial species with no discarded fraction at all throughout the survey. During one operation, a single endangered Huso huso was immediately released alive by the fishermen.
According to cluster and MDS analysis, two groups (T 1 , T 2 ) were identified (Fig. 3, 4).T1 consisted of trawl operations conducted between depths of 10-57 m, while the T2 group consisted of operations between 72-118 m.MDS stress value was 0.1.In addition, the ANOSIM test determined that the two groups are significantly different from one another (p<0.05).
The amount of organisms and the number of species that were discarded varied considerably according to the depths at which the hauls were performed.While the target species in the T1 and T2 groups during 16 hauls was M. barbatus, the target species in the T2 group during 5 hauls was M. merlangus.The species and corresponding average landed and discard quantities (kg h -1 ) obtained during the sampling period are listed in Table 3.
Table 3 shows the mean hourly biomass of discard- ed and landed species in the T1 and T2 groups.In both groups, the quantity of commercial species other than the target species is quite low.In the T1 group, R. clavata was the most discarded species (16.64%), while Sygnathus sp.(0.01%) was the least discarded species.In the T2 group, M. merlangus (87.47%) was the most discarded species, while Crangon crangon (0.002%) was the least discarded (Table 3).
The discard range of fishing operations in the T1 group was 5.2-18.9kg h -1 , with an average value of 9.47 ± 4.26 kg h -1 .The weighted discard rate of T1 was 16.47 %.The Ecological Use Efficiency (EUE) calculated for this group was between 0.7-0.88,and the average value of EUE was 0.814±0.06.The discard range for the T2 group was calculated as 4.7-235.6kg h -1 , and the average discard was 39.27±16 kg h -1 .The weighted discard rate The significance of the difference between the EUE values of the groups was determined (Table 4).In the T2 depth group, discards were found to be higher both in terms of proportion and quantity.There was no significant difference between the T1 and T2 groups in commercial biomass (p>0,05) (Table 4).The total number of species discarded in the T1 depth group was found to be greater than the T2 group.Furthermore, in the T1 group, where the operations were performed in a shallower region, the average number of species was greater than that of T2.For the discarded species of both groups, significant differences were noted in the species richness (D) and evenness (J) indices, with the exception of the species diversity index (H) (Table 5).
Discussion
The demand for fish and sea food is increasing, while the supply of fish from wild capture fisheries has stagnated, (FAO, 2008).Hence effective use of marine resources is becoming increasingly important.When fisheries are categorized according to their exploitation, it is noted that 4% are under exploited, 25% are sustainably exploited, 47% are fully exploited, 18% are over-exploited, 9% face depletion and 1% are recovering (Mullon et al., 2005;FAO, 2008).After a long period of overexploitation, increasing efforts to restore marine ecosystems and rebuild fisheries are under way.Efforts have been made considerably for the recovery of overexploited stocks in the world and the amount of fish stocks required to rebuild is 63%.Stocks should be harvested with lower exploitation rates to prevent the collapse of vulnerable species (Worm et al., 2009).A similar evaluation of Turkey's fishery supplies is not possible due to the near absence of studies for determining supply size, the replenishment of these supplies, and the current fishing fleet's catch per unit effort (CPUE).However, fluctuations since 1988 in the production in Turkey, along with the increase of fishing fleets in terms of number, size, engine power, fish detecting devices and size of fishing tools in the same period are as much a cause for concern as the situation of supplies around the world.It is possible to say that these changes in fish supplies worldwide are caused by an increase in fishing capabilities, along with changes in the ecosystem caused by a sharp decrease in non-target species aside from the target species (Alverson et al., 1994;Hall, 1996;Ye, 2002;Sanchez et al., 2004;Kelleher, 2005;Kumar & Deepthi, 2006).Trawls are one of the most important fishing gears used by the fishing industry for obtaining yield from the seas and are in widespread use for fishing benthic species in Turkey.Even though the Black Sea presents only limited regions suitable for bottom trawl fishing, it is used prominently in certain regions.In this context, the region where the Sakarya river flows into the Black Sea, which one of the regions where bottom trawls are used extensively was chosen as a station for this study.There are very few studies on selectivity, other management measures and catch composition in this area.
As is the case with this study, other studies conducted in Turkey and around the world on bottom trawls and other fishing gears similar to bottom trawls, revealed that the proportion of non-target species and discard is quite high.Similarly, the discard rate of bottom trawls was found to be 37% in the Bay of Izmir in the Aegean Sea; bycatch in shrimp trawls was found to be 29% in the Marmara Sea; and the bycatch of beam trawlers in the Marmara Sea for mesh size of 36 mm and 40 mm was found to be 28.9% and 27.8% respectively (Özbilgin et al., 2006;Zengin & Akyol, 2009;Bök et al., 2011).It was reported that the discard rate of bottom trawlers targeting different species of fish ranges between 19% -64% in many studies conducted in the Mediterranean Sea (Tsagarakis et al., in press).However, the discard rates of trawlers in different regions (e.g., Ireland, North-eastern Atlantic, USA) were found to be quite high (Allain et al., 2003;Borges et al. 2005;Harrington et al., 2005).The discard rate in this study is similar to the results of other studies conducted in different seas.In light of these data, it was found that bottom trawl fishing in the world, just as in the Black Sea region, is characterized by a very high discard rate.This is because, this type of fishery retains large amounts of non-target species due to lack of selectivity.
It has been reported that the catch composition of trawls used for fish and benthic organisms, differ depending on the depth (Probert et al., 1997;Sartor, et al., 2003;Sanchez et al., 2004;Gücü, 2012).In the current study, the depth was determined as a factor influencing the number of discards and the biomass of some species.While the number of discarded species in T1 was 24, it was 11 in T2 (Table 2).When considering the biomass, the discard ratio of M. merlangus in particular, which was dominant in the deeper area, was found to be higher.Additionally, a significant difference was noted between the discard rates of these two depth groups.EUE was applied to evaluate the impact of bycatch on total catch (Sartor, et al., 2003).The significant differences (p<0.05) in discard rate and EUE values of these two depth groups show that depth affects the catch composition of bottom trawls (Table 4).It can be said that trawl fishing in T1 region affects a larger number of organisms than in T2 based on the mean number of species, average species diversity and the evenness index.
The difference observed in the T2 depth group with regard to biomass is probably due to the presence of considerable M. merlangus stocks at this depth.In the studies that were performed, the number of species, bycatch, and discarded biomass showed considerable differences according to region, season and depth (Probert et al., 1997;Stobutzki et al., 2001;Sartor et al., 2003;Sanchez et al., 2004).In this context, bottom trawlers affect ecosystems differently according to depth.As a result, the limited continental shelf size of the Turkish coasts in the Black Sea and the anoxic layer that commences at depths greater than 150-200m limit bottom trawl fishing in the region.Despite the limited area, extensive bottom trawl fishing activities have been carried out in Turkey's middle and western coasts of the Black sea.One of these regions is the area where the Sakarya River flows into the Black Sea.According to the legal regulation, trawl cod end mesh size has to be 40 mm in the Black sea but fishermen don't use this type of net.Therefore, the bycatch of traditional fish trawls used in the region is high and they lack the characteristics that would allow for sustainable stocks; this adversely affects both the fish stocks and the ecosystem.To minimize this negative effect, the traditional fishing gears need to be redesigned and new devices should be developed to reduce bycatch.For instance, the discard rate of M. merlangus could be reduced by using square mesh panel placed in the cod-end (Özdemir et al., 2012).Mesh size should be determined according to the size of the target species; the cod end of the trawl must be standardized because, increasing cod end circumference negatively affects selectivity (Tokaç et al., 2009).Fishing gears with these characteristics should be designed, and the most suitable gears for the region should be determined and recommended to local fishermen.Furthermore, fishing activities should be controlled.Despite these, it should be taken into account that discards into the sea constitute an easily available food resource for many scavengers such as seabirds and species living on the sea bottom (Valeiras, 2003).
Bycatch may be affected by several factors such as season, depth, region and characteristics of fishing gear.Thus, more comprehensive research should be designed in future.Moreover, a bycatch monitoring program should be developed to track changes in discarding and to gain a better understanding of the factors affecting bycatch.
On the other hand, selective fishing of only the target species is not necessarily useful to any of the target species, the by-catch species, or the ecosystem.Selectively and intensively taking out single species from an ecosystem will upset the existing relationships such as productivity of the species and sizes of fish in the ecosystem.Consequently, selectivity regulations are needed to balance the impact of all fisheries in an area (Zhou, 2008;Garcia et al., 2012).Therefore, balanced harvesting method should be developed for sustainable fisheries in the area.
Bycatch in bottom trawlers constitutes a serious issue.As a significant factor contributing to population decrease and adversely affecting marine ecosystems, bycatch is considered by scientists, ecologists and politicians alike as being a significant problem.Therefore solving the problems of the fishing industry, or ensuring sustainable fishing, is inconceivable without taking bycatch into consideration (Hall et al., 2000;Lewison et al., 2004;Zhou, 2008;Davies et al., 2009;Zollet, 2009).In this context, a fisheries management plan for the Turkish coasts of the Black Sea should be elaborated that accounts for bycatch in regions where bottom trawl fishing is employed.
Fig. 4 :
Fig. 4: Multidimensional scaling ordination of hauls of discarded catch for both groups T1 and T2.
Fig. 3 :
Fig. 3: Similarity dendrogram for discarded species composition based on trawl samples by depth.
Table 1. Total biomass of species caught in samplings.
The total biomass was 2142.77 kg, of which 46.01% (985.86 kg) was identified as the target and 53.99% (1156.91 kg) as bycatch, and the weighted discard rate was determined as 42.06% (Table 1). The rate of discard within the bycatch was determined as 77.89% (901.21 kg), of which 83.62% was constituted by M. merlangus below commercial size. The most abundant target, commercial bycatch and discard species caught was M. merlangus. Catch compositions and bycatch components are shown in Figure
Table 2 .
SIMPER analysis of discards species abundance at the T1 and T2 groups.Average dissimilarity between groups = 93.13.
Table 3 .
Mean hourly biomass of discarded and landed species in the depth groups.
Table 4 .
Minimum, maximum and mean values of hourly yields estimated for the landed and discarded catch (±S.D.), discard rates of groups and EUE in the groups and statistical test results .
Table 5 .
Calculated values of abundance, ecological parameters in the depth groups and statistical test results (± S.D.).
From lithium to sodium: cell chemistry of room temperature sodium–air and sodium–sulfur batteries
Research devoted to room temperature lithium–sulfur (Li/S8) and lithium–oxygen (Li/O2) batteries has significantly increased over the past ten years. The race to develop such cell systems is mainly motivated by the very high theoretical energy density and the abundance of sulfur and oxygen. The cell chemistry, however, is complex, and progress toward practical device development remains hampered by some fundamental key issues, which are currently being tackled by numerous approaches. Quite surprisingly, not much is known about the analogous sodium-based battery systems, although the already commercialized, high-temperature Na/S8 and Na/NiCl2 batteries suggest that a rechargeable battery based on sodium is feasible on a large scale. Moreover, the natural abundance of sodium is an attractive benefit for the development of batteries based on low cost components. This review provides a summary of the state-of-the-art knowledge on lithium–sulfur and lithium–oxygen batteries and a direct comparison with the analogous sodium systems. The general properties, major benefits and challenges, recent strategies for performance improvements and general guidelines for further development are summarized and critically discussed. In general, the substitution of lithium for sodium has a strong impact on the overall properties of the cell reaction and differences in ion transport, phase stability, electrode potential, energy density, etc. can be thus expected. Whether these differences will benefit a more reversible cell chemistry is still an open question, but some of the first reports on room temperature Na/S8 and Na/O2 cells already show some exciting differences as compared to the established Li/S8 and Li/O2 systems.
Introduction
Rechargeable lithium-ion batteries (LIBs) have rapidly become the most important form of energy storage for all mobile applications since their commercialization in the early 1990s. This is mainly due to their unrivaled energy density that easily surpasses other rechargeable battery systems such as metal-hydride or lead-acid. However, the ongoing need to store electricity even more safely, more compactly and more affordably necessitates continuous research and development. The need for inexpensive stationary energy storage has become an additional challenge, which also triggers research on alternative batteries. Major efforts are directed towards continuous improvements of the different Li-ion technologies by more efficient packaging, processing, better electrolytes and optimized electrode materials, for example. Although significant progress has been achieved with respect to the power density over the last years, the increase in energy density (volumetrically and gravimetrically) was relatively small [1]. A comparison of different battery technologies with respect to their energy densities is shown in Figure 1.
Figure 1: Pb-acid - lead acid; NiMH - nickel metal hydride; Na-ion - estimate derived from data for Li-ion assuming a slightly lower cell voltage; Li-ion - average over different types; HT-Na/S8 - high temperature sodium-sulfur battery; Li/S8 and Na/S8 - lithium-sulfur and sodium-sulfur battery assuming Li2S and Na2S as discharge products; Li/O2 and Na/O2 - lithium-oxygen battery (theoretical values include the weight of oxygen and depend on the stoichiometry of the assumed discharge product, i.e., oxide, peroxide or superoxide). Note that the values for practical energy densities can largely vary depending on the battery design (size, high power, high energy, single cell or battery) and the state of development. All values for practical energy densities refer to the cell level (except Pb-acid, 12 V). The values for the Li/S8 and Li/O2 batteries were taken from the literature (cited within the main text) and are used to estimate the energy densities for the Na/S8 and Na/O2 cells. Of the above technologies, only the lead acid, NiMH, Li-ion and high temperature Na/S8 technologies have been commercialized to date.
Ultimately, the energy density of a practical battery is determined by the cell reaction itself, that is, the electrode materials being used. The need for a proper cell design and packaging considerably reduces the practical energy density of a battery compared to the theoretical energy density. The cell reaction of Li-ion batteries is not fixed and different electrode materials and mixtures are used depending on the type of application. Graphite/carbon and to a lesser degree Li4/3Ti5/3O4 (LTO) serve as the negative electrodes. Recently, silicon has been added in small amounts to graphite to increase the capacity. Layered oxides (the classic LiCoO2, LCO) and related materials (LiNi1−x−yMnxCoyO2, NMC; LiNi0.8Co0.15Al0.05O2, NCA; olivines, LiFePO4, LFP; spinels, LiMn2O4, LMO) are applied as positive electrodes. The underlying storage principle of all these electrode materials is a one-electron transfer per formula unit. In this process, the de-/intercalation of one Li-ion is linked to a change in the transition metal oxidation state by one (Co3+/4+, Fe2+/3+, Mn3+/4+, etc.), as illustrated in Figure 2a. However, since the positive electrode materials often suffer from stability issues at too low lithium contents, only a fraction of the theoretical capacity can be achieved in practice (with LFP being an exception). For example, only 0.5 electrons per formula unit can be reversibly exchanged for LCO. The electrode reaction for LCO can therefore be written as
LiCoO2 ⇌ Li0.5CoO2 + 0.5 Li+ + 0.5 e−   (1)
The amount of charge that can be stored during this process is therefore limited and the capacities of positive insertion-type and intercalation-type electrode materials are around 120-180 mAh/g. Employing graphite as a negative electrode (372 mAh/g), the theoretical energy densities of single cells for current Li-ion technology are limited to around 350-400 Wh/kg and 1200-1400 Wh/L. Roughly about one fourth to one half is achieved in practice due to the additional weight and volume of the current collectors, separator, electrolyte, cell housing, and so forth.
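The capacities quoted here follow from Faraday's law; a brief check is sketched below, where the molar masses are standard values and are assumptions rather than figures taken from this review.

```python
# Sketch: theoretical specific capacity Q = n * F / (3.6 * M) in mAh per gram,
# reproducing ~372 mAh/g for graphite (LiC6) and placing LiCoO2 at 0.5 e- per
# formula unit inside the 120-180 mAh/g range quoted above.
F = 96485.0          # Faraday constant, C mol-1

def specific_capacity(n_electrons, molar_mass):
    """Theoretical capacity in mAh per gram of active material."""
    return n_electrons * F / (3.6 * molar_mass)

print(specific_capacity(1.0, 6 * 12.011))   # graphite as LiC6 host: ~372 mAh/g
print(specific_capacity(0.5, 97.87))        # LiCoO2 at x = 0.5:     ~137 mAh/g
```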
Significantly higher energy densities can only be achieved by using electrode reactions such as multielectron transfer and/or lighter elements. A broad range of so-called conversion reactions has been studied which are based on the full reduction of the transition metal [2]. The general electrode reaction can be written as
MaXb + (b·n) Li+ + (b·n) e− ⇌ a M + b LinX   (2)
where M is either a transition metal (Cu, Co, Fe, etc.) or Mg, and X is an anion (F, O, S, etc.). The overall success has been limited as conversion reactions typically show large irreversible capacities during the first cycle and a large hysteresis during cycling. This irreversible capacity is mostly caused by the need for complete lattice reconstruction and the corresponding formation of new interfaces.
Figure 2: Operating principles of (a) a lithium-ion battery, (b) a metal-oxygen battery (non-aqueous electrolyte) and (c) a metal-sulfur battery during discharge (A = Li, Na). A lithium-ion battery is based on intercalation compounds as electrodes. The exact cell reaction depends on the materials used. In this example, the reaction equation is formulated for the classical LIB with graphite as the negative and LiCoO2 as the positive electrode. The same concept can be applied for a sodium-ion battery. Metal-oxygen and metal-sulfur batteries perform best with a lithium or sodium metal as the anode. The positive electrode consists of a porous support, usually carbon. In a metal-oxygen battery, this support enables the reduction of atmospheric oxygen and accommodates the insulating discharge products of Li2O2, Na2O2, NaO2, or ideally, Li2O and Na2O. In metal-sulfur batteries, the support hosts the insulating end members of the cell reaction, which are sulfur (before discharge) and ideally Li2S and Na2S (after discharge). The sketch in Figure 2 illustrates the most frequently studied cell concepts for metal-oxygen and metal-sulfur cells. Other concepts, for example, solid electrolytes or liquid electrodes, are also currently being studied.
The most appealing multielectron transfer systems are the lithium-sulfur battery and the lithium-air (or more precisely, the lithium-oxygen) battery, in which a non-metal is the redox-active element. Both batteries combine very high theoretical energy densities with the advantage of using abundant and thus resource-uncritical elements. Both systems have been intensively studied over the last years. For example, more than 250 publications appeared in the field of lithium-sulfur batteries in 2014 alone and about 200 publications in 2014 are concerned with lithium-oxygen batteries. The cell concepts are entirely different from conventional Li-ion technology, as depicted in Figure 2. Here, elemental sulfur and atmospheric oxygen are reduced at the positive electrode to form Li2S and Li2O2 during discharge, which is expressed by
S8 + 16 Li+ + 16 e− → 8 Li2S
O2 + 2 Li+ + 2 e− → Li2O2
Moreover, the cells ideally operate with metallic lithium as the negative electrode. No heavy transition metals participate in the cell reaction and theoretical energy densities of 2613 Wh/kg for the Li/S8 and 3458 Wh/kg for the Li/O2 cell can be calculated.
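These theoretical figures (and the sodium values cited further below) follow from the capacity of the fully discharged product multiplied by the average cell voltage; the sketch below uses commonly cited equilibrium voltages, which are assumptions rather than values taken from this review.

```python
# Sketch: theoretical gravimetric energy density per kg of discharge product,
# computed as (n F / 3.6 M) mAh/g times the average cell voltage in V (= Wh/kg).
F = 96485.0          # Faraday constant, C mol-1

def energy_density(n_electrons, molar_mass, voltage):
    """Wh per kg of the fully discharged product."""
    return n_electrons * F / (3.6 * molar_mass) * voltage

print(energy_density(2, 45.94, 2.24))   # Li2S,  ~2.2 V  -> ~2613 Wh/kg
print(energy_density(2, 45.88, 2.96))   # Li2O2, 2.96 V  -> ~3458 Wh/kg
print(energy_density(2, 78.04, 1.85))   # Na2S,  1.85 V  -> ~1273 Wh/kg
print(energy_density(2, 77.98, 2.33))   # Na2O2, 2.33 V  -> ~1600 Wh/kg
```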
Perhaps the most important conceptual differences between these cell systems and Li-ion batteries are (1) that the redox centers (oxygen and sulfur) are lighter and spatially more concentrated, allowing for higher energy densities and (2) that the redox-active (molecular) species are mobile in liquid electrolytes and new phases form and decompose during cycling. In intercalation compounds, the redox centers (transition metal cations) are immobile as they are pinned to the fixed positions of the crystal lattice and are, therefore, spatially diluted. However, due to the poor conductivity of sulfur, Li 2 S and Li 2 O 2 , the non-metal redox materials also require a suitable conductive support structure. For the Li/S 8 and Li/O 2 batteries, this means that significant complexity is added, as a series of transport steps and nucleation/decomposition processes take place that will depend on the morphology, microstructure and surface chemistry of the conductive support. Side reactions with the metallic anode and dendrite formation further complicate the cell chemistry, and therefore, the cycle life of both cell systems remains insufficient to date. The Li/O 2 cell particularly suffers from additional side reactions related to electrolyte decomposition at the positive electrode. Many challenges therefore must be tackled in order to develop practical systems.
Research on sodium-ion batteries (NIBs) has recently been revived and is largely motivated by the natural abundance of sodium [3][4][5][6][7][8][9][10]. The sodium content in the earth's crust and water amount to 28,400 mg/kg and 11,000 mg/L compared to 20 mg/ kg and 0.18 mg/L for lithium [11]. Additionally, the number of known sodium compounds is much larger as compared to lithium, and thus combinations of electrode materials that enable the development of batteries based solely on low cost elements (or that provide specific advantages that complement Li-ion technology in special applications) are expected. It is interesting to note that sodium-ion and lithium-ion batteries were studied in the 1970s and 1980s. However, due to the success of the lithium-ion battery (and probably the insufficient overall quality of materials, electrolytes and glove boxes [3]), research on sodium-based batteries was largely abandoned. The only exceptions were the high temperature systems Na/S 8 and Na/NiCl 2 [12][13][14][15].
Although one would initially assume very similar cell chemistries for otherwise identical LIBs and NIBs, the behavior is in most cases quite different. The reason is related to the larger size of the sodium ion, which affects the phase stability, the transport properties and the interphase formation. The basic characteristics of multielectron transfer reactions involving sodium-based conversion reactions have recently been summarized and appear quite attractive. However, challenges similar to those of lithium-based conversion reactions are also found [10].
The intriguing question is whether the chemical differences between sodium and lithium could help to solve some of the challenges known for the Li/S 8 and Li/O 2 cells. Although an unavoidable penalty with respect to the energy density is paid when replacing lithium by sodium, the theoretical values for a room-temperature Na/S 8 battery with Na 2 S as discharge product (1273 Wh/kg) and a Na/O 2 cell with Na 2 O 2 as discharge product (1600 Wh/kg) are still very high compared to LIBs. However, to date, only very little is known about the room temperature chemistry of Na/S 8 and Na/O 2 cells. Only around thirty studies in total have been published as of 2014. Although there is some dispute about the stoichiometry of the discharge products in these cells, it has been demonstrated that Na/O 2 cells can be cycled with much better performance than the analogous Li/O 2 cell. Replacing lithium by sodium might therefore be an effective strategy to improve the reversibility of high energy battery systems, notwithstanding the reduced theoretical energy density. Some general differences between lithium and sodium cells are immediately apparent:

1. The lower melting point of sodium (T m,Na = 98 °C) compared to lithium (T m,Li = 181 °C) and its generally higher chemical reactivity pose additional safety issues for cells using metal anodes. On the other hand, cell concepts with a molten anode might be easier to realize, given the advantages of better kinetics and the prevention of dendrite formation.

2. Sodium is softer than lithium, making handling and processing more difficult. On the other hand, avoiding dendrite formation by means of mechanical pressure may be easier.

3. Sodium is less reducing than lithium, meaning that more substances are thermodynamically stable in direct contact with the metal. This can be an important advantage when designing cell concepts that include solid ion-conducting membranes; many Li-ion conducting solid electrolytes degrade when exposed to direct contact with metallic lithium [16]. Moreover, with beta-alumina, an excellent Na-ion conducting solid electrolyte is commercially available.

4. The total number of known sodium compounds is larger than that of lithium compounds, so cell reactions might require more intermediate steps or stop at a different stoichiometry. Two notable exceptions exist that might be of advantage for sodium cells. Aluminium forms binary alloys with lithium but not with sodium; therefore, aluminium instead of the more expensive copper can be used as current collector for the negative electrode in sodium batteries. Another exception that might have practical relevance is that sodium, in contrast to lithium, does not form a stable nitride when exposed to an N 2 atmosphere. This has an immediate impact on Li/O 2 and Na/O 2 cells when operated under air.

5. The larger sizes of the sodium atom and ion compared to lithium (+82% for the atom and +25% to +55% for the ion, depending on the coordination) lead to larger volume changes during cycling. Sodium-based electrodes might therefore degrade faster and the formation of stable interfaces might become more difficult. On the other hand, the smaller size of the lithium ion corresponds to a larger charge density, and the lithium ion polarizes its environment more strongly than the sodium ion, which causes severe differences in chemical bonding and ion mobility.

6. The solubilities of sodium and lithium compounds in solvents are different. The discharge products and/or interphases (SEI formation) can therefore dissolve to different degrees, and electrolyte solutions might have different properties.

This section is organized as follows. Firstly, the basic operating principles and energy densities of Li/O 2 and Na/O 2 cells are discussed. Secondly, the state-of-the-art knowledge on Li/O 2 cells is summarized. As several reviews have been published in this field, we will only briefly highlight important achievements and discuss recent developments. Thirdly, the available literature on the Na/O 2 cell is summarized and similarities and differences to the analogous Li/O 2 cell are discussed. Li/S 8 and Na/S 8 batteries are discussed in the same way in chapter 3. The section ends with a brief summary and outlook.
Operating principles and general remarks
The operating principle of a lithium-oxygen battery is depicted in Figure 2b. The major difference compared to Li-ion batteries is that the battery is designed as an open system that enables uptake and release of atmospheric oxygen at the cathode during cycling (hence the name "lithium-air battery", which is misleading as mostly pure oxygen gas is used). During discharge, lithium is oxidized at the negative electrode and oxygen is reduced on the positive electrode. Similar to a fuel cell cathode, the positive electrode is a porous, electron-conducting support (gas diffusion layer, GDL) that enables oxygen transport, oxygen reduction (ORR) and oxygen evolution (OER) during cell cycling. Carbon-based materials are mostly used for this purpose. Considering the basic principle of this cell concept, some challenges are immediately obvious: (1) The implementation of special membranes is necessary to prevent contamination of the cell by unwanted gases from the atmosphere (N 2 , CO 2 , and also H 2 O for the case of non-aqueous systems) and to protect the metal electrode from oxygen exposure. At the same time, drying out of the cell due to solvent evaporation must be avoided. (2) The gas transport must be fast enough to enable sufficiently fast discharging and charging.
(3) The cell needs to provide enough free volume to accommodate the discharge product.
The reaction product depends on the type of electrolyte used. In aqueous electrolytes, water becomes part of the cell reaction and dissolved LiOH is formed during discharge, which precipitates as LiOH·H 2 O once the solubility limit is reached. The need to protect the lithium anode from direct contact with water is experimentally challenging, so most research has been devoted to lithium-oxygen batteries with an aprotic electrolyte. Some possible discharge products can be directly predicted from the Li-O phase diagram shown in Figure 3a. Under ambient conditions, the thermodynamically stable phases are lithium oxide (Li 2 O) and lithium peroxide (Li 2 O 2 ). As these compounds are insulators, GDLs with a high surface area are used to improve the kinetics. Two other cell concepts that have been studied to a lesser extent are cells with a mixed aprotic/aqueous electrolyte and cells based on solid electrolytes. A sodium-oxygen battery can be designed in exactly the same way, but the phase diagram (Figure 3b) shows that in addition to Na 2 O 2 and Na 2 O, sodium superoxide (NaO 2 ) can also be formed (although possibly only kinetically stable under ambient conditions). The relative stability of NaO 2 was recently calculated by two groups with somewhat controversial results (see the section The sodium-oxygen (Na/O 2 ) battery for more details). Sodium ozonide (NaO 3 ) has been frequently reported as being unstable under ambient conditions and hence is not considered. Different discharge products may thus form in alkali-metal-oxygen cells. As will be discussed later in more detail, the discharge products in aprotic electrolytes are Li 2 O 2 in Li/O 2 cells, and Na 2 O 2 and NaO 2 (and Na 2 O 2 ·2H 2 O) in Na/O 2 cells.
It is an open and interesting question whether the relative stability of the different alkali oxides is correctly represented in the phase diagrams, as the influence of water may have been overlooked. It is well known that even small amounts of water can stabilize oxide phases, which are otherwise absent in the phase diagram [17].
The theoretical cell voltages and energy densities of the cell reactions are summarized in Table 1. We note that potassium-oxygen batteries are also being studied [20,21]; their energy densities, however, are lower. The values for the energy densities vary depending on whether the weight of oxygen is included or not, but all metal-oxygen batteries are superior to Li-ion batteries in terms of theoretical energy density. This is also the case for cells with NaO 2 as discharge product, although they are based on a one-electron transfer. It is important to note that all values in Table 1 are theoretical values. As the concept of metal-oxygen batteries requires many additional design-related components (e.g., gas diffusion layer, membranes to minimize oxygen diffusion towards the metal anode and to minimize access of other detrimental gases from the atmosphere), the weight penalty for reaching a commercial product will be much higher than for LIBs. The estimated values of the practical energy density vary greatly. Values of 1700 Wh/kg at the cell level and 850 Wh/kg at the battery level have been suggested by Girishkumar et al. [22], while Christensen et al. estimated around 1300 Wh/kg at the cell level [23]. PolyPlus, one of the leading companies working on lithium-air batteries, projects 600 Wh/kg and 1000 Wh/L, respectively [24]. Recently, Gallagher et al. comprehensively studied the use of Li-air batteries for electric vehicles (EVs) and predicted values of around 250-500 Wh/kg and 300-450 Wh/L on the system level. The authors concluded that Li-air batteries will not be a viable option for commercial automotive applications [25], which would then also exclude Na-air systems. An additional challenge for electric vehicle applications is that the current densities of lithium-oxygen cells (usually below 1 mA/cm 2 ) are still too small: an improvement by one to two orders of magnitude is necessary, as the target current density should be in the range of 8-80 mA/cm 2 [23,26]. Although these estimates depend on the assumptions made, it is clear that the competition between lithium-oxygen batteries and LIB technology will depend on the application. In any case, the limits of such a technology will only become fully apparent once a meaningful prototype has been built. The only report of a fully engineered cell in the literature is given by PolyPlus for a primary, aqueous, lithium-air battery. Their cells with a total capacity of about 10 Ah achieved 800 Wh/kg at a current density of 0.3 mA/cm 2 [24]. Given the fact that research on rechargeable lithium-oxygen cells is still at a more fundamental level, possible applications should therefore not be restricted to EVs.
For sodium cells, the theoretical energy densities are smaller compared to the analogue lithium systems. Therefore, the development of a high energy device might be more challenging unless the sodium cell chemistry provides specific advantages which might include: (1) faster kinetics of the oxygen electrode in the case of NaO 2 as a discharge product, (2) a higher tolerance against atmospheric nitrogen as no stable nitride exists, (3) cell concepts with a molten sodium electrode [26], or (4) the availability of beta-alumina as a solid electrolyte that might enable cell concepts including solid membranes.
Considering all of these aspects, lithium-oxygen and sodium-oxygen batteries are, in theory, very attractive means for energy storage, but the development of practical cells is an ambitious goal. Even in the best scenario, such cells are unlikely to be developed for EV applications. However, the major showstopper for the development of rechargeable alkali-air devices is that the cell systems usually suffer from severe side reactions that hinder stable cycling over a large number of cycles. As will be discussed below, the sodium-oxygen cell indeed shows some promising advantages over the lithium system, but several fundamental challenges must be understood and solved before the development of a practical battery becomes feasible.
Classification of voltage profiles
The basic properties of a cell reaction can be easily discerned from diagrams showing the voltage profiles (discharge/charge curves) as their shape provides direct information on the complexity, reversibility and efficiency of the cell reactions. At moderate currents, most of the Li/O 2 and Na/O 2 batteries show quite similar discharge curves: the discharge voltage is more or less constant and comparably close to the theoretical cell potential. The discharging stage ends with a sudden potential drop ("sudden death"). The charging curves, however, vary significantly and heavily depend on the cell configuration (sodium or lithium cell, type of electrolyte, use of catalysts, type of GDL, etc.). So in order to more easily discuss the experimental results, the classification of the voltage profiles according to the shape of the charging curves is useful ( Figure 4).
The starting point of the matrix is the ideal cell reaction, classified as Type 1A. The voltage profile is characterized by negligible overpotentials for discharge and charge and a Coulombic efficiency of Φ = 100%, that is, the charging voltage is close to its theoretical value and charging ends with a sudden increase in cell potential as soon as all discharge products are decomposed. Based on this ideal cell reaction, the following matrix can be derived.
Type 1:
The combined overpotentials (sum of the overpotentials during discharge and charge) approach zero, meaning that kinetic limitations are negligible.
A: Coulombic efficiency = 100%. The cell reaction is completely reversible.

B: Coulombic efficiency < 100%. The reaction is only partially reversible. Possible reasons are that some of the discharge product became electrochemically inactive, lost contact to the electrode, or underwent irreversible side reactions with other cell components.

C: Coulombic efficiency > 100%. Either electrochemical side reactions or a so-called shuttle process (chemical shortcut) between both electrodes takes place. A shuttle process can be intentional (e.g., overcharge protection in LIBs) or unintentional (e.g., the polysulfide shuttle in lithium-sulfur batteries). Unless it is intentional, Coulombic efficiencies exceeding 100% are always a sign of undesired side reactions. Note that in this case the Coulombic efficiency of the desired cell reaction is also below 100%; values exceeding 100% simply arise from the fact that the shuttling/side reactions give rise to additional external currents, leading to charging capacities that exceed the discharge capacities.
Type 2:
A considerably high combined overpotential occurs and the cell kinetics are sluggish. Various processes can contribute to the overpotential; using catalysts or optimizing the transport properties might be effective strategies for improvement.
Type 3:
The voltage continuously increases during charging and might exhibit additional plateaus. Such a behavior indicates a more complex electrode reaction. In most cases, this is a strong indication of undesired side reactions. Additional plateaus during charging can originate from the electrochemical decomposition of side products stemming from undesired side reactions between cell components and the discharge product. For example, Li 2 O 2 can react with the electrolyte to form Li 2 CO 3 , which decomposes during charging at high voltages. Another possibility is that the cell discharge was incomplete (e.g., the discharged state is a mixture of Na 2 O 2 and NaO 2 ) and the different discharge products decompose at different potentials during charging.
The matrix certainly includes some simplifications: side reactions might be time dependent, the voltage profile can change during cycling, the overpotential increases with current density, etc. However, the matrix allows for a straightforward classification of the large number of different experimental results published. Briefly, the more the voltage profile differs from the ideal case (Type 1A), the more challenges have to be tackled to achieve a reversible cell reaction. So far, most metal-oxygen batteries show the following behavior when cycled at moderate rates: Type 1B is found for Na/O 2 cells with NaO 2 as discharge product. Types 2C, 3B, and 3C are found for Li/O 2 and Na/O 2 cells with either Li 2 O 2 , Na 2 O 2 , or Na 2 O 2 ·2H 2 O as discharge product.
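The matrix is simple enough to be written down as a small decision rule. The following sketch encodes it in Python; the numeric thresholds (what counts as a "negligible" overpotential, the efficiency tolerance) are illustrative assumptions of ours, since the text deliberately keeps these criteria qualitative:

def classify_profile(overpotential_v, coulombic_eff,
                     sloping_charge=False, eta_max=0.3, tol=0.02):
    """Map a cycling result onto the Type 1-3 / A-C matrix."""
    if sloping_charge:
        number = "3"   # rising charge voltage and/or extra plateaus
    elif overpotential_v <= eta_max:
        number = "1"   # kinetic limitations negligible
    else:
        number = "2"   # sluggish kinetics
    if abs(coulombic_eff - 1.0) <= tol:
        letter = "A"   # fully reversible
    elif coulombic_eff < 1.0:
        letter = "B"   # partially reversible
    else:
        letter = "C"   # shuttle or electrochemical side reactions
    return "Type " + number + letter

print(classify_profile(0.2, 0.90))         # Na/O2 with NaO2 -> Type 1B
print(classify_profile(1.2, 1.05, True))   # typical Li/O2   -> Type 3C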
It is important to note that capacity values, Q, of metal-oxygen cells are presented differently from what is usually done. The common way in battery research is to state the capacity in mAh per gram of active material, that is, per gram of LCO or sulfur, for example. This is possible because the electrode contains all active material and the battery is a closed system. In open metal-oxygen batteries, the active material (oxygen) is not part of the electrode and the discharge product forms as a new phase during discharge. Therefore, capacity values are usually given in mAh per gram of carbon support. As the absolute amount of carbon used is usually very small, the reported capacity values can reach very high numbers, easily exceeding 1000 mAh/g. Stating this value alone, however, is clearly not sufficient to judge the performance of the cell and may easily mislead the uninformed reader [22,27]. At a minimum, the carbon loading (mg/cm 2 ), electrode size, thickness of the carbon layer (if known) and the total amount of charge should be stated. Given this, the charge density (mAh/cm 3 ) and areal capacity (mAh/cm 2 ) can be calculated and benchmarked against commercialized LIB materials (approximately 1-4 mAh/cm 2 and 350-600 mAh/cm 3 ). A comparable problem is that the common definition of the C rate cannot be applied to metal-oxygen cells without further assumptions; therefore, discharge and charge rates are usually given as current density (calculated using the cell cross section).

Early on, Lyall filed a patent application on "A room-temperature-operated fuel cell comprising an oxygen electrode, a lithium metal-containing electrode, and an electrolyte comprising an inert, aprotic organic solvent […], which contains an inorganic or organic ionizable salt […]" [28]. Interestingly, the components of this Li/O 2 battery are remarkably close to those utilized today. The pioneering work on rechargeable, room temperature Li/O 2 batteries with a non-aqueous electrolyte can be summarized as follows. In 1996, Abraham et al. reported on "A polymer electrolyte-based rechargeable lithium/oxygen battery" [29]. This cell could be recharged at room temperature at least three times at potentials as low as 3.8 V. In 2002, Read characterized a Li/O 2 cell comprising different carbon materials and different electrolyte formulations [30]. This was the first work to analyze and correlate the amount of consumed gaseous oxygen with the transferred electric charge, finding that this value varies strongly with electrolyte composition. As will be discussed in the following sections, this kind of characterization is crucial for both evaluating and understanding aprotic Li/O 2 cells.

Heterogeneous catalysts such as MnO 2 or Au were shown to promote the decomposition of the aprotic electrolyte rather than the oxygen evolution reaction (see also Figure 5) [42]. Although both the functionality and the necessity of heterogeneous catalysts in Li/O 2 cells remain unresolved, the search for improved heterogeneous catalysts for better cyclability is still the subject of many new articles on Li/O 2 batteries. The most promising catalyst material, ruthenium nanocrystals, was reported by Sun et al.; the corresponding cells show a Type 3A hysteresis (see Figure 4) with a notably low charge potential. On the electrolyte side, a range of solvent classes has been investigated [55][56][57], among them sulfoxides (DMSO) [58][59][60], amides [61,62], and others [62][63][64].
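Coming back to the capacity normalization discussed at the beginning of this subsection, the conversion from a headline "per gram of carbon" value to the areal and volumetric figures requested above is short enough to sketch; the loading and thickness in the example are illustrative assumptions:

def normalize(q_mah_per_g_carbon, loading_mg_cm2, thickness_um):
    q_areal = q_mah_per_g_carbon * loading_mg_cm2 / 1000.0  # mAh/cm2
    q_vol = q_areal / (thickness_um * 1e-4)                 # mAh/cm3
    return q_areal, q_vol

# 1000 mAh/g carbon with a thin, lightly loaded electrode (0.5 mg/cm2, 10 um)
print(normalize(1000, 0.5, 10))   # -> (0.5, 500.0)

Despite the impressive per-gram number, the resulting 0.5 mAh/cm 2 falls well below the 1-4 mAh/cm 2 of commercialized LIB electrodes, which is exactly the point made above.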
The ether-based glyme solvents with the general structure CH 3 -O-(CH 2 -CH 2 -O) n -CH 3 with n = 1-4 are the current state-of-the-art solvents [65][66][67][68][69], although they are not entirely stable; a solvent with better performance still has to be found. Adams et al. recently reported on a chemically modified monoglyme (DME), 2,3-dimethyl-2,3-dimethoxybutane, as a promising solvent, as it leads to significantly lower CO 2 evolution (see the DEMS section below) and lower overpotentials for both discharge and charge [70]. Analogous to lithium-sulfur batteries, the use of lithium nitrate (LiNO 3 ) seems to improve the cyclability of Li/O 2 cells as well. In publications by Liox Power Inc., it was shown that LiNO 3 improves the stability of the lithium electrode through solid electrolyte interphase (SEI) formation [61]. Kang et al. showed that it also leads to an improved stability of carbon at the cathode [71].
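As an aside, the glyme family is regular enough that its members can be generated programmatically from the repeat unit given above; the small sketch below builds their SMILES strings (the names are added for orientation):

def glyme_smiles(n):
    """SMILES for CH3-O-(CH2-CH2-O)n-CH3."""
    return "CO" + "CCO" * n + "C"

names = {1: "monoglyme (DME)", 2: "diglyme", 3: "triglyme", 4: "tetraglyme"}
for n in range(1, 5):
    print(n, names[n], glyme_smiles(n))
# n = 1 yields COCCOC, i.e., 1,2-dimethoxyethane (DME)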
Differential electrochemical mass spectrometry (DEMS) studies:
Electrolyte decomposition is a major drawback that has made DEMS studies indispensable in Li/O 2 cell research. Today, this real-time analysis of the gaseous species consumed or released during cell cycling is a necessary standard technique.
In an ideally operating cell, only oxygen (O 2 ) evolves during recharge, but in reality, other products such as CO 2 , H 2 O or H 2 are detected and give evidence of unwanted side reactions. Therefore, DEMS or online electrochemical mass spectrometry (OEMS) was introduced into the Li/O 2 battery field and is now one of the most important, yet still seldom employed, diagnostic tools of current research [46,[72][73][74][75][76][77]. Figure 5 shows the potential of DEMS analysis when comparing different electrolyte and oxygen electrode materials in a Li/O 2 cell [42]. Figure 5a,d shows the galvanostatic cycling characteristics for a PC:DME electrolyte and a pure DME electrolyte, respectively. For both electrolytes, in addition to a pure carbon electrode, heterogeneous catalysts such as Pt, Au and MnO 2 were also tested. It was shown that the catalysts (especially in combination with the PC:DME electrolyte) lead to a significant reduction of the charge overpotential, in the case of Pt by almost 1 V in comparison to pure carbon. However, the corresponding DEMS data in Figure 5b,c clearly prove that only minor amounts of oxygen (O 2 ) and mainly CO 2 are evolved during charging of the cell. Thus, by means of DEMS, McCloskey et al. could clearly prove that the improved rechargeability due to the heterogeneous catalysts is not related to an improvement of the Li 2 O 2 decomposition, but rather to the promotion of electrolyte decomposition. In contrast, in the pure DME electrolyte, oxygen evolution is indeed observed. However, in this case, the catalyst materials had almost no impact on the charge overpotential and again only led to an increased evolution of CO 2 .

2.3.1.5 Number of electrons per oxygen molecule, e − /O 2 : As already mentioned above, Read observed that in certain electrolytes the oxygen consumption during discharge was too low for the sole formation of Li 2 O 2 and proposed that Li 2 O is formed concomitantly [30]. Looking back at these results, one can now safely assume that Read observed the partial decomposition of the electrolyte during discharge rather than the formation of Li 2 O species. Hence, it is of crucial importance to understand that for metal-oxygen cells the reversibility cannot be proven by stating Coulombic efficiencies alone. It is, as introduced by Read, the ratio between consumed or released oxygen and the amount of transferred charge that reveals the true reversibility. For an ideal Li/O 2 cell, where Li 2 O 2 is reversibly formed, two electrons are transferred for each reacting oxygen molecule, or 2.16 mAh for 1 mL of gaseous oxygen at 298 K and 10 5 Pa. Any deviation from this ratio is a strong indication of (partial) malfunction, and hence this value is essential, especially when new electrolyte or electrode components are tested. A simple but effective way to measure this ratio is the use of a pressure sensor and a hermetic gas reservoir, as introduced by McCloskey et al. [46,78], or quantitative DEMS/OEMS, which in addition allows for the identification and separation of the gaseous reactants [42,60,66,68,74]. In addition to the analysis of gaseous reactants, first attempts have also been made to quantify the amount of discharge product formed [67,[78][79][80]. This will also be an important step towards a true evaluation of reversibility.
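The 2.16 mAh per mL of oxygen quoted above follows from the ideal gas law and the Faraday constant, and the same arithmetic turns a measured charge/gas-volume pair into an e − /O 2 ratio. A minimal sketch:

R = 8.314     # J/(mol K)
F = 96485.0   # C/mol

def mah_per_ml_o2(n_electrons=2, t_kelvin=298.0, p_pascal=1e5):
    mol_o2 = p_pascal * 1e-6 / (R * t_kelvin)   # ideal gas, V = 1 mL
    return n_electrons * mol_o2 * F / 3.6       # C converted to mAh

print(mah_per_ml_o2())   # ~2.16 mAh per mL O2 for a 2 e-/O2 process

def e_per_o2(charge_mah, gas_ml):
    """Measured electrons per O2; 2.0 is the ideal Li2O2 value."""
    return 2.0 * charge_mah / (gas_ml * mah_per_ml_o2())

Values deviating from 2 (e.g., because CO 2 rather than O 2 evolves) immediately flag side reactions.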
2.3.1.6 Electrode materials: Obviously, a Li/O 2 cell is a very reactive environment and it seems likely that the different oxygen species also react with other components of the oxygen electrode. Black et al. exposed battery components to potassium superoxide dissolved in aprotic liquids and found that polyvinylidene fluoride (PVDF), a common binder material, decomposes while lithium fluoride (LiF) is formed [81]. They suggest that LiO 2 , a strong base that is formed as an intermediate in a Li/O 2 cell, extracts protons from the PVDF polymer. From the thermodynamic point of view, carbon is also reactive towards, for example, Li 2 O 2 or oxygen at high oxidative potentials. To probe this, McCloskey et al. employed a 13 C carbon electrode and monitored via DEMS the CO 2 species evolved during the charge process [82]. The appearance of 13 CO 2 at the end of the charge process was taken as evidence of carbon oxidation. Similar findings were made by Thotiyl et al. (Figure 6), who proposed that carbon oxidation can be avoided as long as potentials remain below 3.5 V vs Li/Li + [83]. The same group also investigated non-carbon electrodes, such as nanoporous gold or titanium carbide (TiC) [60,84]. Both materials are claimed to significantly improve the cycle performance compared to carbon electrodes due to a higher chemical stability towards lithium oxide species. On the other hand, the solvent employed in their study (DMSO) is known to be unstable in Li/O 2 cells [85,86]. Notwithstanding the above, the understanding of electrode corrosion and the search for stable electrode materials, either modified carbons or non-carbon materials, is of crucial importance for a reliable Li/O 2 battery.
2.3.1.7 Particle growth and dissolution: At first glance, the chemistry of a Li/O 2 cell may appear quite simple; however, due to worldwide research efforts within the last four years, it was recognized to be, in fact, a very complex cell chemistry. As a consequence, it was necessary to refocus on fundamental aspects such as the growth and dissolution processes of Li 2 O 2 particles during cycling on a microscopic scale. Various morphologies of Li 2 O 2 deposits are reported in the literature. On the one hand, so-called Li 2 O 2 "donuts" or toroids are reported that grow to a diameter of up to 1 µm, depending on solvent and cycling conditions (see Figure 7). On the other hand, thin film coverage of the carbon electrode is found. It is reported that at low current densities large toroid-like particles form and that at high current densities Li 2 O 2 film formation takes place [32,87]. Interestingly, Read basically made the same observation in 2002 and concluded that large particles could only grow if the oxide (Li 2 O 2 ) is (a) soluble in the electrolyte, (b) able to migrate on the electrode surface, or (c) capable of catalyzing the oxygen reduction [30]. Theoretical studies are particularly focused on possibility (c) and look for electric transport in Li 2 O 2 . Since Li 2 O 2 is an intrinsic wide band gap insulator, additional transport mechanisms such as transport along metal-type surfaces or hole polaron transport have been proposed [88][89][90][91]. The assumption of a soluble redox-active species (e.g., soluble O 2 − ), analogous to the polysulfides in lithium-sulfur or sodium-sulfur batteries, has only very recently been seriously taken into account. Viswanathan et al. suggest that Li 2 O 2 grows only to film deposits of 5-10 nm in thickness because charge transport through the Li 2 O 2 layer can only proceed by hole tunneling [92,93]. In a very recent study, they propose that the comparably large donut structures can only be observed in the presence of water in the electrolyte, which leads to soluble superoxide species [94]. Their findings, however, are in contrast to those of Zheng et al., who were able to operate a model all-solid-state Li/O 2 cell, without any liquid electrolyte, in an environmental SEM and observed the formation of toroid particles larger than 500 nm [95]. To conclude, even the dissolution process of Li 2 O 2 during battery operation is not fully understood and continues to be a part of research efforts.

2.3.1.8 Redox mediators: One strategy to lower the large charge overpotential is the addition of soluble redox mediators (RMs), which are oxidized at the electrode and in turn chemically oxidize the Li 2 O 2 deposits [97]. Lithium iodide [98] and TEMPO [99] have also recently been studied as RMs with promising results (see Figure 8). It is worth noting that redox mediators (also called "relays") are used in other applications as well for the improvement of poor electrode kinetics.
An interesting and complementary approach is to increase the solubility of oxide species (e.g., Li 2 O 2 ) in the liquid electrolyte, which would allow fast transport of oxide species to active electrode sites; Lim et al. recently reported on such an approach [100]. As these approaches are quite new, several questions, such as the long term functionality and stability of the molecular additives in a Li/O 2 battery, need to be investigated. Nevertheless, we believe that major improvements are possible through chemical tailoring of the molecules with respect to the desired functionality.
In conclusion, several challenges remain for the development of aprotic Li/O 2 cells with competitive performance. Within the last few years, more and more researchers have focused on the chemical processes taking place during operation of metal-oxygen batteries, which will surely lead to a deeper understanding of Li/O 2 batteries and of their potential for application. This is remarkable, especially in the fast moving field of battery research, as experimental mechanistic studies are usually time-consuming and require both careful execution of experiments and the use of complex and often expensive analytical methods.
The sodium-oxygen (Na/O 2 ) battery
The sodium-oxygen battery is based on the same cell concept as the lithium-oxygen battery; however, only very little literature is available to date. The published reports are summarized below and grouped by their voltage profiles (see Figure 4 for graphical representations of the different types). Some related studies, including carbon dioxide assisted cells and high temperature cells, are also included; these reports are shown in grey and will be discussed at the end of this literature survey. In addition, two review papers by Das et al. [112] and Ha et al. [113] have been published very recently.
Peled et al. were the first to publish an electrochemical cell based on the reaction of sodium with oxygen, in 2010 [26]. The cell was adapted from a fuel cell design and consisted of a molten sodium electrode, a polyglyme/PC (90:10) based electrolyte with different additives and a Pt-containing carbon electrode. The cell operated at 105-110 °C. The high temperature concept with a molten anode was chosen for several reasons: counteracting the sluggish cathode reactions, lowering the cell impedance, eliminating dendrites and minimizing interference with water and carbon dioxide. On the other hand, the high reactivity towards the electrolyte was an issue. The cell discharged at 1.75 V (100 µA) and was charged at 3.0 V (50 µA). The product of a full discharge was assumed to be sodium peroxide, without further proof by analytical techniques. Later on, the same group published a follow-up study with the main focus on investigating SEI formation and sodium plating/stripping in an ionic liquid (IL) based electrolyte [114]. Na 2 SO 4 was added to the electrolyte as an SEI former. Although sodium plating/stripping was achieved for 300 cycles without internal short circuits, the efficiency of around 70-80% was still unsatisfactory. In general, these results underline that studying the reversibility of the ORR/OER reactions in metal-air batteries is not sufficient, as plating/stripping of the alkali metal also needs to be reversible in order to achieve a long cycle life. Cell discharge using this IL based electrolyte at 25 µA/cm 2 was characterized by a sloping voltage decrease; charging (250 µA/cm 2 ) mainly occurred at about 3 V. As we will see in the following, the overall cycling behavior of this cell is very different from that of cells operating with a solid sodium anode at room temperature.
In 2011, Sun et al. showed the first results on an aprotic, room temperature sodium-oxygen cell (Figure 10a) [101]. In contrast to Peled et al., they made use of a solid sodium foil as anode and a diamond-like carbon thin film electrode as cathode. In accordance with typical lithium-oxygen cells, they used 1 M NaPF 6 in EC:DMC 1:1 as the liquid, aprotic electrolyte. The cell setup was an H-shaped glass cell. Using transmission electron microscopy, selected area electron diffraction and Fourier transform infrared spectroscopy, sodium peroxide (Na 2 O 2 ) and sodium carbonate (Na 2 CO 3 ) were identified as discharge products. These products vanished during charge, with overpotentials exceeding 1 V, similar to lithium-oxygen cells. Overall, the cell performed just like a typical lithium-oxygen battery; however, the discharge potentials were slightly lower (around 2.4 V), as expected. In 2013, the same group (Liu et al. [103]) used graphene nanosheets as cathode and NaPF 6 dissolved in monoglyme as electrolyte. This way, discharge capacities as high as 9268 mAh/g carbon were achieved. Again, sodium peroxide was described as the discharge product and large overpotentials were observed (Figure 10b). In both cases, the voltage profile can be classified as Type 2C.
In 2012, Hartmann et al. [109] reported a sodium-oxygen battery with sodium superoxide (NaO 2 ) as discharge product. Unequivocal proof of superoxide formation was provided by X-ray diffraction, Raman spectroscopy and pressure monitoring. SEM studies revealed that, in contrast to Li/O 2 cells, for which nanoscopic Li 2 O 2 toroids are found, NaO 2 forms large micrometer-sized cubic crystallites (compare Figure 7 with Figure 11). The cells showed only very small combined overpotentials of about 200 mV during cycling, which was attributed to the kinetically favored one-electron transfer. Shortly after, similar findings were reported for potassium-oxygen cells.
Here, KO 2 forms during discharge and a very similar voltage profile was found [20]. The Coulombic efficiency of the sodium superoxide cell in the first cycle was around 90%; discharging and charging ended with a sudden voltage drop and increase, respectively. The voltage profile can therefore be classified as Type 1B, meaning that the cell cycles more ideally than Li/O 2 cells or Na/O 2 cells with peroxides as discharge products.
The achieved discharge capacity of 300 mAh/g carbon was relatively low due to the high mass of the free-standing electrode. On the other hand, the absolute capacities were comparably high. Cycle life, however, was poor and the capacity faded to virtually zero within ten cycles. The study also included a direct comparison of the cycling behavior of otherwise identical Na/O 2 and Li/O 2 cells. The latter showed a much smaller discharge capacity and the expected large overpotentials. Although the Na/O 2 cell with NaO 2 as discharge product shows a much more reversible cell reaction than the Li/O 2 cell, it should be noted that the Na/O 2 cell is not entirely free from side reactions either. Overall, this study provided clear evidence that lithium-oxygen and sodium-oxygen batteries can behave completely differently.
Later on, the same group published a more comprehensive study on their findings using a range of different methods including DEMS, pressure monitoring, XPS, SEM, UV-vis spectroscopy, XRD and Raman spectroscopy [78]. The reason why NaO 2 grows to such large crystals is not yet clear, but precipitation of NaO 2 from a supersaturated solution was suggested as a possible growth mechanism. XPS studies showed that the reason for the poor overall reversibility might be the decomposition of the conductive salt. Further, the issue of dendrite formation in Na/O 2 cells was discussed.

Figure 11: Discharge/charge curves (Type 1B) of a sodium-oxygen battery with NaO 2 as discharge product. The main differences compared to Li/O 2 cells are that only small overpotentials are observed and that the crystallite size of the discharge product is much larger (see SEM image on the right) [109].
Kim et al. studied the influence of the electrolyte solvent on the discharge product in sodium-oxygen cells [106]. The electrode was made of Ketjenblack, a typical high surface area carbon. Capacities of 2800 mAh/g and even 6000 mAh/g were reported for PC and tetraglyme, respectively. The voltage profiles were of Type 2C. The discharge products were not the same as previously reported in the literature: using FTIR spectroscopy and X-ray diffraction, it was found that sodium carbonate was the major discharge product for carbonate based electrolytes, whereas hydrated sodium peroxide (Na 2 O 2 ·2H 2 O) was the discharge product for tetraglyme. The authors suggested that the water molecules stem from the irreversible decomposition of the electrolyte. But comparing this result to the study by Hartmann et al., who found NaO 2 using diglyme as solvent, it becomes clear that a direct link between ether solvents and the formation of Na 2 O 2 ·2H 2 O cannot be drawn. Indeed, the reason why different groups find different discharge products is not yet clear.
Liu et al. studied the influence of nitrogen doping of the carbon electrode on the performance of sodium-oxygen batteries [103]. Compared to a pure graphene cathode, the doped one showed considerably higher discharge capacities, reaching up to 8600 mAh/g carbon . In both cases, Na 2 O 2 formed during discharge, as evidenced by XRD. Galvanostatic cycling and cyclic voltammetry revealed that nitrogen doping is effective in reducing the overpotentials during discharge and charge. The hysteresis, however, can still be classified as Type 3B. SEM was used to study the morphology of the discharge product as a function of the discharge current. In line with what is known from Li/O 2 cells, particles form at low currents, whereas film formation is observed at higher currents.
Only a short time later, another high capacity cathode was presented by Jian et al. [107]. They used a carbon nanotube electrode in combination with two different electrolytes, namely NaTFSI in tetraglyme and NaTfO in diglyme. Although the latter showed a higher discharge capacity (7530 mAh/g compared to 6000 mAh/g), the overall performance was similar. During discharge, hydrated sodium peroxide was formed, as evidenced by XRD. Charging started at small overpotentials but was quickly followed by a rapid increase in voltage. Only 50% of the capacity could be recovered during charging. The performance could be improved by shallow cycling at around 13% of the full capacity; however, all voltage profiles can be classified as Type 3B.
Additional physicochemical aspects of the Na/O 2 cell with NaO 2 as discharge product were discussed by Hartmann et al. in 2014 [110]. Here, pressure monitoring was successfully combined with the standard electrochemical methods of galvanostatic cycling and cyclic voltammetry. Furthermore, electrochemical pressure impedance spectroscopy (EPIS) was introduced as a tool to study the transport properties within the cell. With this, the experimental data were fitted by a quantitative microkinetic model based on the relevant parameters and transport processes describing the cell. Further, the solubility and diffusion coefficients of oxygen in several solvents were determined, and operation of the Na/O 2 cell under a mixed O 2 /N 2 gas atmosphere was demonstrated. Importantly, NaO 2 was found as discharge product despite the addition of nitrogen gas. On the other hand, the discharge capacity under synthetic air was much lower than under pure oxygen. This result underlines that metal-air batteries also need to be studied at lower oxygen partial pressures when aiming at practical applications.
Around the same time, two theoretical studies were published. Lee et al. studied the phase stabilities of different possible discharge products as a function of the oxygen partial pressure and calculated that NaO 2 and Li 2 O 2 , respectively, are the most stable phases under standard conditions [115]. Surface energies were calculated and used to predict the Wulff equilibrium shapes of the different phases. The cubic crystallites predicted for NaO 2 are well in line with what has been reported experimentally (see Figure 11). Finally, it was calculated that the OER from superoxides is kinetically favored compared to peroxides. Kang et al. studied the phase stabilities of sodium-oxygen compounds as a function of temperature, partial pressure and, importantly, also crystal size [116]. In contrast to the results of Lee et al., they found that Na 2 O 2 is the most stable phase at standard conditions in the bulk. In the nanometer regime, however, NaO 2 becomes more stable due to its lower surface energy. The thermodynamics of the different discharge products have also been compared in [27]; a graphical representation based on thermodynamic data of the bulk phases (T = 298 K, p = 1 bar) is shown in Figure 12. From the thermodynamic data it becomes clear that the discharge mechanism cannot simply be derived from the discharge potential. Moreover, the phase stability naturally depends on the oxygen partial pressure, meaning that NaO 2 or LiO 2 might become more stable than the peroxides at elevated pressures. For NaO 2 , the threshold can be estimated to be 133 bar, which explains well why the chemical synthesis of phase pure NaO 2 from Na 2 O 2 in autoclaves occurs at partial pressures and temperatures of around 280 bar and 475 °C [117].
The authors suggested that, as the energetic difference between NaO 2 and Na 2 O 2 is so small (about 12 kJ/mol), slight differences in the kinetic properties might lead to either of them as discharge product. A reasonable assumption for what controls the kinetics of the cell reaction is the type of carbon electrode. Indeed, the different groups reporting on Na/O 2 cells all used different carbon materials, which might explain the different findings. The authors therefore tested a range of different carbon materials but concluded that the type of carbon has no influence on the nature of the discharge product, as in all cases NaO 2 was found as the major discharge product. Overall, Type 1B behavior was found in all cases. The achievable capacities, however, were significantly affected by the type of carbon (Figure 13, left). Furthermore, shallow cycling at around 33% of the full capacity enabled cycling of the cell for more than 50 cycles with a capacity of 1666 mAh/g using a Ketjenblack electrode with 0.5 M NaOTf in diglyme as electrolyte.
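Returning to the 133 bar threshold mentioned above: one plausible way to reproduce it is to read the roughly 12 kJ/mol stability difference as the standard free energy of the gas-consuming reaction Na 2 O 2 + O 2 → 2 NaO 2 , so that ΔG(p) = ΔG° − RT ln(p/p°) and the threshold pressure is p = p° exp(ΔG°/RT). This interpretation is our assumption, not spelled out in the text:

from math import exp

R = 8.314    # J/(mol K)
T = 298.0    # K
DG0 = 12e3   # J per mol O2; assumed reading of the quoted 12 kJ/mol

print(exp(DG0 / (R * T)))   # ~127 bar, close to the ~133 bar quoted above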
Liu et al. substituted the commonly used carbon electrode with a nickel-based composite electrode consisting of nickel foam covered with NiCo 2 O 4 nanosheets [105]. NaClO 4 in monoglyme was used as electrolyte. The pure nickel foam was shown to be inactive. For the composite, however, a discharge capacity of 1762 mAh/g (at 20 mA/g, based on the mass of the nanosheets) was found. A strong capacity fade was observed during cycling. The voltage profiles can be classified as Type 3B/3C. IR spectroscopy and TEM/SAED were used to determine the discharge products. Sodium peroxide and, as a result of side reactions, Na 2 CO 3 were found. The electrodes after discharge were further studied by SEM. Flat sheets with a diameter of around 20 µm were found (Figure 13, right). Obviously, this morphology is very different from the cubic particles reported for cells with NaO 2 formation.
Another study discussing reasons for the different types of discharge products reported in the literature was published by Zhao et al. [111]. Vertically aligned carbon nanotubes grown on a steel substrate were used as oxygen electrode; sodium triflate in tetraglyme was used as electrolyte. The voltage profiles were of Type 1B and, consequently, NaO 2 in the form of cubic particles was observed as discharge product. The cell delivered a capacity of more than 4000 mAh/g carbon . Improved cycle life was achieved with shallow cycling at 750 mAh/g (19% DOD); more than 100 cycles were achieved this way. The rate performance was improved by electrochemically predepositing a thin layer of NaO 2 at low currents (67 mA/g). This procedure was applied to increase the overall number of nucleation sites for product formation during subsequent cycles at higher currents. By doing so, a capacity of around 1500 mAh/g was achieved at 667 mA/g, for example. An important feature of the study was that the cells were not only cycled under a static atmosphere in a sealed container but also under continuous gas flow. Pure oxygen or an Ar/O 2 (80/20) mixture was used. Interestingly, the authors found NaO 2 under static conditions and Na 2 O 2 ·2H 2 O under continuous gas flow. The authors suggest that humidity is likely to be introduced when applying a constant flow (presumably due to leakage or gas impurities). Charging was followed by XRD and it was found that Na 2 O 2 ·2H 2 O decomposes to form water, O 2 and NaOH, leading to higher overall potentials and a Type 3B behavior (see Figure 14). It is important to note that a continuous gas flow is closer to the operation mode of a practical cell operating with atmospheric oxygen. Further studies are therefore needed to clarify the source and impact of H 2 O on the cell reaction.
Yadegari et al. studied the relation between specific surface area and discharge capacity using chemical activation of commercial carbon black by NH 3 or CO 2 gas [108]. Sodium triflate in diglyme was used as electrolyte. The results can be summarized as follows: the longer the chemical treatment, the higher the specific surface area and the higher the discharge capacity. The major discharge product was Na 2 O 2 ·2H 2 O, although small amounts of Na 2 O 2 and NaO 2 were also detected by combining different methods. As the PVDF binder used in this study is known to be unstable against the superoxide radical, the authors suggested that the formation of the hydrated peroxide is related to binder decomposition. As a result of the complex mixture of discharge products, the charging curves were characterized by several steps. Overall, all voltage profiles were of Type 3C. The morphology of the electrode after discharge showed quite some similarities to the study by Liu et al. It was further shown that the discharge rate influences the voltage behavior during charging.
Overall comparison
For a better comparison of the published literature, we digitized the voltage profiles and grouped them according to the different discharge products. The result is shown in Figure 15. Groups finding sodium superoxide as discharge product observe a Type 1B behavior with low overpotentials and a sudden voltage increase once the end of recharge is reached. Efficiencies are typically above 80%. Groups finding Na 2 O 2 ·2H 2 O as discharge product observe a Type 3C behavior. Characteristic for this behavior are increasing potentials and no defined end point of charge, indicating a complex charging mechanism and side reactions. Different sources for the H 2 O have been suggested, but its origin is still a matter of debate. Groups finding Na 2 O 2 as discharge product usually observe voltage profiles with Type 2C or 3C behavior. A sudden or sloping increase in potential during charging and no defined end point of charge are observed in these cases.

Figure 16: The Na-S phase diagram. Redrawn from references [129,130]. The diagram also depicts the operating window of the commercialized high temperature cell and alternative cell concepts operating at low temperature (including room temperature) that are at the research level.
Related concepts
In addition to the studies discussed so far, some other related concepts have been suggested. Das et al. proposed a cell concept that mainly aims at CO 2 capture while at the same time generating electrical energy [102]. Their cells can therefore be described as Na/(O 2 + CO 2 ). The authors investigated the cell discharge behavior under different gas ratios and found that a 50:50 mixture of O 2 and CO 2 yielded higher discharge capacities than the single gases. Na 2 CO 3 and Na 2 C 2 O 4 were suggested as discharge products. No charging curves were shown, as the cell was designed as a primary cell. In a later study, the same group used an organic/inorganic hybrid liquid electrolyte in order to enable partial recharging [118]. The voltage profiles are of Type 3C and show combined overpotentials of up to around 2.5 V. The discharge product was found to be NaHCO 3 .
Hayashi et al. published results on a Na/O 2 battery with a mixed aqueous/aprotic electrolyte. The two electrolytes were separated by a Nasicon solid electrolyte [119]. Discharge capacities of about 600 mAh/g (based on the weight of Na and H 2 O) with NaOH as the discharge product were achieved, which is only 30% lower than the theoretical capacity of the cell reaction; however, no data on rechargeability were shown. The concept of combining different types of electrolytes has already been applied to Li/O 2 cells. But the authors point out that the much higher solubility of NaOH in aqueous electrolytes compared to LiOH might be an important advantage: clogging of the cathode by precipitated hydroxide might be delayed and an even higher energy density could be obtained.
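The quoted ~600 mAh/g can be checked against the aqueous cell reaction. Assuming the overall reaction 2 Na + 1/2 O 2 + H 2 O → 2 NaOH and normalizing, as in the text, to the combined mass of sodium and water (the oxygen comes from the gas phase):

F = 96485.0 / 3.6        # mAh per mol of electrons
m = 2 * 22.99 + 18.02    # grams per 2 NaOH: two Na plus one H2O
print(2 * F / m)         # ~837 mAh/g theoretical; ~600 mAh/g is ~30% lower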
Operating principles and general remarks
The lithium-sulfur battery system has been studied for several decades. The first patents and reports on lithium-sulfur batteries date back to the 1960s and 70s [120][121][122]. However, a rapid increase in research efforts and progress in development was only achieved within the last 10 to 15 years. The number of research publications is growing exponentially. The most studied cell concept is based on lithium as negative electrode and solid sulfur as positive electrode. Lithium sulfide (Li 2 S) is the final discharge product and the only thermodynamically stable binary Li-S phase, as shown in Figure 16a. The theoretical cell voltage of 2.24 V is comparably low, but due to the high capacity of sulfur (1672 mAh/g) the theoretical energy density by weight (2615 Wh/kg) exceeds that of LIBs by a factor of five. The basic cell concept of a lithium-sulfur battery is depicted in Figure 2c. The main challenges of the lithium-sulfur battery are related to two intrinsic properties:

1. Sulfur and Li 2 S are insulators, and intimate contact to a conductive support and sufficiently small particle sizes are necessary to render the cell reaction complete. At the same time, the support must accommodate the volume change of 80% that arises from the difference in molar volumes of sulfur (15.5 mL/mol) and Li 2 S (28.0 mL/mol).

2. Formation of Li 2 S from sulfur does not occur directly but via a series of polysulfide intermediates (Li 2 S 2 and Li 2 S x , x > 2). Polysulfides of the stoichiometry Li 2 S x are highly soluble in commonly used electrolytes, meaning that the active material diffuses out of the positive electrode and eventually reacts with the negative electrode or deposits somewhere else in the cell, where it remains inactive.

Figure 18: Schematic illustration of the polysulfide shuttle mechanism after Mikhaylik and Akridge [123]. Long polysulfides diffuse towards the lithium electrode where they are reduced to shorter polysulfides. Subsequently, these shorter polysulfides diffuse back to the positive electrode where they are oxidized. As a result, a cyclic process ("shuttle mechanism") develops that corresponds to a chemical shortcut of the cell. Illustration adapted from [124].

So cycling sulfur in a Li/S 8 battery is essentially based on dissolution and precipitation processes, as schematically illustrated in Figure 17. Despite several efforts, however, it is still not well understood in which amounts and stoichiometries polysulfides form. The polysulfide solubility leads to a parasitic phenomenon called the "shuttle mechanism" [123] (Figure 18) that corresponds to a chemical shortcut of the cell. This effect essentially leads to continuous self-discharge during discharge, charge and rest. The degree of the shuttle effect heavily depends on the experimental conditions: shuttling becomes stronger at small currents and/or higher temperatures [123,124]. Moreover, sulfur (S 8 ) itself is mobile and was found to diffuse rapidly [125].
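The statement that shuttling becomes stronger at small currents can be made plausible with a toy model, loosely inspired by the shuttle-factor treatment of Mikhaylik and Akridge [123]: during charging, the electrochemical current produces long polysulfides at a rate proportional to the current, while the chemical shuttle destroys them at a rate proportional to their concentration. All numbers below are illustrative, not fitted to any cell:

def high_plateau_soc(i_charge, ks=0.2, q_h=1.0, t_end=200.0, dt=1e-3):
    """Net conversion of the high plateau after charging for t_end.

    ds/dt = i/q_h - ks*s: the current converts species while the
    shuttle chemically reverts them at the lithium electrode.
    """
    s, t = 0.0, 0.0
    while t < t_end and s < 1.0:
        s += (i_charge / q_h - ks * s) * dt
        t += dt
    return min(s, 1.0)

print(high_plateau_soc(1.0))   # large current: full conversion (1.0)
print(high_plateau_soc(0.1))   # small current: stalls near i/(ks*q_h) = 0.5

At small currents the production rate never outruns the shuttle, so the charge never completes, which is the chemical shortcut described above.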
The complex cell reaction gives rise to a characteristic discharge/charge profile, as shown in Figure 19. Both the discharge and the charge voltage profiles consist of two voltage plateaus occurring at about 2.3 V and 2.1 V (discharge) or 2.3 V and 2.4 V (charge), respectively. Within the higher discharge plateau the soluble intermediate polysulfides are formed, corresponding to the reduction of S 0 to S −0.5 (e.g., S 8 + 4 Li + + 4 e − → 2 Li 2 S 4 ), accounting for a quarter of the overall capacity. Further reduction leads to the formation and precipitation of insoluble species, resulting in an overall two-electron reduction of S with Li 2 S as end product. During the following charge, Li 2 S is reconverted to S 8 via intermediate polysulfides, ideally according to 8 Li 2 S → S 8 + 16 Li + + 16 e − . The characteristic minimum between the upper and the lower discharge plateau is attributed to the nucleation of solid products [126,127]. The exact positions of the potentials also depend on the electrolyte solvent [128].

Figure 17: Schematic illustration of the reduction processes at the positive electrode during discharge of a Li/S 8 battery. Reduction of sulfur S 8 proceeds over several soluble polysulfide intermediates (Li 2 S x ) before the final precipitation of solid phases, Li 2 S and eventually Li 2 S 2 , occurs. The cell discharge can also be followed by UV-vis spectroscopy, as different polysulfides give rise to different colorations. Illustration adapted from [124].
As a result of these effects, the Coulombic efficiency is low, utilization of sulfur in Li/S 8 cells is poor and the capacity diminishes within a few cycles. Therefore special measures have to be taken in order to improve the performance of Li/S 8 cells.
The most frequently applied strategy to improve the cell performance is to use (nano)porous carbon materials as support that provide high surface area and electronic conductivity and, at the same time, prevent or delay the loss of active material towards the electrolyte. During the last 5-10 years, a large number of different sulfur/carbon nanocomposite materials have been studied, and considerable improvements in terms of sulfur utilization and cycle life were often achieved compared to cells with conventional carbon materials. Overall, several tens to several hundreds of cycles with capacity values around 700-1000 mAh/g are realized nowadays, and the combined overpotentials in the first cycles are roughly around 200 mV. Whether the improvements are really due to specific structural properties of the nanocomposite is, however, not easy to answer considering the complexity of the possible reactions in a lithium-sulfur cell. It also turned out that the characterization of sulfur/carbon nanocomposite materials may pose problems and results can be misleading due to the high sulfur mobility [125]. The main issue, however, is that the performance of Li/S 8 cells is particularly sensitive to the properties of the electrode (thickness, sulfur content, sulfur loading, preparation method, etc.) and to the amounts of electrolyte and lithium. In fact, quite reasonable results can be obtained with commercially available carbon materials once the electrode preparation is optimized [131,132]. Assessing the achievements of the last years, long cycle life and high sulfur utilization have in general been obtained only for low sulfur loadings (often <1 mg/cm 2 ) and a large excess of both electrolyte and lithium. The excess of lithium and electrolyte is necessary as both continuously react with each other during cycling. However, low loadings and a large excess of lithium and electrolyte are no option for practical devices, and the key to competitive Li/S 8 cells will be to make cathodes with high sulfur loading (about 5 mg/cm 2 ) function at a low electrolyte/sulfur ratio [131,[133][134][135][136][137]]. Overall, to enable a high energy battery, the electrolyte:sulfur ratio should be smaller than 5:1 (for comparison, the ratio of electrolyte to active material in conventional LIBs is around 1:3) and the sulfur content of the electrode should be at least 70%, providing at least 2-4 mAh/cm 2 (i.e., the typical areal capacity of LIBs).
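To see what these targets mean in practice, the areal capacity can be estimated from the sulfur loading and utilization (1672 mAh/g is the theoretical capacity quoted in the text; the loadings and utilizations below are illustrative):

Q_TH_S = 1672.0   # mAh per gram of sulfur

def areal_capacity(loading_mg_cm2, utilization):
    return Q_TH_S * loading_mg_cm2 / 1000.0 * utilization   # mAh/cm2

print(areal_capacity(0.8, 0.9))   # research-style electrode: ~1.2 mAh/cm2
print(areal_capacity(5.0, 0.5))   # practical 5 mg/cm2 at 50%: ~4.2 mAh/cm2

A lightly loaded laboratory electrode thus stays below the LIB benchmark even at excellent utilization, whereas a 5 mg/cm 2 cathode meets the 2-4 mAh/cm 2 target even at modest utilization.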
Besides the attempts to improve the cathode design, a number of other strategies are also followed in order to improve the performance of lithium-sulfur batteries (see section The lithium-sulfur (Li/S 8 ) battery). The cell concept shown in Figure 2c is by far the most studied one, but other concepts have also been proposed. The high solubility of polysulfides can be used to design cells with a liquid electrode (catholyte), for example. Although this concept was already studied many years ago [121], it only recently regained attention [138]. On the other hand, solid-state concepts are also being considered [139][140][141].
The theoretical energy densities of the lithium-sulfur battery are summarized in Table 3. From the above arguments it becomes clear, however, that experimental energy densities will be much lower. No lithium-sulfur cell has been commercialized yet, but several companies have announced (gravimetric) energy densities for rechargeable cells significantly exceeding lithium-ion technology. Sion Power currently reports 350 Wh/kg on the cell level but aims for over 600 Wh/kg and 600 Wh/L in the near future [142]. Oxis Energy reports 300 Wh/kg (2014) and predicts 400 Wh/kg (forecast in 2016) [143]. The rate capability of lithium-sulfur cells is thought to be competitive with high-rate LIBs [144]. At moderate rates of C/10, the combined overpotentials of Li/S 8 amount to roughly 150-250 mV. By and large, the lithium-sulfur cell as a rechargeable energy store appears to have a realistic chance for commercialization, but it will have to compete with continuously optimized LIBs.
In contrast to the lithium-sulfur battery, the analogous room temperature sodium-sulfur battery has hardly been studied to date, but the challenges for the construction of well-functioning cells will be quite similar. However, the theoretical energy density of a Na/S 8 cell is roughly 50% smaller compared to the analogous Li/S 8 cell, due to the higher atomic mass of sodium. So if only energy density is considered, the Na/S 8 cell will not be competitive with LIB technology, both in terms of volumetric and probably also gravimetric energy density. Besides, the even larger volume change of the sulfur electrode during cycling (170% for Na 2 S formation compared to 80% for Li 2 S formation) will pose additional problems.
Table 3: Theoretical cell voltages, gravimetric and volumetric energy (Wh/kg, Wh/L) and charge (mAh/g, mAh/cm 3 ) densities for lithium- and sodium-sulfur batteries with a metal anode. Due to the large differences in their densities, the volumetric energy densities of metal-sulfur cells strongly depend on whether they are in the charged or discharged state. Charge densities refer to the discharged state, that is, to the sulfides. Thermodynamic data were derived from HSC Chemistry for all compounds in their standard state at 25 °C or 300 °C. Densities at 300 °C are estimates.
In contrast to LIBs, metal-sulfur cells are usually assembled in the charged state. The theoretical capacity of the positive electrode is therefore usually given based on the mass of sulfur only, so the theoretical capacity is Q th = 1672 mAh/g for full reduction of sulfur to form Li 2 S or Na 2 S. A look at the phase diagrams shows that different cell reactions might occur in Li/S 8 and Na/S 8 cells, as several Na 2 S x compounds are thermodynamically stable at room temperature. This means that during cell discharge, polysulfides might not only dissolve in the electrolyte, but may also precipitate as solids.
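As a plausibility check of the numbers quoted above, the following sketch reproduces the roughly 50% lower gravimetric energy density of the Na/S 8 cell and the 80% versus 170% volume expansion. It is added here for illustration only; the densities and average cell voltages are assumed handbook/estimate values, not figures taken from Table 3.

```python
# Rough comparison of Li/S8 and Na/S8 in the fully discharged state.
F = 96485.0  # Faraday constant, C/mol

def capacity_mAh_per_g(molar_mass: float, n_electrons: int = 2) -> float:
    """Specific capacity of the discharged phase M2S in mAh per gram."""
    return n_electrons * F / (3.6 * molar_mass)

M_S, M_Li2S, M_Na2S = 32.06, 45.95, 78.04        # g/mol
rho_S, rho_Li2S, rho_Na2S = 2.07, 1.66, 1.86     # g/cm^3 (assumed literature values)
V_avg_Li, V_avg_Na = 2.2, 1.85                   # V (assumed average cell voltages)

E_Li = capacity_mAh_per_g(M_Li2S) * V_avg_Li     # ~2570 Wh/kg of Li2S
E_Na = capacity_mAh_per_g(M_Na2S) * V_avg_Na     # ~1270 Wh/kg of Na2S, i.e., about half

# Volume expansion of the sulfur electrode upon full discharge:
expansion_Li = (M_Li2S / rho_Li2S) / (M_S / rho_S) - 1   # ~0.79 -> about 80%
expansion_Na = (M_Na2S / rho_Na2S) / (M_S / rho_S) - 1   # ~1.71 -> about 170%
print(E_Li, E_Na, expansion_Li, expansion_Na)
```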
Whether the stability of solid Na 2 S x polysulfides is of advantage or disadvantage for a reversible cell reaction remains an open question, but - generally speaking - solid phases are likely to have detrimental effects on the cell kinetics compared to dissolved Na 2 S x species. It is worth noting that Na 2 S 3 has also been reported as a stable phase; however, it turned out to be a eutectic mixture of the stable polysulfides Na 2 S 2 and Na 2 S 4 [130]. The Na-S phase diagram (see Figure 16b) is also instructive for the high-temperature Na/S 8 cell, which operates with molten electrodes and a solid electrolyte. As the polysulfides Na 2 S x have high melting points, the cell reaction at around 300 °C is limited to a narrower stoichiometric window, meaning that full reduction of sulfur cannot be achieved. The theoretical energy density for high-temperature Na/S 8 cells is therefore limited. In practice, 200 Wh/kg has been achieved on the battery level.
Overall, one can look at the room-temperature Na/S 8 cell from two perspectives: (1) Compared to a Li/S 8 cell, substituting lithium by the more abundant sodium appears attractive, and the same strategies for improving Li/S 8 batteries (sulfur utilization, cycle life) might apply for Na/S 8 batteries. An advantage for sodium could be that sodium solid electrolytes are commercially available, which would enable efficient protection of the metal anode from polysulfides. On the other hand, the theoretical energy densities are lower and the larger volume expansion might lead to severe problems. (2) Compared to a high-temperature Na/S 8 cell, decreasing the operating temperature would be attractive because safety and corrosion issues are reduced. In addition, if full reduction of sulfur to Na 2 S can be accomplished, an increase in the system's energy density might be possible.
A compromise could be to operate the cell at intermediate temperatures below 200 °C [145-147]. Here, the sodium anode (T m = 98 °C) can be either solid or liquid, a NASICON membrane (Na Super Ionic Conductor) or a beta-alumina membrane is used as the solid electrolyte, and the cathode is based on a mixture of sulfur or Na 2 S x in an organic solvent. Such an approach was already discussed in 1980 by G. Weddigen [148].
The lithium-sulfur (Li/S 8 ) battery:
As mentioned earlier, a considerable number of papers are currently being published in the field of lithium-sulfur batteries. This summary is intended to highlight the key strategies currently followed for improving the performance of Li/S 8 batteries. The same strategies might be adopted to improve the performance of the analogue room temperature Na/S 8 battery, although research in this field is still on an exploratory level. For a more comprehensive and complete overview on lithium-sulfur batteries, the authors refer to more specialized reviews [149][150][151][152][153][154][155].
The challenges of the Li/S 8 system concern all of its main components. Hence, the main approaches striving to solve these challenges address (1) cathode composition and architecture, (2) electrolyte composition and additives, and (3) improvements of or alternatives to the Li anode. Beyond the improvement of the single components - both from a fundamental and an engineering point of view - a comprehensive understanding of the complicated redox chemistry of the Li/S 8 system has to be obtained. Therefore, the demand for analytical and simulation studies of the electrochemistry is constantly growing. This section will close with an outlook on new cell design approaches that address the special chemistry of Li/S 8 batteries.
3.2.1.1 Cathode: The ideal cathode of a lithium-sulfur battery should provide the following features: (a) A high electronic conductivity and fine dispersion of the active material to achieve a complete active mass utilization and high rate capability. (b) A structure confining the active mass to prevent the loss of polysulfides and hence the shuttle effect. (c) A flexible structure to accommodate the volume changes during cycling.
(d) A sufficient active mass loading to compete at least with current lithium-ion batteries (LIBs). Points a-c can be addressed by developing and engineering conductive supports. Mostly porous carbon or carbon composite materials are used for this purpose. Again, we emphasize that the sulfur loading on the electrodes needs to be sufficiently high in order to achieve high energy densities in practice. For example, a sulfur loading of more than 2 mg/cm 2 and 100% sulfur utilization is necessary in order to reach technically relevant areal capacities of about 3.5 mAh/cm 2 . This aspect has often been overlooked in recent years but needs to be considered when claims about the practical rather than the academic relevance of new electrode architectures are made.
A few of the recent approaches are highlighted in the following. General remarks on the electrode preparation methods are given first.
Electrode preparation and binders: Intimate contact between carbon and sulfur is usually obtained by heating sulfur/carbon mixtures above the melting point of sulfur, leading to melt infiltration of the porous support. Some more specific approaches combine a first melting step followed by evaporation of excess surface sulfur [156] or deposition of sulfur over the gas phase [157]. Apart from some binder-free cathode approaches (see below), binders play a particularly important role when preparing the final electrodes from the sulfur/carbon mixtures. Beyond the ability to bond the cathode components and link them to the current collector, binders have to be flexible enough to accommodate the volume change. Furthermore, they should favor a maximum dispersion of the active material and the conductive agent and limit polysulfide dissolution. Established binders for LIBs such as polytetrafluoroethylene (PTFE) or polyvinylidene fluoride (PVDF) have long been used for Li/S 8 cells but may not provide sufficiently good properties.
Poly(ethylene oxide)/poly(ethylene glycol) (PEO, PEG) as one of the earliest alternative binders may improve cycle life [158,159] by electrolyte modification through partial dissolution. As first published by Sun et al. [160], gelatin as an environmentally benign and abundant binder shows improved bonding and helps to improve the dispersion of the active mass. It may also cause an improvement of the redox reversibility [160] and the rate capability [161]. Other binders, such as polyvinylpyrrolidone (PVP)/polyethyleneimine (PEI), show similar abilities [162]. Furthermore, the water-soluble binder SBR/CMC (styrene-butadiene rubber/carboxymethyl cellulose) favors a uniform distribution and a network-like cathode structure [163].
Porous carbon supports: In more recent studies by Guo and coworkers, an effective steering of the chain length of the active material was obtained by pore sizes smaller than the S 8 molecules of orthorhombic sulfur, which need a space of about 0.7 nm [166][167][168]. The shorter chain length polysulfides show strong adsorption to the carbon matrix and the unfavorable transition through soluble intermediate polysulfides is hindered, resulting in high cycle life at a lower discharge plateau of 1.9 V [153,169].
Especially for microporous supports, a sulfur loading exceeding 50% is difficult due to the limited overall porosity provided by microporous carbons [165,[169][170][171]. Mesoporous carbons are also able to trap polysulfides and provide space for a higher sulfur loading [127,170]. As published by Li et al. [172], there is always a tradeoff between complete filling with sulfur, resulting in the highest energy density, and partial filling, leading to better battery performance but lower energy output. Macroporous supports have been less investigated despite their high pore volume, as the open structure does not seem to favor polysulfide confinement. However, when the polysulfides are immobilized by a strong interaction with the matrix [170,173,174] or by the use of a highly viscous electrolyte [175], macroporous carbon frameworks may be useful. For both meso- and macroporous supports, nitrogen doping is promising to improve polysulfide confinement [176]. Bimodal or hierarchical porous carbons were used as a compromise to combine the confinement of sulfur in small pores with a higher sulfur loading enabled by larger pores. Bimodal pore structures were first published by Liang et al. [177]. Although possessing a 3D structure (see below), it should be noted that the CMK-3 ordered mesoporous carbon published by Ji, Lee and Nazar [178,179] was a major starting point for studying tailored, hierarchical carbon materials (see Figure 20).
Figure 20: ... [178], copyright 2009 Macmillan Publishers Ltd. Process of formation of S-TiO 2 yolk-shell structures via core-shell formation and partial dissolution of sulfur (right) [180]. TEM image of the yolk-shell structure with nanoparticles of 800 nm size and shell thickness of 15 nm. Figure adapted with permission from [180], copyright 2013 Macmillan Publishers Ltd.
A range of other special carbon nanostructures have been tested for Li/S 8 batteries. They are applied in pure form or in combination with conventional carbon materials such as carbon black or activated carbon. Interwoven networks can be obtained by using carbon fibers or nanotubes, for example [179,181]. Cao et al., Zhou et al. and others have reported on sandwich-like electrodes with two graphene layers incorporating the active material, one used as a lightweight current collector, the second used as a barrier for polysulfides [33,182,183]. On the other hand, graphene oxide sheets have been used for wrapping poly(ethylene glycol) covered sulfur particles to obtain confining structures [184].
To completely avoid polysulfide leakage, core-shell or yolk-shell structures have been developed to confine the active material inside an electronically and ionically conductive hull. Hollow carbon spheres (void up to 500 nm) with a porous shell (up to 50 nm thickness) can be obtained via hard-template nanocasting [157], for example. However, when dealing with an active material that undergoes volumetric expansion and constriction during cycling, closed structures can break. Therefore "yolk-shell" structures have been suggested that leave enough room for expansion. The latter approach was published by Cui and coworkers [180], comprising sulfur nanoparticles as the yolk inside a TiO 2 shell. The material showed excellent stability for more than 1000 cycles and high Coulombic efficiencies, but only low cathode loadings were reported.
Binder-free electrodes: As the additional weight of the binder reduces the overall energy density of Li/S 8 cells, binder-free electrodes are studied as an alternative. The preparation of binder-free electrodes also avoids the use of often toxic solvents that are necessary for conventional electrode preparation. Elazari et al. reported on a carbon fiber cloth that was able to maintain mechanical strength and conductivity during cycling [170], for example. Vertically aligned carbon nanotubes (VACNTs), directly grown via a CVD process on a metal current collector, were published by Dörfler et al. [185]. The high void volume (94 vol %) inside the ≈200 µm thick films was especially favorable for high sulfur uptake, as later shown by Hagen and Dörfler et al. [185,186]. Vertically aligned CNTs without a substrate were produced by Zhou [187] using an anodized aluminum oxide template. Another attempt was published by Manthiram et al., using self-interweaving MWCNTs as freestanding electrodes [188]. Overall, binder-free electrodes might be a viable alternative to standard electrodes. Areal loadings of 7.1 mg/cm 2 yielding areal capacities of about 5.5 mAh/cm 2 (50% S utilization) were achieved, although at a low rate of C/5, for example [185]. Lower loadings allow higher rates of up to 3.5C with specific capacities around 700 mAh/g after 25 cycles [187]. However, reports of more than 100 cycles have not been published yet.
Lithium-sulfide cathode: Li/S 8 cells are usually assembled in the charged state, which is less ideal with respect to safety. Cell assembly in the discharged state, that is, with Li 2 S as the positive electrode, is intrinsically safer and has another advantage: the use of anode materials such as Si [189] and Sn [190] and other alloys becomes feasible [189,191,192]. Beginning in the 1970s [193], numerous approaches for Li 2 S cathode formation and investigations of the basic principles have been published. As claimed by Yang et al. [194], when cycling Li 2 S as a cathode material, the first charge is hindered by a potential barrier originating from the slow charge transfer during the oxidation of Li 2 S to Li 2−x S, requiring a higher cut-off voltage of up to 4 V. In addition, the hygroscopic nature of Li 2 S prohibits handling in air. As stated above, Li 2 S is also an ionic and electronic insulator and requires conductive agents to function as an electrode; hence, approaches comparable to those for the S composite cathodes have been used [189,190,192,195]. More interesting is the direct chemical synthesis of Li 2 S electrodes without Li 2 S as the starting material: it can be obtained by lithiating a sulfur-carbon composite with stabilized lithium metal powder in situ by compression [196] or with n-butyllithium [189]. Archer and coworkers have investigated two different novel approaches towards Li 2 S-C composites: (1) the well-known Leblanc process can be used to reduce sulfates with carbon [197] and (2) Li 2 S forms strong crosslinks with the nitrile groups of polyacrylonitrile (PAN) [198]. Both result in Li 2 S-C composites after carbonization and show promising results. Recently, Lin and coworkers used the reaction of Li 2 S and P 2 S 5 in THF to form a Li 2 S-Li 3 PS 4 core-shell structure [199].
3.2.1.2 Electrolytes: The electrolyte will probably play the most fundamental role in the Li/S 8 battery - potentially even more important than the cathode microstructure, as the solubility of polysulfides and hence the shuttle effect are dramatically affected by the solvent [121,[200][201][202]. Furthermore, the electrolyte has to be suitable for both the highly reactive Li anode and the sulfur-composite cathode with its special requirements. One important property is good polysulfide solubility to ensure fast and complete reactions between Li and sulfur [155,200]. On the other hand, a high solubility will accelerate shuttling and loss of active material. Most ether-based solvents can dissolve polysulfides very well; the most prominent examples are 1,3-dioxolane (DOL) and 1,2-dimethoxyethane (DME), tetraethylene glycol dimethyl ether (TEGDME, tetraglyme) and sometimes ethers with longer chain length [200,[203][204][205]. Carbonate-based solvents used for conventional LIBs will most likely not be used in future Li/S 8 batteries. This is due to their reactivity with polysulfides and because they are less compatible with lithium [205][206][207][208]. Nowadays, the most common solvent is a binary mixture of a cyclic ether (DOL) and a linear ether (DME), which was found to provide a good overall compromise between sulfur utilization, rate capability, temperature window and anode compatibility [209]. Lithium bis(trifluoromethanesulfonyl)imide (LiN(SO 2 CF 3 ) 2 , LiTFSI) is commonly used as a conductive salt. Aurbach et al. pointed out the significance of LiNO 3 (lithium nitrate) as an electrolyte additive [205,[210][211][212][213][214][215] to build up a relatively stable and yet flexible SEI on the lithium anode that suppresses the polysulfide shuttle. However, LiNO 3 is progressively consumed during cycling and decomposes at the cathode at potentials below 1.6 V [215]. Increasing the conductive salt concentration might alleviate the polysulfide shuttle due to increased viscosity and salting-out effects, as stated by Suo et al. [216]. In their work on "solvent-in-salt" electrolytes, an electrolyte with 7 M LiTFSI was found to suppress both polysulfide dissolution and dendrite growth. On the other hand, an increased viscosity generally opposes fast kinetics. Recently, Cuisinier et al. reported on a new "binary" electrolyte comprising a solvent-salt complex ((acetonitrile) 2 -LiTFSI) and a hydrofluoroether (HFE) that provides minimal solubility of polysulfides [217]. Hence, a different electrochemical behavior occurs, still forming polysulfide intermediates but suppressing parasitic disproportionation, enabling an earlier Li 2 S formation. Based on the weak Lewis acidity or basicity of ionic liquids (ILs), the solubility of polysulfides in these media is limited as well [218]. Drawbacks of ILs are their high viscosity and therefore lower conductivity, resulting in low active mass utilization. The combination with lower viscosity solvents such as DME should be favorable [219], but at the cost of increased polysulfide dissolution. Beyond liquid electrolytes, polymer electrolytes are also used in Li/S 8 cells; they show favorable properties with respect to polysulfide blocking but still suffer from low ionic conductivity [140,191,213,220]. Despite intense research efforts, the ideal electrolyte has not been identified yet.
A possible cure could be to combine a fast-conducting liquid electrolyte with a solid lithium-ion-selective separator or solid electrolyte membrane separating both electrodes, thus relying on reliably protected lithium anodes (PLAs) [221,222].
3.2.1.3 Anodes: As the reduction of sulfur occurs at potentials below 2.5 V vs Li/Li + , lithium metal is the preferred choice as the negative electrode in order to achieve reasonable cell voltages. Moreover, the high theoretical capacity of lithium (3860 mAh/g) is a good match with the high capacity of sulfur (1672 mAh/g). Attempts are made to minimize the well-known drawbacks of lithium electrodes (chemical reactivity and dendrite formation) by an ex situ applied protection layer or by the in situ formed solid electrolyte interphase (SEI), as noted in the previous section. Both in situ and ex situ layers have to accommodate the changes in volume and morphology during cycling without fracture [223]. To obtain artificial protection layers (artificial SEI), polymer films [224] and inorganic solid electrolytes [221,222] have been applied on the lithium metal surface. More common is the use of electrolyte additives to favor the formation of a stable SEI, as first published by Aurbach et al. [210,225] referring to LiNO 3 . More recently, P 2 S 5 was suggested as a promising additive: a passivating layer mainly consisting of Li 3 PS 4 with rather high ionic conductivity is formed through the reaction of P 2 S 5 with Li 2 S x [226]. The in situ SEI formation results from the reaction of lithium with the electrolyte components. Therefore, a fraction of the anode material is irreversibly lost and has to be provided as excess. An alternative route to suppress dendrite growth was suggested by Ding et al. [227]. Here, selected cations (Cs + and Rb + ) are added that shield emerging lithium dendrites from further Li + access, thus enabling a smoother lithium deposition.
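To make the notion of a "good match" concrete, the following worked estimate is added here for illustration (it is not part of the original text): it gives the stoichiometric lithium demand per gram of sulfur at full discharge, of which practical cells must carry a multiple because lithium and electrolyte are continuously consumed during cycling.

```latex
% Stoichiometric lithium demand per gram of sulfur for S + 2 Li -> Li2S
\[
\frac{m_\mathrm{Li}}{m_\mathrm{S}}
  = \frac{2\,M_\mathrm{Li}}{M_\mathrm{S}}
  = \frac{2 \times 6.94}{32.06}
  \approx 0.43\ \mathrm{g\ of\ Li\ per\ g\ of\ S}
\]
% Equivalently, the ratio of the theoretical capacities, 1672/3860 mAh/g,
% gives the same number, since both electrodes must store the same total charge.
```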
The interest in non-lithium anodes such as Si [189] and Sn [190] has been growing, but these - apart from being pre-lithiated [228] - can only be combined with Li 2 S composite cathodes. Due to the severe volumetric expansion exceeding 300% from Si to Li 15 Si 4 , Si anodes can only provide stable cycling behavior when nanosized [229]. In addition, the theoretical energy density is reduced to 1862.45 Wh/kg (3299.25 Wh/L) for Li-Si/S 8 cells and to 922.84 Wh/kg (2628.19 Wh/L) for Li-Sn/S 8 cells, due to the additional weight and the reduced cell voltage. High-capacity carbon materials have also been studied [230]. The supposed advantages of these anode materials over lithium are improved safety and possibly increased cycle life. But whether this can outweigh the lower energy densities and the disadvantages arising from the decreased cell voltage remains to be clarified.
3.2.1.4 Analytics: Despite the fact that the Li/S 8 cell has been investigated for a long time, a complete understanding of the redox chemistry and all the electrochemical and chemical processes has still not been achieved. This is foremost due to two reasons: (a) in contrast to the rocking-chair LIB, the cell chemistry of Li/S 8 cells is very complicated, and the reduction of the S 8 molecule to Li 2 S requires the transfer of 16 electrons; (b) as the processes are particularly sensitive to, for example, the electrolyte composition, different studies are often hardly comparable [124,231,232]. Only recently have in situ methods been applied to achieve a more realistic picture of the actual cell reactions.
X-ray diffraction is generally a powerful tool to analyze cell reactions in situ [127,233,234] and has been applied to follow the crystalline solid phases appearing during cell cycling. Unfortunately, some discrepancies still remain: the final discharge product Li 2 S is not detected (as a crystalline phase) in some works ex situ [235] and in situ [233], while others show evidence ex situ [236] and in situ [127,234]. Furthermore, the re-oxidation to orthorhombic sulfur is detected by some groups [233] via XRD, while others see evidence for a different allotrope [127,234] or dispute the formation of elemental sulfur from polysulfides altogether [236][237][238][239]. One of the most recent in situ XRD studies is shown in Figure 21 (right), detecting the formation of crystalline Li 2 S at the beginning of the 2nd discharge step and the precipitation of monoclinic β-sulfur at the end of the charge step. Other methods are necessary to study the soluble polysulfide intermediates. Barchasz et al. proposed a possible mechanism for sulfur reduction in Li/S 8 batteries by combining high performance liquid chromatography (HPLC), UV-vis absorption and electron spin resonance (ESR) [232]. Further UV-vis analysis was carried out by Patel et al. [240]. Cuisinier et al. published a study on sulfur speciation during cycling using K-edge XANES (X-ray absorption near-edge spectroscopy) [241]. They analyzed intermediate species and followed the dissolution and precipitation of the redox end members during cycling, finally proposing a cell reaction as denoted in Figure 21 (left). The combination of in situ and in operando techniques is a powerful tool to obtain a clearer qualitative understanding of the cell chemistry. However, challenges remain because - as stated before - the redox chemistry highly depends on the electrolyte, making different approaches hardly comparable. To understand the cell chemistry from a theoretical point of view, microkinetic models of the processes with special focus on the polysulfide shuttling were published by Mikhaylik et al. [123] and Kumaresan et al. [126]. Fronczek et al. used a modeling framework based on computational fluid dynamics (CFD) to develop a one-dimensional continuum model of a Li/S 8 cell with parameters based on this reference [126] to simulate concentration profiles, voltage and current curves as well as the impedance behavior during cycling [231]. Kinetics play a particular role in the Li/S 8 battery, especially because of the divided appearance of fast reactions in solution and sluggish solid-state reactions, as shown by transient galvanostatic intermittent titration technique (GITT) studies [124]. Hence, both the cycling characteristics and the performance are affected by the cycling rate and temperature.
Figure 22: Literature timeline of research papers on room temperature Na/S 8 batteries (ranked after date of acceptance). Experimental studies: all journal publications in which full discharge-charge capacity profiles were shown for at least one complete cycle. The paper by Yu et al. [253] describes a related concept based on a catholyte.
3.2.1.5 Alternative cell concepts: As the cell chemistry of Li/S 8 cells is very different from conventional LIBs, it is also worth considering alternative cell concepts. Negative effects arising from the shuttle effect can be obviated by separating both electrodes with an additional membrane that conducts lithium ions only. This way, polysulfides cannot reach the lithium electrode, as suggested by Visco et al. [242], for example. A range of different membranes has recently been tested: lithium ion-exchanged Nafion [243], a Nafion-coated polymeric separator [244], Al 2 O 3 -coated [244] and V 2 O 5 -coated [245] separators, and a commercial glass ceramic from Ohara Inc. [185,246]. Manthiram et al. introduced different electronically conductive interlayers between cathode and separator to absorb and reactivate dissolved polysulfides [152]. Obviously, the extra weight and extra resistance of a membrane or layer decrease the energy density and rate capability, respectively. However, with the current state of the art, it might be the only reliable cure to the shuttle effect apart from designing an all-solid-state sulfur battery. This latter attempt may imply new challenges, including (1) the low ionic conductivity of most solid Li-ion conductors compared to liquid electrolytes, (2) the stability of the solid electrolyte/Li-anode interface and (3) sluggish interfacial kinetics at both electrodes. Additionally, as the ionic contact of the active mass is no longer provided by the liquid electrolyte, a reasonable fraction of finely dispersed ion conductor has to be introduced into the cathode architecture. This leads to a further decrease in energy density. However, with solid electrolytes approaching conductivities that are on par with liquid electrolytes, that is, members of the thio-LISICON (Li Super Ionic Conductor) and Li 2 S-P 2 S 5 families [247][248][249][250][251][252], all-solid-state lithium-sulfur batteries might be an attractive option. Moreover, avoiding flammable liquid electrolytes would be an important advantage with respect to battery safety.
The sodium-sulfur (Na/S 8 ) battery:
The large amount of research publications on lithium-sulfur batteries is in stark contrast to what has been reported on the cell chemistry of the analogous sodium system. Altogether, only a few publications on the room temperature cell chemistry of sodium-sulfur batteries are currently available, but - similarly to the Na/O 2 battery - the majority appeared within the last two years. An overview of the available literature is shown in the form of a timeline (Figure 22).
Assuming an ideal discharge process, that is, considering thermodynamically stable solids only, sulfur is successively reduced to form different polysulfides (Na 2 S x , x = 2, 4, 5) and finally the end product Na 2 S. The theoretical cell potentials of the different steps can be calculated from the corresponding thermodynamic data (no data was found for Na 2 S 5 ); the weighted average voltage of the different steps equals the standard cell potential of the overall reaction. In cells with liquid electrolytes, the reaction path is of course more complex as, similarly to the Li/S 8 cell, the phase behavior is complicated by the fact that many polysulfides are highly soluble and metastable phases exist. Na 2 S 2 and Na 2 S, however, are the least soluble compounds in organic solvents, so a solid-state reaction as stated in Equation 7 is expected at the calculated potential.
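The averaging statement can be written out explicitly. The identity below is added as a bookkeeping aid (the notation is introduced here and is not taken from the original text): each reduction step i contributes a charge q_i at its step potential, and the capacity-weighted mean of the step potentials must equal the standard potential of the overall reaction.

```latex
% Capacity-weighted average of the step potentials (generic identity)
\[
\bar{E}^{\,0} \;=\; \frac{\sum_i q_i\,E_i^{\,0}}{\sum_i q_i} \;=\; E^{\,0}_{\mathrm{overall}},
\qquad q_i = \text{charge transferred in step } i .
\]
% Example of the bookkeeping: a step converting Na2S2 to Na2S carries
% 8 of the 16 electrons per S8 (836 of 1672 mAh/g), i.e., it enters the
% average with weight 1/2; cf. Equation 7 and the 1.68 V value quoted further below.
```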
Before providing an overview of the current literature it is worth noting beforehand that the overall understanding of the cell chemistry is poor and quite different results have been reported with respect to sulfur utilization and cycle life. This is probably also due to the fact that the experimental conditions were very different ( Table 4).
The first recent report on room temperature sodium-sulfur batteries was published by Park et al. [254] who prepared a cell using a PVDF/tetraglyme-based gel polymer electrolyte with sodium triflate (NaCF 3 SO 3 ) as the conductive salt (σ = 5.1 × 10 −4 S/cm at 25 °C). The discharge profile was characterized by two plateaus separated by a sloping potential region, indicative of a stepwise reduction of sulfur over polysulfides. The first discharge capacity was 489 mAh/g and a rapid capacity fading was observed for the subsequent cycles. The authors concluded that a mixture of Na 2 S 2 and Na 2 S 3 had been formed during discharge and that some sulfur remained inactive. Similar results were obtained for a PEO-based polymer electrolyte, but at 90 °C [264]. Later on, the same group (Kim et al. [256]) studied the cell with gel polymer electrolyte in more detail. Again, a similar behavior was found, with a capacity of 392 mAh/g for the first discharge followed by a rapid capacity decay. Moreover, the impedance of the cell increased during cell storage, which was attributed to the growth of a passivation layer between the sodium anode and the gel polymer electrolyte.
Wang et al. [255] reported on a Na/S 8 cell with a liquid electrolyte (NaClO 4 in EC:DMC) with a high capacity of 1455 mAh/g (or 655 mAh/g of cathode) and stable cycling over 20 cycles. The cathode material was prepared by heat treating a mixture of PAN and sulfur under inert atmosphere [176]. The sulfur induced the cyclization of the PAN polymer, forming H 2 S. The resulting composite consisted of heterocyclic structures and it was suggested that excess sulfur was finely dispersed and eventually covalently bonded to the carbon. The enhanced interaction between sulfur and carbon might explain the high sulfur utilization and stability; at the same time, it might be the reason for the unexpected shape of the voltage profile and the lower average cell voltage. No further characterization of the discharge or charge products was provided.
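As a side note added here (it is not stated in the original text), the two capacity normalizations quoted above fix the sulfur fraction of the composite electrode, provided both values refer to the same electrode:

```latex
% Relation between capacity per gram of sulfur and per gram of cathode
\[
w_\mathrm{S} \;=\; \frac{Q_\mathrm{cathode}}{Q_\mathrm{S}}
\;=\; \frac{655\ \mathrm{mAh/g_{cathode}}}{1455\ \mathrm{mAh/g_{S}}}
\;\approx\; 0.45 ,
\]
% i.e., the composite cathode contained roughly 45 wt % sulfur.
```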
In 2011, Ryu et al. [257] studied the performance of Na/S 8 cells in a liquid ether-based electrolyte (NaCF 3 SO 3 in tetraglyme). Again, the discharge profile and capacity (538 mAh/g) were comparable to what the same group reported for the cell with gel polymer electrolyte. The voltage profile is shown in Figure 23. In order to provide further insight into the cell reaction, electrodes at different states of discharge and charge (points (a) to (e)) were characterized by differential scanning calorimetry (DSC) (Figure 23b). As several Na 2 S x polysulfides are thermodynamically stable, their presence in the electrode can be confirmed via their melting points, as shown in the phase diagram in Figure 16b. Notably, this is not possible for the Li/S 8 cell as Li 2 S is the only stable compound with a defined melting point. The DSC curves indicate that the elemental sulfur disappears during discharge (the signal at 114 °C disappears) and sodium polysulfides Na 2 S 4 and Na 2 S 5 form (signals at 303 °C and 321 °C appear). After full discharge, these polysulfides are absent. After charging, the melting points of sulfur and Na 2 S 5 reappear. Combined with results from XRD, the authors concluded that Na 2 S n (4 > n ≥ 2) forms during discharge and sulfur and Na 2 S n (5 > n ≥ 3) during charge. The ideal discharge product, Na 2 S, was not detected.
Lee et al. [258] studied the performance of a sodium-sulfur battery with the same ether-based electrolyte (NaCF 3 SO 3 in tetraglyme), but using a cathode based on a composite of hollow carbon spheres and sulfur. The cell showed a high initial discharge capacity (1200 mAh/g with a low voltage cut-off at 0.5 V), with the following 20 cycles achieving around 600 mAh/g. At the same time, the discharge potential was only around 1 V (see Figure 24a). No further characterization of the discharge products was provided. In another configuration, the sodium anode was replaced by a Na-Sn-C composite electrode, thus presenting the first room temperature sodium-ion sulfur battery. Wenzel et al. [259] studied cells with an ether-based electrolyte. Similarly to the results from Ryu et al., an initial discharge capacity of around 450 mAh/g and poor cycle life were found (see Figure 24b). Both the sodium anode and the sulfur cathode were studied by XPS. It was shown for the first time that - although sulfur reduction was incomplete - the ideal discharge product Na 2 S formed during discharge and disappeared during charging (see Figure 24c). At the same time, a large amount of polysulfides and Na 2 S was found on the sodium anode, indicating a very strong shuttle mechanism - in line with what can be expected from Li/S 8 cells. To prevent this shuttle mechanism, an additional inorganic solid electrolyte membrane (beta-alumina) was implemented. With this hybrid electrolyte system, Coulombic efficiencies close to 100% were found and somewhat higher capacities could be achieved during cycling. More importantly, cycling at a reasonable rate of 0.1C was still possible, meaning that the solid electrolyte did not significantly increase the cell resistance (Figure 24d). This is different from Li/S 8 cells, where so far only poor kinetics were found for cells with free-standing solid electrolyte membranes. We note that the availability of commercially available sodium-ion conducting solid electrolytes with good transport properties in the bulk and through the interfaces with liquid electrolytes offers additional opportunities for designing catholyte-based cells. Nevertheless, the cells still suffered from strong fading, which was finally attributed to the decomposition of the PVDF binder in the presence of polysulfides.
Hwang et al. [260] followed the approach by Wang et al. and produced a composite based on heat treating a mixture of PAN and sulfur; however, PAN nanofibers instead of powder were used. Also here, a carbonate-based electrolyte was used (NaClO 4 in EC:DMC). The cell showed a first discharge capacity of 800 mAh/g and an excellent cycle life. On the other hand, the sulfur loading was quite small (0.31-0.38 mg/cm 2 ). In line with the results by Wang et al., the voltage profile shows an overall sloping behavior and partially low voltages. The authors further showed that the sodium anode was free of sulfur after 500 cycles. This means that polysulfide diffusion from the cathode to the anode can be effectively suppressed by chemically binding sulfur to carbon.
Xin et al. [261] studied the performance of a nanostructured composite consisting of CNTs covered with a microporous layer. The material was designed to alter the reaction mechanism in a beneficial way and had been tested for Li/S 8 cells by the same group in an earlier study [169]. The idea is that the confinement in nanopores only allows the formation of small compounds; thus, the formation of large S 8 molecules and large, highly soluble polysulfides is prevented. As a result, the cell reaction is restricted to small S 2−4 molecules and Li 2 S only, thus improving cycle life and rate capability. The concept also leads to improvements in the case of Na/S 8 cells. An initial discharge capacity of about 1610 mAh/g was found, followed by stable cycling at 1000 mAh/g. Also here, a carbonate-based electrolyte was employed (1 M NaClO 4 in PC:EC) and the voltage shifts to low values (more than half of the capacity is achieved at voltages below 1.5 V).
Bauer et al. [262] used a polymer membrane to reduce the shuttle mechanism in Na/S 8 cells with ether-based electrolyte (NaClO 4 in TEGDME). The membrane was prepared by coating a standard polypropylene separator with Nafion. The initial discharge capacity was around 400 mAh/g, which is similar to what other groups obtained when using ether based electrolytes.
Zheng et al. [263] studied the performance of composite materials containing a high surface area mesoporous carbon, sulfur and copper nanoparticles. The copper nanoparticles were added in order to trap soluble polysulfides by CuS x formation [263] and a carbonate-based electrolyte was applied (NaClO 4 in EC:DMC). The first discharge mainly occurs at a very low voltage plateau of around 1.0 V and reaches almost 1000 mAh/g. After this activation cycle, stable capacities of around 600 mAh/g are achieved for more than 100 cycles, with Coulombic efficiencies close to 100% and sloping potential curves. Also here, the average voltage values during discharge remain relatively low compared to what would ideally be expected for the formation of Na 2 S. On the downside, the sulfur loading of the electrode is very small. Although the copper content of the electrodes is small (10%), the cycling behavior shows quite some similarity to a conventional conversion reaction between sodium and CuS x , for which an activation cycle and sloping potentials are well known. Ideally, the conversion reaction of sodium with CuS and Cu 2 S would occur at 1.58 V and 1.39 V, respectively [10].
Yu et al. [253] suggested that the often observed capacity fade in Na/S 8 cells is due to the poor reversibility of the insoluble discharge products Na 2 S n (1 ≤ n < 4). Therefore the group used a cell design optimized for shallow cycling between sulfur and soluble long chain polysulfides with the overall reaction nS + 2 Na + + 2 e -= Na 2 S n (4 ≤ n ≤ 8) (see Figure 25). Essentially, this approach is close to a catholyte concept. Evidence for the cell reaction was provided by XPS and UV-vis measurements. A comparable approach was successfully applied in Li/S 8 cells before by the same group [265,266]. Shuttling of the highly soluble, long polysulfides towards the sodium anode was delayed by implementing an additional nanostructured carbon interlayer (thickness not reported) and using a concentrated electrolyte including NaNO 3 (1.5 M NaClO 4 and 0.3 M NaNO 3 in TEGDME). LiNO 3 is a well-known anti-shuttling agent in Li/S 8 cells that protects the lithium anode. Overall, very stable cycling of the cell at 250 mAh/g was achieved for 50 cycles. The average discharge voltage during galvanostatic cycling was around 2.25 V, however, charging curves were not shown so it remains unclear whether the shuttle effect could be prevented.
Overall comparison: In order to compare the different experimental results on Na/S 8 cells, we digitized literature data of the first galvanostatic cycle (if available) and plotted them into one diagram (see Figure 26). More data is summarized in Table 4. Obviously, some noticeable differences exist.
Results for cells can be grouped according to their voltage profiles as follows: 1. Studies using solvents that are frequently used in Li/S 8 cells (DOL:DME, tetraglyme) found a discharge behavior that is qualitatively quite similar to what is known from Li/S 8 cells, that is, one or two plateaus occur at voltages not too far away from the overall expected cell potential (1.85 V). Charging occurs at slightly larger overpotentials compared to the Li/S 8 cell. The main difference, however, is that the achieved capacities are very low. Although it was shown that the theoretical end product Na 2 S forms during discharge, the reaction is incomplete and only about 350-550 mAh/g are found, corresponding to an overall composition of Na 2 S x (3 ≤ x ≤ 5). So solvents that work well for Li/S 8 cells seem to perform poorly in Na/S 8 cells. A notable exception is the work from Lee et al. [258] who used tetraglyme and found a capacity of 1200 mAh/g. But here, discharge mainly occurs at voltages close to 1 V only (cut-off potential of 0.5 V). 2. Studies with carbonate-based solvents showed much higher capacities and often also superior cycle life. In one study, the capacity was even close to the theoretical value. At the same time, the voltage profiles of these cells are very different from Li/S 8 cells and usually exhibit sloping potentials during subsequent cycling, and much of the capacity is obtained at voltages below 1.5 V. Such low voltages are also undesired with respect to energy density. One could argue that the conductive salt might also have an influence; however, one can conclude from results obtained for Li/S 8 cells that this is less likely [205].
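The assignment of an average composition to a measured capacity in point 1 follows from simple stoichiometry; the relation below is a worked illustration added here (it does not appear in the original text).

```latex
% Average discharge product Na2Sx inferred from the capacity per gram of sulfur
% (2 electrons are transferred per x sulfur atoms)
\[
Q(\mathrm{Na_2S}_x) \;=\; \frac{2}{x}\cdot\frac{F}{3.6\,M_\mathrm{S}}
\;=\; \frac{1672}{x}\ \mathrm{mAh\,g^{-1}} ,
\qquad
x = 3 \Rightarrow 557,\quad x = 5 \Rightarrow 334\ \mathrm{mAh\,g^{-1}} ,
\]
% consistent with the observed 350-550 mAh/g corresponding to Na2Sx with 3 <= x <= 5.
```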
Assuming bulk thermodynamics, it is interesting to note that the lowest possible cell voltage is associated with the reaction in Equation 7, for which E° = 1.68 V can be calculated (see above). This reaction contributes half of the theoretical capacity of sulfur (836 mAh/g). During discharge, the cell voltage should therefore fall below this value at some point, which is fulfilled for all results shown in Figure 26. During charge, one should immediately exceed this voltage; however, this is not always the case. It seems that cells with high discharge capacity partially charge below this thermodynamically derived threshold. Assuming that the thermodynamic data are correct, one has to conclude that other reactions take place. In some cases, this unexpected voltage profile might be due to sulfur bound covalently to carbon [255,260] or due to CuS x formation [263]; however, a clear understanding is missing. Taking these results together, many questions remain and further studies are needed to clarify the link between voltage profile, cycle life, sulfur utilization, and electrolyte composition. As it is well known from Li/S 8 cells that carbonate-based electrolytes are unstable against polysulfides [205,267], future studies should clarify whether side reactions contribute to the high capacities reported for some Na/S 8 cells.
Moreover, all studies reporting capacities exceeding 1000 mAh/g were only achieved with small sulfur loadings, meaning that the sulfur content of the electrode was considerably smaller than 50 wt %. In line with research on Li/S 8 batteries, one has to emphasize the need to increase this value if an application is to become feasible. Given the very early state of research, however, the overall perspective for Na/S 8 cells is yet unclear, and further work is required to better judge the practical potential.
Conclusion
Lithium-sulfur and lithium-oxygen cells have attracted enormous interest in the last ten years, and the frequency of publications is still increasing. In the case of Li-sulfur batteries the major challenges have been obvious from the very beginning (e.g., lithium dendrites, polysulfide shuttle) but still await a proper and effective solution. While incremental improvements can be recognized, it is unclear whether the Li-sulfur battery can finally beat LIB technology with respect to energy density. It is interesting to note that the majority of papers deals with the design of carbon/sulfur composites rather than targeting the critical issue of the anode.
In the case of lithium-oxygen batteries the current status is different. After an initial phase of enthusiasm, major drawbacks (electrolyte decomposition, carbon instability, the need for pure oxygen) have dampened overly optimistic expectations, and Li/O 2 batteries are now again primarily the target of academic research.
As both systems rely on multielectron transfer reactions at the cathode, and as solid phases are being formed and dissolved during cycling, the kinetics are slow compared to LIBs, and the energy efficiency as well as the power density are not yet competitive. This may easily lead to a pessimistic outlook, but this would not be an appropriate conclusion. Rather, one should consider lithium-sulfur and lithium-oxygen batteries as attractive targets which have already triggered numerous valuable technical and chemical innovations - but which still require major innovations in electrolyte and electrode design.
In contrast, (room temperature) sodium-sulfur and sodium-oxygen cells have only very recently attracted interest. Obviously, the lower theoretical energy density makes sodium-based systems second choice at first glance. On the other hand, sodium systems can provide some specific advantages that might help to overcome the obstacles known from the analogous lithium-based cells. Several aspects have been discussed in this review. The availability of Na beta-alumina as a highly conductive room-temperature solid electrolyte that is also chemically stable in contact with sodium might be an important advantage for designing future cell concepts, for example. Moreover, sodium has the advantage of being much more abundant than lithium.
An intriguing example was also shown for the Na/O 2 cell, where the formation of NaO 2 as discharge product offers significant advantages compared to the Li/O 2 cell with respect to energy efficiency and reversibility. Comparing results on metal-oxygen batteries is generally difficult as research groups usually use different cell designs, materials and measurement conditions. However, the shape of the voltage profile (voltage hysteresis) gives a first impression of the cell performance with respect to reversibility and efficiency. We therefore suggest using a simple 3 × 3 matrix that allows a quick assessment of the overall performance of metal-oxygen cells. The ideal voltage profile corresponds to Type 1A, which has not been found for any metal-oxygen cell yet. Closest to this behavior are Na/O 2 cells with NaO 2 as discharge product, which are classified as Type 1B. Most other metal-oxygen cells show a Type 2C/3B or Type 3C behavior. Still, very little is known about the cell chemistry of sodium-oxygen cells and it is surprising that different groups find different discharge products. Giving a reasonable explanation for this is an important research task.
Even less is known about the cell chemistry of room-temperature Na/S 8 cells. The research overview showed that (with one exception) the voltage profile seems to depend on the electrolyte composition. In ether-based electrolytes, the voltage profile shows similarity to what is known from Li/S 8 cells. But although Na 2 S forms as discharge product, only low capacities and poor cycle life are achieved. The situation is different when carbonate-based electrolytes are used. Much higher capacities and improved cycle life have been reported. On the other hand, the voltage profiles are much less defined and carbonates might simply be unstable against polysulfides, as is known from Li/S 8 cells. Clearly, there is a need to further understand the cell reaction. At the current state, Na/S 8 cells are not competitive with Li/S 8 cells.
In conclusion, given the relatively early state of research, (room temperature) sodium-sulfur and sodium-oxygen cells already show some attractive properties and the recent increase in research activity is a clear sign of the development of two new independent research fields. At the same time, we emphasize that, just as for the analogous lithium-based systems, the road towards practical systems is long and might not necessarily lead to application - in particular in view of the energy densities, which may finally not beat the LIB. Aiming for low-cost stationary energy stores seems most attractive, especially considering the Na/S 8 system. Progress towards practical devices will only be achieved when the challenges of all cell components, that is, anode, cathode and electrolyte, are addressed and side reactions are minimized. Moreover, understanding the role of impurities in the cell reactions needs further attention. Innovative approaches in both fundamental research and technical development are therefore needed.
Regular Oscillation Sub-spectrum of Rapidly Rotating Stars
We present an asymptotic theory that describes regular frequency spacings of pressure modes in rapidly rotating stars. We use an asymptotic method based on an approximate solution of the pressure wave equation constructed from a stable periodic solution of the ray limit. The approximate solution has a Gaussian envelope around the stable ray, and its quantization yields the frequency spectrum. We construct semi-analytical formulas for regular frequency spacings and mode spatial distributions of a subclass of pressure modes in rapidly rotating stars. The results of these formulas are in good agreement with numerical data for oscillations in polytropic stellar models. The regular frequency spacings depend explicitly on internal properties of the star, and their computation for different rotation rates gives new insights on the evolution of mode frequencies with rotation.
Introduction
The field of asteroseismology has now reached its age of maturity with the exploitation of space missions CoRoT (Baglin et al. 2006) and Kepler (Koch et al. 2010) that are gathering stellar light curves with high accuracy. However, there are still unresolved issues that hinder the successful pairing of light curve frequencies with pulsation modes, which is crucial to obtain detailed information on the inner structure of observed stars. One of these issues is the rapid rotation of a star around its axis, since the exact nature of rotational effects on pulsation modes is not known. In particular, the centrifugal flattening (e.g. Monnier et al. (2007)) affects the spectrum of pressure modes (p-modes) in a complex way (Lignières & Georgeot 2009). This difficulty mainly concerns non-evolved massive and intermediate-mass pulsating stars which are typically rapid rotators (Royer 2009). Recently though, hints of regular frequency spacings have been found in the spectrum of rapidly rotating δ Scuti stars observed with CoRoT (García Hernández et al. 2009;Mantegazza et al. 2012), and this could ease future mode identification.
The recent development of accurate numerical models has enabled progress in the comprehension of pulsation modes in rapidly rotating stars. It has been found in particular (Reese et al. 2008, 2009) that in the rapidly rotating regime a subset of p-modes shows approximate regular frequency spacings in the form

ω_{n,ℓ,m} ≃ Δ_n n + Δ_ℓ ℓ + Δ_m |m| + α,    (1)

where the frequencies ω_{n,ℓ,m} are given in the corotating frame. The quantum numbers n, ℓ and m correspond to node numbers of the mode amplitude distributions, Δ_n, Δ_ℓ and Δ_m are frequency regularities, and α is a constant term. The approximate formula in Eq. (1) shows a better agreement with numerical results towards high frequencies, thus suggesting that this relation is of an asymptotic nature. It should also be noted that, from computations of disk-averaging factors, the p-modes following Eq. (1) are expected to be among the most visible ones (Lignières & Georgeot 2009). An example of such a mode can be seen in Fig. 1. The frequency spacings of Eq. (1) are notably similar to the regularities described by Tassoul's asymptotic formula (Tassoul 1980) for low-degree p-modes in non-rotating stars. Tassoul's formula at leading order is

ω_{n_s,ℓ_s} ≃ Δ (n_s + ℓ_s/2 + α_s),

with the large frequency separation

Δ = 2π [ 2 ∫_0^R dr / c(r) ]^{-1},

where c(r) is the radially inhomogeneous sound speed, and R the stellar radius. The integer n_s is the node number of the radial component of the mode, while ℓ_s is the degree of the associated spherical harmonics, and α_s depends on surface properties. Tassoul's theory has proved to be very useful for interpreting solar-like oscillations in slowly rotating stars. Indeed, the formula relates observable quantities, such as the regular frequency spacing Δ, to physical properties of stellar interiors. For rapidly rotating stars, it would clearly be desirable to gain insight into the underlying physics of the potentially observable regular spacings Δ_n, Δ_ℓ and Δ_m by a similar asymptotic analysis. In this paper we derive a formula for these regular frequency spacings in the asymptotic regime.

Fig. 1. (Colour online) Pressure amplitude P d/ρ_0 on a meridian plane for a polytropic stellar model, with d the distance to the rotation axis and ρ_0 the equilibrium density. The mode shown corresponds to n = 50, ℓ = 1 and m = 1 at a rotation rate of Ω/Ω_K = 0.300, where Ω_K = (GM/R_eq^3)^{1/2} is the limiting rotation rate for which the centrifugal acceleration equals the gravity at the equator, M being the stellar mass and R_eq the equatorial radius. Colors/grayness denote pressure amplitude, from red/gray (maximum positive value) to blue/black (minimum negative value) through white (null value). The thick black line is the ray γ located in the center of the main stable island.
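To make the connection between Δ and the stellar structure explicit, the following short numerical sketch is added here as an illustration only; the sound-speed profile is a toy assumption, not a stellar model from this work. It evaluates the acoustic radius and the resulting large separation by simple quadrature.

```python
import numpy as np

# Illustrative evaluation of the large frequency separation
#   Delta = 2*pi * [2 * integral_0^R dr / c(r)]^(-1)
# using a toy sound-speed profile (an assumption, not a realistic stellar model).

R = 6.96e8                                        # stellar radius in m (solar value, for scale)
r = np.linspace(1e-3 * R, R, 10000)
c = 5.0e5 * np.sqrt(1.0 - 0.99 * (r / R) ** 2)    # toy c(r) in m/s, decreasing outwards

# trapezoidal rule for the acoustic radius, i.e., the integral of dr/c (in seconds)
acoustic_radius = np.sum(0.5 * (1.0 / c[1:] + 1.0 / c[:-1]) * np.diff(r))

delta = 2.0 * np.pi / (2.0 * acoustic_radius)     # angular large separation, rad/s
print(f"acoustic radius ~ {acoustic_radius:.0f} s")
print(f"Delta ~ {delta:.2e} rad/s (~ {delta / (2.0 * np.pi) * 1e6:.0f} microHz)")
```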
The generalization of the p-mode asymptotic theory to rapidly rotating stars is not trivial. Tassoul's theory requires separation of variables, which is no longer possible when the star is flattened by rotation. For non-separable wave systems, a well-known technique to obtain eigenmodes is to study the short-wavelength limit of the propagating waves. This limit gives an equation for the propagation of rays that is similar to the geometrical optics limit of electromagnetism, or the classical limit in quantum mechanics. Then, by imposing quantization conditions on the phase of waves propagating on these rays, one obtains the eigenmodes of the wave system. This technique was first developed in the context of quantum physics, and is often called semiclassical quantization.
For spherical stars, the ray limit of pressure waves has previously been used to recover the Tassoul asymptotic formula from the Einstein-Brillouin-Keller (EBK) quantization of ray dynamics (Gough 1993). This analytical approach is possible only when the ray system is integrable. A dynamical system is said to be integrable when it has as many conserved quantities (energy, angular momentum, etc.) as degrees of freedom (Ott 2002). In rapidly rotating stars, there are not enough conserved quantities to ensure integrability of the ray dynamics. Indeed, in Lignières & Georgeot (2008, 2009), it has been found that acoustic rays in rotating stars have a very different dynamical behavior depending on their initial conditions in position-momentum space (the so-called phase space). For a polytropic stellar model, the numerical integration of the equations for acoustic rays displayed various types of solutions. Indeed, one can obtain either stable rays staying on torus-shaped surfaces in phase space which form structures such as stable islands, or chaotic rays that are dense and ergodic on a phase space volume (Ott 2002).
A similar behavior has been found in many systems studied in the field of theoretical physics known as quantum chaos or wave chaos (Gutzwiller 1990). This field has among its objectives to analyze quantum (resp. wave) systems whose classical (resp. short-wavelength) limit is partly or fully chaotic. In this framework, one can predict the existence of some eigenfunctions (resp. mode amplitudes) and energies (resp. frequencies) of the quantum (resp. wave) system from the different structures that are present in the classical system phase space (Percival 1973; Berry & Robnik 1984). In the stellar pulsation setting, Lignières & Georgeot (2008, 2009) found that the mixed (i.e. regular and chaotic) character of the acoustic ray dynamics in rapidly rotating stars results in a classification of p-modes in two broad families: regular modes either associated with stable islands or whispering gallery zones, and chaotic modes associated with ergodic regions in phase space. For the regular modes associated with stable islands, the so-called island modes, it is known to be possible to obtain approximate analytical solutions by solving the wave equation in the vicinity of a periodic stable ray (Babich & Buldyrev 1991). A simple application of such a method is found in modes of optical resonators, where the periodic stable light ray is a straight line between two reflecting mirrors (Kogelnik & Li 1966). These methods have been previously employed to obtain modes of more complex lasing (Tureci et al. 2002) and electronic (Zalipaev et al. 2008) cavities as well as quantum chaos systems (Vagov et al. 2009). In this paper, we apply this approach to rapidly rotating stars.
In the present analysis, we thus construct an asymptotic formula for regularities in the p-mode spectrum of rapidly rotating stars. Part of the results were already presented in the short communication of Pasek et al. (2011). In the present paper we give a detailed derivation of these results, specify their domain of validity, extend them with a study of rotational splittings, and explore their astrophysical applications.
The paper is organized as follows. In Sect. 2 we present the wave equation for p-modes in rotating stars and its asymptotic limit leading to an equation for acoustic rays. In Sect. 3 we use a stable periodic solution of the ray dynamics to obtain a semi-analytical formula for the associated p-modes, and to derive a formula for the associated regular frequency spacings. We then compare the results obtained from the derived formulas for mode frequencies and spatial distributions with numerical results (Sect. 4). Finally, we suggest directions on how these results could be used for the asteroseismic diagnosis of rapidly rotating stars by discussing the phenomenological implications of the theory in Sect. 5.
P-modes in rotating stars and their asymptotic limit
In Sect. 2.1 we introduce the wave equation for p-modes in rotating stars. We then present the asymptotic limit of this equation in order to obtain an equation for the dynamics of acoustic rays (Sect. 2.2).
Pressure modes in rotating stars
We start with the equation for small adiabatic time-harmonic perturbations of the pressure field in a self-gravitating gas. Since we are interested in obtaining an asymptotic theory for p-modes in the high-frequency regime, we use the Cowling approximation (i.e. we neglect the perturbations of the gravitational potential), an approximation known to be valid for high-frequency perturbations in non-rotating stars (Aerts et al. 2010). We also neglect the Coriolis force. Indeed, in the high-frequency regime, the time scale associated with this force is much longer than the mode period, and thus the influence of the Coriolis force on pulsation frequencies is weak. This has been numerically checked in Lignières et al. (2006) and Reese et al. (2006, 2008).
In the asymptotic regime of p-modes, the oscillation frequencies are far greater than the Brunt-Väisälä frequency and thus we can discard the terms corresponding to gravity waves. With these assumptions, the equation for pressure perturbations is a Helmholtz equation, ∆Ψ + (ω² − ω_c²)/c_s² Ψ = 0 (Eq. (4)), where Ψ = P̂/f is the complex amplitude associated with the pressure perturbation P = Re[P̂ exp(−iωt)], f is a function of the background model, ω_c is the cut-off frequency of the model and c_s its inhomogeneous sound velocity (for a detailed derivation of this equation see Lignières & Georgeot 2009). The stellar model is not spherically symmetric due to the centrifugal distortion, but is however cylindrically symmetric with respect to the rotation axis. Therefore, we can write the pressure field as Ψ = Ψ_m exp(imφ), where m is an integer and φ is the azimuth angle of spherical coordinates.
By inserting this expression in Eq. (4) we obtain the two-dimensional wave equation Eq. (5) (cf. Sect. A), where d is the distance to the rotation axis. The new mode amplitude Φ_m is such that Φ_m = √d Ψ_m. We also introduce a renormalized sound velocity c̃_s (Eq. (6)). We notice that besides its spatial dependence, c̃_s also depends on ω and m, and that m is taken as a parameter for the two-dimensional wave equation Eq. (5).
Ray limit of p-modes
In non-rotating stars, the asymptotic theory of high-frequency p-modes was first derived by Vandakurov (1967) and Tassoul (1980). The method was to use the spherical symmetry of the star to reduce the problem to a one-dimensional equation in order to obtain the mode frequencies. This method is not applicable when the centrifugal force breaks the spherical symmetry of the star. In this case though, one can study the short-wavelength limit (ω → ∞) of the wave equation Eq. (4) (as detailed in Lignières & Georgeot 2009).
This provides a Hamiltonian system describing the propagation of acoustic rays. The Hamiltonian, Eq. (7), has been derived in Lignières & Georgeot (2009); in it, the frequency-scaled wavevector k̃_p is the projection of k̃ = k/ω onto the meridional plane of the star. We notice that this expression has been derived from the short-wavelength limit of the three-dimensional wave equation Eq. (4) and then projected onto the corotating meridian plane. An alternative derivation would be to start from the two-dimensional wave equation Eq. (5). In this case, the ray limit yields the same expression with the addition of the 1/4 factor of Eq. (5) that accounts for the impossibility of acoustic rays to go through the rotation axis (i.e. d = 0). Throughout the paper we use Eq. (7) as the Hamiltonian for acoustic rays.

Fig. 2. Each dot corresponds to the crossing of an acoustic ray with the equatorial half-plane in the (r/R_eq, k_r/ω) phase space, r being the radial coordinate and k_r the associated momentum. R_eq is the equatorial radius and ω the mode frequency. Red/dark gray denotes a chaotic ray, green/light gray a whispering gallery ray, blue/black a stable island ray (see text). Upper inset is a close-up of the main stable island.
To probe the integrability property of a dynamical system, it is convenient to use the Poincaré surface of section (PSS), a standard tool in dynamical systems theory (Gutzwiller 1990; Ott 2002) to visualize the structures in phase space. A PSS is a lower dimensional slice of phase space. The acoustic ray dynamical system in the meridional plane has two degrees of freedom, which gives a phase space of dimension four (two for positions, and two for momenta). There is one conserved quantity in the form of the acoustic wave frequency, so the dynamics belongs to a three-dimensional manifold in phase space. By fixing an additional position or momentum coordinate, we obtain a two-dimensional PSS which can be easily visualized. An example of such a section for our system is shown in Fig. 2. Different choices of PSS variables are possible, some of which are presented in Lignières & Georgeot (2009). We have here chosen to fix the colatitude θ = π/2, so that the PSS corresponds to the crossing of rays with the equatorial half-plane. We thus display a section in coordinates (r/R_eq, k_r/ω) where k_r is the norm of the radial wavevector, ω the mode frequency, and R_eq the equatorial radius (that may be greater than the polar radius since the star is flattened by rotation). In such plots, each dot corresponds to the crossing of an acoustic ray with the PSS. Successive dots from a single ray will form lines in integrable zones, or fill surfaces densely in chaotic zones. We see in Fig. 2 that when the rotation rate Ω/Ω_K (where Ω_K = (GM/R_eq³)^{1/2} is the limiting rotation rate) is large, different structures coexist in the system phase space: stable islands correspond to concentric circles around a stable ray, whispering gallery rays to lines near the surface, and chaotic zones to densely filled areas (for more details, see Lignières & Georgeot 2009). In this paper, we will focus on the 2-periodic stable island which is the main stable island (shown in the inset of Fig. 2). The ray dynamics is very sensitive to the rotation rate, so the PSS will be different for each rotational velocity. Indeed, the locus in phase space of the main stable island changes as rotation increases. The major change happens when the central ray of the 2-periodic stable island undergoes a bifurcation at Ω/Ω_K ≃ 0.26. For slow rotation rates, the central ray of the island is located on the polar axis, and through this bifurcation it transforms into two stable rays surrounding one unstable ray on the polar axis. Then, as rotation increases, the stable island will coast away from the polar axis. This bifurcation will be of some importance in the following, when we will show that one can construct approximate eigenmodes of the wave system from this 2-periodic stable island. In Fig. 1, one can see an example of an island mode obtained from a full-numerical computation, together with the central ray of the 2-periodic stable island for the same rotation rate and quantum number m.
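The construction of such a section is straightforward to sketch numerically: integrate the ray equations and record each crossing of the chosen hypersurface. The minimal example below uses the Hénon-Heiles system, a standard mixed-phase-space toy model, purely as a stand-in for the acoustic-ray Hamiltonian of Eq. (7), which is not reproduced here; coordinates and initial conditions are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a Poincare surface of section (PSS). The Henon-Heiles system
# below is only a stand-in with a mixed phase space; the actual acoustic-ray
# Hamiltonian of Eq. (7) is not reproduced here.
def rhs(t, w):
    x, y, px, py = w
    return [px, py, -x - 2.0 * x * y, -y - x**2 + y**2]

def section(t, w):          # event: crossing of the y = 0 plane
    return w[1]
section.direction = 1       # keep only upward crossings

def pss_points(w0, t_max=500.0):
    sol = solve_ivp(rhs, (0.0, t_max), w0, events=section,
                    rtol=1e-9, atol=1e-9)
    hits = sol.y_events[0]               # states at the section crossings
    return hits[:, 0], hits[:, 2]        # (x, px) pairs, one dot per crossing

# Two illustrative initial conditions (bounded orbits of the toy system).
for w0 in ([0.0, 0.1, 0.35, 0.0], [0.0, 0.1, 0.12, 0.0]):
    x, px = pss_points(w0)
    print(f"initial condition {w0}: {len(x)} section crossings recorded")
```

Plotting the recorded (x, px) pairs would show the familiar picture: lines for regular orbits, densely filled areas for chaotic ones.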
Semi-analytical method for island modes
In this section we construct an asymptotic approximation of a subset of regular p-modes associated with a stable periodic ray. The method is based on the works of Babich and coworkers (see Babich & Buldyrev (1991) and references therein for the general formalism, and e.g. Zalipaev et al. (2008); Vagov et al. (2009) for some applications). It consists in deriving an approximation of the wave equation in the vicinity of the ray (Sect. 3.1), finding Gaussian wavepacket solutions related to the stability properties of the ray (Sect. 3.2), and then deriving the asymptotic formula for the frequencies from a quantization condition (Sect. 3.3).
Approximate wave equation in the vicinity of a stable ray
For a given rotation rate and quantum number m, we start with the central periodic ray of the main stable island (see Sect. 2). This ray must be computed by numerically evaluating the Hamiltonian equations derived from Eq. (7). In the following we will call this ray γ. The first step is to write the wave equation Eq. (5) in the vicinity of γ. For this, we use a local orthonormal coordinate system (s, ξ) defined as r = sT + ξN, where s is the arc length along the ray, T the unit tangent vector, ξ the transverse coordinate and N the unit vector normal to T. The two basis vectors are related by the curvature κ(s) of the ray through Frenet-type relations (Eq. (8)). In this coordinate system, the wave equation Eq. (5) takes the form of Eq. (9), which involves the scale factors h_s and h_ξ of the (s, ξ) coordinates (Arfken & Weber 2005). In the vicinity of the ray γ, that is for small ξ, the terms of Eq. (9) are simplified by expanding them to leading order in ξ. We then express the function Φ_m(s, ξ) in terms of a WKB ansatz (Eq. (16)) with a rapidly varying phase ωτ and an amplitude U_m, where τ is an unknown function of position. The fundamental assumption underlying the theory of Babich is that, as ω → +∞, the mode is localized on the acoustic ray and that its transverse extent scales as 1/√ω. Such a solution can be found by assuming that the transverse variable ξ scales as 1/√ω. Then, from an expansion of Eq. (9) in ω, one obtains at the dominant order that the WKB phase in Eq. (16) depends only on s as dτ = ds/c̃_s. At the next order in ω one finds a parabolic equation, Eq. (18), for the function V_m, where we introduced the scaled coordinate ν = √ω ξ and V_m = U_m/√c̃_s.
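In practice γ is obtained as a discretized path, so the geometric ingredients of the (s, ξ) frame can be estimated by finite differences. The following minimal sketch does this for a made-up planar curve; the curve, the sampling, and the sign convention for N and κ are illustrative assumptions, not the stellar ray.

```python
import numpy as np

# Sketch: discrete estimate of the quantities entering the ray-centred
# coordinate system (s, xi): arc length s, unit tangent T, unit normal N and
# curvature kappa(s), for a sampled planar path (here a made-up curve).
t = np.linspace(0.0, np.pi, 400)
path = np.column_stack([np.cos(t), 0.6 * np.sin(t)])   # hypothetical ray path

d = np.gradient(path, axis=0)             # derivative with respect to the index
ds = np.linalg.norm(d, axis=1)            # arc-length increment per index step
s = np.concatenate([[0.0], np.cumsum(0.5 * (ds[1:] + ds[:-1]))])  # arc length

T = d / ds[:, None]                       # unit tangent
N = np.column_stack([-T[:, 1], T[:, 0]])  # unit normal (90-degree rotation of T)

dT = np.gradient(T, axis=0) / ds[:, None] # dT/ds
kappa = np.einsum('ij,ij->i', dT, N)      # signed curvature kappa(s)

print(f"total arc length = {s[-1]:.3f}, curvature range = "
      f"[{kappa.min():.3f}, {kappa.max():.3f}]")
```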
Solutions of the parabolic wave equation
To find a solution to Eq. (18), we first find a solution at a fixed arc length s, and then study how this solution must evolve with s. At fixed s, the first two terms of Eq. (18) correspond to the equation for a quantum harmonic oscillator in the direction N transverse to γ. Thus, as we know from quantum mechanics (Cohen-Tannoudji et al. 1973), a solution of this equation is a Gaussian wavepacket, transverse to the ray (Eq. (20)), with Γ an unknown complex-valued function. To find the variation of this Gaussian wavepacket along the ray γ, we introduce a solution of this form in the parabolic equation Eq. (18) and obtain a Riccati equation for Γ and a simple form for the factor A. In the following, we show that the equation for Γ is related to the ray properties in the vicinity of γ and can thus be solved from ray dynamics computations. First, using the variables (z(s), p(s)) in terms of which Γ(s) is expressed, the Riccati equation transforms into the linear Hamiltonian system of Eqs. (23-24), where the (time-dependent) Hamiltonian function is given by Eq. (25). The variation along γ of the Gaussian wavepacket can now be linked to the dynamics of the acoustic rays nearby γ. Indeed, Im(Γ) has a simple expression in terms of the complex variable z, as shown in Eq. (26). This variable, on the other hand, obeys Eqs. (23-24), which can be shown to be the same as the equation describing the deviation from γ of a ray nearby γ (see derivation in Sect. C). The Hamiltonian in Eq. (25) is thus a local integrable approximation, also known as a normal form approximation (Arnol'd 1989), to the full Hamiltonian for acoustic rays written in Eq. (7). Now, our task is to find the two linearly independent complex conjugate solutions of Eqs. (23-24). The terms in these equations depend only on quantities that are evaluated on the periodic ray γ. Therefore, these equations are periodic in s, or equivalently in τ. Eqs. (23-24) can thus be written in the compact form of Eq. (27), where the matrix Σ verifies Σ(τ + T_γ) = Σ(τ), T_γ = ∫_γ ds/c̃_s being the time period associated with γ. Then if (z(τ), p(τ)) is a solution of Eq. (27), so is (z(τ + T_γ), p(τ + T_γ)), and the two solutions are related by the linear map (z(τ + T_γ), p(τ + T_γ)) = M (z(τ), p(τ)) (Eq. (28)), where M is called the monodromy matrix (Cvitanović et al. 2010, and references therein). As (z, p) describe ray deviations from γ, the matrix M characterizes the stability of γ. As γ is stable, we know that |Tr(M)| < 2 and that the eigenvalues are of modulus one and complex conjugates of each other, i.e. Λ_± = exp(±iα) with α ∈ ]0, π[ (cf. Sect. D), where α is called a Floquet phase or stability angle. Hence the two linearly independent solutions of Eq. (27) can be written in Floquet form, where the functions u_±(τ) are periodic with period T_γ and v_± are independent eigenvectors of the monodromy matrix M. An expression of the monodromy matrix in terms of second derivatives of the action function S can be derived (Bogomolny 2006). The action function S is defined by a trajectory from the position q_i to q_f for a given energy or frequency ω (Gutzwiller 1990). For our purposes, the action is written as an integral over σ, the arclength along a ray nearby γ. Writing the monodromy matrix in component form, we can express its components from the second derivatives of the action function S, where the positions q_i and q_f are written as (s_i, z_i) and (s_f, z_f), z_i and z_f being respectively the initial and final transverse positions of the neighboring ray after one period, and the derivatives are evaluated on the periodic ray. From these expressions, and the simple formula giving the roots of a second degree polynomial (cf.
Sect. D), we can obtain an expression for the stability angle α: since the eigenvalues Λ_± = exp(±iα) are of unit modulus, cos α = Tr(M)/2, which can in turn be expressed through the second derivatives of the action. We thus have obtained a solution of the approximate wave equation Eq. (18) in the form of a Gaussian wavepacket (Eq. (20)), whose evolution along the ray γ is given by Eqs. (27-28). It is possible to find other solutions of Eq. (18) that have a finite number of nodes in the direction transverse to γ. In the same way as in quantum mechanics (Cohen-Tannoudji et al. 1973), these solutions can be obtained from Eq. (20) using the annihilation operator â and the creation operator â†. These operators have the commutation rule [â, â†] = 1, where the commutator is defined as [Â, B̂] = ÂB̂ − B̂Â. The expression for the higher-order solutions, obtained by repeated application of â†, defines a recurrence relation whose solutions are proportional to Hermite-Gauss polynomials, with H_ℓ the Hermite polynomials of order ℓ. Finally, the solutions of Eq. (5) follow as Eq. (39).
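Since the higher-order transverse solutions are proportional to Hermite-Gauss functions, their qualitative shape is easy to preview. The sketch below evaluates generic profiles H_ℓ(ν) exp(−ν²/2) only; the actual width and complex phase are set by Γ (hence by the monodromy matrix), which is not computed here, and the unit width is an assumption.

```python
import numpy as np
from scipy.special import eval_hermite

# Sketch of transverse Hermite-Gauss profiles: the ell-th solution has ell
# nodes across the ray. The Gaussian width below is arbitrary; in the theory
# it is fixed by Im(Gamma), obtained from the monodromy-matrix eigenvectors.
def transverse_profile(ell, nu, width=1.0):
    x = nu / width
    return eval_hermite(ell, x) * np.exp(-0.5 * x**2)

nu = np.linspace(-4.0, 4.0, 800)
for ell in (0, 1, 2):
    v = transverse_profile(ell, nu)
    nodes = np.count_nonzero(v[:-1] * v[1:] < 0)   # sign changes across the ray
    print(f"ell = {ell}: {nodes} transverse nodes")
```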
Quantization condition and regular frequency spacings
The quantization condition is based on the single-valuedness of the solution presented in Eq. (39), and thus asserts that the phase accumulated by the function Φ_m^ℓ over one period must be a multiple of 2π. In the following, we assume without loss of generality that the eigenvalue which corresponds to the wavepacket localization is exp(+iα). The contribution of function V_m^ℓ to the dynamical phase of Φ_m^ℓ is obtained from Eq. (30) and Eq. (38). We obtain the phase accumulated over one period, Eq. (40), in which T_γ = ∫_γ ds/c̃_s is the acoustic travel time along γ and α the Floquet phase that is defined modulo 2π. For our purposes, we must also take into account the number N_r of multiples of 2π acquired by the phase. N_r can be computed by following the evolution of the eigenvector v over one period, and we verified numerically that, alternatively, N_r is also the winding number around γ of a ray nearby γ during one period. The last term in Eq. (40) is the Maslov phase (Gutzwiller 1990) that comes from the reflection of the wave on the boundaries. The formula for the frequencies ω_{n,ℓ,m} of island modes, Eq. (41), follows, or, in a form that makes the regularities more visible, Eq. (42), with the frequency regularities δ_n and δ_ℓ (Eq. (43)) and the constant term β (Eq. (44)). The quantum numbers n and ℓ correspond to node numbers of the p-mode, respectively in the longitudinal and transverse directions of the central ray γ, as illustrated in Fig. 3. This is to be contrasted with the case of a spherical mode, where the most natural labeling is by the quantum numbers of spherical harmonics. Eq. (41) is a semi-analytical formula since the quantities T_γ, N_r and α must be computed numerically from the Runge-Kutta integration of the Hamiltonian equations for acoustic rays. The acoustic time, T_γ, is directly computed from γ itself. From the intersections of a ray nearby γ with the PSS (as can be seen in Fig. 2), we compute the monodromy matrix M that maps one intersection with the PSS to the next one. Then, by diagonalizing this matrix one obtains the Floquet phase α from its eigenvalues, and the functions z and Γ from its eigenvectors. It is important to note that for m even, only modes symmetric with respect to the rotation axis exist. Since the preceding theory does not take this phenomenon into account, the theoretical value of δ_ℓ is multiplied by two when the ray γ coincides with the rotation axis, i.e. for rotation rates less than the bifurcation point. Finally, it can be noted that a formula similar to Eq. (41) can also be obtained through the formalism of the Gutzwiller trace formula following the method of Miller (1975).
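As a rough illustration of the numerical chain just described (integrate deviations over one period, build M, diagonalize, read off α, then form the spacings), here is a minimal sketch. The periodic linear system used in place of Σ(τ) is a Mathieu-type stand-in, not the actual stellar deviation equations, and the value of T_γ is an arbitrary assumption; δ_n is evaluated as 2π/T_γ, the form used in Sect. 4.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the Floquet analysis described in the text. The periodic linear
# system below (a Mathieu-type oscillator) is only a stand-in for the
# deviation equations (23-24); the actual Sigma(tau) of the stellar ray
# dynamics is not reproduced here.
T_gamma = 2.0 * np.pi        # assumed period of the central ray (arbitrary units)

def deviation_rhs(tau, zp):
    z, p = zp
    return [p, -(2.0 + 0.3 * np.cos(tau)) * z]   # dz/dtau = p, dp/dtau = -k(tau) z

# Integrate two independent initial conditions over one period to build M.
cols = []
for zp0 in ([1.0, 0.0], [0.0, 1.0]):
    sol = solve_ivp(deviation_rhs, (0.0, T_gamma), zp0, rtol=1e-10, atol=1e-12)
    cols.append(sol.y[:, -1])
M = np.column_stack(cols)                         # monodromy matrix

trace = np.trace(M)
if abs(trace) < 2.0:                              # stable ray: |Tr M| < 2
    alpha = np.arccos(trace / 2.0)                # Floquet phase (branch in [0, pi])
    delta_n = 2.0 * np.pi / T_gamma               # spacing delta_n = 2*pi / T_gamma
    print(f"Tr M = {trace:.4f}, alpha = {alpha:.4f} rad, delta_n = {delta_n:.4f}")
else:
    print("unstable periodic ray: |Tr M| >= 2")
```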
Comparison with numerical results
As the present asymptotic theory relies on various assumptions, its relevance for stellar seismology is not guaranteed and needs to be assessed through a comparison with exact calculations of realistic stellar models. In this section, the comparison is done with numerically computed modes in uniformly rotating polytropic models of stars. The hypotheses of the asymptotic theory are the following : first, it is valid in the asymptotic regime, that is, for high enough frequencies. Second, the island modes are constructed from a stable island of acoustic ray phase space. At null rotation such a structure does not exist, so we expect that the theory fails to describe spherical mode amplitudes and frequencies. A stable island immediately appears at non-zero rotation, but its phase space volume must be high enough for an island mode to exist. This volume increases with frequency and rotation (see Lignières & Georgeot (2008, 2009) for details). Thus, for a given frequency range, the number of island modes starts from zero at small rotation rates and progressively increases as the rotation and thus the phase space volume of the stable island grows. Actually, low degree spherical modes become progressively island modes as rotation increases. Another assumption used in finding a solution to the wave equation is that the mode decays as ∝ 1/ √ ω in the direction transverse to the periodic ray. Finally, the theory also neglects the Coriolis force and the perturbations of the gravitational potential.
In the following, the asymptotic theory is compared with highly accurate computations of high-frequency adiabatic modes in uniformly rotating polytropic stellar models with index N = 3, the Coriolis effect and perturbations of the gravitational potential being taken into account. The accuracy of these calculations, described in detail in Reese et al. (2006), is very high (the relative precision on the frequencies is 10^{-7}) and thus does not interfere with the present comparison. A large number of modes were followed from Ω/Ω_K = 0 to Ω/Ω_K = 0.896. At zero rotation these modes are low degree ℓ_s ∈ {0, 1, 2, 3}, high order n_s ∈ [21, 25] modes. At higher rotation rates, they become island modes and can thus be labeled with n and ℓ, the number of nodes along and transverse to γ, respectively, as illustrated in Fig. 3. The relation between the quantum numbers at zero and high rotation rates is given by Eqs. (45-47) (Reese 2008). We remind the reader here that rotational multiplets are defined, in the non-rotating case, as a set of frequencies with identical n_s, ℓ_s quantum numbers but different values of m_s, with m_s ∈ [−ℓ_s, ℓ_s].
For rotating stars, we can define multiplets as frequencies with identical n and ℓ but different m ∈ Z, i.e. without any limiting value for m. The relation between the two sets of quantum numbers and the two types of multiplets is illustrated in Fig. 3. In this figure the multiplets of island modes correspond to diagonal colored bands, whereas the multiplets at zero rotation would have the form of vertical bands. We restricted ourselves to numerical modes with ℓ_s^max = 3 so, in terms of island mode quantum numbers, the range of numerically computed modes is the one given in Table 1. The associated numerical frequency spacings are defined as δ_n^N = ω^N_{n+1,ℓ,m} − ω^N_{n,ℓ,m} (Eq. (48)) and δ_ℓ^N = ω^N_{n,ℓ+1,m} − ω^N_{n,ℓ,m} (Eq. (49)). The semi-analytical asymptotic theory also requires determining the α term in Eq. (43) numerically. To test the robustness of this calculation, we checked that the frequency spacing δ_ℓ only weakly depends on the choice of the ray nearby γ that is used to compute α. Also, the spacings δ_n and δ_ℓ neither depend on the resolution of the background model nor on the integration parameters of the Runge-Kutta method.
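For completeness, the bookkeeping behind these numerical spacings can be written down explicitly. In the sketch below the frequency table is filled with invented placeholder values following an exactly linear law, only to exercise the definitions above (the δ_ℓ^N form is taken by analogy with δ_n^N).

```python
# Sketch: extracting delta_n^N and delta_ell^N from a table of computed
# frequencies omega[(n, ell, m)]. The values below are invented placeholders
# on an exactly linear law, only meant to show the bookkeeping.
omega = {(n, ell, 0): 10.0 + 0.52 * n + 0.31 * ell
         for n in range(42, 47) for ell in range(0, 3)}

def delta_n_num(n, ell, m=0):
    return omega[(n + 1, ell, m)] - omega[(n, ell, m)]

def delta_ell_num(n, ell, m=0):
    return omega[(n, ell + 1, m)] - omega[(n, ell, m)]

print(f"delta_n^N   = {delta_n_num(43, 0):.3f}")
print(f"delta_ell^N = {delta_ell_num(43, 0):.3f}")
```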
Regular frequency spacings
According to Eq. (42), the structure of the spectrum is characterized by the two spacings δ_n(m) and δ_ℓ(m). In Fig. 4, their semi-analytical and numerical values computed for m = 0 and |m| = 1 are compared as a function of the rotation rate. One can see that the semi-analytical regularities δ_n, δ_ℓ, and the full computations of high-frequency p-modes are in good agreement for almost all rotation rates. For m = 0, around Ω/Ω_K ≃ 0.26, the agreement degrades significantly. In this rotation range, the ray γ in the center of the main stable island undergoes a bifurcation from one stable ray on the polar axis to two stable rays surrounding one unstable ray. When such a bifurcation occurs, the eigenvalues of the monodromy matrix become Λ_± = 1, corresponding to a Floquet phase α = 0 mod 2π (see e.g. Brack 2001). This is indeed what happens at Ω/Ω_K ≃ 0.26, as δ_ℓ ∝ α goes to zero. Such a behavior conveys the non-validity of the present normal form approximation for rays undergoing a bifurcation. One possibility would be to use other local approximations of the ray dynamics called uniform approximations (Schomerus & Sieber 1997). The discrepancy coming from the bifurcation is not to be found for m ≠ 0, since the stable ray stays away from the polar axis and does not undergo a bifurcation as rotation increases.
Although, as mentioned before, the theoretical and numerical frequency spacings are not expected to match for slow rotation rates, the discrepancies remain small in this rotation range. This is due to the fact that, as Ω/Ω_K approaches zero, the stable ray is along the polar axis and, according to the expression of δ_n, this implies that δ_n = ∆/2, i.e. half the large separation defined in Eq. (3). Now, using the first order of Tassoul's formula and the quantum numbers conversion rules Eqs. (45-47), it is easy to see that, at zero rotation, δ_n^N = ω^N_{n+1,ℓ,m} − ω^N_{n,ℓ,m} is expected to be close to half the large separation. Note also that the doubling of the numerical values observed in Fig. 4 at small rotation rates is due to the small separation that appears at the next order of Tassoul's theory. Concerning δ_ℓ, the semi-analytical calculations indicate that δ_ℓ goes to 2δ_n, that is ∆, for slow rotation rates. Again, Tassoul's formula applied to the n, ℓ quantum numbers shows that δ_ℓ^N is close to ∆. For these reasons, the frequency spacings δ_n and δ_ℓ converge to the results of the first order of Tassoul's formula, though their derivation is formally not possible for non-rotating stars.
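The slow-rotation limit quoted here can be made explicit with a short calculation, assuming that one period of the on-axis two-periodic ray corresponds to two pole-to-pole crossings and that c̃_s reduces to c(r) (ω_c and m contributions neglected), and using δ_n = 2π/T_γ as in Sect. 4.2:

```latex
% Sketch of the slow-rotation limit of \delta_n, under the assumptions above.
T_\gamma \;\simeq\; 2 \times 2\int_0^{R} \frac{\mathrm{d}r}{c(r)}
  \;=\; 4\int_0^{R}\frac{\mathrm{d}r}{c(r)},
\qquad
\delta_n \;=\; \frac{2\pi}{T_\gamma}
  \;\simeq\; \frac{1}{2}\,\frac{2\pi}{2\int_0^{R} \mathrm{d}r/c(r)}
  \;=\; \frac{\Delta}{2}.
```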
In order to investigate the drift between the spectra of different m as rotation increases, we consider the frequency spacing δ_m = ω_{n,ℓ,m} − ω_{n,ℓ,0} (Eq. (50)). Figure 5 displays a comparison between the numerical and semi-analytical values of δ_m for ℓ = 0 and |m| = 1. As expected, the agreement is not good at small rotation rates. Using the first order of Tassoul's formula to approximate the numerical results at zero rotation, δ_m^N = ω^N_{n,ℓ,m} − ω^N_{n,ℓ,0} is found to be close to |m|∆/2 when Ω/Ω_K = 0. This is not compatible with the asymptotic theory of the island mode since it predicts that δ_m goes to zero when ω goes to infinity. Indeed, δ_n(m) depends on m/ω because c̃_s and the ray path γ, given by the Hamiltonian Eq. (7), both depend on m/ω. An alternative explanation is to consider the spatial distribution of island modes of fixed m and ℓ: one finds that increasing ω produces both larger derivatives along the stable ray, associated with a higher node number n, and larger transverse derivatives, because the transverse extent scales as 1/√ω. Thus, the contribution of the azimuthal derivatives becomes negligible in the wave equation Eq. (4). We also verified that δ_m displayed in Fig. 5 diminishes when n is increased. Thus, for rotation rates such that the numerical modes are not fully island modes, they behave more like spherical modes and δ_m^N shows clear discrepancies with the asymptotic results. By contrast, at high rotation rates, an approximate analytical formula for δ_m is derived in the following and shown to closely reproduce the numerical results.
Starting from the expression of δ_m obtained from Eq. (42) (Eq. (51)), we first assume that n is large enough to neglect β(m) − β(0). From Eq. (43), δ_n(m) is equal to 2π/T_γ(m), where T_γ(m) = ∫_γ ds/c̃_s (Eq. (52)). The dependence of T_γ on m is explicit in the integrand but is implicit in the integration path γ. In the following, the variation of the location of γ with m/ω is assumed to be negligible. Then, an expansion in 1/ω of the integrand in Eq. (52) leads to Eq. (53), hence an approximate expression for δ_n(m) (Eq. (54)). If we insert this expression for δ_n(m) in Eq. (51) and neglect β(m) − β(0), we obtain Eq. (55). Finally, normalizing by ω_p = (GM/R_p³)^{1/2} and replacing ω by its leading-order estimate in terms of n yields Eq. (56), where we have used the fact that the ratio between n and ω/ω_p stays constant in the frequency range considered here, being close to ω_p ∫_γ ds/c̃_s / (2π). The previous expression will be made more precise by renormalizing the value of c_s by 1 − ω_c²/ω² to take into account that ω_c is not negligible, and indeed is of the order of ω, close to the stellar surface. In Fig. 6, the numerical values of δ_m (ℓ = 0) as well as results for the last term in Eq. (56) are plotted as a function of m/√n for Ω/Ω_K = 0.419, showing a good agreement. This behavior is valid for rotation rates higher than Ω/Ω_K ≃ 0.4. It must also be noted that in the numerical calculations by Reese et al. (2009), using more realistic stellar models, the asymptotic m/√n dependency was also found empirically.
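A hedged sketch of this expansion, assuming that the azimuthal contribution enters the renormalized sound speed as c̃_s ≃ c_s [1 − m²c_s²/(ω²d²)]^{-1/2} (an assumed form, since Eq. (6) is not reproduced here) and neglecting the ℓ and β terms as above, reads:

```latex
% Assumed form of the azimuthal correction; ell- and beta-terms neglected.
T_\gamma(m) = \int_\gamma \frac{\mathrm{d}s}{\tilde{c}_s}
 \;\simeq\; T_\gamma(0) \;-\; \frac{m^2}{2\,\omega^2}\int_\gamma \frac{c_s}{d^2}\,\mathrm{d}s ,
\qquad
\delta_n(m) = \frac{2\pi}{T_\gamma(m)}
 \;\simeq\; \delta_n(0)\left[\,1 + \frac{m^2}{2\,\omega^2\,T_\gamma(0)}\int_\gamma \frac{c_s}{d^2}\,\mathrm{d}s\right],
\\[4pt]
\delta_m \;\simeq\; n\,\bigl[\delta_n(m)-\delta_n(0)\bigr]
 \;\simeq\; \frac{m^2}{n}\,\frac{1}{4\pi}\int_\gamma \frac{c_s}{d^2}\,\mathrm{d}s ,
\qquad \text{using } \omega \simeq \frac{2\pi n}{T_\gamma(0)} .
```

This reproduces the (m/√n)² behaviour and the ∫_γ (c_s/d²) ds dependence quoted in Sect. 5, but the numerical prefactor depends on the assumed form of c̃_s.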
Pressure amplitudes of island modes
In this section, we compare the results obtained from the semi-analytical formula for mode spatial distributions, Eq. (39), with results from full numerical computations. Equatorial cuts of the semi-analytical modes can be expressed as a function of √ω (r − r_0), where r_0 is the radial position of the ray γ, while the value of Γ is obtained from the eigenvectors of the monodromy matrix M. In Fig. 7, the equatorial cuts of semi-analytical and numerical modes are plotted for different rotational velocities and quantum numbers ℓ and m. The chosen modes are representative of the different behaviors observed. Discrepancies between semi-analytical and numerical results are mainly due to edge effects. This occurs when the transverse extent of the mode (which scales as 1/√ω) reaches either the polar axis for small rotations, or the surface near the equator for high rotations. Finally, avoided crossings can also prevent an accurate prediction for mode amplitudes since the amplitudes of crossing modes will be linear combinations of all the modes contributing to the crossing. Hence, modes undergoing an avoided crossing can differ significantly from Eq. (39) (cf. third panel in Fig. 7). Overall, there is nevertheless a good agreement between the semi-analytical and numerical results for mode spatial distributions, showing the validity of Eq. (39).
Phenomenology and observables for asteroseismology
In this section, we show that the asymptotic theory provides a simple understanding of the evolution of the island mode spectrum with rotation. Then, the physical content of the potentially observable frequency spacings δ_n, δ_ℓ and δ_m is discussed. Figure 8 displays the global evolution of all the numerical frequencies considered in the observer's frame, whose island mode quantum numbers can be found in Table 1 (or equivalently n_s ∈ [21, 25], ℓ_s ∈ [0, 3], m_s ∈ [−ℓ_s, ℓ_s] in spherical mode quantum numbers). The first phenomenon that can be noticed is a global decrease of frequencies with rotation. This effect is simply due to the increasing volume of the star when it is spinning rapidly. Besides this global effect, the evolution of the spectrum's organization can be inferred from the evolution of the frequency spacings δ_n, δ_ℓ and δ_m. The spacing δ_n stays almost constant from null up to high rotations, its value remaining close to half the large frequency separation of the spherical model. If a large number of island modes are detected in an observed spectrum, δ_n should be easily extracted from the data. By contrast, the rapid evolution of δ_ℓ with rotation will strongly modify the spectrum's organization. This is shown in Fig. 9 where for clarity only a few m = 0 modes have been displayed: the ℓ = 0, n ∈ [43, 46] and ℓ = 1, n ∈ [42, 44] modes (or equivalently the (n_s, ℓ_s) ∈ {(21, 1), (21, 2), (21, 3), (22, 0), (22, 1), (22, 2), (23, 0)} modes). Starting from the usual structure at zero rotation involving the large and small separations of Tassoul's theory, the spectrum reorganization induced by the decrease of δ_ℓ can also be viewed as an increase of the small separation δ. Then, above Ω/Ω_K ≃ 0.45, the structure of the m = 0 spectrum remains practically unchanged. Now, to illustrate the evolution of the spectra of different m, Fig. 10 displays the n = 44, ℓ = 0, m ∈ {−2, −1, 1, 2} mode frequencies as a function of the rotation rate together with the n ∈ [43, 46], ℓ = 0, m = 0 frequencies. The main feature of this evolution is the decrease of δ_m from ≃ (∆/2)|m| at zero rotation to very small values at high rotations. When multiplets of island modes are defined as in Sect. 4 and Fig. 3, they show no regularity at small rotation rates. Note, however, that the splitting ω_m − ω_{−m} is always very close to −2mΩ because the effects of the Coriolis force are negligible. By contrast, at high rotation rates, as δ_m vanishes above Ω/Ω_K ≃ 0.45, the m ∈ {−2, −1, 0, 1, 2} modes clearly form a regular multiplet, as can be seen in Fig. 10, where deviations from strict Ω spacings are due to the (m/√n)² term. Since for such rotation rates the structure of the m = 0 spectrum remains unchanged, the evolution of the whole spectrum in the observer's frame is dominated by the advection term mΩ.

Fig. 9. For clarity, the corresponding degrees of the spherical harmonics are also written. We have outlined the large frequency separation ∆, the small frequency separation δ = ω_{n_s,ℓ_s} − ω_{n_s−1,ℓ_s+2} and the δ_n, δ_ℓ spacings with arrows.
The global evolution of mode frequencies in the observer's frame shown in Fig. 8 also presents some particular events: a first clustering of mode frequencies occurs around Ω/Ω K ≃ 0.25 and then a second one around Ω/Ω K ≃ 0.56. Both phenomena can be understood from the asymptotic theory. According to the asymptotic formulas Eqs. (41-44), crossings of mode frequencies will happen when δ ℓ /δ n , or equivalently α/π, has a rational value. Though the asymptotic theory predicts true eigenvalue crossings, it is known that these crossings will be avoided if the two modes are of the same symmetry class (Landau & Lifshitz 1977). As can be seen on Fig. 4, δ ℓ becomes equal to δ n at some rotation rate around Ω/Ω K ≃ 0.25 where the spectrum for a given m simplifies to ω n,m = δ n (m)n + β. The degeneracy occurs between modes of a different symmetry class, and the rotation rate at which it occurs depends only weakly on the m values of the modes, if m is small. This property translates itself into a clustering of the full spectrum in the observer's frame because it turns out that this rotation rate is close to δ n /2; and δ m , that decreases from an initial value of mδ n to zero at high rotation, is around mδ n /2 at this intermediate rotation. The second frequency clustering close to Ω/Ω K ≃ 0.56 is related to the fact that δ m vanishes at high rotation. In this regime, the different m spectra are expected to collapse onto a single spectrum in the rotating frame but not in the observer's frame. However, when Ω is equal to δ n , the near degeneracy of the m spectra produces the frequency clustering observed at Ω/Ω K ≃ 0.56.
One of the interests of asymptotic theories in asteroseismology is to gain physical insights into seismic observables such as δ_n, δ_ℓ, and δ_m. In the following, we briefly discuss this point with emphasis on the differences and similarities with the physical content of the large and small separation from Tassoul's theory. The spacing δ_n depends only on the acoustic travel time T_γ = ∫_γ ds/c̃_s along the acoustic ray γ. We expect T_γ to be dominated by the time spent in the sub-surface region where the sound speed is much smaller than in the interior. While the path of the ray varies with rotation, δ_n remains approximately proportional to the mean density as shown by Reese et al. (2008). On the other hand, the δ_ℓ spacing depends also on the second derivatives of the sound speed transverse to the ray γ, information integrated along the whole ray. As long as the path of the stable ray goes through the central region of the star, the island mode frequencies should be sensitive to the chemical stratification and thus the age of the star. However, after the bifurcation at Ω/Ω_K ≃ 0.26, the ray path progressively avoids the central region and the island modes do not contain this information anymore. Another interesting property of δ_ℓ (or δ_ℓ/δ_n) is that it is very sensitive to rotation as long as Ω/Ω_K ≤ 0.35. Finally, for high rotation rates (Ω/Ω_K ≥ 0.40), the value of δ_m, which can be detected through the irregularity of multiplets, also gives information on rotation since it is proportional to ∫_γ (c_s/d²) ds (Eq. (56)), where the distance of the ray to the rotation axis d strongly depends on the rotation rate.
Conclusions
In this paper, we derived an asymptotic formula for frequencies that predicts and describes regular spacings in the p-mode spectrum of rapidly rotating stars. The derivation relied on finding a stable periodic solution of the acoustic ray dynamics, and obtaining an expression for the modes that are localized around this ray, the so-called island modes. The method thus provides a formula for the island mode frequencies, as well as a formula for the mode spatial distributions. We compared these semi-analytical formulas with results from numerical computations of high-frequency oscillations in rotating polytropic stellar models. The frequency spectrum is characterized by the three spacings δ_n, δ_ℓ and δ_m. The agreement was shown to be good for δ_n and δ_ℓ at almost all rotation rates, while δ_m shows significant discrepancies at low rotation rates. The spacing δ_n stays almost constant at all rotation rates with a value that is close to half the large frequency separation of the non-rotating model. On the other hand, the rapid decrease of δ_ℓ strongly modifies the spectrum's organization up to Ω/Ω_K ≃ 0.4, while above that rotation rate, δ_ℓ remains approximately constant. For such high rotation rates, the spacing δ_m nearly vanishes, thus in the observer's frame the evolution of the whole spectrum is dominated by the advection term mΩ. We have also seen that the combined evolution of these frequency spacings with rotation leads to particular events such as true or near degeneracies, which can significantly simplify the spectrum. In addition to these new insights on the evolution of the island mode spectrum with rotation, the asymptotic theory provides semi-analytical formulas for the regular spacings, in particular simple formulas for δ_n and δ_ℓ.
The present asymptotic theory should be useful for different aspects of stellar seismology in the presence of rapid stellar rotation. The regular frequency spacings are potentially observable and our results provide guidance to look for them in data. While investigations dedicated to the search for regularities are necessary (e.g. Lignières et al. (2010)), we expect that the easiest quantities to detect in an island mode spectrum are δ n at any rotation rate, 2mΩ at small rotation rates, and Ω at high rotation. For modeling pulsations, the asymptotic theory provides a new approach, complementary to numerical computations. One of its advantages is to give a quick estimate of frequency spacings for a given stellar model, which in turn can be used to search for patterns in numerically computed spectra. In the same spirit, the semi-analytical amplitude distributions might provide a useful approximation for calculating mode visibilities and spectral signatures.
The asymptotic theory in itself can be improved and extended in various ways. We have already mentioned that the method needs to be refined at the rotation rate where the bifurcation of the stable ray occurs, using uniform approximations of the ray dynamics. It would also be interesting to predict analytically the rotation rate of the bifurcation for a given sequence of stellar models. The present method also assumes that the modes are governed by local dynamics around the stable ray. This assumption can be tested with a numerical EBK method applied to the tori of the stable island (Bohigas et al. 1993), although this method is complicated to implement in practice. In this paper, we left aside the determination of the actual number of modes that are described by the asymptotic theory. An estimate of such a number can be obtained by computing systematically the phase space volume of stable islands for different rotations (e.g. Lignières & Georgeot 2009). Then, knowing the value of δ_ℓ that gives the mean distance between island mode frequencies, or the mean density of these modes, one could compute the ℓ_max of the modes that satisfy our formulas. Another aspect that we have not modeled is avoided crossings, despite the fact that they induce important deviations, especially at low frequencies. Strong gradients of the sound speed will also produce deviations from the asymptotic theory. A technique called ray-splitting, which has already been used successfully in quantum chaos (Blümel et al. 1996), could account for this effect. Finally, a similar technique could be applied to asymptotic gravity modes, which were recently shown to have connections with ray theory (Ballot et al. 2011).
Conversion from chronic to episodic migraine in patients treated with galcanezumab in real life in Italy: the 12-month observational, longitudinal, cohort multicenter GARLIT experience
Objective To investigate in real-life the conversion from chronic migraine (CM) to episodic migraine (EM), specifically to EM with High-Frequency (HFEM: 8–14 monthly migraine days, MMDs), Medium-Frequency (MFEM, 4–7 MMDs), and Low-Frequency EM (LFEM, 0–3 MMDs), and its persistence during 1 year of treatment with galcanezumab. Methods Consecutive CM patients treated with galcanezumab completing 1 year of observation were enrolled. We collected data on MMDs, pain intensity (Numeric Rating Scale, NRS score), and monthly acute medication intake (MAMI) from baseline (V1) to the 12-month visit (V12). Results Of the 155 enrolled patients, 116 (around 75%) reverted to EM at every visit and 81 (52.3%) for the entire 1-year treatment. Patients with older onset age (p = 0.010) and fewer baseline MMDs (p = 0.005) reverted more frequently to EM. At V12, 83 participants (53.5%) presented MFEM or LFEM. Patients reverted to MFEM or LFEM for 7 months (25th 1, 75th 11). The medication overuse discontinuation rate at V12 was 82.8% and occurred for 11 months (25th 8, 75th 12). From baseline to V12, the MAMI decreased by 17 symptomatic drugs (p < 0.000001) while the NRS score reduced by almost 2 points (p < 0.000001). A consistent transition to EM for the entire treatment year was observed in 81 (52.3%) patients. Discussion The 1-year GARLIT experience suggests that more than half of CM patients treated with galcanezumab persistently reverted to EM in real life. Trial registration ClinicalTrials.gov NCT04803513. Supplementary Information The online version contains supplementary material available at 10.1007/s00415-022-11226-4.
Introduction
Migraine is among the most disabling neurological conditions. In 2019, headache disorders caused disability in 46.6 million people globally. Of those, 88.2% were attributable to migraine, representing the second highest cause of disability worldwide [1,2]. Migraine affects people in their productive years, impairing their work performance and their social and familial contexts [3]. Moreover, around 8% of patients experience a progressive increase in the frequency of attacks to the point where migraine becomes chronic [4]. Patients with chronic migraine (CM) [5] suffer pain as part of a constellation of symptoms, including non-cephalalgic pain, emotional distress, sleep and gastrointestinal disorders, and other somatic conditions [6,7]. In addition, CM patients are often forced to consume analgesics to relieve pain, resulting in medication overuse (MO), which worsens patients' quality of life and is a risk factor for migraine chronification [8].
In this context, calcitonin gene-related peptide (CGRP) targeted therapies revolutionized migraine management [9]. Before their availability, international guidelines [10] recommended the use of prophylactic medications not specifically developed for migraine treatment and burdened by poor long-term adherence due to their adverse events and often inadequate effectiveness [11].
Randomized controlled trials (RCTs) have consistently demonstrated that monoclonal antibodies (mAbs) specifically designed to target CGRP or its receptor are safe and effective in preventing CM [12]. These results have also been confirmed by real-life studies showing that clinical improvements can be even better in everyday clinical practice than in RCTs [13][14][15][16]. However, few studies focused on the efficacy of CGRP targeting mAbs in reverting CM to EM, mainly in the short term [16][17][18]. Chronic migraine, indeed, is a fluctuating condition: nearly 75% of patients with CM can remit to EM for at least 3 months during 1 year [19]. Migraine chronification can also be reversible: about 26% of patients with chronic migraine remit within 2 years [20].
Galcanezumab has been available in Italy for migraine prevention since September 2019. Although RCTs demonstrated high efficacy and tolerability of galcanezumab in CM patients [21], a noticeable impact on their severely impaired quality of life is achieved only with a sustained response to preventive treatment.
The present prospective, observational, multicenter study aimed to investigate in real life the persistence of conversion to EM during 1 year of therapy with galcanezumab in CM patients.
Participants and study design
Galcanezumab for the prevention of high frequency episodic and chronic migraine in Real Life in ITaly, i.e., the GARLIT study, is an independent, multicenter, prospective, cohort, real-life study ongoing at 15 headache centers across 8 Italian regions from September 2019. The present study included data from the latest survey on December 6, 2021.
All consecutive patients aged 18 or older with a diagnosis of HFEM (8-14 migraine days per month) or CM (1.3 ICHD-3) [5], with the clinical indication for galcanezumab according to the eligibility criteria [22], were considered for enrollment in the GARLIT study. Patients had not previously been involved in any CGRP mAbs trial. The present paper considered only CM patients with 12 months of observation from the start of therapy. Patients were treated with galcanezumab subcutaneous injections as recommended (https://www.ema.europa.eu/en/documents/product-information/emgality-epar-product-information_en.pdf). They received a first loading dose of 240 mg, followed by 120 mg every month. The Italian Medicines Agency allows the reimbursement of CGRP mAbs therapy in migraine patients with at least 8 monthly migraine days and moderate disability (MIDAS score ≥ 11), having a history of an insufficient response to at least three classes of prophylactic treatments (not including calcium-antagonists).
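For illustration, the enrollment and reimbursement thresholds quoted above can be summarized as a simple check; the field names below are hypothetical and the snippet is only a sketch of the criteria, not a clinical tool.

```python
from dataclasses import dataclass

# Minimal sketch of the enrollment/reimbursement thresholds quoted in the text
# (age >= 18, >= 8 monthly migraine days, MIDAS >= 11, >= 3 failed preventive
# classes, calcium antagonists not counted). Field names are hypothetical.
@dataclass
class Patient:
    age: int
    monthly_migraine_days: int
    midas_score: int
    failed_preventive_classes: int

def eligible_for_galcanezumab(p: Patient) -> bool:
    return (p.age >= 18
            and p.monthly_migraine_days >= 8
            and p.midas_score >= 11
            and p.failed_preventive_classes >= 3)

print(eligible_for_galcanezumab(Patient(34, 19, 25, 3)))   # True
print(eligible_for_galcanezumab(Patient(41, 6, 14, 4)))    # False
```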
Data collection
Data collection of the GARLIT study is described elsewhere [23]. Patients were assessed at baseline by a headache expert with a face-to-face interview using a semi-structured questionnaire addressing socio-demographic factors, clinical migraine features, previous and current acute and preventive migraine treatments, comorbidities, and concomitant medications. Migraine-related dopaminergic and unilateral cranial autonomic symptoms and allodynia during or between attacks were also investigated. Cranial autonomic symptoms were defined as at least one of the following: ipsilateral conjunctival injection, lacrimation, nasal congestion, rhinorrhoea, forehead, facial sweating, miosis, ptosis, and eyelid edema. Dopaminergic symptoms were at least one of the following: yawning, drowsiness, severe nausea (i.e., requiring specific treatment), and vomiting during prodromes, headache stage, and postdromes. Patients were also requested to rate the overall efficacy of triptans in most attacks as none/poor or fair/excellent. Enrolled patients were requested to carefully fill out a daily headache diary reporting monthly migraine days (MMDs) and monthly acute medication intake (number of tablets/month, MAMI) during a run-in month period (baseline) and the 12 months of the study. We calculated the ratio between mean MAMI and MMDs to assess the number of acute medications per attack. Acute medications were classified into triptans, NSAIDs/ paracetamol, and combination drugs. All patients were educated on the headache diary use before enrollment in the GARLIT study. Medication overuse was defined in patients taking ≥ 15 NSAIDs or ≥ 10 triptans per month. Based on MMDs, at each time point, patients were classified as CM, HFEM, Medium-Frequency Episodic Migraine (MFEM; 4-7 MMDs), and Low-Frequency Episodic Migraine (LFEM, < 4 MMDs). Patients were also asked to rate the pain severity (score 0-10 at the Numerical Rating Scale, NRS) of the monthly most painful attack.
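A compact way to express the monthly classification and the MO definition used here is sketched below; the ≥ 15 MMD cut-off for CM at follow-up is an assumption made for illustration (the baseline CM diagnosis follows ICHD-3 criteria).

```python
# Sketch of the monthly classification used at each visit, based on monthly
# migraine days (MMDs) and acute-medication counts from the headache diary.
# The >= 15 MMD cut-off for CM is assumed here as the operational follow-up
# threshold; the formal CM diagnosis at baseline follows ICHD-3 (1.3).
def frequency_class(mmd: int) -> str:
    if mmd >= 15:
        return "CM"
    if mmd >= 8:
        return "HFEM"    # 8-14 monthly migraine days
    if mmd >= 4:
        return "MFEM"    # 4-7 monthly migraine days
    return "LFEM"        # 0-3 monthly migraine days

def medication_overuse(nsaid_or_paracetamol: int, triptans: int) -> bool:
    return nsaid_or_paracetamol >= 15 or triptans >= 10

print(frequency_class(9), medication_overuse(nsaid_or_paracetamol=4, triptans=12))
```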
The above-reported variables were recorded at baseline and monthly at every visit (V1 to V12). Telephone/email contacts were allowed when in-office visits were not possible (e.g., isolation/quarantine due to the SARS-CoV-2 pandemic).
Endpoints
The primary endpoint was the conversion rate from CM to EM and, more specifically, to HFEM, MFEM, and LFEM at each time point from V1 to V12. Secondary endpoints included the rate of MO discontinuation and changes in MAMI and monthly NRS score. We also investigated the predictive factors of MO discontinuation and the conversion to MFEM/LFEM compared to CM/HFEM in the last month of therapy (V12). Finally, we evaluated the use of acute medications during the 12 months of therapy.
Standard protocol approvals, registrations, and patient consents
All patients provided written informed consent. The study was approved by the Campus Bio-Medico University Ethical Committee n.30/20, mutually recognized by the other local ethical committees, and registered at the Italian Medicines Agency (Agenzia Italiana del Farmaco, AIFA) and at ClinicalTrials.gov NCT04803513.
Data availability statement
Anonymized data will be shared by request from any qualified investigator.
Statistical analysis
This is an a priori analysis. To achieve a power of 80% and a level of significance of 5% (two-sided), for detecting an effect size of 0.25 between paired variables, we calculated a sample size of at least 128 subjects. Statistical analyses were performed with SPSS version 27.0 (SPSS Inc., Chicago, IL, USA). Interval variables between groups were compared with the independent t test (expressed as means with standard deviations [SD]) or Mann-Whitney tests (medians with 25th, 75th percentiles). The paired t test was used to analyze the variable changes over time. Contingency tables (chi-square and two-tailed Fisher's exact tests) and unadjusted odds ratios (OR) with their 95% confidence intervals (CI) were run to compare frequencies between groups. All tests were bilateral. Statistical significance was set as a two-tailed p < 0.05. We included only subjects with complete information regarding the primary variable (MMDs). We declared data availability of secondary variables (MAMI, NRS), excluding patients with missing values from the analysis. We assessed the percentage of patients with CM, HFEM, MFEM, and LFEM and patients with MO from V1 to V12.
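Although the analyses were run in SPSS, the a priori sample-size computation can be reproduced, for illustration, with an equivalent open-source call (paired design, effect size 0.25, two-sided α = 0.05, power 0.80):

```python
import math
from statsmodels.stats.power import TTestPower

# Sketch of the a priori sample-size computation described above: paired
# comparison, effect size 0.25, two-sided alpha = 0.05, power = 0.80.
analysis = TTestPower()
n = analysis.solve_power(effect_size=0.25, alpha=0.05, power=0.80,
                         alternative='two-sided')
print(math.ceil(n))   # close to the 128 subjects quoted above
```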
We initially investigated which clinical baseline characteristics were associated with conversion to EM, MFEM/LFEM, and MO discontinuation at V12. These variables (considering only p < 0.02) were entered as independent variables in the binary logistic regression (forced entry) to confirm the association with conversion from CM to MFEM or LFEM and with MO discontinuation (dependent variables). Bonferroni correction was applied for multiple comparisons.
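A minimal sketch of such a forced-entry model, using hypothetical column names and invented placeholder rows rather than the study data, could look as follows (statsmodels is used here in place of SPSS):

```python
import pandas as pd
import statsmodels.api as sm

# Hedged sketch of a forced-entry binary logistic regression. Column names
# and rows are invented placeholders, not the GARLIT dataset.
df = pd.DataFrame({
    "reverted_to_EM_at_V12": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "onset_age":             [22, 14, 30, 20, 20, 28, 35, 16, 26, 25],
    "baseline_MMDs":         [18, 27, 16, 20, 20, 25, 15, 19, 22, 18],
})

X = sm.add_constant(df[["onset_age", "baseline_MMDs"]])   # all predictors entered together
model = sm.Logit(df["reverted_to_EM_at_V12"], X).fit(disp=0)
print(model.params)          # log-odds coefficients
print(model.conf_int())      # 95% confidence intervals
```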
Results
Since the first galcanezumab injections, 161 CM patients completed 12 months of observation and were considered in the present study. Six subjects were excluded from the current analysis since the complete data set regarding the primary studied variables was unavailable. We finally enrolled 155 patients. Of these, 22 patients (14.2%) dropped out due to lack of effectiveness (20) or adverse events (2) after at least 3 months of therapy; these individuals were included in the analysis as still CM and still MO and considered for the other endpoints for their respective treatment period (Fig. 1). From baseline to V12, participants reported a decrease in MMDs (around 10 days, 9.6 ± 7.9; p < 0.00001), in MAMI (around 17 drugs, 8.2 ± 8.7; p < 0.000001), and in pain intensity (almost 2 points in NRS score, 5.9 ± 1.79; p < 0.000001). At V12, 48 (30.9%) patients were on concomitant preventive medications. Figure 2 shows the percentage of patients converting to EM during the 12 months of treatment. Around 75% or more of them reverted to EM at each evaluation visit (Fig. 2A). Table 2 reports multivariate logistic regression results having "reversion to EM" at V12 as the dependent variable. After Bonferroni correction, reversion to EM was associated with older onset age (p = 0.010) and fewer baseline MMDs (p = 0.005).
Return to EM occurred for a median of 12 cumulative months (25th 9, 75th 12) and return to MFEM or LFEM for a median of 7 cumulative months (25th 1, 75th 11). After the first month of treatment, 63 patients (40.6%) presented fewer than 8 MMDs (MFEM/LFEM), increasing to 83 (53.5%, Fig. 2B) at V12. However, only 32 (20.6%) improved to MFEM/LFEM consistently from V1 to V12. Figure 3 illustrates the percentage of patients with MO during the observation period. At baseline, 122 (78.7%) participants presented MO. At V12, 101 of them (82.8%) had discontinued MO. Patients discontinued MO for a median of 11 cumulative months (25th 8, 75th 12). Supplemental Fig. 1 (panel B) shows the percentage of patients discontinuing MO for at least 1 (97.5%) and up to 12 (41.8%) cumulative months of therapy. Figure 4 displays the variations in MAMI (panel A) and NRS values (panel B) across evaluation times. Although the decrease in MAMI intake was principally observed in the first month of therapy, it became more pronounced from V1 to V12 (p = 0.01). The ratio between mean MAMI and MMDs was above 1 at baseline (1.29, i.e., 29% more than one acute medication per migraine day) but consistently lower than 1 (as low as 0.80 at V11, i.e., 20% less than one acute medication per migraine day) from V4 to V12 (Fig. 5). Table 3 summarizes baseline demographic and clinical profiles of patients with baseline MO and compares them according to the presence of MO at the end of the treatment year.
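The MAMI/MMD ratio reported here is simply the ratio of visit-level means; a toy sketch of that computation, on invented placeholder diary rows, is shown below.

```python
import pandas as pd

# Sketch of the mean MAMI / mean MMD ratio computed at each visit; the few
# diary rows below are invented placeholders, not GARLIT data.
diary = pd.DataFrame({
    "visit": ["V0", "V0", "V0", "V12", "V12", "V12"],
    "mmd":   [22,   18,   25,   6,     9,     4],
    "mami":  [30,   21,   34,   5,     7,     3],
})

by_visit = diary.groupby("visit")[["mami", "mmd"]].mean()
by_visit["mami_per_mmd"] = by_visit["mami"] / by_visit["mmd"]
print(by_visit)   # a ratio above 1 means more than one acute drug per migraine day
```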
Finally, participants did not substantially modify the class of acute medications used during the year of treatment (Supplemental Fig. 2).
Discussion
Patients with CM face a sort of never-ending migraine attack. From a neurobiological point of view, migraine can be considered an evolutive condition involving different systems that interconnect in a fluctuating balance, producing cycling but sometimes persistent perturbation of neural connectivity homeostasis and even impairment of cognitive performance [24]. From a social point of view, CM imposes constraints in family and work settings and often induces patients to renounce meaningful opportunities, consequently resulting in high levels of frustration. GARLIT is a large, multicenter, prospective, real-life study of galcanezumab. We have described the conversion rate to EM and MO discontinuation in the short term (3 months) and their prognostic factors [25]. The present analysis investigated these endpoints in the long term, i.e., 1 year. Around 75% or more of patients experienced remission to EM from V1 to V12 (Fig. 1A), and more than half of them (52.3%) consistently for the whole treatment year. These findings should also be appraised in light of a cohort of people with a very long disease history and multiple preventive treatment failures. Although a direct comparison is not possible, the conversion rate to EM observed as early as the first month of therapy in our cohort (77.4%) seems higher than previously described in RCTs with fremanezumab [18] and erenumab [26] (around 50%) and slightly higher than the rate reported in two real-life studies on the use of erenumab, which increased at later time points (64-68%) [16,17]. Our previous short-term analysis [25] observed lower BMI, unilateral pain, good response to triptans, and MO as positive predictive factors of rapid conversion to EM. Although we also found a trend for the above variables in the long term (Table 1), regression analysis did not confirm these findings (Table 2). Still, this analysis highlighted an older onset age and fewer monthly migraine attacks at baseline as positive predictive factors for good outcomes.
While it is not unexpected that more frequent attacks are less likely to decline to an episodic frequency, the association between younger onset age and worse outcomes deserves careful consideration. Interestingly, a study investigating genetic and clinical conditions predicting erenumab therapy outcomes reported an association of younger onset age and a variant of the receptor activity modifying protein 1 with a less prominent response [27]. CM is often pictured as the result of inadequate therapeutic management. However, the GARLIT participants had been treated according to best clinical practice for a long time before enrollment [10]. It can be speculated that constitutional characteristics (genetic or epigenetic) influence the lifetime course of migraine and possibly impact the response to pharmacological treatments [24,28]. These considerations suggest that the earlier the migraine onset, the more favorable the outcome if adequate pharmacological and non-pharmacological treatments are offered early [29]. Nevertheless, a cross-sectional analysis of a large pool of patients from the American Registry for Migraine Research demonstrated that using a 15-headache-day/month cut-off to distinguish EM from CM neither accurately captures the burden of illness nor reflects treatment needs [30]. The authors proposed reconsidering the concept of CM to also include attack frequencies ranging from 8 to 14 monthly migraine days.
In the GARLIT population, the percentage of participants with MMDs below 8 increased from V0 (40.6%) to V12 (53.5%). This benefit was obtained for a median of 7 months, and in 32 patients (20.6%), for the entire duration of the 12-month therapy. None of the evaluated baseline characteristics, not even baseline MMDs, seemed to predict the conversion to MFEM or LFEM.
Can the chronic migraine brain unlearn pain? Although mAbs targeting the CGRP pathway cannot meaningfully cross the blood-brain barrier, a few studies observed a central functional restoration of the pain network in the short term [31,32] as a possible effect of a peripheral modulation of sensory input. Other factors may help transform a very disabling condition such as CM into a manageable episodic disorder. We envision that some patients, perceiving a persistent long-term benefit, might be capable of breaking the migraine-driven vicious circles affecting different aspects of life, e.g., lifestyle and psychosocial situations. Once the frequency of migraine attacks decreases, patients are more likely to lead a healthier lifestyle and be less impacted by the fear of pain and psychosocial stress [33]. This indirect advantage exerted by mAbs targeting the CGRP pathway is also supported by the lack of return to the baseline condition after 3 months of treatment suspension [34].
Along the same lines, 82.8% of patients with MO at baseline had discontinued it after 12 months of treatment. The percentage of participants with MO gradually decreased from V1 to V12 (Fig. 3). MO discontinuation was not influenced by baseline characteristics, not even by the baseline number of acute medications, as one could a priori hypothesize and as was observed in the short term [25]. The decrease in MAMI intake was mainly observed in the first month of therapy (by 14.5 drugs) but became more pronounced at V12 (around 17, Fig. 4, panel A). Similarly, pain intensity was already eased in the first month of therapy (Fig. 4, panel B). However, less severe attacks did not influence acute medication choices, except for a reduction in combination therapies by around a quarter (Supplemental Fig. 1). A possible interpretation is that a fall in migraine frequency and less intense pain lead to an immediate decrease in MAMI and a further decline over the months as patients become more and more capable of coping with migraine attacks, as discussed above. This change in attack management is well depicted by the almost progressive reduction in the MAMI/MMDs ratio (Fig. 5).
One may wonder whether extending the treatment regimen beyond 1 year would additionally help patients with HFEM shift to MFEM or LFEM. The open-label extensions of RCTs demonstrated the tolerability and efficacy of CGRP pathway targeted mAbs in the long term [35]. However, the high cost of these mAbs primarily limits their wide and prolonged use. Preliminary economic evaluations predicted that erenumab is also likely to reduce migraine-related direct and indirect costs compared to standard care [36]. Hence, a comprehensive economic evaluation comparing CGRP pathway targeted mAbs to standard care is necessary to clarify these aspects and guide regulatory drug agencies.
Our study has some limitations. Mainly, we did not assess changes in quality of life measures, e.g., psychosocial scales, everyday habits, and demographic characteristics (such as BMI). These evaluations would have helped clarify the relative contribution of these aspects to the shift from chronic to episodic migraine. Moreover, we should consider that patients with migraine may experience cyclic oscillation between chronic and episodic attack frequency [19]. A more extended observation period is necessary to confirm the efficacy of mAbs targeting the CGRP pathway in persistently reverting CM to an episodic condition. Similarly, up to 30% of patients in our cohort had not discontinued previous preventive medications; therefore, we cannot exclude an influence of concomitant therapy on the outcome at the end of the galcanezumab treatment year.
In summary, the long-term GARLIT experience suggests that around three-quarters of patients treated with galcanezumab can revert from CM to EM in real life, and in our cohort around half of them became EM for the entire treatment year. This shift and MO discontinuation were persistent throughout the months of therapy and tended to improve over time. Future studies are necessary to understand whether multidisciplinary approaches and more extended treatment regimens, if economically sustainable, further increase this benefit and impact the migraine course in the longer term.
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1007/s00415-022-11226-4. Patients provided written informed consent. The study was approved by the Campus Bio-Medico University Ethical Committee (n. 30/20), mutually recognized by the other local ethical committees, and registered with the Italian Medicines Agency (Agenzia Italiana del Farmaco, AIFA) and at ClinicalTrials.gov (NCT04803513).
Resistance to Bacillus thuringiensis Toxin in Caenorhabditis elegans from Loss of Fucose*
A mutation in the Caenorhabditis elegans bre-1 gene was isolated in a screen for Bacillus thuringiensis toxin-resistant (bre) mutants to the Cry5B crystal toxin made by B. thuringiensis. bre-1 mutant animals are different from the four other cloned bre mutants in that their level of resistance is noticeably lower. bre-1 animals also display a significantly reduced brood size at 25 °C. Here we cloned the bre-1 gene and characterized the bre-1 mutant phenotype. bre-1 encodes a protein with significant homology to a GDP-mannose 4,6-dehydratase, which catalyzes the first step in the biosynthesis of GDP-fucose from GDP-mannose. Injection of GDP-fucose but not fucose into C. elegans intestinal cells rescues bre-1 mutant phenotypes. Thus, C. elegans lacks a functional fucose salvage pathway. Furthermore, we demonstrate that bre-1 mutant animals are defective in production of fucosylated glycolipids and that bre-1 mutant animals make quantitatively reduced levels of glycolipid receptors for Cry5B. We finally show that bre-1 mutant animals, although viable, show a lack of fucosylated N- and O-glycans, based on mass spectrometric evidence. Thus, C. elegans can survive with little fucose and can develop resistance to crystal toxin by loss of a monosaccharide biosynthetic pathway.
The crystal (Cry) proteins made by Bacillus thuringiensis are naturally occurring agents that are used for the control of insects that eat crops and carry disease (1). Cry proteins have been used for over 50 years as an environmentally safe and effective alternative to synthetic pesticides. One attractive feature of Cry proteins is their nontoxicity toward mammals and other vertebrates (2). Consistent with this lack of mammalian toxicity, several of the receptors for Cry proteins have been characterized and encode invertebrate-specific glycolipids and/or an insect family of cadherins (3). Because of their efficacy against invertebrates and safety toward vertebrates, Cry proteins are widely used worldwide as topical sprays on crops, as topical sprays to kill mosquitoes and black flies that carry disease, and as transgenes expressed in plants as an environmentally friendly alternative to chemical pesticides (4,5). In the year 2005, over 26 million hectares of B. thuringiensis transgenic corn and cotton were planted (6). In addition, B. thuringiensis crystal proteins are now also being explored for their possible use in the control of nematode parasites (7,8).
In our efforts to gain insight into the important question of how invertebrates develop resistance to Cry proteins, we isolated mutations in five Caenorhabditis elegans genes that result in resistance to the crystal protein Cry5B (9). Four of these bre genes have been cloned and characterized. These genes, bre-2, bre-3, bre-4, and bre-5, encode glycosyltransferases that catalyze the addition of monosaccharides onto invertebrate-specific glycolipids (10-12). The resulting oligosaccharide chain is a receptor for the Cry protein (11). Thus, loss of any one of these genes results in loss of the receptor for the toxin and a high level of resistance.
Here we clone the fifth bre gene, bre-1, and characterize its phenotype. bre-1 mutant animals are resistant to Cry5B but at a lower level than the other bre mutants. Mutation of the bre-1 gene comes at a significant cost to the animal and includes a small brood size at 25°C. bre-1 encodes a protein with significant homology to GDP-mannose 4,6-dehydratase (GMD), a cytosolic enzyme involved in the biosynthesis of GDP-fucose. This inferred enzymatic function is supported by fucose rescue experiments that also demonstrate that C. elegans lacks an alternative pathway for production of GDP-fucose, the fucose salvage pathway. bre-1 mutant animals show qualitative and quantitative defects in the production of glycolipid receptors for Cry5B, explaining its resistance phenotype. bre-1 mutant animals have overall very low levels of fucose and appear to be lacking fucosylated N- and O-glycans. C. elegans can evidently survive with little protein fucosylation.
EXPERIMENTAL PROCEDURES
C. elegans Maintenance and Microscopy-C. elegans N2 and other strains were maintained on standard NG plates spread with the Escherichia coli strain OP50 (13). bre-1(ye4) was outcrossed six times to N2 prior to the work conducted here. All nematode assays were carried out at 20°C unless otherwise noted. Nematodes were mounted for microscopy on 2% agarose pads with 0.1% sodium azide. Images were acquired using an Olympus BX60 microscope using the ×10 objective linked to a ×0.5 camera mount and a DVC Co. camera.
Brood Size Assay-L4-stage worms from N2 and bre-1(ye4) were picked, one each, to three plates and incubated at 25°C for 24-h periods. After each period, the originally picked worms were transferred to a new plate. Progeny from the original parent worms were allowed to grow an additional 24 h at 25°C before they were counted. This process was continued every 24 h until the original parents ceased to produce additional progeny. The results for this assay include three independent replicates (a total of nine worms).
Cloning bre-1(ye4)-bre-1(ye4) was mapped between unc-24(e138) and dpy-20(e1282) using standard three-factor mapping. An unc-24(e138)bre-1(ye4)dpy-20(e1282) triple mutant was then made in order to perform single nucleotide polymorphism mapping with the Hawaiian strain (CB4856) (14). Single nucleotide polymorphism mapping narrowed the search to a region covered by three cosmids (D1046, C53B4, and C53D6). Purified cosmids from the three cosmid strains were injected with the rol-6 marker (pRF4) into bre-1(ye4) young adults. All injection experiments were performed on a Zeiss Axiovert 100 microscope using an Eppendorf microinjector. rol-6 alone was used to inject N2 and bre-1 worms to control for any other possible phenotypes produced by this marker. These controls showed no deviation from expected phenotypes. Stable lines obtained from injections were tested for rescue from resistance to Cry5B by plating onto NG plates containing 0.25 mM isopropyl β-D-thiogalactoside and 50 µg/ml carbenicillin spread with E. coli strain JM103 carrying a pQE9 vector expressing a form of the Cry5B toxin called Cry5Bm. Cry5Bm is a slightly attenuated form of Cry5B that contains two mutations in the toxin domain (N172I and E248K). bre-1 mutant animals (as well as bre-2, -3, -4, and -5 mutant animals) are resistant to both wild type and attenuated forms of Cry5B. However, because the level of resistance of bre-1 mutant animals is lower than that of the other bre mutants, we found it simpler to score bre-1 resistance using attenuated Cry5Bm when the number of worms to assay was limited, i.e., following injections for RNAi and following injections for sugar rescue experiments (Fig. 2B and Fig. 3).
After finding C53B4 to be the only one of the three cosmids capable of rescuing the bre-1 phenotype (6/6 lines), each of the open reading frames from the cosmid was PCR-amplified, using the primers listed in Table 1, and tested for its ability to rescue bre-1(ye4) as described above. C53B4.7 was found to be the only gene capable of rescue (8/8 lines). For final rescue experiments, animals were placed on an isopropyl β-D-thiogalactoside/carbenicillin NG plate spread with a 60:40 mixture of E. coli expressing Cry5B (without mutation) and E. coli transformed with vector alone. cDNA was isolated from total bre-1(ye4) RNA and used for sequencing the bre-1(ye4) gene. SL-1 was used as the leading primer for 5′ sequencing, and oligo(dT) was used as the reverse primer for 3′ sequencing.
Cry5B Lethal Concentration Assay-Cry5B was purified from B. thuringiensis using a sucrose gradient as described (15) and solubilized in 20 mM Hepes (pH 8.0) just prior to setting up the lethality assay. The Cry5B lethality assay was performed as described (16). Cry5B was tested with N2 and bre-1(ye4) worms at the L4 stage using a range of Cry5B from 0.312 to 120 µg of Cry5B/ml. Worm viability was scored after 8 days at 20°C. Each individual assay was set up in triplicate for each concentration of Cry5B. Three independent individual assays were performed. Probit analysis (17) was used to calculate the lethal concentrations (LC) of Cry5B that killed 50 and 90% of the worms (LC50 and LC90, respectively). The mean values from the individual assays were compared using a paired t test to determine statistical significance. A probability value of less than 0.05 was set as significant. For graphical representation of the lethality assays, a nonlinear regression analysis was performed with GraphPad Prism (GraphPad Software, San Diego).
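As a point of reference for readers unfamiliar with probit analysis, the sketch below shows one common way to estimate LC50 and LC90 from dose-mortality data: regress probit-transformed mortality on log10 dose and invert the fitted line. It is only a schematic stand-in for the cited procedure (17) and for the GraphPad Prism regression; the dose levels and counts are invented.

```python
# Hypothetical probit estimate of LC50/LC90 from dose-mortality data.
import numpy as np
from scipy.stats import norm, linregress

dose = np.array([0.312, 1.25, 5.0, 20.0, 80.0, 120.0])   # ug Cry5B / ml (example values)
dead = np.array([2, 8, 19, 30, 35, 36])
total = np.full_like(dead, 36)

p = (dead / total).clip(0.01, 0.99)      # avoid infinite probits at 0% or 100% mortality
x = np.log10(dose)
y = norm.ppf(p)                          # probit (normal-equivalent deviate) transform

fit = linregress(x, y)
lc = lambda frac: 10 ** ((norm.ppf(frac) - fit.intercept) / fit.slope)
print(f"LC50 = {lc(0.5):.2f} ug/ml, LC90 = {lc(0.9):.2f} ug/ml")
```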
RNA-mediated Interference (RNAi) by Injection-20 rrf-3(pk1426) L4 worms were plated and incubated 12 h for each of the injection samples. After the 12-h incubation, 10 worms were injected for each of the injection samples. Injections were performed as described previously. Each injected worm was plated individually and allowed 24 h of recovery time at 15°C. At this point the worms were transferred to new individual plates and incubated at 20°C for an additional 24 h. Progeny obtained more than 48 h after injection were allowed to grow to the L4 stage, at which point 20 were transferred to plates seeded with E. coli expressing Cry5Bm. After an additional 48 h, the response to toxin was observed. In addition, three L4 worms were picked out to individual plates for each injection sample and tested for brood size. The brood size assay was performed as described above. The double-stranded RNA (dsRNA) fragments were made from RNA purified from N2 worms. The dsRNA fragments were amplified using T7-linked primers (see Table 2) with an Ambion T7 polymerase kit. dsRNAs were injected at a concentration of 2 mg/ml.
Fucose Rescue Experiments-12 worms were injected for each sugar. bre-1(ye4) and bre-2(ye31) L4 worms were injected with 2.5 mM solutions of either L-fucose (F1395; Sigma) or GDP-β-L-fucose (371443; Calbiochem) in water. All of the injected worms were then transferred to NG OP50 plates and allowed to recover for 2 h at 15°C. After recovery, eight worms were transferred to Cry5Bm toxin plates prepared as described above. The other four worms were transferred to similar plates spread with the same strain of bacteria carrying an empty vector. After 48 h, worms were mounted and imaged as described above. This experiment was independently repeated three times.
Preparation of Lipids and TLC Analyses-Lipids were prepared, separated by TLC, and probed with biotinylated Cry5B as described (11,18).
Ulex europaeus Fucose-binding Lectin Overlay-Plates of resolved glycolipids were fixed with polyisobutylmethacrylate and blocked in phosphate-buffered saline containing 0.5% bovine serum albumin and 0.02% Tween 20 for 30 min. After blocking, the plate was washed for 1 min and again for 5 min with a tissue-staining buffer (10 mM Hepes (pH 7.5), 0.15 M NaCl). The plate was then probed with a 10 µg/ml solution of biotinylated U. europaeus lectin (B-1065; Vector Laboratories) in tissue-staining buffer for a period of 2 h. The remaining steps are the same as those performed for the Cry5B overlay experiments (11).
Analysis of Glycolipid Affinity for Cry5B-Glycolipids were purified as described (11) from mixed-stage worm pellets in which the total number of worms was quantitated by sampling worm quantities in several small aliquots. Upper phase glycolipids were dissolved in a 1:1 solution of methanol:water. Solution volumes, representative of a specific number of worms, were then transferred to a 96-well polystyrene microtiter plate (Costar 9017, medium binding). This solution was allowed to evaporate at room temperature for a period of 135 min. Any remaining solution was removed and replaced with blocking solution (42 mM Na2HPO4, 85 mM NaCl, 1 mM MgSO4, 0.2% defatted bovine serum albumin) and allowed to block for 30 min. The wells were then probed with 22 nM elastase-activated, biotinylated Cry5B in blocking solution for 1 h at room temperature. Cry5B protoxin was activated and labeled as described (11). The wells were then washed twice with blocking buffer and incubated with an alkaline phosphatase solution in blocking buffer for 45 min. Wells were washed twice with bovine serum albumin-free block solution and once with water. p-Nitrophenyl phosphate (PnPP) was then added at a concentration of 1 mg/ml in PnPP buffer (50 mM HCO3, 0.5 mM MgCl2 (pH 10)). After positive control wells reached an A405 of 1, 3 M NaOH was added to each well to stop the color reaction. Three control wells were used to determine the A405 at three 10-min time points (10, 20, and 30 min) during the PnPP incubation. Control wells generally reached an A405 of 1 after 30 min with PnPP. A405 measurements were then taken for all wells. Background was determined by setting up duplicate wells with a 100-fold excess of unlabeled activated toxin for all worm numbers. Each condition was represented by three wells in each of three replicates.
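The quantity ultimately compared between strains in this assay is the background-corrected A405 signal. A minimal sketch of that arithmetic is shown below; the worm-equivalent numbers and absorbance readings are invented, and the averaging over triplicate wells follows the description above.

```python
# Specific Cry5B binding = mean A405 with labeled toxin alone minus mean A405 in
# wells containing a 100-fold excess of unlabeled toxin (invented example data).
import numpy as np

worm_equivalents = np.array([250, 500, 1000, 2000])
a405_total = np.array([[0.21, 0.23, 0.22],        # triplicate wells per worm number
                       [0.38, 0.40, 0.37],
                       [0.65, 0.66, 0.68],
                       [0.98, 1.01, 0.97]])
a405_background = np.array([[0.05, 0.06, 0.05],
                            [0.06, 0.07, 0.06],
                            [0.08, 0.07, 0.08],
                            [0.09, 0.10, 0.09]])

specific = a405_total.mean(axis=1) - a405_background.mean(axis=1)
for n, s in zip(worm_equivalents, specific):
    print(f"{n:5d} worm equivalents: specific A405 = {s:.2f}")
```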
Monosaccharide Analysis of Worm Pellets-Large populations of worms were grown on 100-mm ENG plates spread with E. coli strain OP50. Before the populations starved, they were washed from the plates with water and transferred to a Falcon tube, producing a pellet volume of ~1 ml. Collected worm pellets were washed eight times with water. Pellets were then treated using standard protocols (40).
Structural Analysis of Glycoproteins-Worm pellets were acquired as described above under "Monosaccharide Analysis of Worm Pellets." The worm pellets were sonicated in an extraction buffer consisting of 0.5% (w/v) cetyltrimethylammonium bromide and 0.1 M Tris (pH 7.4). Material for analysis was extracted for an additional 24 h on a rocker at 4°C. Solid debris was removed by centrifugation at 1400 × g for 10 min. Detergent was removed by extensive dialysis against 50 mM ammonium bicarbonate buffer (pH 7.6). After dialysis the samples were lyophilized. Reduction and protection of the disulfide bridges of the extracted proteins were carried out as described (19). The reduced carboxymethylated proteins were trypsinized and digested with PNGase F (EC 3.5.1.52; Roche Applied Science) as described (20). Glycopeptides remaining after PNGase F digestion were further digested with PNGase A (EC 3.5.1.52; Roche Applied Science) in ammonium acetate buffer (50 mM, pH 5.0) for 16 h at 37°C using 0.2 milliunits of the enzyme. The reaction was terminated by lyophilization, and the products were purified on a C18 cartridge. O-Linked oligosaccharides were liberated from glycopeptides after PNGase F and A digestion by reductive elimination (400 µl of 1 M NaBH4 in 0.05 M NaOH at 45°C for 16 h) and desalted through a Dowex 50W-X8(H) column. Excess borates were removed by coevaporation with 10% (v/v) acetic acid in methanol under a stream of nitrogen.
Chemical Derivatization for MALDI-MS-Permethylation using the sodium hydroxide procedure was performed as described (19). After derivatization, the reaction products were purified on a Sep-Pak C18 cartridge (Waters) as described (19). MALDI data were acquired using a PerSeptive Biosystems Voyager-DE STR mass spectrometer in the reflectron mode with delayed extraction. Derivatized glycans were dissolved in 10 µl of methanol, and 1 µl of dissolved sample was premixed with 1 µl of matrix (2,5-dihydroxybenzoic acid) before loading onto a target plate.
Structural Analysis of Freed Glycans from Glycosphingolipids-Upper phase glycosphingolipids were collected from bre-1 worms as described under "Preparation of Lipids and TLC Analyses." Upper phase glycosphingolipid head groups were enzymatically released as described (11). Pure bre-1 glycans in water were then lyophilized, permethylated, and analyzed by MALDI-MS as described above.
RESULTS
A single allele of bre-1, bre-1(ye4), was isolated in genetic screens for C. elegans mutants resistant to Cry5B toxin made by the invertebrate Gram-positive pathogen B. thuringiensis (9). In the same screens, multiple mutant alleles in each of four other genes, bre-2, -3, -4, and -5, were also isolated. Resistance of bre-1(ye4) animals relative to wild-type animals can be readily seen at a 60% dose of Cry5B-expressing E. coli (Fig. 1, A and B). As demonstrated previously (9), resistance of bre-1 mutant animals is less than that of animals mutant for the other four bre genes, such as bre-2 (Fig. 1C). Based on LC90 and LC50 values, bre-1(ye4) mutant animals are 3.2- and 2.2-fold, respectively, more resistant to purified Cry5B than wild type (Fig. 1D; p = 0.036 for LC90 and 0.0173 for LC50). bre-1(ye4) mutant animals also produce small brood sizes at 25°C (Table 3). There thus appears to be a steeper cost to the nematode for mutation of the bre-1 gene when compared with most of the other bre mutants (Table 3). Interestingly, bre-3 mutant animals also display low progeny production at 25°C (Table 3). The lower level of resistance and the brood size defects associated with the bre-1 mutant suggested that the bre-1 gene could encode a protein that is dissimilar to the four other bre genes, all of which encode glycosyltransferases.
The bre-1 gene was mapped using the bre-1(ye4) allele in standard three-factor and single-nucleotide polymorphism techniques to an interval covered by three cosmids on chromosome IV. All three cosmids were injected into bre-1(ye4) animals, and only one, C53B4, resulted in rescue of the bre-1 resistance-to-toxin phenotype. Injection of PCR fragments that contain all open reading frames, including their putative promoters and 3′-untranslated regions, on C53B4 identified C53B4.7 as the only rescuing gene (Fig. 2A). Further confirmation that C53B4.7 and bre-1 are synonymous is provided by the following facts: 1) injection of double-stranded C53B4.7 RNA into rrf-3(pk1426) animals recapitulates the bre-1 resistance-to-toxin phenotype (Fig. 2B; rrf-3(pk1426) animals show a normal response to Cry5B (21) and are more sensitive to dsRNA), and 2) bre-1(ye4) animals show a mutation of Gly 217 to Glu in the C53B4.7 open reading frame (Fig. 2C). Injection of the C53B4.7 fragment also rescues the reduced brood size phenotype associated with bre-1(ye4) (not shown).
bre-1 encodes a putative enzyme with significant homology to GMD. This enzyme has been characterized in humans as working upstream of GDP-keto-6-deoxymannose-3,5-epimerase-4-reductase (FX protein) to convert GDP-mannose into GDP-fucose (22). Human GMD shares 64% amino acid identity with the BRE-1 protein (Fig. 2C). Wormbase shows two alternatively spliced variants for bre-1, C53B4.7a and C53B4.7b. We confirmed both of these variants by sequencing reverse transcription-PCR products from wild-type RNA. C53B4.7b is 15 amino acids longer than C53B4.7a at its N terminus, and the first eight amino acids of C53B4.7a differ as well. Otherwise, starting at amino acid 9 for C53B4.7a and 24 for C53B4.7b, the protein sequences are the same. The bre-1(ye4) mutation results from a base pair change in exon 4 and affects both splice variants of the gene. The glycine that is converted to glutamic acid in ye4 is conserved in GMDs from other organisms (e.g., E. coli, Arabidopsis, and humans) and is located in helix 7, which is very close to the predicted catalytic domains of the GMD protein (23). There is one other open reading frame in C. elegans with significant homology to BRE-1, F56H6.5. F56H6.5 displays 88% amino acid identity with BRE-1. RNAi of F56H6.5, either by feeding dsRNA to rrf-3(pk1426) animals or by injection of dsRNA into the gonad of rrf-3(pk1426) animals, does not result in resistance to Cry5B (Fig. 2B). In addition, co-injection of F56H6.5 and C53B4.7 dsRNAs into rrf-3(pk1426) animals qualitatively resulted in the same level of resistance to Cry5B as injection of C53B4.7 dsRNA alone (Fig. 2B). These results suggest that F56H6.5 does not play an important role in facilitating intoxication by Cry5B. It should be noted that most genome-wide RNAi screens have reported no abnormalities for RNAi of C53B4.7 or F56H6.5, although occasional phenotypes such as larval lethality and larval arrest have been noted (24,25). However, these screens did not focus in detail on any individual gene (as we have done here), and these studies list any phenotype even if it appears in only 10% of the animals. In all of our feeding or injection RNAi experiments, we have not noted any significant phenotype for either of the two genes other than Cry5B resistance and low brood size for C53B4.7.
GDP-fucose Rescue of bre-1-mediated Resistance-If bre-1 is a functional GMD and the toxin resistance defect in the bre-1(ye4) mutant is because of a lack of fucose, then the supplementation of fucose into the cells targeted by the toxin should rescue the bre-1(ye4) toxin resistance phenotype. Similar experiments have been successfully performed in mice, where it has been shown that supplementation of fucose into the diet of FX-defective mice (FX is the enzyme directly downstream of GMD in the biosynthetic pathway for GDP-fucose) rescues defects resulting from lack of fucose (26). The fucose presumably enters the cells via a fucose-specific transporter that has been reported in several types of mammalian cells (27,28).
Repeated attempts to rescue bre-1(ye4) defects by supplementation of fucose in the medium used to grow C. elegans failed. We hypothesized that the problem with the rescue experiment might reside in the lack of the salvage pathway that is responsible for converting free fucose from the environment to useable GDP-fucose (29). This process involves a fucokinase and GDP-L-fucose pyrophosphorylase (30). Indeed, a search of the C. elegans genome failed to show any homologues of these proteins in the nematode. Supplementation of the diet with GDP-fucose also did not work in attempted experiments, presumably because of the absence of a GDP-fucose transporter in C. elegans cells.
We therefore modified our rescue approach. We directly injected the intestines of bre-1(ye4) and bre-2(ye31) L4 worms with either L-fucose or GDP-fucose. The intestine is the anatomical focus of action of Cry5B toxin (10,12). After a 3-h recovery at 15°C, these worms were transferred to plates with and without toxin. Injection of L-fucose or GDP-fucose did not harm the health of N2, bre-1(ye4), or bre-2(ye31) animals in the absence of toxin (Fig. 3, A-E, left panels). In contrast, bre-1(ye4) animals were intoxicated similar to wild type in the presence of toxin when injected with GDP-fucose but not L-fucose (Fig. 3, A-C, right panels). In other words, bre-1(ye4) animals can be rescued back to wild type with injection of GDP-fucose. Rescue of the low brood phenotype at 25°C is also achieved with injection of GDP-fucose (not shown). bre-2(ye31) animals are not rescued back to toxin susceptibility when injected with either L-fucose or GDP-fucose because they are not defective in fucose biosynthesis (Fig. 3, D and E, right panels). These data are consistent with bre-1 encoding a functional GMD and with C. elegans lacking a functional fucose salvage pathway.
FIGURE 2 legend (in part): Late F1 progeny were allowed to grow to the L4 stage before they were plated on E. coli expressing Cry5Bm (see "Experimental Procedures" for details on Cry5Bm) and imaged at 48 h. C, BRE-1 amino acid sequence aligned with the nearest C. elegans homologue (F56H6.5) and the human sequence for the GDP-mannose 4,6-dehydratase gene (H. sapiens). The bre-1(ye4) mutation is the result of a base pair change, which produces a glutamic acid at position 217 instead of a glycine (in red).
bre-1(ye4) Animals Are Defective in the Production of C. elegans Glycolipid Receptors for Cry5B-Because at least two of the glycolipid receptors for Cry5B contain two terminal fucose residues (11), we hypothesized that the resistance phenotype of bre-1(ye4) is because of defects in the production of glycolipids. Comparison of orcinol-stained upper phase glycolipids isolated from wild-type and bre-1(ye4) mutant animals demonstrates that bre-1(ye4) animals are indeed defective in the production of some highly polar glycolipid species (Fig. 4A). Glycolipid species D is a notable exception in that it is still produced in normal abundance in bre-1(ye4) animals. However, this species does not contain fucose, and so its presence in the bre-1 mutant is expected (11). There are at least two new species of glycolipid that appear in bre-1(ye4) animals (Fig. 4A).
To see if these glycolipid defects translate into defects in binding Cry5B toxin, activated Cry5B was biotinylated and used in overlay binding experiments as described previously (11). Consistent with the results found with orcinol-stained glycolipids, three dominant Cry5B-binding glycolipid species present in wild type, B, C, and F, are absent in bre-1(ye4) animals (Fig. 4B). Both species B and C are known to contain fucose; the structure of species F is not known (11). Cry5B-binding glycolipid species E is present in the mutant (albeit at reduced levels), but this species normally lacks fucose so its presence in bre-1(ye4) is expected. At least two poorly resolved new glycolipid bands produced by bre-1(ye4) bind Cry5B. These bands might each represent a defucosylated B, C, or F glycolipid species that retains some binding activity.
To confirm that glycolipids from bre-1(ye4) animals lack fucose, we probed upper phase glycolipids from wild-type and bre-1(ye4) mutant animals with U. europaeus agglutinin I (UEA-I) that recognizes terminal fucose residues linked to galactose via an α-2 linkage (contained in both species B and C). In wild-type animals, glycolipid species C is the predominant UEA-I-binding species (Fig. 5, A and B). Several less abundant and less polar UEA-I-binding species are also detected. Although glycolipid species B also contains terminal fucose, it does not bind UEA-I or does so poorly. This result is interesting because the only difference between bands B and C is one terminal galactose on band B. UEA-I may be sensitive to a structural change caused by the addition of the terminal galactose to band B, which could make fucose less accessible. In glycolipids from bre-1(ye4) animals, the major (band C) and minor UEA-I-binding species are no longer present (Fig. 5B).
Although there are still some Cry5B-binding glycolipids in bre-1(ye4) mutant animals, qualitatively there appear to be less total receptors for Cry5B in the mutant relative to wild-type animals (Fig. 4B). To confirm this result quantitatively, total upper phase glycolipids from increasing numbers of wild-type (N2), bre-1(ye4), and bre-3(ye28) animals were isolated, immobilized in polystyrene wells, and probed with biotinylated Cry5B. The results were as predicted based on their resistance to Cry5B toxin; bre-1(ye4) animals, which show intermediate levels of toxin resistance, have total glycolipids that show intermediate levels of binding relative to wild type (high levels of toxin binding) and bre-3 animals (very low levels of toxin binding) (Fig. 6).
FIGURE 3. bre-1(ye4) resistance to Cry5B can be rescued with GDP-fucose. A, N2 (wild type) control worms plated at a life stage similar to injected bre-1 and bre-2 worms. B-E, bre-1 or bre-2 worms injected with fucose or GDP-fucose before being plated on empty vector E. coli (left panels, no toxin) or E. coli expressing Cry5Bm (right panels, toxin). The injected worms did not exhibit a reduction in overall health as a result of injection trauma (see no toxin panels). Injection of GDP-fucose into the intestine of bre-1(ye4) animals is the only condition tested that restores wild-type susceptibility to toxin (compare right panels in C to right panels in A).
FIGURE 4. bre-1(ye4) animals are defective in the production of glycolipids and Cry5B-binding glycolipids. A, orcinol-stained TLC plate of N2 wild-type and bre-1(ye4) glycolipids that were resolved in tandem with the plate in B. B, resolved glycolipids from N2 wild type and bre-1 after an overlay was performed with biotinylated Cry5B. The critical Cry5B-binding bands B and C are not present in bre-1 mutant animals, but they do possess unique bands of lower polarity that are capable of binding toxin.
bre-1 Mutant Animals Show Severe Reductions in Fucose and in Fucosylated N- and O-Glycans-
To determine the effects of the bre-1(ye4) mutation on the production of fucose, we performed monosaccharide analyses of wild-type, bre-3(ye28), and bre-1(ye4) mutant animals (Fig. 7). Although bre-1 mutant animals display relatively normal levels of galactose, GalNAc, and GlcNAc, they contain no detectable fucose. Thus BRE-1 plays a major role in the production of GDP-fucose in C. elegans, consistent with it encoding a functional GMD. bre-1 mutant animals also show an increase in the amount of total mannose, which may indicate an inability to convert mannose to fucose.
We have demonstrated previously that C. elegans protein-linked N- and O-glycans are rich in fucosylated structures (11,31,32). To understand the effects of bre-1(ye4) on protein fucosylation, total protein from wild-type and bre-1 mutant animals was extracted, prepared for N- and O-glycan analysis, and subjected to MALDI mass spectrometry (Fig. 8 and Table 4). From the PNGase F spectra, the small fucosylated truncated N-glycans (m/z 1141-1897, Fuc0-3Hex2-4HexNAc2) and the larger fucosylated glycans (m/z 1753-2305, Fuc0-3Hex5-6HexNAc2) were absent from the bre-1 mutant spectra. Even more strikingly, the spectra for the bre-1 mutant PNGase A-released N-glycans also did not contain any of the fucosylated glycans seen in the wild-type spectra (m/z 1141-2479, Fuc1-4Hex2-6HexNAc2). In addition, gas chromatography-MS linkage analysis of the bre-1 mutant PNGase F- and PNGase A-released glycans failed to detect a signal for terminal fucose, which is a major signal in the wild-type samples (data not shown). The spectra of the permethylated, reductively eliminated O-glycans from bre-1 mutants also lacked the fucosylated glycans detected in the wild type (m/z 1494-1944, Fuc2Hex4-5HexNAc2-3). Taken together, these data indicate that bre-1 mutant worms do not fucosylate any of their glycoproteins.
bre-1 Mutant Animals Show Severe Reductions in Fucosylated Upper Phase Glycolipid-derived Glycans-The glycan components of the upper phase glycolipids were enzymatically released and analyzed by MALDI mass spectrometry (Fig. 9 and Table 5). The spectra are dominated by three signals at m/z 1375, 1579, and 1783, consistent with compositions of Hex4-6HexNAc2. A very minor potential molecular ion is present at m/z 2337, consistent with the difucosylated glycolipid B, but this signal constitutes less than 0.25% of the total glycan mixture. The major molecular ions are consistent with defucosylated, truncated versions of the previously characterized wild-type upper phase glycolipid-derived glycans (11).
DISCUSSION
The bre-1 gene, which mutates to resist Cry5B crystal toxin, encodes a putative GMD, the first enzyme in the biosynthesis of cellular GDP-fucose. The evidence that BRE-1 is the major functional GMD in C. elegans is as follows: 1) BRE-1 shows extensive (64% amino acid identity) identity with human GMD; 2) loss of BRE-1 results in a loss of detectable fucose on proteins as determined by monosaccharide analysis and by mass spectrometry; and 3) injection of GDP-fucose, but not L-fucose, rescues bre-1 mutant phenotypes. The bre-1 mutant was identified based on its resistance to Cry5B B. thuringiensis toxin. bre-1 mutant animals are defective in the production of C. elegans polar glycolipids; some polar glycolipids appear to be fully absent (e.g., species B and C), whereas some polar glycolipid species appear to be reduced in level (e.g., species E). Because polar glycolipids serve as receptors for Cry5B (11), the reduction in the number and/or affinity of glycolipid species that bind Cry5B toxin in the bre-1 mutant explains the basis of resistance. It also explains the lower level of resistance seen in bre-1 animals relative to the other bre mutants; the binding of Cry5B to glycolipids found in the bre-1 mutant is quantitatively less than that found in wild-type animals but greater than what is seen in the bre-3 mutant, which disrupts the production of all polar arthroseries glycolipids in C. elegans (11). Thus, Cry5B toxicity correlates with the amount of toxin-binding glycolipid (11), which may explain why the absence of fucose is not sufficient for achieving full resistance to Cry5B. In contrast to galactose, fucose is not capable of competing with receptor in Cry5B binding experiments.
FIGURE 5 legend (in part): A, orcinol-stained plate of N2 wild-type and bre-1(ye4) glycolipids that were resolved in tandem with the plate in B. B, fucose-binding lectin overlay of N2 and bre-1 glycolipids. This shows that bre-1 glycolipids do not bind the fucose-specific lectin, whereas band C and other glycolipids from N2 do bind the lectin.
FIGURE 6. bre-1(ye4) animals show reduced Cry5B binding to C. elegans glycolipids. Total glycolipids from N2 wild-type (diamonds), bre-1 (squares), and bre-3 (triangles) animals were tested for their ability to bind biotinylated Cry5B using an enzyme-linked immunosorbent assay. The number on the x axis refers to the number of worm equivalents from which glycolipids were isolated for that particular well.
Because galactose is still present in bre-1, it is possible that the remaining toxin-binding bands found in bre-1 glycolipids are altered forms of the primary binding bands B and C found in N2, as suggested in Fig. 9. Although we cannot be sure that the ye4 allele represents a complete loss of function, it seems to represent at least a strong reduction-of-function allele because of the following: 1) the ye4 mutation is in an amino acid residue that is conserved in GMDs, 2) RNAi leads to a similarly penetrant phenotype, and 3) bre-1(ye4) in trans to a deletion allele qualitatively leads to the same level of resistance as the homozygous mutant (not shown). This report is the first to study the fucose biosynthetic pathway and GMD in C. elegans. Because of the importance of fucosylation, GMD itself is an important enzyme studied in a wide variety of organisms. Biochemical deficiency of GMD activity in humans is associated with leukocyte adhesion deficiency II, a rare genetic disease characterized by immunodeficiency and severe mental and growth retardation (33). GMD in Arabidopsis thaliana is encoded by two genes, GMD1 and GMD2 (MUR1), and loss of GMD2 leads to reduced tensile strength of elongating stem segments and slight dwarfism (34). GMD, along with the fucose salvage pathway, also plays an important role in the ability of the bacterial symbiont Bacteroides to colonize the mammalian intestine (35), and a mutation in Pseudomonas fluorescens GMD was shown to be deficient in biofilm formation and attenuated for virulence in a Drosophila model system (36). Our data also demonstrate that C. elegans lacks a functional fucose salvage pathway to convert L-fucose to GDP-fucose, because injection of L-fucose is not able to rescue bre-1 mutant phenotypes (in contrast to injection of GDP-fucose). It was suggested that Drosophila melanogaster also lacks a functional fucose salvage pathway because no detectable homologues of the enzymes involved in this pathway could be discerned from its genome (37).
These data reveal a new variation on the mechanism of resistance to crystal toxin, namely loss of biosynthesis of a monosaccharide. All previous data in insects and nematodes point to either mutation in proteases required to process crystal toxin or mutation in genes that directly encode the receptor (3). Here, resistance is caused by mutation of a gene that indirectly influences the production of receptor. It is conceivable that many other similar mutations will be found in the future. In addition to the resistance defect, bre-1 mutant animals show a significant reduction in brood size at 25°C. Based on the glycan analysis of bre-1 mutant animals, it is plausible that the reduction in brood observed is because of the absence of fucosylated glycans critical to the development of the germ line and/or proper egg fertilization. Fucosylated glycans have been shown to be critical in mammalian fertility (38). Interestingly, bre-3 mutant animals also display a reduced brood size at 25°C, suggesting that bre-3 is involved in making a glycolipid species that is important for fertility but whose synthesis does not involve bre-2, -4, or -5 (e.g., Man1-4Glc-ceramide).
There is evidence for two interesting feedback mechanisms in the bre-1 mutant. First, the level of at least one nonfucosylated glycolipid (e.g., species E) is reduced in bre-1 mutant animals. Thus, the lack of fucosylation somehow feeds back onto the overall production of glycolipids. Furthermore, the severe reduction in fucose appears to result in a moderate increase in the amount of total cellular mannose. Because GDP-mannose is the precursor for GDP-fucose, this result is expected: an accumulation of mannose might be predicted when its conversion to fucose is blocked.
Fucose is also recognized by a functioning mammalian immune system as part of an epitope for IgE antibodies attacking the parasitic helminth Haemonchus contortus in infected sheep (39). The core α1-3-fucosylated N-glycan functioning as an epitope in that study has also been found in C. elegans (39). Based on this information alone, a fucose-deficient C. elegans mutant such as bre-1 could be an important tool in the study of the IgE immune response to parasitic nematodes.
The reduction in total fucose and in fucosylated N- and O-glycans in the bre-1 mutant is striking and dramatic. Fucose is reduced to undetectable levels in the mutant, and fucosylated N- and O-glycans are missing by mass spectrometric analyses. There is no precedent for such a severe level of fucose depletion leading to such a mild phenotype in an animal. The FX mouse, which is defective for the second enzyme in the fucose biosynthetic pathway, has been found to be incapable of survival without being provided an outside source of fucose (26). As noted above, defects in GMD in humans can lead to leukocyte adhesion deficiency type II, a human disorder characterized by growth and mental retardation in addition to immune problems stemming from a defect in leukocyte rolling during inflammation. The lack of a fucose salvage pathway is consistent with these data; the nematode has apparently evolved to survive with very little fucose and very few fucosylated proteins.
Antagonistic Effects of Chinese Salt and Folic Acid on Developing Swiss Albino Mice
One of the most often utilized taste enhancers in commercial foods is monosodium glutamate (MSG), commonly called Chinese salt. MSG utilization has been increasing over time and has been linked with toxicity in the liver and other organs. Objective: To determine the teratogenic and toxic effects of Chinese salt and folic acid on developing mice. Methods: In this study, 20 pregnant female albino mice were divided into four groups of 5 mice each. The control group was supplied with water. To check the teratogenicity and toxicity of Chinese salt and folic acid, the treated groups (Group-I, Group-II, and Group-III) were supplied with Chinese salt and/or folic acid at a concentration of 7.50 µg/g of body weight. The dose was administered orally on a daily basis during the 6th to 12th day of gestation, within an 18-day trial. On the eighteenth day of gestation, the pregnant mice underwent dissection and the fetuses were retrieved. Fetuses were taken from all groups for histopathological findings and morphometry. Results: A vast range of morphological, morphometric, and histological abnormalities was observed in pregnant mice and fetuses. Conclusions: The findings of this study clearly revealed that Chinese salt and folic acid overdose are potentially toxic to the liver and stomach.
Monosodium glutamate (MSG), also known as Chinese salt in everyday usage and commonly used as a flavor enhancer, is a natural component of protein-rich foods such as meat, cheese, and vegetables, and it is used worldwide to enhance food palatability. L-Glutamic acid, an amino acid, is a component of MSG [1,2]. The typical daily consumption of MSG in developing nations is believed to be between 0.3 and 1.0 g, but this might vary depending on the food items a person prefers and how much they eat [3]. It is an essential component of the body's protein constituents and metabolic intermediates [4]. High glutamate activation causes neurological problems in fetuses and long-term depression in rats [5]. MSG excitotoxins overstimulate nerve cells to the point of damage and cause the death of these neurons [6]. Brain damage, epilepsy, oligozoospermia, degeneration of the retina, and development of hepatic inflammation are all neurotoxic effects caused by MSG [7,8]. Monosodium glutamate enhances the production of free radicals, proteases, phospholipases, and transcriptional endonucleases, resulting in genotoxicity and apoptosis in mice and rats [9,10]. In neonatal mice and rats, MSG treatment damages the arcuate hypothalamic nucleus, which affects neuroendocrine function and induces glucose intolerance, obesity, insulin resistance, fat accumulation, dyslipidemia, diminished responsiveness of vascular systems, and reduced growth hormone secretion. In a study, it was found that folic acid supplementation during pregnancy in substantial concentrations resulted in a 90% decrease in NTDs (neural tube defects) and other congenital abnormalities [21]. During pregnancy, folate deficiency causes low birth weight of infants, retardation of fetal growth, increased blood homocysteine levels, placental abruption, and pre-eclampsia. Folate is necessary for male fertility, contributing to spermatogenesis [22]. Pregnant women who take food rich in folic acid tend to have fewer serious birth defects, but folic acid intake above the normal potency may cause serious complications in fetuses [23].
The current investigation examines the antagonistic effects of folic acid and Chinese salt, which may induce hepatic and gastric toxicity, on developing Swiss albino mice.
METHODS
All animal trial investigations were carried out using international and regional protocols, under the Wet op de dierproeven (article 9) of Dutch law on animal testing. The NIH document "Guide for the Care and Use of Experimental Animals" was used for animal testing and rearing [24]. Housing: A group of albino mice (10 females and 2 males) was obtained from the Veterinary Research Institute, Lahore. These animals were kept under controlled conditions in an animal room at 25 ± 1°C, featuring steel racks and cages, with a 12-hour light/dark cycle and a relative humidity between forty and fifty percent. The pregnant mice were weighed on the eighteenth day of pregnancy and anesthetized with ether. Following a cesarean section, the two horns of the uterus were removed from the body and weighed. After the fetuses were counted and removed from the uterus, they were fixed for 48 hours in Bouin's fixative. Fetuses were stored in 70% alcohol after 48 hours. The tissues underwent a series of procedures including a 0.9 percent saline wash, fixation in 10% formalin solution, dehydration in graded ethanol, clearing in xylene, and embedding in paraffin wax. Sections of the liver and kidney with a thickness of four micrometres were cut using a microtome and then stained with eosin and haematoxylin in accordance with the recommended methodology [25]. Following full drying, the prepared slides were examined under a microscope at 10X and 40X for further histological investigation, and microphotography was then carried out. Both control and treated fetuses were observed after dosage administration for morphological and anatomical studies. Morphological and morphometric studies involved the wet weight and crown-rump (CR) length of each fetus, as well as the circumference of the head and eye, which were calculated using the computer-based program "Ellipse Circumference Calculator" downloaded from the CSG network website [26]. The entire data set underwent mathematical computation and was examined using the SPSS software package. One-way ANOVA followed by Duncan's test was employed to analyze the data.
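The two computations named in this paragraph, a circumference estimated from an ellipse and a one-way ANOVA across dose groups, can be reproduced with standard tools. The sketch below uses Ramanujan's approximation in place of the online "Ellipse Circumference Calculator" and scipy's one-way ANOVA; all measurements are invented, and Duncan's post-hoc test is not shown because it is not available in scipy.

```python
# Hedged illustration: ellipse circumference (Ramanujan's approximation) and a
# one-way ANOVA on fetal body weight across the four groups (invented data).
import math
import numpy as np
from scipy.stats import f_oneway

def ellipse_circumference(a: float, b: float) -> float:
    """Ramanujan's approximation given semi-major (a) and semi-minor (b) axes."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

print(f"head circumference ~ {ellipse_circumference(4.1, 3.2):.2f} mm")

control = np.array([1470, 1455, 1490, 1462, 1478])   # body weight in mg
group_1 = np.array([1398, 1385, 1410, 1402, 1391])
group_2 = np.array([1440, 1452, 1435, 1448, 1429])
group_3 = np.array([1188, 1205, 1176, 1194, 1181])
f_stat, p_value = f_oneway(control, group_1, group_2, group_3)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```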
RESULTS
The morphometric analysis of the fetuses yielded interesting results. The number of fetuses recovered after dosage administration is depicted here. In the control group, the average body weight of fetuses was 1470 ± 35.32 mg, the average eye circumference was 6.01 ± 1.09 mm, and the average forelimb size was 6.24 ± 0.48 mm (Table 1).
Histological examination of cranio-visceral fetal organs, including the spinal cord, heart, lungs, and liver, was carried out. This study was done in order to understand Chinese salt- and folic acid-related histopathological changes.
Transverse sections through the cranial region revealed no major derangements upon histological examination of fetuses in the control group (Figure 1A). Group-I, treated with folic acid, revealed improper formation of the spinal cord, pharynx, and tongue (Figure 1B). Group-II, treated with Chinese salt and folic acid, showed a damaged spinal cord, pharynx, and tongue (Figure 1C). Group-III, treated with Chinese salt, showed a poorly formed spinal cord, and no pharynx was observed (Figure 1D). The purpose of this study was to investigate the opposing effects of folic acid and Chinese salt on the growth of mice. Oral dosages were administered to pregnant mice daily on days 6-12 of gestation, within an 18-day trial. Fetuses were recovered, fixed, and analysed on morphological, morphometric, and histological bases. The results obtained in this study agree with available data and show a decrease in the body weights of newborn mice affected by MSG. In the referenced study, the same experiment was also performed, but the antagonistic effect of folic acid was not tested [27]. In a similar experiment, it was seen that the offspring born from MSG-treated female mice were quite weak, often did not pull through the pregnancy, or had lower body weights as compared to the control groups [28].
Interestingly, the mothers' body weights increased significantly after prolonged intake of MSG, indicating a weight-increasing effect of MSG. The proposed explanation is that MSG adversely affects the hunger-controlling parts of the brain and may lead the rats toward obesity. Longitudinal sections of the trunk region in the Control Group showed a well-developed aortic bulb, liver, cervical spines, and neuropore (Figure 2A). Group-I, treated with folic acid, showed an abnormal liver, kidney, and midgut (white star showing necrosis in the lower region of the trunk) (Figure 2B). Group-II, treated with Chinese salt and folic acid, showed formation of abnormal somites of the cervical spine, a damaged dorsal aorta, and necrosis apparent in the lower limb region (Figure 2C). Group-III, treated with Chinese salt, showed abnormal formation of the heart, liver, and cervical spine (white star), with necrosis in the medullary regions of the kidneys (Figure 2D).
Figure 1: Transverse section through the cranial region of (A) a fetus from the control group; (B) a fetus from the group treated with folic acid; (C) a fetus from the group treated with Chinese salt and folic acid; (D) a fetus from the group treated with Chinese salt. Labels: spinal cord (SC), pharynx (P), tongue (T).
Figure 2: Longitudinal sections of the trunk region. (A) Fetus from the control group; (B) fetus from Group-I treated with folic acid; (C) fetus from the group treated with Chinese salt and folic acid; (D) fetus from the group treated with Chinese salt. Labels: heart (H), liver (LV), cervical spine (CS), tongue (T), somite (S).
On the basis of this study, it can be concluded that Chinese salt and folic acid do indeed show teratogenic effects in developing mouse fetuses. They caused a vast range of morphological, morphometric, and histological abnormalities in mice. This study will provide awareness about the toxic effects of Chinese salt and folic acid, particularly from the standpoint of the teratogenic and embryotoxic effects they have on developing mice.
Two females were kept with one male in five different cages. Each cage had wood shavings as bedding material, which was replaced daily. During this research, folic acid and Chinese salt (monosodium glutamate) were tested for their toxic effects and teratogenicity. The different dose groups were managed as follows and are elaborated in Figure 1.
Our studies also show different morphological abnormalities in the fetuses. Morphometric studies showed that control-group fetuses remained healthy, while the treated groups did not. Group-I, which received an excess of folic acid, showed a distorted axis and mild reductions in head circumference, eye circumference, and forelimb and hindlimb size. In Group-II, where both Chinese salt and folic acid were used, folic acid compensated for the teratogenic effects of Chinese salt, indicating that Chinese salt is teratogenic in nature but that folic acid overcomes its effects if used in the correct proportion based on requirement. Group-III fetuses, exposed to Chinese salt, showed drastic reductions in head circumference, eye circumference and limb size, as well as cardiac and neural tube defects.
A Wavenumber Integration Model of Underwater Acoustic Propagation in Arbitrary Horizontally Stratified Media Based on a Spectral Method
The wavenumber integration method is considered to be the most accurate algorithm for arbitrary horizontally stratified media in computational ocean acoustics. Compared with normal modes, it contains not only the discrete wavenumber spectrum but also the components of the continuous spectrum, eliminating errors in the model approximation for horizontally stratified media. Traditionally, analytical and semi-analytical methods have been used to solve the depth-separated wave equation of the wavenumber integration method, and numerical solutions have generally focused on the finite difference method and the finite element method. In this paper, an algorithm for solving the depth equation with the Chebyshev–Tau spectral method combined with the domain decomposition strategy is proposed, and a numerical program named WISpec is developed accordingly. The algorithm can simulate both the sound field excited by a point source and the sound field excited by a line source. The key idea of the algorithm is first to discretize the depth equations of each layer by using the Chebyshev–Tau spectral method and then to solve the equations of each layer simultaneously by combining boundary and interface conditions. Several representative numerical experiments are devised to test the accuracy of ‘WISpec’. The high consistency of the results of different models running under the same configuration proves that the numerical algorithm proposed in this paper is accurate, reliable, and numerically stable.
Introduction
The wavenumber integration method is basically a numerical implementation of the integral transform technique for horizontally stratified media [1]. This method, which does not make any approximations to the Helmholtz equation, completely avoids approximation error and is considered the most accurate method for simulating sound propagation in horizontally stratified media.
The normal mode model is often compared with the wavenumber integration method because the mathematical basis of the two is the same; the difference lies in the strategy used to evaluate the integral. The normal mode model uses complex contour integration to reduce the integral representation to a sum of residues, whereas the wavenumber integration method evaluates the integrals directly by numerical quadrature [1,2]. The wavenumber spectrum of a general waveguide is a mixture of discrete and continuous parts. The discrete spectrum in such cases leads to a representation involving a sum of modes, while the continuous spectrum involves an integral over a continuum of points in wavenumber space. In other words, the normal modes contain only a limited number of discrete wavenumbers that contribute greatly to the sound field while ignoring the continuous spectrum, which can introduce a large error into the sound field, especially in the near field. For the case where the horizontal wavenumbers are near the branch cut, the normal mode model may fail to find the root, thus reducing the accuracy of the sound field. Therefore, the wavenumber integration method is generally considered to be more accurate than the normal mode model.
The principle of wavenumber integration for horizontally stratified media was first introduced into ocean acoustics by Pekeris [3]. He used simple two- and three-layer structures to model sound propagation in horizontally stratified media. Later, Ewing, Jardetzky and Press used this method to study seismic propagation in waveguides with few layers [4]. The wavenumber integration technique performs a series of integral transformations on the Helmholtz equation to simplify the original partial differential equation into a series of ordinary differential equations in the depth coordinate. These equations are then solved analytically in each layer with undetermined amplitudes; the amplitudes are determined by matching boundary conditions at the interfaces, and finally the corresponding sound field is obtained by evaluating the inverse integral transform. For the initially proposed ocean environment with few layers, it is easy to solve the system of linear equations analytically by expressing the boundary conditions in terms of undetermined sound field amplitudes. However, for more complicated ocean environments, the undetermined coefficients method is not applicable, and numerical methods are usually employed.
The earliest algorithm for simulating depth-dependent sound fields is the propagator matrix approach (PMA) proposed by Thomson [5] and Haskell [6]. The advantage of the PMA is that it is recursive and thus requires only a small amount of memory, but the disadvantage is that it requires a very time-consuming correction scheme to ensure numerical stability. Furthermore, the PMA is not well suited to problems where the field has to be determined at more than a single receiver depth [1]. Kennett reviewed the PMA [7] and proposed the invariant embedding approach (IEA) [8]. The advantages of the IEA are inherent numerical stability, simplicity of the recurrence algorithms and direct suitability for reflectivity modeling. In addition, the IEA has definite interpretational advantages for crustal seismology in particular. However, the IEA is not well suited to the solutions of the global problem of interest in ocean acoustics, where sources and receivers lie within the layering [9]. At present, the most widely used method for solving depth equations is the direct global matrix (DGM) approach proposed by Schmidt [10]. In this approach, the sound field of each layer is represented as the superposition of the sound field generated by the sound source and the undetermined sound field satisfying the homogeneous depth equation, and the relationship between the sound fields of each layer is controlled by the continuity condition of the interface. Then, the depth equations in local layers are assembled into a global matrix, and after adding boundary conditions, the sound field in all layers can be obtained simultaneously by solving the global linear equations [11]. The most important advantage of the DGM approach is its unconditional stability, obtained at no additional computational cost, yielding a very efficient numerical solution of the depth-separated wave equations in all layers simultaneously [12]. However, the memory requirement of the DGM approach is proportional to the number of layers. When the acoustic parameters vary greatly with depth or the frequency of the sound source is very high [13], a denser configuration of layers is required so that the acoustic parameters of each layer can be treated as constants. The size of the global matrix produced by the DGM approach then becomes unacceptable, especially for small personal computers with limited memory. Among the methods for numerically solving differential equations, in addition to the widely used finite difference and finite element methods, spectral methods are a niche but efficient class of methods. Spectral methods have high accuracy and fast convergence speed [14][15][16][17][18][19][20] and have developed rapidly in acoustics [21,22], especially computational ocean acoustics. In recent years, new algorithms for normal modes [23][24][25][26][27][28][29], coupled modes [30][31][32] and parabolic equation models [33][34][35] based on spectral methods have been successively proposed. In this paper, a Chebyshev-Tau spectral method is used to numerically solve the depth-separated wave equation. In the model designed in this paper, the Chebyshev-Tau spectral method does not physically discretize the ocean environment in the vertical direction; that is, it does not use a piecewise linear approximation to address ocean environmental parameters, so there is no physical discretization error. In addition, the algorithm has no factors that make the solution divergent, so it has good stability. A corresponding numerical program is developed for the algorithm.
Several classic numerical experiments verify the accuracy and illustrate the capability of the algorithm and program devised in this article.
Mathematical Modeling
For a horizontally stratified ocean environment, the interfaces at different depths are all parallel planes, the layer properties are functions of depth only, and the field is independent of azimuthal angle, as shown in Fig. 1. For this range-independent problem, the Helmholtz equation takes the following form [1]:

[∇² + k²(z)] ψ(r, z) = F(r, z),

where ψ(r, z) denotes the displacement potential, F(r, z) is the body force, k is the wavenumber, k = 2π f /c (1 + iηα), η = (40π log₁₀ e)⁻¹, f is the frequency of the sound source, and c and α are the sound speed and attenuation of the medium, respectively. The derivations of the sound fields for point and line sources discussed below are based on the Helmholtz equation.
Integral transformation for point source problems
For a point sound source, the waveguide it excites is usually solved in cylindrical coordinates. The sound field is related only to the depth and the horizontal range away from the sound source, so in a cylindrical coordinate system we let the z-axis pass through the sound source and point vertically downward, while the r-axis is parallel to the sea surface, as illustrated in Fig. 1. The Helmholtz equation (Eq. (1)) in the cylindrical coordinate system takes the form of Eq. (2), in which z s is the depth of the sound source. We consider applying the Hankel transform of Eq. (3) to it; specifically, the operation ∫₀^∞ (·) J 0 (k r r) r dr is applied to Eq. (2), from which the depth-separated wave equation, Eq. (4), is easily obtained. This equation is an ordinary differential equation in depth and can be solved numerically or analytically. Conventionally, the solution strategy for Green's function Ψ(k r , z) is to first physically discretize the ocean environment in the depth direction [5][6][7]12]. The ocean environment is divided into sufficiently thin layers, and the acoustic parameters of each layer are regarded as depth-independent constants, which evidently introduces errors. In the next section, we introduce a Chebyshev-Tau spectral method to numerically solve the depth-separated wave equation; it is a high-precision numerical method that does not involve physical discretization. After the depth-dependent Green's function Ψ(k r , z) is found at a discrete number of wavenumbers for the selected receiver depths, Eq. (3a) is evaluated, yielding the total displacement potential ψ(r, z) at the selected depths and ranges.
Integral transformation for line source problems
An infinitely long line sound source is often used to verify the accuracy of models in computational ocean acoustics. We also consider the solution of this common model. The line source problem is usually introduced in a Cartesian coordinate system, still letting the z-axis pass through the sound source and vertically downward; the x-axis is parallel to the sea surface, and the sound source penetrates the xoz-plane perpendicularly. The main structure is still as shown in Fig. 1, except that the r-axis is replaced by the x-axis. Therefore, the Helmholtz equation of the line source in the Cartesian coordinate system can be written in the following form [1]: We apply the following Fourier transform to Eq. (5): Specifically, the following operator is applied to the above formula: The following depth-separated wave equation is thus obtained: Solving the depth-separated wave equation provides the depth-dependent Green's function Ψ(k x , z). After obtaining Ψ(k x , z), the total sound field can be synthesized by Eq. (6a), as discussed for the point source.
Comparison of Eqs. (4) and (7) indicates that the depth-separated wave equations for the point source and line source have exactly the same form, except that r is replaced by x and k r is replaced by k x . Therefore, Green's function of the depth-separated equation can be used not only as the integral kernel function for the point source but also for the line source. We take only Eq. (4) as an example for the solution of the depth-separated wave equation below.
Interface conditions and boundary conditions
In the ocean environment shown in Fig. 1, the interfaces {h l } (l = 1, . . . , ℓ − 1) with discontinuous environmental parameters in seawater need to satisfy the interface conditions. The sound pressure must be continuous, yielding: The normal particle velocity must also be continuous: where the superscripts − and + indicate the interfaces from above and below, respectively. To solve Eq. (4), it is necessary to impose boundary conditions at the sea surface (z = 0) and the seabed (z = H). Considering the large difference in impedance between seawater and air, the sea surface is usually taken as the perfectly reflected boundary, that is, the pressure-release boundary: For the lower boundary condition, the pressure-release seabed is also considered: When the lower boundary is perfectly rigid, the boundary condition is taken as: When modeling the ocean environment, the acoustic half-space boundary shown in Fig. 1 is typically found in practice. Next, we deduce the boundary condition that the acoustic half-space should satisfy. Since the energy in the acoustic half-space has only downward waves and no upward waves, the general solution of the displacement potential takes a purely downgoing form, in which β is the magnitude of the horizontal wavenumber. The interface conditions still need to be satisfied at z = H. The corresponding relation is easily obtained from Eq. (13); substituting it into Eq. (14b) and noting Eq. (14a), the boundary condition that needs to be satisfied on the boundary of the acoustic half-space can be obtained. Here, ρ ∞ → 0 and ρ ∞ → ∞ correspond to perfectly free and perfectly rigid seabeds, respectively. Note that the inhomogeneous term on the right-hand side of Eq. (4) contains δ(z − z s ) and that the singularity necessitates special treatment of the equation at the depth of the sound source. We add a virtual interface at the depth of the sound source. The acoustic parameters at the virtual interface are continuous, so the sound pressure must also be continuous (Eq. (8)). Due to the singularity of the sound source, the normal particle velocity cannot be constrained using the continuity condition at the discontinuous interface (Eq. (9)). A natural idea is to integrate both sides of Eq. (4) in a very small neighborhood of z s so that δ(z − z s ) can be eliminated.
Letting the width of this neighborhood tend to zero, the above equation translates to an interface condition in which z + s and z − s denote the layers below and above the depth of the sound source, respectively. This is the interface condition that the displacement potential at the depth of the sound source needs to satisfy.
Calculation of the sound pressure
After Green's function of Eq. (4) or (7) is obtained, the corresponding point source displacement potential field ψ(r, z) can be obtained through the inverse Hankel transform of Eq. (3a), or the line source displacement potential field ψ(x, z) can be obtained through the inverse Fourier transform of Eq. (6a). However, in the actual numerical evaluation, when using Eq. (3a) or (6a) to compute the displacement potential field of a point source or a line source, only the finite interval [k min , k max ] and M discrete points can be used for numerical integration; k min and k max are the lower and upper limits of numerical integration, respectively. Undersampling the spikes of Green's function with a limited number of discrete points would introduce large errors. In addition, waveguide problems have poles on or close to the real wavenumber axis. Fortunately, the aliasing problem can be eliminated by simply moving the integral contour out into the complex plane [1]. According to Cauchy's theorem, the integral between two points on the complex plane does not vary with the change in the integral contour provided that the integrand is analytic between the contours. Thus, the contour offset ε can be introduced, as shown in Fig. 2. When the points are chosen where the kernels are small and the contour offset satisfies ε ≪ k max − k min , the contributions from the vertical sections become insignificant compared to the integral along the horizontal section.
When the sound source is a point source, substituting k̃ = k r − iε into Eq. (3a) yields Eq. (19); when the sound source is a line source, substituting k̃ = k x − iε into Eq. (6a) yields Eq. (20). The value of ε is not extremely critical. However, if ε is too large, the contributions of the two vertical parts of the contour are nonnegligible, while an excessively small value requires a very large number of sampling points M. For most practical purposes, an attenuation of the wrap-around by 60 dB is more than sufficient [1], which fixes the corresponding value of ε (Eq. (21)). The integrals in Eqs. (19) and (20) above become rectangular integrals in the actual numerical calculations; ∆k x has the same form as ∆k r in Eq. (21). The above numerical integration can easily be written in the form of matrix multiplication, which greatly improves the actual computational efficiency and is computationally attractive. In addition, we should choose the parameters of the numerical integration with great care, such as ∆r, k min , k max and the farthest range r max of the sound field of interest. Techniques related to evaluating the integrals (e.g., quadrature schemes, fast field techniques) are universal; they are not the main innovation of this article and therefore are not described in detail. When the displacement potential field is obtained by the above numerical integration, the sound pressure field can be obtained from it by the formula given in [36], in which ω = 2π f.
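To make the matrix-multiplication form of the discretized integrals concrete, the following Python sketch evaluates the offset-contour quadrature for a point source. It assumes the depth-dependent Green's function has already been computed at the sampled wavenumbers; the function name, the array layout and the use of a real-argument Bessel kernel are illustrative assumptions rather than the WISpec implementation.

```python
import numpy as np
from scipy.special import j0  # zeroth-order Bessel function of the first kind


def synthesize_point_source_field(kr, eps, green, ranges):
    """Rectangular-rule sketch of the inverse Hankel transform on the offset
    contour k_tilde = kr - i*eps (not the WISpec implementation).

    kr     : (M,) real wavenumber samples in [k_min, k_max]
    eps    : contour offset
    green  : (M, Nz) Green's function evaluated at the offset wavenumbers
    ranges : (Nr,) receiver ranges
    returns: (Nz, Nr) displacement potential field
    """
    dkr = kr[1] - kr[0]
    k_tilde = kr - 1j * eps
    # One row per range, one column per wavenumber sample; the Bessel kernel is
    # evaluated at the real wavenumber for simplicity in this sketch.
    kernel = j0(np.outer(ranges, kr)) * (k_tilde * dkr)[None, :]
    # A single matrix product sums the quadrature over wavenumbers for all
    # receiver depths at once, as noted in the text.
    return green.T @ kernel.T  # shape (Nz, Nr)
```

A line-source field could be synthesized analogously by replacing the Bessel kernel with the corresponding Fourier kernel.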
Calculation of the transmission loss (TL)
The TL of a point source is defined as

TL(r, z) = −20 log₁₀ |p(r, z) / p 0 (1)|,

and the TL of a line source is defined in the same way with its own reference pressure. Here, p 0 (1) is the acoustic pressure 1 m from the source, ρ s and k s are the density and wavenumber of the medium at the location of the source, respectively, and H 0 (1) (·) denotes the zeroth-order Hankel function of the first kind.
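For quick reference in post-processing, this definition can be evaluated with a small helper such as the sketch below, in which the reference pressure p0(1), i.e., the pressure 1 m from the source, is assumed to be supplied by the caller.

```python
import numpy as np


def transmission_loss(p, p0_ref):
    """TL = -20*log10(|p / p0(1)|), with p0(1) the pressure 1 m from the source."""
    return -20.0 * np.log10(np.abs(p) / np.abs(p0_ref))
```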
Chebyshev-Tau spectral method
Here, we employ the Chebyshev-Tau spectral method to solve the depth-separated wave equation, Eq. (4). The Chebyshev spectral method uses Chebyshev polynomials as basis functions [18], so it is necessary to introduce them here.
Chebyshev polynomials are a class of orthogonal polynomials defined by the recurrence T 0 (t) = 1, T 1 (t) = t, T n+1 (t) = 2t T n (t) − T n−1 (t), whose orthogonality is defined as follows [20]:

∫₋₁¹ T m (t) T n (t) / √(1 − t²) dt = 0 for m ≠ n, π for m = n = 0, and π/2 for m = n ≥ 1.

Since the Chebyshev polynomials {T i (t)}, that is, the basis functions, are defined on t ∈ [−1, 1], the equation to be solved, Eq. (4), must first be scaled to t ∈ [−1, 1], where ∆h denotes the length of the domain and L represents the differential operator. Note that the scaled formula is applicable only at depths away from the sound source; the addition of Eq. (18) is required at the depth of the sound source due to the singularity. Next, the function to be determined, Ψ(t), is transformed into the spectral space spanned by the basis functions {T i (t)} (i = 0, 1, 2, . . .). Furthermore, the expression for the spectral coefficients {Ψ i } can be obtained from the orthogonality of the Chebyshev polynomials [19].
The integral on the right side of the above equation is usually calculated using the Gauss-Chebyshev-Lobatto numerical quadrature [18].
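As a small illustration of how such spectral coefficients can be computed in practice, the sketch below evaluates an approximate (truncated) Chebyshev expansion from samples at Gauss–Chebyshev–Lobatto points. It uses the standard collocation formulas rather than the WISpec source, so the function names and conventions are only assumptions.

```python
import numpy as np


def chebyshev_coefficients(f, N):
    """Discrete Chebyshev transform on Gauss-Chebyshev-Lobatto points,
    approximating the spectral coefficients of f on t in [-1, 1]."""
    j = np.arange(N + 1)
    t = np.cos(np.pi * j / N)                 # Gauss-Chebyshev-Lobatto nodes
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0                        # end-point weights
    fj = f(t)
    k = j[:, None]                            # coefficient index, one row per k
    # a_k = 2/(N*c_k) * sum_j f(t_j) cos(pi*k*j/N) / c_j
    return (2.0 / (N * c)) * (np.cos(np.pi * k * j[None, :] / N) @ (fj / c))


def chebyshev_eval(a, t):
    """Inverse transform: evaluate the truncated expansion sum_k a_k T_k(t)."""
    return np.polynomial.chebyshev.chebval(t, a)
```

For example, `chebyshev_coefficients(lambda t: 2 * t**2 - 1, 8)` returns a coefficient vector with a single unit entry at index 2, since the sampled function is exactly T₂(t).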
Since it is impossible to expand to infinitely many terms in the actual calculation, only the first (N + 1) terms can be retained [17]. Ψ N (t) is a function approximation, which becomes increasingly accurate as N increases. The truncation of the infinite expansion described above inevitably introduces errors, which means that Eq. (30) no longer strictly holds. Substituting Ψ N (t) into Eq. (30) yields a residual [15], which we call R N .
Some principle must be adopted to minimize R N so that the above spectral expansion can achieve higher accuracy. In the Tau-type spectral method, the basis function is used as the weight function, and then the inner product of the weight function and the residual is forced to be equal to 0 [37].
This constraint on the residual is the essence of the weighted residual method [20]. In mathematical monographs, the above equation is generally called the weak form [18] of Eq. (30). Taking into account the orthogonality of the Chebyshev polynomials and Eq. (31), the above equation becomes its counterpart in spectral space, where L̂ represents the L operator on the spectral space. The most important task is therefore to construct L̂, that is, the transformation of the L operator to the spectral space. The L operator has a derivative term. According to the characteristics of the Chebyshev polynomials, it is straightforward to prove that the derivative term is transformed into a differential matrix D N , which is related only to the truncation order N and is completely unrelated to Ψ. This matrix is obtained from the relationship between the Chebyshev polynomials and their derivatives [19]. The L operator also contains a product term, and the spectral transformation of the product of two functions satisfies a relationship in which v = v(t) is any continuous function on t ∈ [−1, 1], used as an example. Similarly, the relationship between the spectral coefficients of the product of two functions and the spectral coefficients of the individual functions is represented by a matrix C v , which is related only to v and not to Ψ [14,18].
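To make the two coefficient-space operators concrete, the sketch below builds a differentiation matrix D_N and a product matrix C_v column by column from NumPy's Chebyshev utilities. This is an illustrative construction under the stated conventions, not the recurrence-based formulas used in the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C


def cheb_derivative_matrix(N):
    """D_N acting on coefficient vectors: if b = D_N @ a, then
    sum_k b_k T_k(t) equals d/dt of sum_k a_k T_k(t)."""
    D = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        e = np.zeros(N + 1)
        e[j] = 1.0
        d = C.chebder(e)              # Chebyshev coefficients of T_j'(t)
        D[:d.size, j] = d
    return D


def cheb_product_matrix(v_coef, N):
    """C_v acting on coefficient vectors: C_v @ a gives the (truncated)
    Chebyshev coefficients of v(t) * sum_k a_k T_k(t)."""
    Cv = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        e = np.zeros(N + 1)
        e[j] = 1.0
        prod = C.chebmul(v_coef, e)   # coefficients of v(t) * T_j(t)
        m = min(prod.size, N + 1)
        Cv[:m, j] = prod[:m]
    return Cv
```

Both matrices depend only on N (and on v for C_v), so they can be assembled once per layer and reused for every wavenumber sample.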
Discretization
According to the above analysis, Eq. (30) is discretized into a matrix-vector form in Chebyshev spectral space, in which E is the identity matrix. This equation is equivalent to Eq. (35), where Ψ̂ is a column vector composed of the spectral coefficients Ψ 0 , . . . , Ψ N . Eq. (38) is a set of linear equations, but the boundary conditions are not imposed at this time.
For the waveguide in Fig. 1, the depth-separated wave equation must be established in all discontinuous layers. A single set of basis functions cannot span all layers, since the Chebyshev expansion is not continuously differentiable across the interfaces {h l }. Thus, we apply the domain decomposition strategy [38] to Eq. (4) and split the domain into subintervals; N l is the spectral truncation order in the l-th layer, and {Ψ l,i } (i = 0, . . . , N l ) are the spectral coefficients in the l-th layer. Similar to Eq. (38), the depth-separated wave equation in the l-th layer can be discretized into a matrix-vector form in which A l is a square matrix of order (N l + 1) and Ψ̂ l is a column vector composed of {Ψ l,i }. Since the interface conditions are related to the adjacent layers, the ℓ instances of Eq. (40), one for each layer l = 1, · · · , ℓ, should be solved simultaneously, which is expressed as Eq. (41). Note that when z s is not on an interface, we set up a virtual interface for it as described above. The instances of Eq. (40) for the two layers above and below the virtual interface can also be organized into Eq. (41), but the total number of layers becomes (ℓ + 1) at this time.
where A s and Ψ̂ s represent Eq. (40) on the layer at the depth of the sound source. The interface conditions in Eqs. (8) and (9) and the boundary conditions in Eqs. (10)-(12), (16) must also be expanded to the spectral space and explicitly added to Eq. (42). In addition, on the virtual interface at the depth of the sound source, the discontinuity condition that the normal particle velocity needs to satisfy, i.e., Eq. (18), is also added to Eq. (42). After considering the virtual interface, the seawater media comprise a total of (ℓ + 1) layers, so there are ℓ interfaces leading to 2ℓ interface conditions. With the addition of the boundary conditions at the sea surface (z = 0) and the seabed (z = H), there are 2(ℓ + 1) conditions to apply. Next, we describe the imposition of the boundary conditions and interface conditions in detail. For the convenience of description, we define intermediate row vectors, so that the interface conditions and boundary conditions of Eqs. (8)-(12), (16) and (18) can be transformed and expressed in terms of them, where h s and h s+1 represent the depths of the interfaces above and below the sound source, respectively. How do these 2(ℓ + 1) conditions apply to Eq. (42)? A natural idea is to replace the last two rows of each of the A 1 to A ℓ+1 blocks with the boundary/interface conditions that the corresponding layers need to satisfy. Doing so reduces the original spectral accuracy of each layer from order N l to order (N l − 2), but this can be compensated by increasing the value of N l . The spectral coefficients {Ψ̂ l } (l = 1, . . . , ℓ + 1) of each layer of Green's function can be obtained by solving Eq. (42) after adding the boundary constraints. The numerical solution of Green's function is then obtained by performing the inverse Chebyshev transform (Eq. (32)) of each Ψ̂ l sequentially and stacking the results into a single column vector.
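The global solve just described can be sketched in a few lines. The dense block-diagonal assembly below, the assumption of exactly two condition rows per layer block, and the generic solver call are illustrative choices, not the WISpec implementation (which can exploit the sparse block structure noted in the Remarks).

```python
import numpy as np


def solve_global_system(layer_blocks, condition_rows):
    """Sketch of Eq. (42): stack the per-layer Chebyshev-Tau blocks into a
    block-diagonal matrix, overwrite the last two rows of each block with
    boundary/interface conditions, and solve for all spectral coefficients.

    layer_blocks   : list of (A_l, f_l); A_l of shape (n_l, n_l), f_l of shape (n_l,)
    condition_rows : list of (row, value) pairs, two per layer, where each row
                     spans the full global unknown vector
    """
    sizes = [A.shape[0] for A, _ in layer_blocks]
    n_tot = sum(sizes)
    G = np.zeros((n_tot, n_tot), dtype=complex)
    rhs = np.zeros(n_tot, dtype=complex)

    # Block-diagonal assembly of the discretized depth equations.
    offsets, start = [], 0
    for (A, f), n in zip(layer_blocks, sizes):
        G[start:start + n, start:start + n] = A
        rhs[start:start + n] = f
        offsets.append(start)
        start += n

    # Each boundary/interface condition replaces one of the two trailing rows
    # of a layer block, as described in the text.
    cond = iter(condition_rows)
    for off, n in zip(offsets, sizes):
        for k in (n - 2, n - 1):
            row, value = next(cond)
            G[off + k, :] = row
            rhs[off + k] = value

    return np.linalg.solve(G, rhs)
```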
Numerical Simulation
Here, we present a program named WISpec (Wavenumber Integration based on the Spectral method) developed based on the above algorithm and verify the accuracy of the algorithm through several numerical experiments.
Analytical example: ideal fluid waveguide
The ideal fluid waveguide is a very simple example with an analytical solution. It consists of a layer of homogeneous seawater and upper/lower boundaries; the sea surface is usually perfectly free, and the bottom can be perfectly free or rigid. The ideal fluid waveguide with the perfectly free seabed has an analytical solution of the following form [1]: The analytical solution of the sound field of the perfectly rigid seabed is the same as that of Eq. (44), except that the vertical wavenumber becomes: In this example, the sound source frequency is f = 20 Hz, we take the sea depth H = 100 m, z s = 36 m, the density ρ = 1 g/cm 3 , the speed of sound c = 1500 m/s, and the maximum horizontal range is r max = 3000 m. The number of discrete points in the wavenumber domain is taken as M = 2048, the integral interval is [0, 2k 0 ] (k 0 is the wavenumber in water), and the spectral truncation order is N = 20. Similarly, Fig. 4(a) and 4(b) show the wavenumber spectrum of the ideal fluid waveguide with the perfectly rigid seabed calculated by WISpec; three peaks appear at the positions of k = 0.082343 m −1 , 0.069247 m −1 and 0.029139 m −1 . This matches the analytical solution in Table 1 very well, and the sound fields shown in Fig. 4(c) to 4(f) lead to the same conclusion as in Fig. 3, namely, that WISpec can calculate the sound field very accurately. The error of the numerical sound fields mainly comes from the error of Green's function. The numerical sound fields are given in the form of a discrete grid with 3000 discrete points horizontally from 1 to 3000 m and 401 discrete points vertically from 0 to 100 m. The error of the numerical sound field is calculated by: where nz and nr are the number of discrete points in the vertical and horizontal directions, respectively, and TL i, j represents the analytical solution for the TL at (z i , r j ). Fig. 5(a) clearly shows that as N increases, the error of the sound field rapidly converges to a very low level and remains stable, which also proves that the spectral method indeed maintains the advantage of exponential convergence in solving depth-separated equations. In Fig. 5(b), the error of SCOOTER decreases linearly at the beginning and then stabilizes on the order of 10 −1 dB, while WISpec can converge to the order of 10 −2 dB, which further illustrates the high-precision properties of the spectral method.
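A grid-averaged error of this kind can be evaluated with a short helper such as the following sketch; since the exact error measure of the paper is not reproduced above, a root-mean-square error over the nz × nr TL grid is used here as an illustrative stand-in.

```python
import numpy as np


def field_error(tl_num, tl_ref):
    """Grid-averaged TL error (in dB) between a numerical field and the
    analytical reference, using an RMS stand-in for the paper's error measure."""
    diff = np.asarray(tl_num) - np.asarray(tl_ref)
    return np.sqrt(np.mean(diff ** 2))
```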
Analytical example: pseudolinear waveguide
A pseudolinear waveguide is a waveguide whose sound speed profile follows [1]: A pseudolinear waveguide has an analytical solution involving Airy functions (Ai(·) and Bi(·)) and their first derivatives (Ai (·) and Bi (·)), and the horizontal wavenumbers k r are the roots of the following transcendental equation [39]: In this example, the seabed is perfectly rigid; we take the sea depth H = 100 m, a = 5.94 × 10 −10 s 2 /m 3 , and b = 4.16×10 −7 s 2 /m 3 , the sound source frequency is f = 50 Hz, and M = 4096. Table 2 lists the discrete modes calculated by the three methods, and the spectral truncation order used by WISpec is N = 20. The results of WISpec are very consistent with the analytical solution.
Even if there is a certain error, a large part is due to the limited number of discrete points.
Pekeris waveguide
The Pekeris waveguide is a classic waveguide in ocean acoustics; the ocean environment of the Pekeris waveguide consists of a layer of homogeneous water and an acoustic half-space below it. In this example, the same configuration as the ideal fluid waveguide is used, except that the sound source frequency is f = 50 Hz, the density in the acoustic half-space is ρ ∞ = 1.5 g/cm 3 , the speed of sound is c ∞ = 2000 m/s, and the attenuation is α ∞ = 0.5 dB/λ. Since there is no analytical solution for this example, we present the results of SCOOTER [40] and NM-CT [25,27] in Fig. 6 for reference. The former is a wavenumber integration model based on the finite element method, and the latter is a normal mode model based on the spectral method. The sound fields calculated by the three programs are basically the same, but the two programs based on the wavenumber integration model have a higher degree of agreement. Whether on the sound field or the TL line diagram, the sound field calculated by the NM-CT is still somewhat different. This also proves that for the Pekeris waveguide, the normal mode model generates a certain error in the near field due to ignoring the continuous spectrum.
In addition to point sources, WISpec can also calculate the sound field of line sources. Replacing the sound source with a line source in this example results in a sound field, as displayed in Fig. 7. The sound fields calculated by SCOOTER, WISpec and KRAKENC (a normal mode model based on the finite difference method [41]) are still very similar, but there are slight differences in the near field. Note that when the sound source is a line source, the difference in the sound field calculated by WISpec and KRAKENC in the far field is smaller than that of the point source. To facilitate the comparison with SCOOTER and KRAKENC, the sound field of the line source is normalized as p 0 = iρ s ω 2 H (1) 0 (1)/4 instead of Eq. (27).
Munk waveguide
The Munk waveguide is a typical example of deep-sea acoustic propagation problems. Here, the ocean environment consists of a layer of seawater with the Munk sound speed profile and a homogeneous half-space below, as schematically shown in Fig. 8(a). In this experiment, the frequency of the sound source is f = 50 Hz and z s = 100 m, and the sea depth is H = 5000 m. The number of discrete points in the wavenumber domain is taken as M = 55000, the integral interval is [0, 2k 0 ], and the spectral truncation order is N = 500. Fig. 9 illustrates the sound fields of the Munk waveguide calculated by SCOOTER, WISpec and NM-CT. The results of the three programs are very similar, and there is almost no difference in the sound fields. This result demonstrates the accuracy of WISpec and shows that normal modes are an excellent approximation model for long-range propagation.
Bucker waveguide
The Bucker waveguide is a benchmark for ocean acoustic propagation models [1]. As shown in Fig. 8(b), the sound speed contrast is very small, yielding a small number of normal modes with real propagation wavenumbers. On the other hand, this environment is characterized by a strong density contrast at the bottom, and the density contrast yields a significant number of virtual modes close to the real wavenumber axis. Therefore, normal mode models ignoring the continuous spectrum are not able to provide accurate predictions of the TL. However, wavenumber integration has no restrictions on the density contrast or on the spectral composition and is therefore capable of providing an exact solution for this waveguide. In this experiment, the sound source frequency is taken as f = 100 Hz, z s = 30 m, and the sea depth is H = 240 m. The number of discrete points in the wavenumber domain is taken as M = 4096, the integral interval is [0, 2k 0 ], and the spectral truncation order is N = 40. Fig. 10 lists the sound fields of the Bucker waveguide calculated using SCOOTER, WISpec and KRAKENC. Compared with the KRAKENC results, the results of the two wavenumber integration programs, SCOOTER and WISpec, are more similar. As shown in Fig. 10(d), even over a range of 2000 m from the sound source, the sound field predicted by the normal mode program still has a certain error with the wavenumber integration models. This clearly proves that in the Bucker waveguide, the influence of the continuous spectrum is notably important even over very long ranges.
Remarks
The above simulation experiments confirm that WISpec is a robust and accurate program and that the spectral method is effective in solving the depth-separated wave equation. From the above analysis, we can directly summarize the following features of the algorithm and program developed in this article:

1. Depending on the range-depth requirement, the evaluation of Green's function may have to be performed a substantial number of times. The solution of the depth-separated wave equation is parallelizable because it is independent for different wavenumbers (see the sketch after this list).
2. When applying the spectral method to solve the depth-separated wave equation, as shown in Eq. (38), there is no need to use piecewise linear elements to approximate the environmental parameters, i.e., there is no need to subdivide the environment into homogeneous layers, thus avoiding the error caused by physical discretization in the vertical direction.
3. The depth-separated wave equation discretized by the Chebyshev-Tau spectral method (see Eq. (42)) yields a block diagonal matrix, and in many cases the Chebyshev matrix is quasidiagonally dominant; this sparsity makes it easy to solve efficiently.
4. WISpec maintains the advantage that the error decreases exponentially with increasing N in solving the depth-separated wave equation, and this method can often obtain higher accuracy than a low-order finite difference scheme.
5. The algorithm and program designed in this paper can calculate the sound field excited by both point and line sources.
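To illustrate the first remark, a minimal sketch of evaluating Green's function independently over the sampled wavenumbers with Python's standard process pool might look as follows; `solve_one` stands in for a per-wavenumber depth-equation solver and is an assumption of this sketch.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor


def greens_functions_parallel(kr_samples, solve_one):
    """Evaluate the depth-dependent Green's function for each wavenumber in
    parallel; the solves are independent, as noted in Remark 1.

    solve_one(kr) is assumed to return the Green's function at the receiver
    depths for a single wavenumber, e.g., as a length-Nz array."""
    with ProcessPoolExecutor() as pool:
        columns = list(pool.map(solve_one, kr_samples))
    return np.stack(columns, axis=0)  # shape (M, Nz)
```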
Conclusion
In this paper, we developed a novel wavenumber integration model that can solve for a two-dimensional sound field in an arbitrary horizontally stratified ocean environment based on a Chebyshev-Tau spectral method. First, the Helmholtz equation is transformed into the wavenumber domain by the Hankel/Fourier transformation. Since horizontally stratified media are considered, the wavenumber kernel function satisfies the depth-separated wave equation. The algorithm first samples the wavenumbers in a preset interval [k min , k max ] and solves the depth-dependent Green's function Ψ(k r , z) in parallel for the discrete wavenumbers obtained by sampling. After the wavenumber kernel function is obtained, the inverse Hankel/Fourier transform is applied to synthesize the sound field in the physical space.
This algorithm is the first to use a spectral method to solve the depth-separated wave equation. Spectral methods use the idea of function approximation to control accuracy and the idea of the weighted residual to discretize the equation. When the ocean environment parameters are sufficiently smooth, the solution of Green's function converges exponentially. The results of numerical simulations verify the accuracy and reliability of the model and code. The robust and high-precision Chebyshev-Tau spectral method avoids the possible instability of traditional algorithms for the depth-separated wave equation.
In terms of its application scope, this model requires that the ocean environment be independent in the r/x direction, which limits the practicality of WISpec to a certain extent. Therefore, developing a high-precision wavenumber integration model based on the spectral method to solve the range-dependent waveguide has a bright future. In addition, elastic sediment is a more accurate model of the real ocean environment. In the future, WISpec can be further improved to enable the prediction of more complicated ocean acoustic fields.
Kawasaki Disease: an Update
Purpose of Review Provide the most recent updates on the epidemiology, pathogenesis, and treatment advances in Kawasaki disease. Recent Findings Treatment advances in complex, IVIG-refractory cases of Kawasaki disease. Multisystem inflammatory syndrome, a newly reported inflammatory condition with Kawasaki-like features and an association with the 2019 Coronavirus (COVID-19). Summary Kawasaki disease (KD) is a rare systemic inflammatory disease that predominately affects children less than 5 years of age. Pathogenesis of KD remains unknown; the leading theory is that an unknown stimulus triggers an immune-mediated inflammatory cascade in a genetically susceptible child. Classic KD is a clinical diagnosis based on set criteria and excluding other similar clinical entities. Patients who do not fulfill complete diagnostic criteria for KD are often referred to as atypical (or incomplete) KD. The most feared complication of KD is coronary artery abnormality development, and patients with atypical KD are also at risk. Administration of intravenous immunoglobulin (IVIG) and aspirin has greatly reduced the incidence of coronary lesions in affected children. Several other immune-modulating therapies have recently been utilized in complex or refractory cases.
Introduction
Kawasaki disease (KD) was first described in a 1967 report by Japanese pediatrician Tomisaku Kawasaki. The cardiac sequelae were later documented in 1970, following investigation of 10 autopsy cases of sudden cardiac death following diagnosis of KD. The first reported cases outside Japan were in Hawaii in the early 1970s; KD cases have since been reported in more than 60 countries worldwide.
Epidemiology
The epidemiology of KD varies greatly by geographic location and seasonality. The highest incidence rates (per 100,000) are in children of Japanese ancestry. Recently published data from the Japanese KD nationwide survey reported an increased rate over time from 218.6 per 100,000 in 2008 to 243.1 and 330.2 in 2011 and 2015 respectively [1•, 2]. In the United States, the incidence appears to have remained relatively stable. In 2012, the KD-associated hospitalization rate for children < 5 years of age was 18.1 per 100,000. In 2003, the rate was 19.7 per 100,000 children, [3] which amounts to roughly 4000 to 5500 new cases in the United States each year. The highest rates are seen among children < 5 years of age, with a male predominance (21.0 per 100,000 versus 15 per 100,000 in females). There is considerable ethnic variation, the highest rates seen among Asian/Pacific Islanders at 29.8 per 100,000 children < 5 years, and the lowest recorded rate among white children 13.7 per 100,000 [3,4]. It should be pointed out that analysis of Black and Hispanic race/ethnic groups could not be carried out in the Holman study as there were too few reported cases [4].
supports the likely role genetics plays in the pathogenesis of KD [5,6]. Several additional findings support a genetic component to KD susceptibility, including a concordance risk in identical twins of ~13%, an increased incidence of KD in children whose parents have a history of KD, and a higher occurrence of KD in siblings of affected patients [7][8][9][10][11][12].
KD does not appear to follow Mendelian pattern of inheritance. However, familial aggregation is well recognized, as are prediction models for severity based on genetic differences. Several single-nucleotide polymorphisms (SNPs) in different genes and gene regions have been implicated in family linkage and genome association studies: caspase 3 (CASP3), inositol 1,4,5-trisphosphate kinase-C (ITPKC), CD40, FCGR2a, and B-cell lymphoid kinase (BLK) [13][14][15]. Interestingly, many of the SNPs associated with KD have been identified in other inflammatory diseases such as rheumatoid arthritis, ulcerative colitis, systemic lupus erythematosus, and systemic sclerosis. These findings may indicate a common pathway in the inflammatory immune response [16].
Vaccine Exposure Theory
Several studies have evaluated the role vaccination may play in triggering KD via robust stimulation of the innate and adaptive arms of the immune system. However, there is currently no evidence to suggest that vaccine administration is associated with development of KD [17][18][19][20].
Infectious Theory/Seasonality
The leading theory for the pathogenesis of KD is that an unknown infectious agent leads to activation of the immune system in a genetically susceptible child. Several epidemiologic phenomena support this theory. The first is the apparent seasonality of KD. There is a consistent peak in the number of cases reported in the month of January, with another gradual increase in spring to summer (March-June) [1•, 21]. We often see this kind of consistent seasonal fluctuation in relation to infectious agents, especially viral infections. Several temporal clusters of epidemics have been reported in Japan, Canada, the United States, and Finland further supporting an infectious trigger [22].
The next supporting feature is related to tropospheric wind patterns whose presence in different locations may coincide with the incidence of KD. Studies suggest that winds arising from certain regions may carry either environmental toxins or an infectious agent to another region, thus triggering development of KD [23][24][25]. Another supporting feature is the significant overlap of clinical features between KD and other infectious agents, most notably scarlet fever, the newly described multisystem inflammatory syndrome (described in detail below), and adenovirus. In one study, it was found that 10% of patients diagnosed with KD also had positive low titer adenovirus infection [26].
There is a mono-modal age distribution in the occurrence of KD with peak incidence in late infancy (9-11 months), and then a gradual decrease in incidence with advancing age [1•]. This suggests the possible existence of protective transplacental antibodies to infection, which wanes after the first few months of life [27]. Finally, there are case reports/series showing higher occurrence of KD cases among siblings. The risk of KD in a child is increased roughly 10 times if a sibling has also been affected. This temporally occurs either on the same day or within 10 days of the initial presentation [7].
Immune Factors/Dysregulation
To date, no infectious causes have been identified as potential underlying etiologies, despite many investigations into bacterial toxins, super-antigens, fungal organisms, and viral pathogens. The theory remains, however, that an unknown stimulus triggers an inflammatory cascade with activation of both the innate and adaptive arms of the immune system. The innate immune system may be activated via detection of either pathogen-associated molecular patterns (PAMPs), or damage-associated molecular patterns (DAMPs). The NLRP3 inflammasome recognizes these abnormal molecular patterns in the body and activates a signaling cascade, which ultimately results in downstream release of several proinflammatory cytokines. Some of the most well studied of these cytokines in KD include IL-1, IL-18, IL-6, TNF-a, IFN-gamma, and IL-8. Several studies have either implicated this pathway of innate activation, or have successfully induced coronary arteritis (resembling KD) in murine models via these innate mechanisms [28][29][30]. Interleukin-1 has direct inflammatory effects on coronary artery endothelial cells.
In addition to the innate immune response in activating inflammatory mechanisms in KD, there is also significant activation of the adaptive (antigen-specific) immune response. There appear to be increased numbers of circulating proinflammatory and regulatory T cells in the acute phase of KD [31]. Studies have noted an increased number of IgA-producing plasma cells in tissues and coronary artery vascular walls in affected patients with KD [32,33]. Several autoantibodies directed against myocardial, endothelial, and extracellular matrix proteins have also been described in the literature, although their clinical significance is poorly understood [34]. Following administration of IVIG, we see an expansion of regulatory T cell populations and normalization of B cell-activating factor. This is associated with subsequent clinical improvement during the acute phase of KD [35,36]. All of the aforementioned findings support the significant role the adaptive immune system plays in KD. B and T memory cell development is likely involved as well, given the low recurrence rate of KD and the typically self-limited course of the disease.
Diagnosis
There is no diagnostic test for KD, instead, the diagnosis of classic (or complete) Kawasaki disease is made utilizing clinical criteria (Table 1) and excluding other similar clinical entities. Individual clinical manifestations may not all present simultaneously. Careful review may reveal that one or more clinical features were present and resolved prior to presentation. Several other clinical manifestations may also be present which are not included in the diagnostic criteria (Table 2).
Patients who do not fulfill the complete diagnostic criteria for KD are referred to as incomplete or atypical KD. These patients may still be at risk for coronary artery abnormalities [37]. Therefore, any child with prolonged unexplained fever with any of the principal clinical features should be further evaluated for KD with consideration of echocardiography. The American Heart Associated (AHA) created an algorithm to aid in evaluation of suspected KD patients who do not meet the diagnostic criteria [38•].
Kawasaki disease tends to be triphasic with an acute, subacute, and convalescent phase. The acute phase is characterized by high-spiking fevers (typically > 39.0°C), with the other principal features listed in table 1. The acute febrile phase lasts anywhere from 7 to 14 days. The subacute phase is often an asymptomatic period after the febrile episode subsides and extends approximately 4 weeks. During this phase, patients may still have desquamation of the digits, arthralgias, and abnormal lab findings. This is the period of time notable for the greatest risk (highest incidence) of developing cardiac sequelae, namely coronary artery aneurysms (CAA). The third, convalescent phase is typically an asymptomatic period, roughly 4-8 weeks after onset of initial illness. There is still a risk (but significantly decreased) of aneurysm development despite absence of clinical symptoms during this period.
The rate of KD recurrence is less than 3% of patients in Japan, [39] and roughly 1.7% of patients in the United States (3.5% in US KD patients of Asian and Pacific Islander descent) [40]. There is reportedly a higher risk of coronary artery sequelae with recurrent episodes [22].
Laboratory Analysis and Workup
Kawasaki disease is a clinical diagnosis based on set diagnostic criteria. Laboratory findings, although nonspecific, are useful in supporting a diagnosis of KD, particularly when the clinical manifestations are non-classic. Table 3 outlines several common laboratory findings seen in KD during different phases of disease [41,42].
Most children with KD will typically present in the acute phase with leukocytosis (elevated immature and mature granulocytes). Anemia is another common finding and tends to be normocytic and normochromic. Thrombocytosis is common after the first week of symptoms; counts peak in the third week and may reach as high as 1,000,000 per mm³ (average ~700,000 per mm³) before normalizing in the subacute to convalescent phase. Acute phase reactants are elevated to varying degrees in nearly every patient with KD. Serum transaminase or gamma-glutamyl transpeptidase elevations occur in 40-60% of patients [2,43]. Urinalysis may show a sterile pyuria in up to 80% of children [44].
Some studies suggest use of N-terminal pro-brain natriuretic peptide as an adjunctive diagnostic marker of acute phase KD. Its suggested use is in the pediatric emergency room in patients with unexplained prolonged fever with suspected KD. However, it is a nonspecific test with no clear cut-point values for a positive result [45][46][47]. A recent study investigated the use of platelet-activating factor (PAF) and its acetyl-hydrolase (PAF-AH) in predicting KD. In this particular report, the authors found a statistically significant elevation in PAF and PAF-AH levels in the acute phase in children with KD as compared to controls [48•]. To be diagnosed with classic KD, the patient must have ≥ 5 days of fever as well as ≥ 4 of the 5 principal clinical features. In rare cases, experienced clinicians may be able to establish the diagnosis with less than the required duration of fever
Diagnostics/Imaging
The most feared sequela of KD is the development of coronary artery abnormalities, which occurs in 20-25% of untreated children [49]. Echocardiography remains the standard imaging modality to evaluate both coronary artery dimensions and other cardiac abnormalities. It is a non-invasive study without risk of radiation and with high sensitivity and specificity for identifying coronary artery lesions (CALs). The Japanese Ministry of Health criteria are widely used to classify coronary artery sizes according to age [50]. In children younger than 5 years, coronary artery lumen diameter is abnormal if exceeding 3 mm. In children 5 years of age and older, a lumen diameter greater than 4 mm is considered abnormal. In addition to absolute luminal dimensions, both the Japanese Ministry of Health and the American Heart Association also utilize Z scores when classifying CALs. Z scores are coronary dimensions that are adjusted for body surface area, as coronary artery dimensions will change with the size of the child. Overall, aneurysms < 5 mm in luminal diameter are considered small, 5-8 mm in luminal diameter are considered medium-sized, and aneurysms > 8 mm in luminal diameter are considered large.
Echocardiography surveillance is typically performed at diagnosis, 1-2 weeks after diagnosis, and then again 6-8 weeks later (assuming no complications). There are several factors associated with increased risk of developing CALs including male sex, age < 12 months or > 8 years, fever duration > 10 days, leukocytosis > 15,000 per mm 3 , low hemoglobin (< 10 g/dL), thrombocytopenia, hypoalbuminemia, hyponatremia, and persistent fever or recurrence of fever > 36 h after IVIG administration [51,52]. Children at higher risk, and those with previously noted CALs, will be screened more often. Other imaging modalities utilized include magnetic resonance angiography, computed tomographic angiography, and cardiac catheterization if warranted.
Differential
Several other illnesses share similar clinical features to KD (Table 4) and must be considered prior to diagnosis. Clinical manifestations that do not align with the diagnostic criteria for KD should prompt investigation of other causes. It must also be noted, that children affected by KD may have a concurrent infection with another pathogen, i.e., viral respiratory pathogen as previously described.
Multisystem inflammatory syndrome in children (MIS-C) is a newly reported inflammatory condition with Kawasaki-like features and an association with the 2019 Coronavirus (COVID-19). First described in April 2020 in the UK, MIS-C cases are now reported in Italy, France, Spain, and the United States. Affected children tend to present with persistent fever, conjunctivitis, mucositis, lymphadenopathy, rash, evidence of multisystem organ involvement, and elevated inflammatory markers. Respiratory symptoms and abdominal pain are also common features. The case definition covers individuals less than 21 years of age presenting with fever (> 38.0°C), laboratory evidence of inflammation, and clinically severe illness requiring hospitalization with multisystem organ involvement. Patients must have evidence of exposure to COVID-19 within 4 weeks prior to onset of symptoms, and practitioners must exclude plausible alternative diagnoses [58•].
MIS-C appears to present as a late manifestation of disease (weeks after the COVID-19 exposure) and may be more related to immune activation during the convalescent period. It remains unknown if COVID-19 triggers KD features, if it is a completely separate entity, a spectrum of disease, related to macrophage activation, or an overlap syndrome. One of the most interesting aspects of this disease is that countries with the highest incidence of KD, i.e., Japan and China, have no reported cases despite excellent surveillance systems. Other notable differences compared to KD: MIS-C typically presents after the age of 5, and there appears to be a higher incidence in children of Afro-Caribbean descent [54•, 57•]. Little information is currently known about the pathogenesis and optimal treatment regimen for MIS-C. Most practitioners are utilizing standard Kawasaki protocols if clinically similar to KD in addition to supportive therapy [54•, 55•, 56•, 57•]. Several international registries are collecting surveillance data to learn more about this new entity. The hope is that discoveries in MIS-C may provide insight into our understanding of the trigger, genetics, and pathophysiology of KD.
Intravenous Immunoglobulin (IVIG)
Early identification of KD is paramount as timely administration of treatment has greatly reduced the incidence of coronary artery lesions (CALs). IVIG is most effective when administered within 10 days of onset of fever, and its use decreases the risk of coronary artery aneurysm formation from 20-25% to 3-5% in those who are appropriately treated [59,60]. Effective initial treatment consists of a single infusion of high-dose IVIG at 2 g/kg together with acetylsalicylic acid (ASA) [60][61][62].
Even with prompt IVIG therapy, up to 20% of children will develop recurrent or persistent fevers. These children are termed IVIG-resistant [61][62][63]. There are several risk factors for IVIG-resistant KD including delayed initial IVIG administration, increased ESR, decreased hemoglobin and platelet levels, oral mucosal alterations, cervical lymphadenopathy, extremity swelling, and polymorphous rash [64•]. It is recommended that these children are administered a second dose of IVIG to help prevent sequelae [61]. Additional considerations regarding IVIG therapy: active vaccinations, i.e., measles and varicella vaccinations are contraindicated for 11 months after administration of IVIG and known physiologic ESR elevations after IVIG preclude its use to assess response to therapy.
ASA
Moderate-dose (30-50 mg/kg/day) or high-dose (80 to 100 mg/kg/day) ASA is generally utilized until the patient is afebrile in the United States, Japan, and Western Europe. There does not appear to be a significant difference between low-dose (3-5 mg/kg/day) ASA versus high-dose ASA in regard to incidence of CALs, duration of fever, or duration of hospitalization [65•]. There is also no clear evidence that any dose of ASA will decrease development of CALs [66].
Therefore, it may be reasonable to give moderate-dose ASA to avoid the potential toxicities seen with high-dose ASA. Regardless of dose, ASA and IVIG remain the standard initial management. ASA is typically scheduled every 6 h during the acute phase of illness. Some clinicians will continue high-dose ASA until the 14th day of illness, even after defervescence. After the acute phase, children are transitioned to low-dose (3-5 mg/kg/day) ASA for its anti-platelet effect. Patients remain on low-dose ASA into the convalescent phase. The decision to continue or discontinue therapy is usually made around 6-8 weeks, pending any CALs on echocardiogram. Patients who are at high risk of treatment resistance and/or patients with coronary sequelae may benefit from adjunctive treatments (discussed below).
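To make the weight-based ranges above concrete, the short sketch below converts the quoted mg/kg/day figures into total daily and per-dose amounts. The function name, the phase labels, and the division of acute-phase dosing into four doses per day (every 6 h) are assumptions made for illustration; this is a teaching sketch, not clinical guidance.

```python
def asa_dose_mg(weight_kg: float, phase: str) -> dict:
    """Illustrative ASA dose ranges (mg) from the mg/kg/day figures quoted above."""
    ranges_mg_per_kg_day = {
        "acute_moderate": (30, 50),    # divided every 6 h in the acute phase
        "acute_high": (80, 100),       # divided every 6 h in the acute phase
        "convalescent_low": (3, 5),    # single daily anti-platelet dose
    }
    lo, hi = ranges_mg_per_kg_day[phase]
    doses_per_day = 4 if phase.startswith("acute") else 1
    return {
        "daily_mg": (lo * weight_kg, hi * weight_kg),
        "per_dose_mg": (lo * weight_kg / doses_per_day, hi * weight_kg / doses_per_day),
    }

# Example: a hypothetical 15 kg child on moderate-dose acute-phase ASA
print(asa_dose_mg(15, "acute_moderate"))
# {'daily_mg': (450, 750), 'per_dose_mg': (112.5, 187.5)}
```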
Corticosteroids
Corticosteroids are widely utilized in most vasculitides given their relatively fast onset, strong anti-inflammatory properties, and overall improved outcomes. Their use in KD is more controversial, but emerging data suggest that patients at particularly high risk for development of CALs may benefit from early use of corticosteroids as primary adjunctive therapy with IVIG and ASA. A 2016 meta-analysis of 16 studies by Chen et al. compared early intervention with corticosteroids plus IVIG versus corticosteroid use in IVIG-resistant cases, and found that the incidence of CALs was lower in patients who received corticosteroids as adjunctive primary therapy compared to the IVIG-only group [67]. A 2017 Cochrane review also demonstrated a reduced incidence of CALs in KD patients treated with corticosteroids during the acute phase; additionally, corticosteroid use was associated with decreased duration of fever, length of hospitalization, and time to normalization of CRP [68•]. Studies assessing corticosteroid use in KD are markedly heterogeneous with regard to dose, duration, and timing of use. Despite this, several consistencies emerge when comparing results. Studies that utilized a single dose of intravenous methylprednisolone [69,70] did not demonstrate the reduced incidence of CALs seen in studies that utilized moderate to high doses (i.e., 1-6 mg/kg/day prednisolone-equivalent) over an extended course (i.e., greater than 3 days) [71,72]. Early use of corticosteroids during the acute phase appears to be more beneficial than use in refractory (IVIG-resistant) cases [67,73]. Patients at higher risk of poor coronary outcomes derive the greatest magnitude of benefit from early adjunctive corticosteroid therapy [68,69,72,74]. Overall, results demonstrate good tolerability and safety with corticosteroid use and no evidence of an increased incidence of adverse outcomes. There is currently no consensus on corticosteroid dosing for the treatment of KD. The most recent AHA guidelines note that a longer course of corticosteroids may be considered in high-risk patients as primary adjunctive therapy or in IVIG-resistant cases [38•].
Tumor Necrosis Factor (TNF) Inhibition
TNF and IL-1 beta have both been implicated in the vascular endothelial cell damage and CALs seen in acute KD [75]. Several small studies reported potential beneficial effects of TNF blockade during the acute phase of KD, either as primary therapy or in cases refractory to IVIG. The most well-studied agent is infliximab, whose use may decrease the duration of fever and the length of hospitalization as well as aid in normalization of acute phase reactants. No studies to date, however, have reported decreased CALs with the use of infliximab [76,77].
A recent trial of etanercept for acute-phase KD showed no significant benefit in cases refractory to IVIG. However, there did appear to be an improvement in coronary artery dilation and disease progression in patients 1 year out from onset, an effect that seemed especially pronounced in patients with baseline abnormalities on coronary imaging [78•]. Beyond the lack of demonstrated effect on CALs, use of TNF inhibitors is not mainstream because of their association with malignancy and infection risk.
Interleukin 1 Inhibition
Several small case studies have reported successful use of anakinra, an IL-1 receptor antagonist, in the treatment of refractory KD [79][80][81]. Prospective trials are underway to further investigate.
Calcineurin Inhibition
Calcineurin inhibitors like cyclosporine may be beneficial as adjunctive primary treatment or in cases refractory to IVIG [82•, 83, 84]. The 2019 Hamada et al. study, a randomized controlled trial, showed a reduced incidence of CALs in higher-risk patients treated with IVIG plus cyclosporine versus standard therapy with IVIG alone; the authors also found no increased incidence of adverse events between the two groups [82•]. Calcineurin inhibition holds promise given the important role the adaptive immune system, specifically T cells, plays in the pathogenesis of KD. More studies are needed to assess effectiveness and safety.
Other Therapies
Several other immunosuppressive therapies are reported to be effective in patients with refractory KD, including plasma exchange, cyclophosphamide, methotrexate, and even rituximab. Use of these agents is not widespread given toxicity risks and the lack of robust prospective clinical trials [85][86][87]. IL-6 inhibitors are not currently used in refractory cases of KD. A 2017 prospective case series by Nozawa et al. reported progressive development of giant coronary artery aneurysms in 2 out of 4 children with refractory KD treated with tocilizumab. While this was a single small series, it suggests that tocilizumab may accelerate the formation of CALs [88].
Primary Prevention of Thrombosis
Patients with no evidence of CALs are maintained on low-dose ASA therapy throughout the acute phase of illness. At the 6-8 week follow-up appointment, ASA may be discontinued so long as no adverse changes are seen on the final cardiac imaging (echocardiogram). Patients with small CALs are typically continued on low-dose ASA monotherapy past this period. Those with moderate-sized aneurysms are managed with ASA and an ADP receptor antagonist, i.e., clopidogrel. Children with persistent large or giant aneurysms (internal luminal diameter ≥ 8 mm) may be treated with an antiplatelet agent plus anticoagulant therapy (i.e., warfarin or LMWH). The latter regimens are implemented in collaboration with pediatric hematology specialists. It is important to note that nonsteroidal anti-inflammatory drugs, which utilize the cyclooxygenase pathway, may interfere with the antiplatelet effect of ASA and should be avoided.
Studies are underway assessing the role of 3-hydroxy-3-methylglutaryl coenzyme A reductase inhibitors (statins) in children with KD and CALs. Statins may have beneficial effects on inflammation, platelet aggregation, coagulation, and endothelial function in addition to their known cholesterol-lowering effects. Studies have shown both safety and tolerability, but long-term prospective trials are needed before their routine use in KD can be recommended [89,90].
Prognosis and Long-Term Management
The prognosis for children diagnosed with Kawasaki disease is primarily based upon extent and severity of coronary artery involvement at diagnosis and at follow-up. The case-fatality rate in the United States and Japan is less than 0.2%, and the principal cause of death is myocardial infarction resulting from coronary artery occlusion [91]. The AHA 2017 guidelines for diagnosis, treatment, and management of KD provide a detailed risk classification scheme that can be utilized for follow-up guidance [38•]. The classification system is divided into five risk categories utilizing both Z scores and absolute luminal dimensions.
The lowest risk level is 1, indicating no involvement of the coronary arteries (Z score < 2). These patients are screened with echocardiogram during the acute illness, and then again at 6-8 weeks after onset; they appear to have a risk profile similar to patients without a diagnosis of KD [49]. ASA can be discontinued in this group so long as there are no adverse changes in the risk classification. The highest risk group is risk level 5, with large or giant aneurysms (Z score ≥ 10 or absolute dimension ≥ 8 mm). These patients naturally require much closer cardiac monitoring, and even the addition of anticoagulants if aneurysms persist, as outlined above [38•].
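As a sketch of this decision rule, the snippet below encodes only the two extremes of the AHA classification described in the text (levels 1 and 5); levels 2-4 are deliberately omitted, and the function name and return strings are illustrative assumptions rather than part of the guideline.

```python
from typing import Optional

def aha_risk_sketch(z_score: float, diameter_mm: Optional[float] = None) -> str:
    """Toy classifier for the two AHA 2017 risk extremes described above."""
    if z_score >= 10 or (diameter_mm is not None and diameter_mm >= 8):
        # Risk level 5: large or giant aneurysm
        return "level 5: close monitoring; antiplatelet plus possible anticoagulation"
    if z_score < 2:
        # Risk level 1: no coronary involvement
        return "level 1: echo during acute illness and at 6-8 weeks; ASA may be stopped"
    return "levels 2-4: intermediate; consult the full AHA 2017 scheme"

print(aha_risk_sketch(z_score=1.2))   # level 1 ...
print(aha_risk_sketch(z_score=11.0))  # level 5 ...
```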
Compliance with Ethical Standards
Conflict of Interest The authors declare that they have no conflicts of interest.
Human and Animal Rights and Informed Consent This article does not contain any studies with human or animal subjects performed by any of the authors.
Adsorptive Removal of Selected Anionic and Cationic Dyes by Using Graphitic Carbon Material Prepared from Edible Sugar: A Study of Kinetics and Isotherms
Graphitic carbon-like material (GCM) derived from edible sugar under a nitrogen environment was applied as an adsorbent for the removal of an anionic dye (methyl orange, MO) and a cationic dye (methylene blue, MB) from wastewater. The physico-chemical characteristics of the GCM were analyzed by scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared (FT-IR) spectroscopy, and X-ray photoelectron spectroscopy (XPS). A plate-like morphology with an average size of 50-100 nm was measured from the SEM images. The measured BET surface area and pore volume were 574 m2/g and 0.248 cm3/g, respectively, and the pore diameter (d = 1.847 nm, < 2 nm) indicates that the GCM is classified as microporous. The effects of dosage, pH, contact time, and concentration on the adsorption of MB and MO onto GCM were studied to unveil the adsorption process. The experimental isotherm data concurred with the Langmuir isotherm model (R2 = 0.990) for MB, while the MO isotherm data concurred with the Freundlich model (R2 = 0.995). The maximum adsorption capacities obtained from the Langmuir isotherm equation at 25 °C were 38.75 and 43.48 mg/g for MB and MO, respectively, which indicates that GCM is a suitable adsorbent for both anionic and cationic dyes. The kinetic study demonstrated that the adsorption of both dyes onto GCM followed pseudo-second-order kinetics. The thermodynamic parameters reveal that the adsorption of both dyes was spontaneous and endothermic and proceeded through chemical interactions. The GCM was found to be a potential adsorbent for the removal of MB and MO from aqueous solution.
1. Introduction
The release of dye-containing wastewaters into the environment is a significant cause of poor water quality and leads to eutrophication and distress of aquatic life. Dye-containing wastewater can increase the toxicity, biochemical oxygen demand, and chemical oxygen demand of the affected water.1 Therefore, developing a cost-effective process for the removal of dyes from industrial effluents has been one of the most challenging tasks around the world. Many treatment methods, including physical, chemical, and biological methods, have been reported to remove dyes from wastewater.2 However, these methods have a number of disadvantages, such as the production of large amounts of toxic and carcinogenic byproducts, and are not cost-effective.3 Adsorption is an economic, effective, and easily operated process for dye removal.4 Hence, continued attempts have been made by investigators to discover new adsorbent materials that give more efficient results. Methylene blue (MB) is a cationic dye and is most commonly used for dyeing materials such as wood, silk, and cotton.5 Methyl orange (MO) is an acidic/anionic dye and has been widely used in the textile, printing, paper, food, and pharmaceutical industries.6 Because of their toxic nature, the removal of MO and MB from wastewater is essential.5,7 Graphene is an attractive new material composed of carbon with a honeycomb-like structure. It has motivated massive interest over the last few years because of its excellent properties such as stability,8 high thermal conductivity,9 and fast mobility of charge carriers.10,12,13 However, the preparation of graphene from graphite is expensive and relies on toxic chemicals. A biologically derived graphene is possibly the most reasonable and chemically most adaptable form of graphene. Graphene or carbon-like materials derived from plant sources are typically more eco-friendly than those from fossil sources such as petroleum. Many reports describe carbon materials prepared from biomaterials or plant constituents and utilized for adsorption.14 Edible sugar is one of the simplest natural sources of carbon and converts completely into elemental carbon upon dehydration.15 In this work, we report the results of the adsorption of an anionic dye (MO) and a cationic dye (MB) on a sugar-based graphitic carbon-like material (GCM). We developed the GCM from low-cost crystal sugar in the presence of nitrogen gas. Crystal sugar is a type of edible sugar, an inexpensive and sustainable raw material that can be easily produced from agricultural products such as sugar cane and beet. The synthesized low-cost GCM was examined as an adsorbent for the removal of MB and MO from aqueous solutions. Studies were conducted on the parameters (equilibrium time, pH, temperature, and initial dye concentration) that affect the adsorption process, and kinetic and isotherm models were also studied. This study clearly confirmed that GCM shows high adsorption performance for the removal of both dyes (MB and MO) from aqueous solutions. Moreover, the adsorption capacity of GCM for MB and MO was comparable to previously reported similar activated carbons or graphene-type materials.5,13,16,17 Hence, the as-prepared GCM has the potential to remove organic dye pollutants and thereby significantly reduce human health and environmental risks.
2. Experimental
2. 1. Materials
Methylene blue (molecular formula C16H18ClN3S · 3H2O) and methyl orange (C14H14N3NaO3S) were purchased from Samchun Pure Chemical Co., Ltd., Korea. Edible sugar was purchased from the local market. Figure 1 shows the molecular structures of MB and MO.
2. 2. Preparation of the GCM
Scheme 1 illustrates the synthesis of the graphitic carbon-like material (GCM) from edible sugar. First, the sugar was dissolved thoroughly in water, and the mixture was heated at ~120 °C with continuous stirring to obtain caramel. The sugar solution (caramel) was then transferred to a silica crucible and heated in a furnace under a N2 atmosphere.
The furnace temperature was programmed as follows: (a) from room temperature to 100 °C in 30 min, (b) from 100 to 200 °C in 30 min, (c) held at 200 °C for 1 h (the melting point of sucrose is around 186 °C), (d) ramped to 400 °C in 1 h, and (e) held for 3 h at 400 °C (to ensure complete graphitization of the sugar). The furnace was then switched off and the material was cooled down to room temperature. The final temperature of 400 ± 5 °C was chosen after several experiments showed this to provide optimized results. No special care was taken in controlling the cooling rate. The black material obtained was named the graphitic carbon-like material (GCM).
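For readers who want the heating protocol at a glance, the sketch below encodes the ramp/hold schedule quoted above as plain data and computes the total run time; the step encoding is an illustrative assumption, not part of the original procedure.

```python
# Furnace program from the text: (step type, target temperature in C, duration in min)
# 'ramp' steps change temperature linearly; 'hold' steps keep it constant.
program = [
    ("ramp", 100, 30),   # (a) room temperature -> 100 C in 30 min
    ("ramp", 200, 30),   # (b) 100 -> 200 C in 30 min
    ("hold", 200, 60),   # (c) hold at 200 C for 1 h (sucrose melts around 186 C)
    ("ramp", 400, 60),   # (d) ramp to 400 C in 1 h
    ("hold", 400, 180),  # (e) hold 3 h at 400 C for complete graphitization
]

total_min = sum(step[2] for step in program)
print(f"Total program time: {total_min} min ({total_min / 60:.1f} h)")  # 360 min (6.0 h)
```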
2. 3. Adsorption Experiments
A batch study was carried out to evaluate the adsorption equilibrium and kinetics of MB and MO. The effects of different operating parameters (solution pH, adsorbent dosage, initial dye concentration, contact time, and temperature) on MB and MO removal by the GCM were studied. The required adsorbent dose was added to separate 50 mL solutions of MB and MO at the desired concentrations. These solutions were placed into 100 mL glass flasks and the samples were then shaken at 25 ± 0.5 °C. The effect of pH on the adsorption of MB and MO was studied by varying the pH in the range of 2 to 10. Various adsorbent dosages (0.5, 1.0, 2.0, and 4.0 g/L) were mixed into dye solutions (50 mL) in a concentration range of 5 to 50 mg/L. These solutions were continuously stirred at 60 rpm in a water bath shaker, and samples were collected at different times. After reaching adsorption equilibrium, the residual dye concentration in the solutions was measured using a UV-Vis spectrophotometer (UV 1601, Shimadzu) at maximum wavelengths (λmax) of 665 nm and 465 nm for MB and MO, respectively. Experiments were performed in triplicate to check the reproducibility of the data.
The adsorption capacity and removal efficiency of MB and MO were calculated according to Eqs. 1 and 2:

qe = (C0 − Ce) V / W (1)

Removal efficiency (%) = ((C0 − Ce) / C0) × 100 (2)

where C0 (mg/L) is the initial MB or MO concentration, Ce (mg/L) is the MB or MO concentration at equilibrium time t (min), V (L) is the volume of solution, W (g) is the weight of adsorbent, and qe (mg/g) is the amount of MB or MO adsorbed by the GCM.
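The two working equations translate directly into code. The sketch below evaluates Eqs. 1 and 2 for a single batch experiment; the numerical values in the example are hypothetical and chosen only to illustrate the 2.0 g/L dosage used later in the paper.

```python
def adsorption_metrics(c0, ce, volume_l, mass_g):
    """Equilibrium uptake qe (mg/g, Eq. 1) and removal efficiency (%, Eq. 2).

    c0, ce: initial and equilibrium dye concentrations (mg/L)
    volume_l: solution volume (L); mass_g: adsorbent mass (g)
    """
    qe = (c0 - ce) * volume_l / mass_g
    removal = (c0 - ce) / c0 * 100.0
    return qe, removal

# Example: 10 mg/L dye, 50 mL solution, 2.0 g/L dosage (0.1 g in 0.05 L)
qe, removal = adsorption_metrics(10.0, 0.5, 0.05, 0.1)
print(f"qe = {qe:.2f} mg/g, removal = {removal:.1f} %")  # qe = 4.75 mg/g, removal = 95.0 %
```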
2. 4. Instrumental Analysis
Scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS) (S-4300 & EDX-350, Hitachi, Japan) were used to examine the surface morphology of the GCM. To identify the functional groups of the GCM, a Fourier transform infrared (FT-IR) spectrometer (Perkin-Elmer, USA) was used. X-ray diffraction (XRD) analysis of the GCM was conducted using a D/Max-2500 diffractometer (Rigaku, Japan). Elemental composition analysis of the GCM was performed using ESCALAB-210 (Spain) X-ray photoelectron spectroscopy (XPS). A Quantachrome instrument (Boynton Beach, FL, USA) was used for Brunauer-Emmett-Teller (BET) surface analysis of the GCM.
3. Results and Discussion
3. 1. Characterization of GCM
Figures 2a and 2b present SEM images of the GCM at low and high resolution, respectively, showing the rough surface morphology of the GCM and indicating its considerable adsorption potential for MB and MO. The structure and morphology of the GCM were investigated from the SEM images; a plate-like morphology with an average size of 50-100 nm was detected in the magnified images. The XRD pattern (Fig. 2c) of the GCM shows a broad peak at 2θ = 23.4°, corresponding to the graphitic hexagonal carbon phase (JCPDF No. 75-1621, graphene XRD pattern); the small peak located at 43.5° can be attributed to the oxidized form of GCM.18 The crystalline nature of the GCM is also evident from the XRD pattern. The surface physical characteristics of the GCM were measured using Brunauer-Emmett-Teller (BET) analysis with nitrogen (N2) adsorption-desorption isotherms. The surface area and pore volume were found to be 574 m2/g and 0.248 cm3/g, respectively, and the measured pore diameter (d = 1.847 nm, < 2 nm) indicates that the GCM is classified as a microporous crystalline material.
To better understand the functional groups of the GCM, Fourier transform infrared (FT-IR) spectroscopy was applied, as shown in Figure 3a. The FT-IR spectra of the GCM show numerous functional groups before and after adsorption. The peak at 1200 cm−1 on GCM can be assigned to -C-O-C- stretching vibrations; after adsorption, this peak was broadened and shifted to 1170.3 cm−1 and 1178.4 cm−1, confirming the adsorption of MB and MO, respectively, onto the GCM. A peak at 1598.7 cm−1 observed on GCM shifted to 1590.4 cm−1 with increased intensity after adsorption of MB and MO, representing -C=C- stretching vibrations.19 A peak at 1717 cm−1 on GCM was also observed before and after adsorption, and its intensity increased after adsorption of both dyes, representing -C=O stretching vibrations.19 The GCM sample was analyzed by XPS in the binding-energy range of 0.0-1400 eV. The XPS survey (Fig. 3b) shows the C1s and O1s peaks,18 whose intensities increased after the adsorption of MB and MO.
3. 2. Effect of Operational Parameters on the Adsorption Process of MB and MO onto GCM
The effect of adsorbent mass on the removal of pollutants was studied to select a suitable amount of adsorbent for industrial applications. The effect of adsorbent dose on MB and MO removal was studied by changing the dosage of GCM from 0.5 to 4.0 g/L (experimental conditions: MB or MO initial concentration of 10 mg/L, pH 8, temperature of 25 °C, shaking speed of 60 rpm, and shaking time of 420 min) (Figs. 4a and 4b).
The removal efficiencies of MB and MO increased to around 99.9% and 92.6%, respectively, with increasing adsorbent dosage, because more adsorption sites were available at higher dosages.20 However, the adsorption capacity decreased from 19.5 to 2.6 mg/g for MB and from 19.7 to 2.7 mg/g for MO as the adsorbent dose increased from 0.5 to 4.0 g/L. This decrease in adsorption capacity can be explained in two ways: first, the number of available adsorption sites per unit mass decreases as adsorbent particles interact or aggregate at higher dosages;21 second, collisions between adsorbent particles and dye molecules may hinder uptake.22 Considering removal efficiency and practicality, the optimal adsorbent dosage was maintained at 2.0 g/L for both MB and MO in all subsequent experiments. The effect of the initial dye concentration (5, 10, 20, 30, 40, and 50 mg/L) on the percentage removal and uptake (qe) of MB and MO was studied (experimental conditions: GCM dose of 2.0 g/L, pH 8 for MB, pH 6 for MO, temperature of 25 °C, shaking speed of 60 rpm, and shaking time of 420 min) (Figs. 4c and 4d). The adsorption capacities of MO and MB on GCM increased from 2.6 to 24.3 mg/g and from 2.5 to 24.0 mg/g, respectively, as the concentration of both dyes increased from 5.0 to 50 mg/L. However, the percentage removal decreased from 89.4% to 67.5% for MO and from 99.9% to 67.3% for MB with increasing concentration.
The decrease in the removal percentages of MO and MB onto GCM with increasing initial dye concentration might relate to the driving force created by the dye molecules, which overcomes the mass-transfer resistance of the dyes.23 The time profile shows that equilibrium dye uptake was reached after a contact time of 180 min for both dyes. The adsorption capacity of MO and MB onto the GCM increased with the initial concentration of both dyes during the initial stage, and this increasing tendency continued until equilibrium was reached after 180 min. This can be attributed to the fact that most vacant surface sites of GCM are occupied by dye during the initial stage, and adsorption onto the remaining unoccupied sites is difficult due to repulsive forces between the dye molecules adsorbed on the GCM and those in the bulk phase.24 The effect of solution pH (pH 2.0, 4.0, 6.0, 8.0, and 10) on the percentage removal and uptake (qe) of MO and MB was studied (experimental conditions: initial MO or MB concentration of 10 mg/L, GCM dose of 2.0 g/L, temperature of 25 °C, shaking speed of 60 rpm, and shaking time of 7 h). The solution pH values were adjusted by adding 0.1 N HCl and 0.1 N NaOH. As shown in Figures 4e and 4f, the percentage removal of MO increased up to pH 6, whereas the percentage removal of MB increased up to pH 8; with a further increase in pH, the removal percentage remained almost constant for both dyes. Very low removal of MB was observed at acidic pH (pH 2.0), which can be attributed to the repulsive force between the cationic dye (MB) and the surface of GCM; added H+ ions might compete with the MB cation for vacant adsorption sites on the GCM.
The removal percentage of MB on the GCM increased from 75.0% to 99.9% as the pH increased from 2.0 to 8.0. This is due to the increased number of negatively charged sites at basic pH, which favors the adsorption of MB onto GCM through electrostatic attraction.24 Above pH 8 for MB and pH 6 for MO, the removal percentage was found to be constant. The optimum pH values for the removal of MO and MB were thus 6 and 8, respectively. Under alkaline conditions, the adsorption of MO onto the GCM was lower, possibly due to OH− ions on the adsorbent surface, which compete with the anionic dye.25 Overall, the best results were obtained near neutral pH for both dyes.
3. 3. Equilibrium Adsorption Isotherms
The Langmuir isotherm, shown in Eq. 4, is widely used in the scientific assessment of adsorption processes. This model assumes that adsorption occurs as a monolayer on a homogeneous adsorbent surface:12

Ce/qe = 1/(b Q0) + Ce/Q0 (4)

In Eq. 4, Ce is the equilibrium concentration of MB or MO in solution (mg/L), qe is the amount of MB or MO (mg/g) adsorbed on GCM at equilibrium, Q0 is the maximum adsorption capacity (mg/g), and b is the Langmuir constant. The slope (1/Q0) and intercept (1/(b Q0)) were calculated from the straight line obtained by plotting Ce/qe against Ce (Figs. 5a and 5b). The linear correlation coefficients R2 are 0.990 for MB and 0.976 for MO, indicating that the adsorption of MB followed the Langmuir adsorption model. The calculated values of Q0 are 38.75 and 43.48 mg/g for MB and MO, respectively, at 25 °C. The Langmuir parameters for both dyes are presented in Table 1. The linear form of the Freundlich isotherm is given in Figures 5c and 5d for MB and MO, respectively, and its linear equation is:26

ln qe = ln KF + (1/n) ln Ce (5)

In Eq. 5, qe is the amount of MB or MO adsorbed at equilibrium (mg/g) and Ce is the equilibrium concentration (mg/L) of MB or MO. KF and 1/n are the Freundlich binding constant and a constant related to surface heterogeneity, respectively. A straight line was obtained when ln qe was plotted against ln Ce (Figs. 5c and 5d), and n and KF were obtained from the slopes and intercepts, respectively. The Freundlich constant n was found to be 1.30 and 1.27 for MB and MO, respectively; values of n greater than 1 demonstrate that the surface is heterogeneous in nature and can thus adsorb MB or MO successfully. The adsorption data of MB were better fitted by the Langmuir isotherm (R2 = 0.990) than the Freundlich isotherm, whereas the adsorption data of MO fitted the Freundlich isotherm (R2 = 0.995) better than the Langmuir isotherm (Table 1). Hence, the overall isotherm results demonstrate that the adsorption of MB and MO onto GCM is complex. The adsorption capacity of GCM for MB and MO was comparable to similar reported materials such as activated carbons and graphene or its composites (Table 2), indicating that the GCM is potentially applicable for the adsorptive removal of dyes.
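The linearized forms of Eqs. 4 and 5 are ordinary straight-line fits, so both sets of parameters can be extracted with a few lines of code. In the sketch below, the (Ce, qe) pairs are hypothetical stand-in data, not values from the paper.

```python
import numpy as np

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g) for illustration only
ce = np.array([0.5, 2.1, 6.5, 12.0, 19.4, 27.8])
qe = np.array([2.3, 4.9, 9.8, 14.2, 18.9, 22.6])

# Linearized Langmuir (Eq. 4): Ce/qe = Ce/Q0 + 1/(b*Q0)
slope_L, intercept_L = np.polyfit(ce, ce / qe, 1)
Q0 = 1.0 / slope_L            # maximum adsorption capacity (mg/g)
b = slope_L / intercept_L     # Langmuir constant (L/mg)

# Linearized Freundlich (Eq. 5): ln qe = ln KF + (1/n) ln Ce
slope_F, intercept_F = np.polyfit(np.log(ce), np.log(qe), 1)
n = 1.0 / slope_F
KF = np.exp(intercept_F)

print(f"Langmuir:   Q0 = {Q0:.2f} mg/g, b = {b:.3f} L/mg")
print(f"Freundlich: n = {n:.2f}, KF = {KF:.2f}")
```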
3. 4. Kinetic Study of the Removal of MB and MO
A kinetic study of the adsorption of MB and MO onto the GCM was carried out under the following experimental conditions: pH 8 for MB, pH 6 for MO, a dose of 4.0 g/L, and a temperature of 25 °C. The kinetics were examined at six different initial concentrations (5, 10, 20, 30, 40, and 50 mg/L) (Fig. 6). Kinetic equilibrium for the adsorption of MB and MO on the GCM was reached at 180 min, and the adsorption capacity of both dyes onto GCM increased with increasing initial concentration. This might be because the initial dye concentration offers a driving force to overcome the mass-transfer resistance of the dye molecules between the liquid and solid phases.27 The kinetic parameters of the adsorption of MB and MO at the GCM-water interface were studied by applying the pseudo-first-order and pseudo-second-order kinetic models to the data obtained at the different initial dye concentrations. In this study, Lagergren's pseudo-first-order kinetic model28 was applied to assess the adsorption rate, as expressed in Eq. 6.
log(qe − qt) = log qe − (k1/2.303) t (6)

where k1 is the rate constant of the pseudo-first-order kinetic equation, and qt and qe are the amounts of MB or MO adsorbed onto GCM at time t and at equilibrium, respectively.
The qe values were calculated from Figures 6a and 6b and the results are reported in Table 3. The R2 values of the plots were in the range of 0.694 to 0.882 for MB and 0.533 to 0.996 for MO. The calculated qe value for MB (15.6 mg/g) was lower than the theoretical value (23.96 mg/g) at the highest initial concentration (50 mg/L); likewise, the calculated qe value for MO (2.1 mg/g) was lower than the theoretical value (34.5 mg/g) at the highest initial concentration (50 mg/L).
The linear form of the pseudo-second-order kinetic model is given by Eq. 7:29

t/qt = 1/(k2 qe^2) + (1/qe) t (7)

where k2 is the pseudo-second-order adsorption rate constant. The values of k2 and qe for MB and MO were calculated from the slopes and intercepts of the plots of t/qt versus t, as presented in Figures 6c and 6d. The R2 values for both dyes were greater than 0.99, indicating a better fit of the pseudo-second-order model than the pseudo-first-order kinetic model (Table 3). The empirical model described by Weber and Morris (1963) was applied to evaluate the intra-particle diffusion mechanism. This process is generally the rate-controlling step in most adsorption processes, in which the adsorbate is transferred from the bulk solution phase to the solid phase.30 The intra-particle diffusion model can be represented as follows (Eq. 8):

qt = ki t^(1/2) + C (8)

where ki is the intraparticle diffusion rate constant and C is a constant.
The ki values can be calculated from linear plots of the adsorbate uptake (qt) versus the square root of time (t^(1/2)) (Fig. 7). In the present study, the linear plots do not pass through the origin, confirming that intra-particle diffusion was not the only rate-controlling step in the adsorption process. Moreover, the intra-particle diffusion curves do not fully concur with the linear fitting. This suggests that intra-particle diffusion along with outer-sphere diffusion was involved in the rate-controlling step of the adsorption process.
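Both kinetic fits reduce to linear regressions as well. The sketch below extracts the pseudo-second-order parameters (Eq. 7) and the Weber-Morris parameters (Eq. 8) from hypothetical uptake-versus-time data; none of the numbers are taken from the paper.

```python
import numpy as np

# Hypothetical uptake-vs-time data (t in min, qt in mg/g) for illustration
t = np.array([10, 30, 60, 120, 180, 300, 420], dtype=float)
qt = np.array([1.9, 3.4, 4.2, 4.6, 4.75, 4.78, 4.80])

# Pseudo-second-order (Eq. 7): t/qt = 1/(k2*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / qt, 1)
qe_calc = 1.0 / slope                 # equilibrium uptake (mg/g)
k2 = slope**2 / intercept             # rate constant (g/(mg min))

# Weber-Morris intraparticle diffusion (Eq. 8): qt = ki*sqrt(t) + C
ki, C = np.polyfit(np.sqrt(t), qt, 1)

print(f"qe,calc = {qe_calc:.2f} mg/g, k2 = {k2:.4f} g/(mg min)")
print(f"ki = {ki:.3f} mg/(g min^0.5), C = {C:.2f} (nonzero intercept)")
```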
3. 5. Thermodynamic Studies
The thermodynamic parameters, namely the Gibbs free energy change (ΔG0), the enthalpy change (ΔH0), and the entropy change (ΔS0) of the present system, are illustrated in Figure 8 and reported in Table 4; they provide important information about the adsorption process. The parameters can be obtained from the Van't Hoff equation:

ln Kc = −ΔH0/(RT) + ΔS0/R (9)

with ΔG0 = −RT ln Kc, where R (0.008314 kJ/(mol·K)) is the universal gas constant, T (K) is the temperature, and Kc = qe/Ce at equilibrium. From Table 4, it is clear that ΔG0 becomes increasingly negative with initial concentration, with positive ΔH0 and ΔS0. These thermodynamic parameters reveal that the adsorption process is favorable, spontaneous, and endothermic. Table 4 also shows that Kc increases with increasing temperature, which indicates chemical interactions between the adsorbate and adsorbent.31,32 In addition, the ΔH0 and ΔS0 values support the percentage adsorption data of the present system: a higher rate of adsorption is found at low initial dye concentration with high ΔH0 and ΔS0, and a lower rate of adsorption is observed at high initial concentration with low ΔH0 and ΔS0.
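The Van't Hoff analysis is again a straight-line fit, this time of ln Kc against 1/T. The sketch below recovers ΔH0, ΔS0, and ΔG0 from hypothetical equilibrium constants; the Kc values are illustrative stand-ins, not the paper's data.

```python
import numpy as np

R = 0.008314  # universal gas constant, kJ/(mol K)

# Hypothetical equilibrium constants Kc = qe/Ce at several temperatures (K)
T = np.array([298.0, 308.0, 318.0, 323.0])
Kc = np.array([1.8, 2.4, 3.1, 3.5])

# Van't Hoff (Eq. 9): ln Kc = -dH/(R*T) + dS/R, linear in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(Kc), 1)
dH = -slope * R            # enthalpy change (kJ/mol)
dS = intercept * R         # entropy change (kJ/(mol K))
dG = -R * T * np.log(Kc)   # Gibbs free energy change at each T (kJ/mol)

print(f"dH = {dH:.2f} kJ/mol (positive -> endothermic)")
print(f"dS = {dS * 1000:.1f} J/(mol K)")
print("dG:", np.round(dG, 2), "kJ/mol (negative -> spontaneous)")
```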
3. 6. Possible Adsorption Mechanism of MB and MO onto GCM
From the FT-IR and XPS studies (Fig. 3) of the dye-loaded GCM, it can be concluded that the adsorption of MB and MO onto GCM is driven by the interaction of the dyes with organic functionalities such as carbonyl, epoxy, or carboxylic groups on the surface of the as-prepared GCM. Figure 3 clearly shows that the organic functional groups of the GCM shift their positions upon adsorption of the MB and MO dyes, pointing to a chemical interaction between the dyes (MB/MO) and the surface functional groups of GCM. Furthermore, the thermodynamic, kinetic, isotherm, and pH studies indicate that the adsorption of MB and MO is an endothermic chemical interaction proceeding through diffusion.
4. Conclusions
A graphitic carbon-like material (GCM) was prepared from edible sugar and used as an adsorbent, which was highly effective for the removal of methylene blue (MB) and methyl orange (MO) from aqueous solution. It was confirmed that the adsorption was affected by pH, dosage, and the initial concentration of both dyes. The removal efficiencies of MB and MO onto the GCM increased with increasing adsorbent dosage up to a certain limit and then became constant. Initial solution pH values of 6 and 8 were found to be optimal for the removal of MO and MB, respectively. The removal efficiency decreased for both dyes when the initial concentrations were increased from 5 to 50 mg/L. While the equilibrium data for MB were well fitted by the Langmuir isotherm model, those for MO were well fitted by the Freundlich model. The experimental data for both dyes fitted the pseudo-second-order model better than the pseudo-first-order model. The thermodynamic parameters showed that the adsorption process was endothermic and spontaneous, and together with the kinetic and pH studies they indicate that adsorption proceeds through chemical interaction between adsorbent and adsorbate. The GCM synthesized from edible sugar can thus be used as a potential adsorbent in the treatment of dye-containing wastewater, with removal performance comparable to previously reported materials such as activated carbons and graphene or its composites, thereby significantly reducing human health and environmental risks.

Table 4. Thermodynamic parameters of MB and MO adsorption onto GCM at pH 8.0 for MB and pH 6.0 for MO, 2.0 g/L dosage, and 180 min equilibrium time over the temperature range 298-323 ± 0.5 K (n = 3; the reported values are means of three measurements).
Figure 1. Molecular structures of MB and MO.
Scheme 1. Synthesis of the graphitic carbon-like material (GCM) from edible sugar.
Figure 4. Effects of different parameters on the adsorption of MB and MO: dosages of GCM (a), initial concentrations of dyes (b), and different pH values of the aqueous solutions (c).
Acknowledgment. This research was supported by the Startup Research Program through the National Research Foundation of Korea.
Figure 8. Thermodynamic plots (ΔG0 = −RT ln Kc vs. T) of MB and MO adsorption onto GCM at pH 8.0 for MB and pH 6.0 for MO with 2.0 g/L adsorbent dosage and 180 min equilibration, used to calculate the thermodynamic parameters ΔH0 and ΔS0.
Table 1. Isotherm parameters of MB and MO adsorption onto GCM at 25 °C (n = 3; the reported values are means of three measurements).
Table 2. Comparison of the adsorption capacity of GCM with previously reported activated carbons and graphene materials.
Table 3. Kinetic parameters of MB and MO sorption onto GCM at 25 °C (n = 3; the reported values are means of three measurements). Columns: initial concentration, theoretical qe (mg/g), pseudo-first-order (qe,cal, k1, R2), pseudo-second-order (qe,cal, k2, R2), and Weber-Morris parameters.
Raman fingerprints as promising markers of cellular senescence and aging
Due to our aging population, understanding the underlying molecular mechanisms of aging is constantly gaining importance. Senescent cells, defined as being irreversibly growth arrested and associated with a specific gene expression and secretory pattern, accumulate with age and thus contribute to several age-related diseases. However, their specific detection, especially in vivo, is still a major challenge. Raman microspectroscopy is able to record biochemical fingerprints of cells and tissues, allowing a distinction between different cellular states, or between healthy and cancer tissue. Similarly, Raman microspectroscopy has already been used successfully to distinguish senescent from non-senescent cells, as well as to investigate other molecular changes that occur at the cell and tissue level during aging. This review is intended to give an overview of various applications of Raman microspectroscopy to study aging, especially in the context of detecting senescent cells.
Introduction
Demographic changes in the industrialized world lead to an increased occurrence of medical conditions for which biological age is the major risk factor (GBD 2015 DALYs and HALE Collaborators 2016; GBD 2015 Mortality and Causes of Death Collaborators 2016). In addition to complications and the onset of multimorbidity arising from decreased resilience, frailty severely limits the quality of life at advanced age. Thus, a better understanding of biological aging processes and the identification of markers will promote the design of interventions that target biological aging and thereby increase fitness, decrease frailty, and improve resilience at advanced age (Bellantuono 2018; Cardoso et al. 2018; Figueira et al. 2016).
One of these processes, cellular senescence, is defined not only as an irreversible growth arrest induced either by serial passaging, which causes the shortening of telomeres to a critical length (replicative senescence) (Bodnar et al. 1998; Hayflick and Moorhead 1961), or by exposure to stress (stress-induced premature senescence, SIPS) (Toussaint et al. 2002), but also as a consequence of chemo- and radiation therapy (Schosserer et al. 2017) or oncogene activation (Collado and Serrano 2006). Senescent cells accumulate in the body during normal aging and occur predominantly at sites of age-associated pathologies, which include atherosclerosis (Erusalimsky and Kurz 2005; Gorenne et al. 2006; Minamino 2002; Vasile et al. 2001), osteoporosis (Kassem and Marie 2011), neuroinflammation (Bitto et al. 2010), and liver cirrhosis (Wiemann et al. 2002). While considered a beneficial tumor suppressor mechanism in the young (Campisi et al. 2011; Campisi 2005), cellular senescence is by now well accepted to contribute to in vivo aging (Baar et al. 2017; Baker et al. 2016, 2011; Xu et al. 2018) and even to tumor progression in the elderly (Campisi et al. 2011; Campisi 2005). These deleterious effects are caused at least in part by the senescence-associated secretory phenotype (SASP) (Coppe et al. 2010), which has been shown to promote chronic inflammation and thereby fuel several aging-associated pathologies including atherosclerosis, kidney fibrosis, and cancer progression (Schosserer et al. 2017). Thus, one of the major goals of current aging research is the development of compounds that specifically eliminate senescent cells ("senolytics") or inhibit the SASP and thereby alleviate the deleterious effects caused by senescent cells (Baar et al. 2017; Xu et al. 2018; Zhu et al. 2017, 2015). However, although a prerequisite for the screening and evaluation of senolytic compounds, the detection of senescent cells, especially in vivo, is still one of the challenges in the field. Currently, flattened cell morphology, activation of p16INK4a (Baker et al. 2016; Tchkonia et al. 2013) and p53 (Tchkonia et al. 2013), activity of SA-β-galactosidase (Debacq-Chainiaux et al. 2009), staining with Sudan Black B (Georgakopoulou et al. 2013), presence of ɣH2AX foci at the telomeres (Fumagalli et al. 2014) and of senescence-associated heterochromatin foci (Narita et al. 2003), High Mobility Group Box 1 (HMGB1) secretion (Davalos et al. 2013), and growth arrest as measured by BrdU incorporation (Lämmermann et al. 2018) are considered to be senescence markers. The drawback is that none of them is specific for senescence, and some of them can only be detected in vitro; therefore, combinations of these markers have to be used. Raman microspectroscopy could thus offer a non-invasive and label-free method that allows the progression of senescence to be monitored in real time in vitro and in vivo.
Raman microspectroscopy distinguishes cellular states in a label-free and non-invasive manner
Raman spectroscopy is based on the interaction between light focused on a sample and the chemical bonds within the material to be analyzed. Compared to elastic (Rayleigh) scattering, inelastic (Raman) scattering is a rare and comparatively weak phenomenon. Depending on the direction of the energy shift (Raman shift in cm−1), scattered photons are either at lower (Stokes Raman) or higher (anti-Stokes Raman) energy levels than the incident light (Raman 1928).
Modern Raman microspectrometers consist of a confocal microscope equipped with one or more lasers, an efficient long-pass filter to remove the highly abundant Rayleigh-scattered light, a spectrometer with different gratings, and a sensitive CCD line detector (Fig. 1). UV and blue lasers carry high photon energies, which might damage biological samples and induce significant levels of autofluorescence; thus, green and red lasers (e.g., 532 nm or 785 nm) are most commonly used for the analysis of cells and tissues. Most current Raman microspectrometers offer automated mapping applications, whereby the laser scans over the specimen and a spectrum is recorded at every single pixel to generate a multi-dimensional hyperspectral image. While acquiring spectra is relatively simple, data processing poses a major challenge and usually consists of background removal and normalization steps, followed by multivariate statistical approaches including principal component analysis (PCA), linear discriminant analysis (LDA), classical least squares (CLS) fitting, and multivariate curve resolution (MCR), among others (Butler et al. 2016; Notingher et al. 2005).
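To illustrate what such a processing pipeline can look like, the sketch below implements a deliberately simplified version: a single polynomial baseline fit per spectrum (real pipelines typically use iterative or asymmetric least-squares baselines), vector normalization, and SVD-based PCA. The array shapes and random stand-in data are assumptions made for the example.

```python
import numpy as np

def preprocess(spectra, poly_order=3):
    """Minimal sketch of Raman preprocessing: polynomial baseline removal
    followed by vector normalization (one spectrum per row)."""
    x = np.arange(spectra.shape[1])
    out = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        baseline = np.polyval(np.polyfit(x, s, poly_order), x)
        corrected = s - baseline
        out[i] = corrected / np.linalg.norm(corrected)
    return out

def pca_scores(spectra, n_components=2):
    """Project preprocessed spectra onto their first principal components."""
    centered = spectra - spectra.mean(axis=0)
    # SVD-based PCA: rows of vt are the loading vectors
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Example with random stand-in data: 20 spectra x 1024 wavenumber channels
scores = pca_scores(preprocess(np.random.rand(20, 1024)))
print(scores.shape)  # (20, 2)
```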
Raman spectra from biological materials, typically recorded in the region of 400-2000 cm−1 (Fig. 1), provide chemical fingerprints that detect even subtle changes in the biochemical composition of cells (Beattie et al. 2013; Brauchle and Schenke-Layland 2013; Charwat et al. 2015; Rösch et al. 2006; Swain and Stevens 2007), tissues (Ashtikar et al. 2013; Bocklitz et al. 2013; Movasaghi et al. 2007), and whole organisms (Lau et al. 2012). The advantage of Raman microspectroscopy compared to traditional staining approaches lies in the fact that this technique can be used on living specimens without prior fixation and does not require any label that might interfere with normal physiology. Raman signatures of in vitro cultured cells have already been successfully recorded and used for the characterization and identification of various specific cell types, for example endothelial (Szafraniec et al. 2018) and human lung (Surmacki et al. 2018) cell lines. Our lab was also able to distinguish different Chinese Hamster Ovary (CHO) host and production cell lines by Raman microspectroscopy (Prats Mateu et al. 2017). In chondrocytes (Pudlas et al. 2013), hematopoietic stem cells (Ilin et al. 2015), and hematopoietic progenitor cells (Choi et al. 2018), it has been shown that Raman microspectroscopy is capable of monitoring the dynamic process of cell differentiation. Different cellular states, such as apoptosis and necrosis, were also successfully distinguished (Brauchle et al. 2014b), and cell progression through mitosis was followed by Raman microspectroscopy (Matthäus et al. 2006).
Biochemical deviations occurring in cancer have been extensively studied using Raman spectroscopy, not only at the cellular level in vitro (Brauchle et al. 2014a; Duraipandian et al. 2018; Lee et al. 2018; Managò et al. 2018; Terentis et al. 2013), but also in tissues ex vivo (Bocklitz et al. 2013; Santos et al. 2016). These promising results pave the way for clinical use of Raman spectroscopy in the analysis of excised specimens and the identification of markers for tumor resection (Santos et al. 2017; Shipp et al. 2018). A fiber-optic Raman probe was already used during brain surgery and allowed differentiation between cancer and healthy tissue (Jermyn et al. 2015). Another putative clinical application of Raman spectroscopy is the detection of fragility fractures by using Spatially Offset Raman Spectroscopy (SORS) (Buckley et al. 2015).
Raman microspectroscopy enables distinction of senescent and non-senescencent cells
Only a few studies have so far investigated cellular senescence using Raman microspectroscopy. Bai and coworkers acquired Raman signatures of mesenchymal stem cells obtained from human umbilical cord tissue during serial passaging (Bai et al. 2015). The authors found that the ratio of the peaks at 1157 cm−1 and 1174 cm−1, both corresponding to vibrations of proteins, could serve as a marker for late population doubling levels (PDLs). Other notable, but not significant, differences between the Raman spectra of late and early PDL cells were found within the amide II (1480-1575 cm−1) region. Eberhardt and coworkers analyzed four different human dermal fibroblast cell strains using Raman spectroscopy as well as Fourier transform infrared spectroscopy (FTIR) (Eberhardt et al. 2017a). Comparing Raman signatures of early PDL, middle PDL, and senescent cells, peak intensities at 1580 cm−1 and 1658 cm−1, assigned to nucleic acids and proteins, respectively, were found to be decreased, while lipid-associated peaks at 1732 cm−1, 2850 cm−1, and 2930 cm−1 were increased in senescent cells. Partial least squares-linear discriminant analysis (PLS-LDA) was able to distinguish these three groups. Analysis of the difference spectra obtained through PLS-LDA again revealed changes in the amide I region (1600-1800 cm−1), at high wavenumbers (> 2800 cm−1), as well as in the amide III region (1220-1300 cm−1) and below 1200 cm−1. Raman-based classification models set up for each cell strain separately revealed an overall sensitivity of 93% and specificity of 90%, although the outcomes differed between the four cell strains. Senescence was confirmed by morphological changes, cell proliferation at different PDLs, as well as SA-β-galactosidase activity.
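A peak-ratio readout such as the 1157/1174 cm−1 marker described above is straightforward to compute from a preprocessed spectrum. The sketch below uses a synthetic two-peak spectrum as stand-in data; the window width and the use of the local maximum as peak height are simplifying assumptions.

```python
import numpy as np

def peak_ratio(wavenumbers, intensities, peak_a=1157.0, peak_b=1174.0, window=4.0):
    """Sketch: intensity ratio of two Raman peaks (e.g., the 1157/1174 cm-1
    protein bands discussed above) as a simple senescence-associated metric."""
    def peak_height(center):
        mask = np.abs(wavenumbers - center) <= window
        return intensities[mask].max()
    return peak_height(peak_a) / peak_height(peak_b)

# Example with a synthetic spectrum containing two Gaussian bands
wn = np.linspace(400, 2000, 1600)
spec = 1.2 * np.exp(-((wn - 1157) / 6) ** 2) + np.exp(-((wn - 1174) / 6) ** 2)
print(f"I(1157)/I(1174) = {peak_ratio(wn, spec):.2f}")  # ~1.20
```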
In another study, Eberhardt and coworkers expanded their Raman- and FTIR-based detection of senescent dermal fibroblasts towards a 3D model of human skin (Eberhardt et al. 2017b). Fibroblast-derived matrices (FDM) were built by seeding fibroblasts at PDL 4 and PDL 20 for the young and senescent model, respectively. In 3D, Raman peaks between 600 and 900 cm−1 and a peak at 1260 cm−1 associated with the amide III region were decreased in senescent cells, whereas the spectral region between 930 and 1230 cm−1 showed increased intensities in the spectra of senescent cells. Comparison of fibroblasts from passages 4, 7, and 20 in 2D culture showed alterations below 1250 cm−1 and in the amide I and II regions. PLS-LDA for cells cultivated in 2D and 3D revealed differences in the amide I and II regions, as well as at 788 cm−1, a peak that can be assigned to ring-breathing modes in nucleic acids. A classification model trained with proliferating and senescent cells grown in 3D was then indeed able to predict these cellular states in 2D culture. However, vice versa, classification of 3D data was not successful when a 2D training set was used, underlining the fact that other differences became more obvious in the 3D environment. Senescence was confirmed by SA-β-galactosidase staining.
Oncogene-induced senescence was studied in MCF-7/NeuT cells (Mariani et al. 2010). Senescence was induced by doxycycline treatment, leading to oncogenic ErbB2 overexpression and consequently p21 induction (Trost et al. 2005). Raman spectra of nuclei from senescent cells showed a single peak at 1652 cm−1, whereas two peaks at 1652 cm−1 and 1666 cm−1 were found in control cells. These two peaks were assigned to cis and trans unsaturated fatty acid isomers, respectively. However, as the amide I band is also located in this region, an interpretation of the signal as protein-derived also seems possible. Mariani and coworkers concluded that mainly cis isomers are found in senescent cells, leading to instabilities in the nuclear membrane. Furthermore, peaks at 1313 cm−1 and 1339 cm−1 assigned to glycoproteins were found in control, but not in senescent cell spectra. Accordingly, mRNA levels of the nuclear pore complex glycoprotein Nucleoporin 210 (NUP210) were significantly decreased in senescent cells. The assignment of the glycoprotein peak was based on a publication measuring Raman spectra of an isolated antifreeze glycoprotein (Tomimatsu et al. 1976). In the case of cell-based Raman signatures, proteins and nucleic acids might also be worth considering as the source of the chemical interactions located in that wavenumber region.
As summarized in Table 1, in all of the studies comparing Raman signatures between young and senescent cells (Bai et al. 2015; Eberhardt et al. 2017a, b; Mariani et al. 2010), peaks assigned to the amide II region were subject to substantial changes. The amide II band between 1480 and 1575 cm−1 reflects C-N stretching and N-H bending occurring in peptides (Movasaghi et al. 2007). The amide I band between 1600 and 1800 cm−1, related to C=O stretching, and the amide III band from 1220 to 1300 cm−1, reflecting C-N stretching and N-H bending (Movasaghi et al. 2007), also constantly recurred, with the exception of the study by Bai and coworkers. Differences in the biochemical fingerprints of young and senescent cells could thus be explained by varying occurrences of proteins. However, chemical interactions associated with glycoproteins, lipids, and nucleic acids also contribute to the variations located in the range of the three dominant amide bands.
Raman spectroscopy is able to visualize molecular changes occurring in skin aging
Raman-based in vivo investigations have been performed to analyze age-related changes in human skin and its components. Especially the stratum corneum (SC), the outermost part of the epidermis, has been the subject of such studies (Boireau-Adamezyk et al. 2014; Choe et al. 2018; Egawa and Tagami 2008). Differences between young and aged female subjects regarding the water content of the SC of forearm skin were found using Raman signatures (Egawa and Tagami 2008). Changes in the barrier function of the SC were also observed, especially a decreased lipid/protein ratio, as well as an increased transepidermal water loss with age and an increased SC thickness, though the condition of the barrier function also strongly depended on the site of measurement (Boireau-Adamezyk et al. 2014). In contrast, another Raman-based study, including subjects from a smaller age range, showed that the lipid/protein ratio stayed constant with increasing age, while the increase in SC thickness was confirmed (Choe et al. 2018). The dermis, the skin layer underneath the epidermis, was examined with regard to water content by using a prediction model followed by Raman-based analysis, pointing to a higher water content in the dermis of healthy aged and diabetic women compared to healthy young women (Téllez et al. 2015).
Table 1. Most prominent peaks and spectral regions contributing to differences in spectra from senescent versus non-senescent cells, with the corresponding studies and peak assignments differing from Movasaghi et al. (2007).

Special interest was given to photoaged skin, which was investigated ex vivo (Gniadecka et al. 1998; González et al. 2012). Raman spectra recorded from chronologically aged as well as photoaged skin, obtained by punch biopsies from a total of 20 individuals, showed a shift towards lower wavenumbers in the amide I band compared to young skin. In photoaged skin, the amide III region and the C-H stretching bands above 2800 cm−1 were also shifted towards lower wavenumbers, possibly indicating an increase in protein folding in photoaged skin. In chronologically aged skin, however, only the peak at 1658 cm−1 in the amide I region differed from young individuals (Gniadecka et al. 1998). Raman spectroscopy was also used to study intrinsic aging and photoaging in vivo in 15 subjects between 28 and 82 years of age, divided into three different groups (de Vasconcelos Nasser Caetano et al. 2017). The authors pointed out the proline-hydroxyproline region (intensities of the peaks at 855 cm−1 and 938 cm−1) as suitable for the evaluation of intrinsic skin aging. Similarly, Villaret and coworkers showed that the 938/922 cm−1 peak ratio was decreased in spectra from aged photo-protected skin compared to aged exposed, young photo-protected, and young exposed skin obtained via punch biopsies from 14 female individuals (Villaret et al. 2018). Results from another study (Nguyen et al. 2013) showed that the proline-hydroxyproline region, more specifically the 938/922 cm−1 peak ratio, was not able to distinguish between resected dermis samples from four females classified into two different age groups (40 years, 70 years). Nguyen and coworkers found instead that the 1658/1668 cm−1 peak ratio, assigned to reflect interactions of water with collagen, was able to differentiate between the two age groups. However, considering the biological variability, the relatively small number of analyzed samples in these studies might not be sufficient to draw generalizable conclusions. Furthermore, the penetration depth of Raman probes for in vivo use is relatively low, allowing analysis of just the upper skin layers. Interestingly, the C-H stretching bands that were found to be shifted in photoaged skin (Gniadecka et al. 1998) were among the regions that also contributed to differences between the spectra of senescent and non-senescent cells (Eberhardt et al. 2017a). Furthermore, the amide I band was responsible for spectral differences at both the cellular (Eberhardt et al. 2017a, b) and tissue level (Gniadecka et al. 1998; Nguyen et al. 2013).
The application of Raman spectroscopy to study aging in various tissues
Apart from studies in the skin, research in the field of ophthalmology has already made use of Raman microspectroscopy to study processes that occur during aging, for example by analyzing dried human Bruch's membranes to quantify advanced glycation end products (AGEs) and advanced lipoxidation end products (ALEs), which accumulate with age (Beattie et al. 2013; Glenn et al. 2007). Resonance Raman spectroscopy, a specialized Raman technique, was used to investigate age-related effects on macular pigment optical density (MPOD) (Obana et al. 2014) as well as differences in carotenoid levels in healthy subjects compared to patients with age-related macular degeneration (Bernstein et al. 2002).
Other studies focused on Raman-based analysis of bone tissue (Ager et al. 2005;Akkus et al. 2003;Gamsjaeger et al. 2010;Milovanovic et al. 2018;Toledano et al. 2018), providing insights into compositional changes that occur during aging. Ager and coworkers used deep-ultraviolet Raman spectroscopy and found significant age-related differences in the shape and intensity of the amide I band from excised cortical bones in humans (Ager et al. 2005). As reported recently, AGEs might also contribute to the aging process of bones and show a specific Raman signal (Toledano et al. 2018). Raman spectroscopy has also been applied to investigate age-related structural changes of human teeth (Ager et al. 2006;Tramini et al. 2001). Similarly, Tramini and coworkers found that the chronological age of an individual could be predicted by analysis of the dentin's Raman spectra (Tramini et al. 2001). Apart from the importance of understanding aging-related mechanisms, this approach might also be of interest for forensic investigations.
The same applies to another study showing successful classification into three age groups (< 1 year, 11-13 years, 43-68 years), based on Raman spectra of human peripheral blood (Doty and Lednev 2018). Apart from blood, other biofluids might provide suitable substrates for Raman-based investigation of aging phenomena as well. Erythrocyte aging has recently been studied with the help of Raman spectroscopy, revealing changes in lipids and membrane proteins (Dinarelli et al. 2018).
Multivariate statistics were able to classify spectra from human oral buccal mucosa into young and physiologically aged individuals, without interfering with the classification of Raman spectra based on tobacco-related changes (Sahu et al. 2012). Alterations in lipid composition due to aging were also examined in murine perivascular adipose tissue using Raman microspectroscopy and a Raman fiber optic probe. Aging-related oxidative damage in mouse oocytes leading to developmental abnormalities was studied via Raman microspectroscopy, aiming towards the use of this method in assisted reproductive treatment in humans (Bogliolo et al. 2013).
When comparing these data to the Raman-based investigation of skin aging and cellular senescence, it becomes evident that peaks responsible for the differences in the analyzed spectra are frequently located in the three prominent amide regions, depicting mostly proteins, lipids, and nucleic acids.
Summary and perspectives
As shown here, the field of aging research has just begun to make use of the label-free, non-invasive technology of Raman microspectroscopy. For studying Raman fingerprints of senescent cells, caution must be given to the precise definition and characterization of the senescent state, which was neglected by some of the previous studies, complicating their interpretation. Furthermore, as senescence is by now considered to occur progressively and to show heterogeneity between individual cells and tissues (Hernandez-Segura et al. 2017), it would be interesting to compare early to late senescent cells during development of the characteristic SASP. Coupling Raman microspectroscopy to microfluidic systems, as reviewed by Li and coworkers (Li et al. 2012), will pave the way for investigation of heterogeneity within a large cell population. It also remains to be seen if Raman bands explaining the differences between senescent and non-senescent cells vary between different cell types and tissues, and if these fingerprints might match in vivo data.
A central challenge of Raman microspectroscopy is that assigning peaks to chemical interactions, and further to biochemical structures, is difficult and has to be conducted with great care. Moreover, the current instrumentation and data analysis require higher speed and simplification, since only then will Raman microspectroscopy become widely applicable to biologists and clinicians not specialized in biophotonics. These insights will not only help to efficiently identify senescent cells in a label-free and non-destructive manner, with large potential for in vivo and ex vivo applications including compound screenings, but also to gain insights into the intracellular biochemical changes that occur during aging.
Acknowledgements We are grateful for technical support by the BOKU VIBT imaging facility and helpful inputs from other group members. Parts of Fig. 1 were produced using Servier Medical Art (http://www.servier.com) under Creative Commons BY 3.0 license.
Funding information Open access funding provided by Austrian Science Fund (FWF). This work was funded by the Christian Doppler Research Association to J.G., as well as by the Austrian Science Fund (FWF) and Herzfelder'sche Familienstiftung: P30623-B26 to M.S. The financial support by the Austrian Federal Ministry of Digital and Economic Affairs, the National Foundation for Research, Technology and Development, is also gratefully acknowledged. M.S. is partner in Raman4Clinics, funded by the COST association (BM1401).
Compliance with ethical standards
Conflict of interest J.G. is co-founder and shareholder of Evercyte GmbH and TAmiRNA GmbH.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Identification and Construction of a Predictive Immune-Related lncRNA Signature Model for Melanoma
Objective The occurrence and development mechanisms of melanoma are related to immunity and lncRNAs. Therefore, it is necessary to systematically explore immune-related lncRNA profiles to help improve the prognosis of melanoma. Methods We integrated immune-related lncRNAs and the basic clinical information of melanoma patients in the TCGA dataset. Immune-associated lncRNAs were selected by differential expression screening and enriched for analysis. After univariate and multivariate Cox regression analyses, a new prognostic indicator based on immune-associated lncRNAs was established. Results Overall, differentially expressed immune-related lncRNAs were significantly associated with clinical outcomes in patients with melanoma. A prognostic model was then established based on 14 immune-associated lncRNAs (LRRC8C-DT, AC021188.1, MALINC1, CCR5AS, EIF2AK3-DT, AC022306.2, AC242842.1, AL034376.1, AL662844.4, AC009065.3, AC099811.3, AC125807.2, SPINT1-AS1 and AC009495.2). Melanoma patients in the high-risk group had worse overall survival than those in the low-risk group. The AUC of the risk score was 0.786. Conclusion This study identified several clinically significant immune-related lncRNAs and established a relevant prognostic model, which provided a molecular analysis of immunity in melanoma and potential prognostic lncRNAs for melanoma.
Introduction
Cutaneous melanoma is a tumor characterized by the abnormal proliferation of epidermal melanocytes that are genetically altered by the interaction of genetic, physical, and environmental factors. 1,2 Melanoma is the most invasive form of skin cancer. Risk factors for melanoma can be divided into internal and external factors and include light skin color, UV exposure, nevi, genetic predisposition, and a family history of melanoma. [3][4][5][6] In 2019, there were an estimated 95,830 cases of skin melanoma and 7000 deaths from skin melanoma in the United States. 7 Depending on the features of the tumor (location, stage, and genetic profile), the therapeutic options may include surgical resection, chemotherapy, radiotherapy, photodynamic therapy (PDT), immunotherapy, or targeted therapy. 8 The development of targeted therapies (such as BRAF and MEK inhibitors) and immunotherapies (such as anti-PD-1 antibodies alone or in combination with anti-CTLA-4 antibodies) has revolutionized the systemic treatment of advanced melanoma, and these new drugs have become pretreatments recommended by many international guidelines for melanoma treatment. 9 The crosstalk between tumor and immune cells significantly affects tumor invasion, clinical response and treatment outcome. 10 The immune system can detect and destroy tumor cells through immunosurveillance or play a tumor-promoting role via suppressed immune activity and enhanced anti-inflammatory responses; the immune environment varies greatly from one individual to another, and thus, the prognosis and response to treatment also vary among individuals. 11 Currently, no prognostic markers for melanoma have been widely recognized, and it is of great significance to study the immune molecular mechanism underlying melanoma and to discover new immune checkpoints.
Many molecules, especially long noncoding RNAs (lncRNAs), play an important role in the progression of melanoma. For example, studies showed that HOTAIR was overexpressed in metastatic tissues, which indicated that HOTAIR could promote the ability of melanoma cells to migrate and invade and that lncRNAs may be involved in the metastasis of melanoma. 12,13 In in vitro studies, knocking out the UCA1 or MALAT-1 lncRNA could reduce the migration of melanoma cells, which indicates that the increased expression of the UCA1 and MALAT-1 lncRNAs might be related to melanoma metastasis. 14 Li et al found that abnormally upregulated expression of a new lncRNA, BANCR, was involved in the proliferation of melanoma cells in vitro and in vivo. 15 However, there are still few studies on the application of immune-related lncRNAs in melanoma.
This study identified immune-related lncRNAs in melanoma and established a model related to melanoma survival based on these immune-related lncRNAs. This study provides a reference for future molecular diagnosis and treatment.
Acquisition and Analysis of Melanoma Expression Data
We downloaded the RNA sequencing (RNA-seq) expression and clinical data of melanoma specimens from The Cancer Genome Atlas (TCGA) database (https://cancergenome.nih.gov/). Moreover, immune-related genes were retrieved from the immunological gene sets on the official gene set enrichment analysis (GSEA) website. Immune-related lncRNAs with a P value less than 0.05 and a correlation coefficient greater than 0.4 were identified through correlation analysis between immune-related genes and lncRNAs.
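A minimal sketch of the screening step described above: each lncRNA is correlated with the immune genes, and lncRNAs that pass the P < 0.05 and coefficient > 0.4 thresholds are retained. The expression matrices, gene names and sample size below are random placeholders rather than TCGA data, and Pearson correlation is assumed because the text does not name the correlation method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_samples = 100

# Placeholder expression vectors (one array of values per gene / lncRNA).
immune_expr = {f"IMM{i}": rng.normal(size=n_samples) for i in range(5)}
lncrna_expr = {f"LNC{i}": rng.normal(size=n_samples) for i in range(20)}

immune_related = set()
for lnc, lnc_vals in lncrna_expr.items():
    for gene_vals in immune_expr.values():
        r, p = stats.pearsonr(gene_vals, lnc_vals)
        if p < 0.05 and abs(r) > 0.4:   # thresholds reported in the study
            immune_related.add(lnc)
            break

print(f"{len(immune_related)} lncRNAs pass the immune-correlation filter")
```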
Cox Regression Analysis of the Immune-Associated lncRNA Signature
The combination of clinical survival data and immune-related lncRNA expression data was used to identify prognostic immune-associated lncRNAs by univariate Cox regression analysis. Multivariate regression analysis was also performed to calculate the risk score. The formula of the risk score was as follows: risk score = coefficient1 × expression of gene1 + … + coefficientN × expression of geneN. Melanoma specimens were categorized into high- and low-risk groups based on the median risk score.
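The risk score defined above is a weighted sum of expression values. The sketch below computes it and splits samples at the median; the Cox coefficients, lncRNA names and expression values are invented for illustration and are not the published model parameters.

```python
import numpy as np

def risk_scores(expr, coefs):
    """Risk score per sample: sum over genes of coefficient * expression."""
    genes = list(coefs)
    X = np.array([[expr[s][g] for g in genes] for s in expr])   # samples x genes
    beta = np.array([coefs[g] for g in genes])
    return dict(zip(expr, X @ beta))

# Invented coefficients and expression values (not the published ones).
rng = np.random.default_rng(1)
coefs = {"MALINC1": 0.42, "CCR5AS": -0.31, "SPINT1-AS1": 0.18}
expr = {f"sample{i}": {g: float(rng.uniform(0, 5)) for g in coefs} for i in range(6)}

scores = risk_scores(expr, coefs)
median = np.median(list(scores.values()))
groups = {s: ("high-risk" if v > median else "low-risk") for s, v in scores.items()}
print(groups)
```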
Independent Prognostic Analysis in Melanoma Patients
Clinical characteristics, including age, sex, stage, T stage, M stage and N stage, together with the risk score, were evaluated as potential independent prognostic variables using univariate and multivariate Cox regression analyses. In addition, principal component analysis (PCA), a dimensionality reduction method, was utilized to display all genes, the immune-related lncRNAs, and the genes associated with the high- and low-risk scores.
Statistical Analysis
R software 3.6.0 (AT&T Labs Research - Software Tools, RRID:SCR_002937, https://www.r-project.org/) was used for all the statistical analyses involving differential expression, Kaplan-Meier curves, Cox regression analysis, forest plots and PCA. P < 0.05 was considered statistically significant.
Differentially Expressed lncRNAs in Melanoma Patients
The clinical materials and lncRNA expression data of both normal specimens and melanoma specimens were collected from TCGA database. Additionally, the relationship between immune gene expression and lncRNA expression in melanoma patients was determined, and 6359 immune-associated lncRNAs with P < 0.05 and coefficient > 0.4 were extracted. There were 6351 positively regulated immune-related lncRNAs and 8 negatively regulated immune-related lncRNAs (see Supplementary Table 1). Interestingly, the former was much higher than the latter. There were a total of 180 females and 290 males included in the study, with a mean age
Construction of a Model for Melanoma Patients
From the univariate Cox regression analysis, 48 prognostic immune-associated lncRNAs with P < 0.01 were identified (see Supplementary Table 2). Furthermore, 14 independent prognostic immune-related lncRNAs (Table 1) were identified by multivariate Cox regression analysis and used to construct the prognostic model.
Prognostic Signature of the Risk Score Combined with Clinical Variables
To assess whether the risk score predicts the prognosis of melanoma independently from other clinicopathological features, we performed a Cox regression analysis. The results of the univariate analyses showed a statistically significant relationship between age, stage, T-stage, N-stage, risk score and survival outcome (P < 0.001) ( Figure 3A and Table 2). Multifactorial analyses showed that age, T stage, N stage, and risk score were independent prognostic factors for melanoma patients (P < 0.001) ( Figure 3B and Table 2). To assess the sensitivity and specificity of the risk score in predicting the prognosis of melanoma patients, we performed receiver operating characteristic analysis. The area under the ROC curve of risk scores was 0.786, which was larger than the areas under the ROC curves of other clinical variables ( Figure 4). Finally, the PCA results are presented in Figure 5. There were no significant differences between the high-risk and low-risk groups in terms of the expression of all genes ( Figure 5A) or immune-related lncRNAs ( Figure 5B). There was a significant difference between the two risk groups in the expression of the fourteen immune-related lncRNAs ( Figure 5C) used in the prognostic model. In conclusion, these results indicate that the immune-related lncRNA signature identified above is an independent prognostic factor of melanoma.
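As a rough illustration of how the discriminative ability of the risk score can be compared with a single clinical variable, the sketch below computes AUC values with scikit-learn on synthetic outcome labels and predictors. This is a simplified, non-time-dependent ROC analysis on invented data and does not reproduce the published AUC of 0.786.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Synthetic outcome labels (1 = event) and two predictors of different strength.
event = rng.integers(0, 2, size=200)
risk_score = 1.2 * event + rng.normal(size=200)   # informative predictor
age = 0.3 * event + rng.normal(size=200)          # weaker predictor

for name, values in [("risk score", risk_score), ("age", age)]:
    print(f"AUC for {name}: {roc_auc_score(event, values):.3f}")
```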
Discussion
Cutaneous melanoma is the most common malignant skin carcinoma and has a high incidence. For patients with early-stage (stage I-IIIB) melanoma, surgery is the primary treatment, while immunotherapy has played an active role in treating unresectable or metastatic tumors. 8 However, similar to the problems encountered in many tumor treatments, immunotherapy has various immunosuppressive mechanisms, the immune environment varies greatly from one individual to another, the prognosis and response to treatment vary among individuals, and only a small percentage of patients benefit from immunotherapy. 11,16 Hence, in-depth exploration of the immunomolecular treatment of melanoma will provide new treatment and diagnosis options. For example, SNHG5 promoted melanoma development by inhibiting miR-26a-5p expression and inducing TRPC3 expression, suggesting the potential of SNHG5 as a novel target therapy for melanoma. 17 In this study, we identified fourteen immune-related lncRNAs and used these lncRNAs to build a prognostic model. The AUC of the risk score was higher than that of other clinical variables. Recent sequencing discoveries have demonstrated that there are various mechanisms underlying lncRNA expression in cells, ranging from gene expression to protein translation and stability. 18 LncRNAs have been proven to participate in different processes of melanoma, such as progression and metastasis. Fourteen immune-related lncRNAs, LRRC8C-DT, AC021188.1, MALINC1, CCR5AS, EIF2AK3-DT, AC022306.2, AC242842.1, AL034376.1, AL662844.4, AC009065.3, AC099811.3, AC125807.2, SPINT1-AS1 and AC009495.2, were shown to have a great influence on melanoma. To investigate whether the immune-associated lncRNA signature can be an independent prognostic factor for cutaneous melanoma, we performed univariate and multivariate Cox analyses, which showed that the lncRNA signature is an independent prognostic factor for cutaneous melanoma independent of clinicopathological features. ROC curves and PCA proved the accuracy of the prognostic model.
For melanoma, which is a tumor with a poor prognosis, the establishment of an effective prognostic model and the exploration of its mechanism of action can help improve the prognosis of patients. It has been shown that the antitumor effects of immune checkpoint inhibitors can be enhanced by repolarizing M2 macrophages into M1 macrophages, and CD8 T cells located at the melanoma tumor invasive margin can predict the clinical response to PD-1 blockade therapy and are positively correlated with the response to pembrolizumab. 20,21 AC021188.1 has been found to be associated with prognosis and was included among five prognostic signature lncRNAs in head and neck squamous cell carcinoma (HNSCC). 22,23 Bida et al revealed the effect of the lncRNA MA-linc1 on cell cycle progression and cancer growth and found that its high expression was correlated with lower survival in breast cancer and lung cancer patients, which might be related to the finding that inhibition of MA-linc1 enhanced apoptotic cell death induced by the anti-mitotic drug paclitaxel. 24 CCR5 expression sustains myeloid-derived suppressor cell (MDSC) suppression activities, intratumoral Treg infiltration, and melanoma tumor growth, and is therefore highly attractive as a means to quench or eliminate unconstrained tumor cell growth. 25,26 A study also showed that the expression of CCR5AS could affect HIV infection and disease progression; for example, inhibiting CCR5AS expression decreased the infection of CD4+ T cells in vitro. 27,28 AC242842.1 is a pyroptosis- and immune-related lncRNA that plays a key role in melanoma and can also be used for predicting prognosis. 5,29 AL662844.4 is an autophagy-related lncRNA that plays a key role in bladder cancer. 30 AC125807.2 is a potential prognostic biomarker in lung adenocarcinoma. 31,32 LncRNAs such as AC009495.2 can distinguish acute myeloid leukemia types and significantly change the behavior of acute myeloid leukemia cells. 33,34 Additionally, SPINT1-AS1 plays a role in esophageal squamous cell carcinoma, 35 breast cancer, 36 renal clear cell carcinoma and other tumors. 37 The advantages of this study include that it was a TCGA-based study that included a large sample size and adjusted for the patients' clinical and demographic characteristics. However, this study has a limitation: we did not provide experimental data for further verification, and we will improve this aspect in future work.
Conclusion
This study identified several clinically significant immune-related lncRNAs based on data from cutaneous melanoma patients in the TCGA database and established a relevant prognostic model, providing a molecular analysis of immunity in melanoma and potential prognostic lncRNAs for melanoma.
Data Sharing Statement
The datasets supporting the conclusions of this article are available in the TCGA database (https://cancergenome.nih.gov/).
Ethics Approval and Informed Consent
This study was exempt from ethical review and approval by the Institutional Review Board of Guangdong Second Provincial General Hospital.

Author Contributions

All authors made substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; took part in drafting the article or revising it critically for important intellectual content; agreed to submit to the current journal; gave final approval of the version to be published; and agree to be accountable for all aspects of the work.
Funding
There is no funding to report.
Sequence analysis, expression profiles and function of thioredoxin 2 and thioredoxin reductase 1 in resistance to nucleopolyhedrovirus in Helicoverpa armigera
The thioredoxin system, including NADPH, thioredoxin (Trx), and thioredoxin reductase (TrxR), plays significant roles in maintaining intracellular redox homeostasis and protecting organisms against oxidative damage. In this study, the characteristics and functions of H. armigera HaTrx2 and HaTrxR1 were identified. Sequence analysis showed that HaTrx2 and HaTrxR1 were both highly conserved and shared high sequence identity with other insect counterparts. The mRNA of HaTrx2 was expressed the highest in 5th instar 96 h and was mainly detected in heads and epidermis. The expression of HaTrxR1 was highly concentrated in 5th instar 72 h and 96 h, and higher in malpighian tube, midgut and hemocyte than other examined tissues. HaTrx2 and HaTrxR1 were markedly induced by various types of stress. HaTrx2- or HaTrxR1-knockdown increased ROS production in hemocytes and also increased the lipid damage in NPV infected H. armigera larvae. Furthermore, interference with expression of HaTrx2 or HaTrxR1 transcripts in H. armigera larvae resulted in increased sensitivity to NPV infection and shortened LT50 values. Our findings indicated that HaTrx2 and HaTrxR1 contribute to the susceptibility of H. armigera to NPV and also provided the theoretical basis for the in-depth study of insect thioredoxin system.
and ideally suited to regulate the functions of proteins 4 . It is generally believed that the antioxidant effects of Trxs are mainly manifested in two aspects: first, Trxs can serve as electron donors for peroxidases to cope with ROS and, thus, to reduce lipid peroxidation, DNA damage and protein inactivation; second, as a disulfide reductase of intracellular proteins, Trxs can reduce the disulfide bonds of many proteins (such as kinases, phosphatases and transcription factors) to restore physiological function 10 .
Trxs have been widely studied in mammals 11,12 , plants 13,14 , and bacteria 15,16 because of their essential roles in protection against oxidative stress, whereas reports focusing on Trxs in insects are limited. In Drosophila, three Trx genes (Trx1, Trx2, and TrxT) have been identified [17][18][19] , and the loss of Trx-2 promoted the expression of other antioxidant genes and exacerbated oxidative stress-dependent phenotypes 20 . In Bombyx mori, BmTrx has been shown to protect against oxidative stress caused by extreme temperatures and microbial infection 21 . In Apis mellifera, three Trxs have been identified: AmTrx1, which is located in the mitochondrion, AmTrx2, which is a putative ortholog of Drosophila Trx2 and may play a vital role in redox homeostasis, and AmTrx3 22 . In Apis cerana cerana, some Trxs, including AccTrx-like1 23 , AccTrx2 24 and AccTrx1 25 , have been demonstrated to participate in antioxidant defense. All of the above studies suggest that Trxs play a major role in maintaining redox homeostasis and resisting adverse circumstances in insects.
TrxRs are homodimeric flavoproteins that belong to the pyridine nucleotide-disulfide oxidoreductase family and can catalyze the reduction of their natural substrate, thioredoxin 26 . There are two forms of TrxRs in different organisms: low molecular weight (MW) TrxRs, of approximately 35 kDa, which are mainly found in bacteria, plants and parasites; and high MW TrxRs, of approximately 55 kDa, which are mainly found in higher eukaryotes 27 . The N-terminus of mammalian TrxRs possesses a redox catalytic site consisting of -Cys-Val-Asn-Val-Gly-Cys- (CVNVGC), and the C-terminus exhibits an extended redox active site sequence of -Gly-Cys-Sec-Gly- (GCUG) 28,29 , while the C-terminal conserved sequence is -Cys-Cys-Ser- (CCS) in insects 30 . TrxRs can transfer reducing equivalents from NADPH to thioredoxin; the electron transfer path is from NADPH to FAD, then to the N-terminal redox active sites, followed by the C-terminal active motifs, and finally to Trxs 28 . The physiological roles of TrxRs have been widely studied in mammals, including their functions in redox homeostasis and antioxidant defense 31 , regulating cell growth and inhibiting cell apoptosis 32 , and controlling early embryonic development 33 . There have been some reports about the use of Trx and TrxR as targets of cancer therapy 34,35 .
In contrast to the many studies addressing TrxRs in mammals, knowledge of TrxRs in insects is lacking. In Drosophila, two TrxRs have been identified: TrxR-1, which encodes three splice variants (one mitochondrial and two cytoplasmic forms), and TrxR-2, which encodes a protein with a potential targeting peptide 36 . The TrxR-1 null mutant of D. melanogaster leads to death at the end of the second larval instar 37 , and both cytosolic and mitochondrial TrxR-1 forms have been shown to be necessary for survival 36 . In Anopheles gambiae, TrxR-1, which occurs in three splice variants, shares 69% sequence identity with D. melanogaster TrxR-1 and possesses a conserved Cys-Cys active motif in its C-terminal extension 30 . In A. mellifera, only one TrxR gene has been identified, which exhibits two putative splice variants, but it does not appear that they encode the mitochondrial variant 22 . In A. cerana cerana, AccTrxR1 was shown to be induced by ultraviolet light (UV) and heat (37 °C) and to be involved in protection against oxidant stress 38 . In Chironomus riparius, the transcription of CrTrxR1 was found to be up-regulated after paraquat and cadmium chloride exposure and is considered to be a biomarker of oxidative stress induced by environmental contaminants 39 .
The cotton bollworm (Helicoverpa armigera) is one of the most damaging lepidopteran pests, causing enormous economic losses in the cotton, corn, vegetable and other crop industries throughout Asia 40 . Although its population has decreased since the introduction of Bt-cotton in China in 1997, the control of this pest is a longstanding problem due to its ability to develop insecticide resistance 40,41 . The genes of the thioredoxin system are being considered as targets for the treatment of inflammation or cancer in humans 42,43 , and another antioxidant gene (thioredoxin peroxidase) was shown to be involved in resistance to the biocontrol fungus Nomuraea rileyi in Spodoptera litura 44 . We hypothesize that Trx and TrxR can help insects resist infection by pathogenic microorganisms. To elucidate the functions of thioredoxin system genes in H. armigera, we investigated their spatio-temporal distribution and evaluated their transcript levels after various types of stress treatments, including temperatures of 0 °C and 37 °C, UV, mechanical injury, E. coli exposure, Metarhizium anisopliae exposure, and nucleopolyhedrovirus (NPV) infection. Furthermore, ROS generation and lipid peroxidation in HaTrx2- or HaTrxR1-knockdown larvae and normal larvae were measured. Finally, RNA interference (RNAi) technology was used to study the involvement of these two genes in resistance to NPV. Our results will contribute to further studies on Trx and TrxR in Insecta and will aid in the development of novel insecticides targeting Trx and TrxR.
Results
Sequence analysis of HaTrx2 and HaTrxR1. Sequence analysis showed that the full-length cDNA of HaTrx2 was 800 bp, including a 321 bp open reading frame (ORF) and encoding a deduced polypeptide of 107 amino acids with a predicted molecular weight of 12.03 kDa and a pI of 4.82. Multiple alignment analysis of the amino acid sequence showed that HaTrx2 shared high amino acid identity (61%-92%) with Trx sequences from other selected insect species. The active site sequence CGPC was found in the N-terminal portion of the HaTrx2 sequence and was highly conserved among all of the selected insect species (Fig. 1A). As shown in Fig. 1B, phylogenetic analysis revealed that HaTrx2 was most closely related to the PpTrx2 homologue (Papilio polytes, BAM19091.1) and PxTrx2 homologue (Papilio xuthus, BAM17831.1), consistent with the evolutionary relationships predicted from the multiple alignment of amino acid sequences. The potential tertiary protein structure of HaTrx2 was constructed with the SWISS-MODEL server and PyMOL-v1.3r1 software, and the cysteines (Cys 32 and Cys 35 ) in the conserved redox active motif were identified (Fig. 1C).
The ORF of HaTrxR1 was 1572 bp, encoding a polypeptide of 523 amino acids residues with a predicted molecular weight of 57.16 kDa and a theoretical pI of 7.57. Multiple sequence alignment revealed that HaTrxR1 shared 83% identity with BmTrxR1-X2 and 64%-71% identity with TrxR sequences from other selected insect species. The active site sequence CVNVGC was found in the N-terminal portion, while CCS was found in the C-terminal portion of the HaTrxR1, and these sequences were highly conserved among the selected insect species (Fig. 2A). Phylogenetic analysis showed that HaTrxR1 was more closely related to the BmTrxR1-X2 homologue (B. mori, XP_004921588.1) than other selected species, and this result was consistent with the evolutionary relationship predicted from the multiple alignment of amino acid sequences (Fig. 2B). The tertiary protein structure of HaTrxR1 was constructed using the SWISS-MODEL server and PyMOL-v1.3r1 software, and the conserved redox active motifs (CVNVGC and CCS) were identified (Fig. 2C).
Temporal and spatial expression profiles of HaTrx2 and HaTrxR1. To determine the transcription profile of HaTrx2 in different developmental stages and larval tissues in H. armigera, qRT-PCR was carried out using total RNA prepared from the above collected samples. Standard curves for the primers were generated before formal experiments. The correlation coefficients (R 2 ) of the four genes (HaTrx2, HaTrxR1, RPS15, and RPL32) were greater than 0.99, and the amplification efficiencies of the primers were 98.06%, 103.97%, 105.20%, and 98.70%, respectively ( Figure S1). The HaTrx2 transcript showed ubiquitous expression in all developmental stages, mainly being expressed in the 96 h larvae of the 5th instar (Fig. 3A). The spatial expression profiles revealed that the HaTrx2 gene could be detected in all of the investigated tissues, and the expression levels were higher in the head, epidermis, midgut and Malpighian tubules than other tissues (Fig. 3B).
The qRT-PCR results showed that HaTrxR1 was mainly expressed in 24 h, 48 h, 72 h, and 96 h larvae of the 5th instar and the first-day pupae, with relatively lower expression being observed in other larval stages (Fig. 3C). The obtained spatial expression profiles showed that the HaTrxR1 gene was mainly expressed in the hemocytes, midgut, Malpighian tubules, and CNS (Fig. 3D).
The response of the expression profiles of HaTrx2 and HaTrxR1 to various types of adversity. To study the effect of various adverse stresses on HaTrx2 and HaTrxR1 transcription, larvae were challenged with low temperature, high temperature, UV light, mechanical injury, E. coli exposure, M. anisopliae exposure, and NPV infection. As shown in Fig. 4, the transcription of HaTrx2 was significantly induced by the 0 °C, 37 °C, UV, mechanical injury, and E. coli exposure treatments at 2 h, 6 h, and 12 h, in addition to being increased at 6 h and 12 h after M. anisopliae exposure treatment and being markedly up-regulated at 24 h, 48 h, 72 h, 96 h, and 120 h after NPV infection. For the HaTrxR1 transcript, we observed a similar tendency (Fig. 5). HaTrxR1 transcription was markedly up-regulated at 2 h, 6 h, and 12 h after the 0 °C, 37 °C, UV, mechanical injury, M. anisopliae exposure treatments, in addition to being significantly up-regulated at 2 h and 12 h after E. coli exposure and being markedly increased at 48 h, 72 h, 96 h, and 120 h after NPV infection (Fig. 5). Taken together, all of the above results suggested that HaTrx2 and HaTrxR1 may play an important role in protection against the oxidative stress caused by various types of adversity, and especially NPV infection.
ROS generation and lipid peroxidation in HaTrx2- or HaTrxR1-knockdown larvae and normal larvae.
To confirm that HaTrx2 and HaTrxR1 play vital roles in protecting H. armigera against oxidative damage caused by NPV infection, ROS generation was determined in HaTrx2- or HaTrxR1-knockdown larvae and normal larvae. As shown in Fig. 6A,B, the fluorescence intensity of larval hemocytes in the NPV + dsHaTrx2 and NPV + dsHaTrxR1 groups was stronger than that in the NPV + dsEGFP or NPV groups.
As ROS damage also causes lipid peroxidation in living organisms, we measured the concentration of a terminal product of lipid peroxidation (malonyl dialdehyde, MDA) in hemocytes after HaTrx2 or HaTrxR1 knockdown, to further confirm that HaTrx2 and HaTrxR1 play vital roles in protecting H. armigera against oxidative damage caused by NPV infection. The results showed that MDA levels were markedly increased after HaTrx2 or HaTrxR1 knockdown compared to EGFP dsRNA injection or NPV infection alone (Fig. 6C).
RNA interference and survival assay.
To further confirm the functions of HaTrx2 and HaTrxR1, the adverse stress of NPV infection was chosen because the expression of these two genes was increased to a greater extent by NPV infection than the other selected adverse stresses ( Fig. 4 and 5).
The results of agarose gel electrophoresis and real-time PCR analyses showed that the transcripts of HaTrx2 and HaTrxR1 were significantly decreased at 24 h, 48 h, and 72 h after HaTrx2 and HaTrxR1 dsRNA injection compared with EGFP dsRNA injection (Fig. 7A,B). HaTrx2 expression was decreased by 40.77%, 84.43%, and 39.89% (Fig. 7C), while HaTrxR1 expression was decreased by 42.54% and 77.81% at 24 h and 48 h, respectively. qRT-PCR results also showed that the expression level of HaTrxR1 increased significantly after HaTrx2 knockdown, whereas the HaTrx2 expression level remained unchanged after HaTrxR1 knockdown (Figure S2). To determine how the NPV infection was progressing at 48 h post infection and at 48 h after dsRNA injection, qRT-PCR was used to quantify viral gDNA abundance. As shown in Figure S3A, the viral gDNA level at 48 h after NPV infection was increased about 300-fold compared with 0 h or 24 h after NPV infection. HaTrx2 or HaTrxR1 knockdown also markedly increased viral gDNA levels compared to EGFP dsRNA injection at 48 h (Figure S3B).

(Figure legend fragment: black represents 100% identity, gray represents 75% identity and white represents <75% identity; the conserved CVNVGC and CCS motifs are boxed and the active sites are marked by ↑.)
Discussion
Many studies addressing the functions of the thioredoxin system, which are involved in regulating cellular redox homeostasis and resisting oxidative stress caused by adversity, have been conducted in mammals 43,45 and some model insect species 21,24,30,38 . However, research on Trx and TrxR in the lepidopteran pest H. armigera is lacking. In this study, HaTrx2 and HaTrxR1 were identified and characterized in the larvae of H. armigera. Sequence analysis suggested that HaTrx2 shared high amino acid identity (61%-92%) with other insect counterparts, and all of these proteins contained the highly conserved CGPC active-site motif, which is essential for their catalytic activity 7 . Multiple alignment and phylogenetic analysis revealed that HaTrxR1 shared 64%-83% sequence identity with other insect species, including the important active site sequence CVNVGC in the N-terminal portion and the CCS motif in the C-terminal extension ( Fig. 2A) 30 . These results demonstrated that both HaTrx2 and HaTrxR1 possessed redox active sites and belonged to the typical Trx and TrxR families, respectively, and they might be involved in resistance to adversity.
The changes in HaTrx2 and HaTrxR1 transcription observed at different developmental stages showed that these two genes were mainly expressed in the 5th instar and pupal stages. The obtained spatial expression profiles revealed that the HaTrx2 gene was expressed at higher levels in the head, epidermis, midgut and Malpighian tubules than in other tissues (Fig. 3B), suggesting that it may play vital roles in antioxidant defense in these tissues, which are central organs of metabolism and detoxification. The expression of Trx in larval tissues appears to show a species-dependent pattern: BmTrx is mainly expressed in the fat body and silk gland 21 ; AccTrx1 and AccTrx-like1 exhibit higher expression in the epidermis than in other tissues 23,25 ; and AccTrx2 is expressed at higher levels in the brain and midgut 24 . The HaTrxR1 gene, however, is mainly expressed in the hemocytes, midgut, Malpighian tubules, and CNS (Fig. 3D), implying that it may mainly play crucial roles in these tissues with antioxidant functions.
It has been reported that adverse environmental factors, such as pesticides, heavy metals, UV radiation, and abnormal temperatures, can lead to oxidative damage in living organisms 46 . A majority of antioxidant enzymes, such as peroxidases and catalases, play significant roles in the scavenging or quenching of oxidants and, thus, constitute a primary short-term line of defense. In previous studies, the expression of BmTrx in the fat body of B. mori larvae was shown to be greatly increased after treatment with H₂O₂, paraquat, low or high temperatures, or microorganism (bacterium, fungus, and NPV) infection 21 ; and AccTrx1 was found to be induced by treatment with H₂O₂, temperatures of 4, 16, and 42 °C and pesticides (acaricide, phoxim, cyhalothrin, and paraquat) 25 , suggesting that Trx may play important roles in protection against oxidative stress caused by an adverse environment. The TrxR gene of A. gambiae was also shown to be induced by injury, bacterial challenge, and malaria infection 47 . In the present study, the transcripts of HaTrx2 and HaTrxR1 were significantly induced by various types of adversity, including low temperature, high temperature, UV light, mechanical injury, E. coli exposure, M. anisopliae exposure, and NPV infection, which suggests that HaTrx2 and HaTrxR1 may participate in resistance to these adverse conditions. The possible mechanism underlying Trx and TrxR involvement in antioxidant defense may be elucidated as follows: ROS are first formed under adverse stress and then act on cellular biomacromolecules that are susceptible to oxidative stress, disrupting intracellular redox homeostasis, and Trx and TrxR may play crucial roles in the removal of excessive ROS to protect the organism 48 .
In this study, to confirm the role of HaTrx2 and HaTrxR1 in the removal of excessive ROS caused by NPV infection and in protecting H. armigera larvae, the expression of HaTrx2 and HaTrxR1 was successfully knocked down by injection of the corresponding dsRNA, as examined by semi-quantitative RT-PCR and qRT-PCR. Further study confirmed that HaTrx2 or HaTrxR1 knockdown increased ROS production in hemocytes and also increased lipid damage in NPV-infected H. armigera larvae. Together, these results indicated that HaTrx2 and HaTrxR1 may participate in the removal of excessive ROS caused by NPV infection in H. armigera. However, further study to provide in-depth confirmation is warranted.
In S. litura, larval mortality was accelerated after knockdown of the antioxidant gene SlTpx through dsRNA interference in the presence of N. rileyi infection, suggesting that SlTpx plays a vital role in resisting oxidative damage caused by N. rileyi infection 44 . In D. melanogaster, Drosophila cells become susceptible to H₂O₂ treatment after knockdown of the Tpx transcript through RNAi 49 . Here, the expression of HaTrx2 and HaTrxR1 was found to be significantly stimulated by NPV infection, which is widely applied in the management of the pest H. armigera due to its strong pathogenicity. In a further experiment, knockdown of HaTrx2 or HaTrxR1 transcripts resulted in increased sensitivity to NPV infection and shortened LT₅₀ values. All of these observations indicated that expression of HaTrx2 and HaTrxR1 is essential in defense against NPV infection in H. armigera larvae. In conclusion, we have characterized two typical thioredoxin system genes from H. armigera, and determined the temporal and spatial expression profiles of HaTrx2 and HaTrxR1. The transcription of HaTrx2 and HaTrxR1 was induced by various types of adversity (low temperature, high temperature, UV light, mechanical injury, E. coli exposure, M. anisopliae exposure, and NPV infection), suggesting that HaTrx2 and HaTrxR1 play important roles in resistance to various types of adversity. HaTrx2 or HaTrxR1 knockdown increased ROS production in hemocytes and also increased lipid damage in NPV-infected H. armigera larvae. RNAi experiments further confirmed that HaTrx2 and HaTrxR1 are involved in resistance to NPV infection. These observations provide powerful evidence demonstrating that HaTrx2 and HaTrxR1 play vital roles in protecting H. armigera against oxidative damage and enrich our knowledge of the thioredoxin system in insects. Therefore, the development of novel chemicals and microbial pesticides targeting HaTrx2 or HaTrxR1 for H. armigera control will require further in-depth research.

Methods

Raw powder of H. armigera NPV (5 × 10¹¹ PIB/g) was bought from the Henan Jiyuan Baiyun Industry Co., Ltd (China) and stored at 4 °C for later use.
Sequence analysis of HaTrx2 and HaTrxR1. The GenBank accession numbers of HaTrx2 and HaTrxR1 were JQ744277.1 and KM658552, respectively. The physicochemical properties of HaTrx2 and HaTrxR1 were analyzed using the online bioinformatics ProtParam tool (http://web.expasy.org/protparam/). Homologous protein sequences of Trxs and TrxRs from various species were obtained from the NCBI database and aligned using DNAman6.0.3 software. Phylogenetic analysis was carried out using MEGA5.10 software. Finally, the tertiary protein structures of HaTrx2 and HaTrxR1 were predicted with the online server SWISS-MODEL and were modified with PyMOL-v1.3r1 software 51 .
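The physicochemical characterisation described here (molecular weight and theoretical pI) can also be reproduced programmatically; the sketch below uses Biopython's ProtParam module as a stand-in for the ExPASy ProtParam tool. The peptide is a generic thioredoxin-like placeholder containing a CGPC motif, not the actual HaTrx2 or HaTrxR1 sequence.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder peptide; a real analysis would use the 107-aa HaTrx2 or
# 523-aa HaTrxR1 protein translated from the cloned ORF.
peptide = "MVKQIESKTAFQEALDAAGDKLVVVDFSATWCGPCKMIKPFFHSLSEKYSNVIFLEVDVDDCQDVASECEVK"

analysis = ProteinAnalysis(peptide)
print(f"Length: {len(peptide)} aa")
print(f"Molecular weight: {analysis.molecular_weight() / 1000:.2f} kDa")
print(f"Theoretical pI: {analysis.isoelectric_point():.2f}")
```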
Developmental analysis and tissue distribution of HaTrx2 and HaTrxR1. To examine the temporal expression profiles of HaTrx2 and HaTrxR1, H. armigera samples were collected at different developmental stages, including eggs; 24 h larvae of the first, second, third, and 4th instar; 0, 24, 48, 72, 96, and 120 h larvae of the 5th instar; 0, 1, 3, 5 and 9 day pupae; and 1 day adults (equal numbers of females and males). To analyze the spatial expression patterns of HaTrx2 and HaTrxR1, the tissues of the 5th instar 48 h larvae were collected, including the head, epidermis, fat body, hemocytes, midgut, Malpighian tubules, salivary glands, and central nervous system (CNS) 52 . Each sample was repeated three times and immediately stored at − 80 °C for total RNA extraction.
Effect of different types of stress on the expression of HaTrx2 and HaTrxR1.
For the temperature treatments, 0 °C (low temperature) and 37 °C (high temperature) were chosen 21,53 . The first-day larvae of the 5th instar were held for 12 h under the two temperatures, while the controls were maintained at 27 °C (normal temperature) 21 . In the UV treatment, the first-day larvae of the 5th instar were irradiated with 300 nm wavelength light, and the control larvae were kept under normal light for 12 h 53 . In the mechanical injury experiment, each larva was impaled 10 times with an insect pin (30 × 0.5 mm), and normal larvae were used as controls. In the E. coli infection treatment, E. coli cells were diluted in PBS and subsequently injected into the abdomens of first-day larvae of the 5th instar with a syringe, injecting 10 μ L of 1.0 × 10 5 E. coli cells per larva 21 . Control larvae were injected with an equal volume of PBS (10 μ L/larva). For challenge by M. anisopliae, M. anisopliae was first inoculated on potato dextrose agar plates and incubated at 26 °C for 7-10 days. The produced conidia were then scraped and diluted with sterile water containing 0.1% Tween− 80 to 1.9 × 10 8 conidia/μ L, which has been reported as the LC 50 concentration of H. armigera 54 . The first-day larvae of the 5th instar were injected with 5 μ L of the diluted M. anisopliae suspension, and control larvae were injected with an equal volume of PBS (5 μ L/larva) 55 . The treatment and control larvae from each group were collected at 0, 2, 6, and 12 h after treatment, then immediately stored at − 80 °C for further total RNA extraction. For the virus challenge, the first-day larvae of the 4th instar were inoculated with 10 μ L of NPV at a concentration of 1.0 × 10 6 PIB/mL per larva, and control larvae were inoculated with 10 μ L of sterile water. The treatment and control larvae were collected after 0, 24, 48, 72, 96, and 120 h and then immediately stored at − 80 °C for later total RNA extraction 46 . At least three independent biological replications were carried out in each of the adverse condition experiments, and at least 15 larvae were used in both the control and treatment replications.
Primer design. The primers of HaTrx2 and HaTrxR1 used for RT-PCR, real-time PCR, and dsRNA synthesis were designed with DNAClub software according to their sequences. The H. armigera ribosomal proteins S15 (RPS15) and L32 (RPL32) were used as internal controls for real-time PCR normalization. All of the primers were synthesized by Sangon Biotechnology Co., Ltd. (Shanghai, China) ( Table 1).
Total RNA extraction, cDNA synthesis, and real-time PCR amplification. Total RNA was extracted from the above samples using the TRIzol reagent (Invitrogen, USA) following the manufacturer's protocols. The purity and concentration of the RNA samples were determined three times with an ultraviolet spectrophotometer (Abs260) to reduce deviation. First-strand complementary DNA (cDNA) was synthesized from 1 μ g of total RNA following the instruction manual of the PrimeScript RT reagent kit with gDNA Eraser (Takara, Kyoto, Japan) and immediately stored at − 80 °C for later use. The cDNA samples were evaluated in triplicate.
qRT-PCR was performed using SYBR green supermix (TaKaRa) in a Bio-Rad CFX Connect Real-Time PCR System (Bio-Rad, USA) to determine the gene expression levels. The real-time PCR amplification conditions for HaTrx2, HaTrxR1, RPS15, and RPL32 are listed in Table S1. The reliability of the qRT-PCR results was confirmed through standard curve and melting curve analyses. Standard curves were generated using 10-fold dilution series of cDNA as a template for each treatment, employing a linear regression model (Figure S1) 56 . The efficiencies (E) of the primers used for qRT-PCR were calculated according to the equation E = (10^(−1/slope) − 1) × 100% 57 . The specificity of the amplified product was further confirmed through melting curve analysis from 65 °C to 95 °C and agarose gel electrophoresis. The mRNA expression of target genes was quantified using the comparative CT (cross threshold) method 58 . The CT value of the reference gene was subtracted from the CT value of the target gene to obtain ΔCT.
The normalized fold changes of target gene mRNA expression were expressed as 2^−ΔΔCT, where ΔΔCT = ΔCT(treated sample) − ΔCT(control).
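A minimal sketch of the two calculations described above: primer efficiency from the standard-curve slope, and relative expression by the comparative CT (2^−ΔΔCT) method. The slope and CT values are invented for illustration only.

```python
def primer_efficiency(slope):
    """E = (10**(-1/slope) - 1) * 100, from the slope of the standard curve."""
    return (10 ** (-1.0 / slope) - 1.0) * 100.0

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the comparative CT (2^-ddCT) method."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Invented example values.
print(f"Primer efficiency: {primer_efficiency(-3.32):.1f}%")
print(f"Fold change vs. control: {fold_change(22.1, 18.0, 24.5, 18.2):.2f}")
```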
Synthesis of dsRNA and detection of RNAi efficiency.
To synthesize the dsRNAs, gene-specific primers containing a T7 polymerase promoter sequence were used to amplify the target sequences via reverse transcription-PCR (RT-PCR) ( Table 1). The applied RT-PCR amplification conditions are listed in Table S1. The MEGAscript RNAi kit (Ambion) was employed to synthesize the dsRNAs according to the manufacturer's instructions. DNase and ribonuclease (RNase) were used to remove the template DNA and single-stranded RNA from the transcription reaction. dsRNAs were purified with MEGAclear columns (Ambion) and eluted with diethyl pyrocarbonate (DEPC)-treated nuclease-free water. The purity and concentration of dsRNA were then measured via ultraviolet spectrophotometry (Table S2) and gel electrophoresis. As a negative control, dsRNA of enhanced green fluorescent protein (EGFP) was also synthesized.
To evaluate the effects of RNAi on gene expression, first-day 4th instar larvae that had been inoculated with NPV (a total of 10⁴ PIB per larva) 48 h earlier were injected with 15 μg of HaTrx2, HaTrxR1, or EGFP dsRNA, respectively. Whole-body samples were collected at 24, 48, and 72 h after dsRNA injection and then used for total RNA extraction and real-time PCR analysis. To determine the relation between HaTrx2 and HaTrxR1, HaTrxR1 expression was measured 48 h after HaTrx2 dsRNA injection, and HaTrx2 expression was likewise measured 48 h after HaTrxR1 dsRNA injection.
Measurement of ROS production and lipid peroxidation.
To study the effect of RNAi on ROS generation in hemocytes, newly molted 4th instar larvae were fed an NPV-contaminated diet (10 μL of NPV at a concentration of 1.0 × 10⁶ PIB/mL per larva) as the method of NPV challenge. At 48 h after NPV infection, the infected larvae were injected with dsRNA of EGFP, HaTrx2, or HaTrxR1, respectively. At 48 h after dsRNA injection, hemolymph from each group was collected from H. armigera and centrifuged immediately at 4000 × g at 4 °C for 10 min to isolate hemocytes. ROS production was then measured with a Reactive Oxygen Species Assay Kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). Hemocytes were incubated with DCFH-DA (2′,7′-dichlorofluorescin diacetate) at a final concentration of 10 μM for 20 min. Hemocyte morphology was observed using an Olympus BX61 (Olympus, Tokyo, Japan) laser scanning confocal microscope. ROS production in hemocytes was measured fluorometrically at excitation and emission wavelengths of 488 and 525 nm, respectively. Because ROS damage usually causes lipid peroxidation in organisms, MDA, a terminal product of lipid peroxidation, was measured to evaluate the degree of lipid peroxidation in the hemolymph using an MDA Assay Kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China).
qRT-PCR analysis of virus gDNA abundance at 48 h post NPV infection and after 48 h of dsRNA injection.
qRT-PCR was used to quantify virus abundance in NPV-infected larvae at 48 h post NPV infection and at 48 h after dsRNA injection, using primers specific to the polyhedrin gene (Accession no. NC_002654.2) of HaSNPV. First-day 4th instar larvae were inoculated with 10 μL of NPV at a concentration of 1.0 × 10⁶ PIB/mL per larva as described above. Whole-body samples were collected at 0, 24, and 48 h after NPV infection. At 48 h after NPV infection, dsRNA of EGFP, HaTrx2, or HaTrxR1 was injected into the NPV-infected larvae, respectively. Samples from each treatment were collected at 48 h after dsRNA injection. Genomic DNA was extracted and used for qRT-PCR as described 59 . The H. armigera actin gene (Accession no. HM629437.1) was used as the housekeeping gene for normalization of the virus gDNA quantification data.
RNA interference and survival assay.
To determine the effects of HaTrx2 or HaTrxR1 knockdown on the susceptibility of H. armigera larvae to NPV infection, two inoculation doses (10 μL of NPV at a concentration of 1.0 × 10⁶ PIB/mL or 1.0 × 10⁷ PIB/mL per larva) were selected 60 .
First-day 4th instar larvae were inoculated with 10 μL of NPV at a concentration of 1.0 × 10⁶ PIB/mL per larva (the "NPV1" group) according to the method described above for the virus challenge. Control larvae were inoculated with 10 μL of sterile water (the "CK" group) or injected with 10 μL of DEPC solution (the "DEPC" group). At 48 h after NPV inoculation, 15 μg of HaTrx2 dsRNA (the "NPV1 + dsHaTrx2" group), 15 μg of HaTrxR1 dsRNA (the "NPV1 + dsHaTrxR1" group), or 15 μg of EGFP dsRNA (the "NPV1 + dsEGFP" group) was injected into the proleg of each H. armigera larva using a capillary microsyringe. In the same way, first-day 4th instar larvae were inoculated with 10 μL of NPV at a concentration of 1.0 × 10⁷ PIB/mL per larva (the "NPV2" group). At 48 h after NPV infection, 15 μg of HaTrx2 dsRNA (the "NPV2 + dsHaTrx2" group), 15 μg of HaTrxR1 dsRNA (the "NPV2 + dsHaTrxR1" group), or 15 μg of EGFP dsRNA (the "NPV2 + dsEGFP" group) was injected into the proleg of each H. armigera larva. The number of dead larvae was observed and recorded in each group until the larvae pupated. At least 30 larvae were included in each replicate, and every treatment was replicated three times.
Statistical analysis. The real-time PCR experiments and RNAi experiments were both carried out with three independent replications, and the results are presented as the means ± standard deviation (SD). Statistically significant differences in gene expression observed in the real-time PCR assays are denoted by *(0.01 < p < 0.05) and **(p < 0.01), as determined through pairwise Student's t-test analysis. The mortality rate was analyzed using ANOVA followed by Tukey's HSD multiple comparison test in SPSS 17.0 software to detect statistically significant differences between groups (p < 0.05).
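For readers who prefer a scriptable alternative to SPSS, the sketch below reproduces the two kinds of tests described above with SciPy: a pairwise Student's t-test for expression data and a one-way ANOVA followed by Tukey's HSD for mortality rates. All group values are placeholders, and scipy.stats.tukey_hsd requires SciPy 1.8 or later.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Placeholder relative-expression values, three biological replicates per group.
control = rng.normal(1.0, 0.1, 3)
treated = rng.normal(2.5, 0.2, 3)
t, p = stats.ttest_ind(treated, control)
print(f"t-test: t = {t:.2f}, p = {p:.4f}")

# Placeholder mortality rates (%) for three groups, compared by one-way ANOVA
# followed by Tukey's HSD multiple comparisons.
groups = [rng.normal(mean, 5.0, 3) for mean in (40.0, 60.0, 75.0)]
f, p = stats.f_oneway(*groups)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
print(stats.tukey_hsd(*groups))
```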
From graveyard to graph
The technological developments in the field of textual scholarship lead to a renewed focus on textual variation. Variants are liberated from their peripheral place in appendices or footnotes and are given a more prominent position in the (digital) edition of a work. But what constitutes an informative and meaningful visualisation of textual variation? The present article takes visualisation of the result of collation software as point of departure, examining several visualisations of collation output that contains a wealth of information about textual variance. The newly developed collation software HyperCollate is used as a touchstone to study the issue of representing textual information to advance literary research. The article concludes with a set of recommendations in order to evaluate different visualisations of collation output.
Introduction
Scholarly editors are fond of the truism that the detailed comparison ('collation') of literary texts is a tiresome, error-prone, and demanding activity for humans and a task suitable for computers. Accordingly, the past decades have borne witness to the development of a number of software programs which are able to collate large numbers of texts within seconds, thus significantly advancing the possibilities for textual research. These developments have led to a renewed focus on textual variation, liberating variants from their peripheral place in appendices or footnotes and giving them a more prominent position in the edition of a work. Still, automated collation continues to engross researchers and developers, as it touches upon universal topics including (but not limited to) the computational modelling of humanities objects, scholarly editing theory, and data visualisation. The present article takes visualisation of collation results as its point of departure. We use the representation of the results of a newly developed collation tool, 'HyperCollate', as a use case to address the more general issue of using data visualisations as a means of advancing textual and literary research. The underlying data structure of HyperCollate is a hypergraph (hence the name), which means that it can store and process more information than string-based collation programs. Accordingly, HyperCollate's output contains a wealth of detailed information about the variation between texts, both on a linguistic/semantic level and a structural level. It is a veritable challenge to visualise the entire collation hypergraph in any meaningful way, but the question is, really, do we want to? In particular, therefore, we investigate which representation(s) of automated collation results best clear the way for advanced research into textual variance.
The article is structured as follows. After a brief introduction of automated collation immediately below, we define a list of textual properties relevant for any study into the nature of text. We then consider the strengths and weaknesses of the prevailing representations of collation output, which allows us to define a number of requirements for a collation visualisation. Subsequently, the article explores the question of visual literacy in relation to using a collation tool. Since visualisations function simultaneously as instruments of study and as means of communication, it is vital they are understood and used correctly. In line with the idea of visual literacy, we conclude with a number of recommendations to evaluate the visualisations of collation output. The implications of creating and using visualisations to study textual variance are discussed in the final parts of the article. Before we go on, it is important to note that we define 'textual variance' in the broadest sense: it comprises any differences between two or more text versions, but also the revisions and other interventions within one version. Indeed, we do not make the traditional distinction between 'accidentals' and 'substantives'. This critical distinction is the editor's to make, for instance by interpreting the output of a collation software program.
Automated collation
Collation at its most basic level can be defined as the comparison of two or more texts to find (dis)similarities between or among them. Texts are collated for different reasons, but in general, collation is used to track the (historical) transmission of a text, to establish a critical text, or to examine an author's creative writing process. Traditionally, collation has been considered an auxiliary task: it was an elementary part of preparing the textual material in order to arrive at a critically established text and not necessarily a part of the hermeneutics of textual criticism. The reader was presented with the end-result of this endeavour (a critical text), and the variant readings were stored in appendices or footnotes, the kind of repositories that would get so few visitors that they have been bleakly referred to as cemeteries (Vanhoutte 1999;De Bruijn 2002, 114). In the environment of a digital edition, however, users can manipulate transcriptions which are prepared and annotated by editors. Many digital editions have a functionality to compare text versions and, accordingly, collation has become a scholarly primitive, like searching and annotating text. The digital representation of the result of the comparison thus brings textual variants to the forefront instead of (respectfully) entombing them.
Properties of text
It's important to note that offering users the opportunity to explore textual variance in a digital environment is an argument an sich: it stresses that text is a fluid and intrinsically unstable object. And, as anyone who has worked with historical documents knows, these fluid textual objects often have complex properties, such as discontinuity, simultaneity, non-linearity, and multiple levels of revision. 1 The dynamic and temporal nature of textual objects means that they can be interpreted in more than one way, but existing markup systems like TEI/XML can never fully express the range of textual and critical interpretations. 2 Nevertheless, the benefits of 'making explicit what was so often implicit … outweighed the liabilities' of the tree structure (Drucker 2012), and as it happens, the textual scholarship community has embraced TEI/XML as a means of encoding literary texts. Expressing the multidimensional textual object within a tree data structure (the prevalent model for texts) requires a number of workarounds and results in an encoded XML transcription which contains neither fully ordered nor unordered information (Bleeker et al. 2018, 82). This kind of partially ordered data is challenging to process. As a result, XML files are often collated as strings of characters, inevitably leaving out aspects of the textual dynamics such as deletions, additions or substitutions. The conversion from XML to plain text implies that the multidimensional features of the text expressed by <del> and <add> tags are removed; the text is consequently flattened into a linear sequence of words. Only in the visualisation stage of the collation workflow do features like additions or deletions occur again (Fig. 1).
1 See Haentjens Dekker and Birnbaum (2017) for an exhaustive overview of textual features and the extent to which these can be represented in a computational model.
2 The TEI Guidelines offer the element <cert> to indicate the degree of certainty associated with some aspect of the text markup, but as Wout Dillen points out, this requires an elaborate encoding practice that is not always worth the effort (2015, 90), and furthermore the ambiguity is not always translatable to the qualifiers 'low', 'medium', and 'high'.
Although these versions of Krapp's Last Tape are compared on the level of plain text only, the alignment table in Fig. 1 also shows the in-text variation of witnesses 07 and 10, thus neatly illustrating the informational role of visualisations. The main objective for the development of the collation engine HyperCollate was to include textual properties like in-text variation in the alignment in order to perform a more inclusive collation and to facilitate a deeper exploration of textual variation. A look at the drafts of Virginia Woolf's Time Passes 3 offers a good illustration of some textual features we'd like to include in the automated collation. For reasons of clarity, we limit the collation input to two small fragments: the initial holograph draft 'IHD-155' (witness 1) and the typescript 'TS-4' (witness 2). Both fragments are manually transcribed in TEI/XML. The transcriptions below are simplified for reasons of legibility. A quick look at these fragments reveals that they contain linguistic variation between tokens with the same meaning as well as structural variation indicated by the markup. Here, the ampersand mark '&' in witness 1 and the word token 'and' in witness 2 constitute linguistic variation: two different tokens with the same meaning. Furthermore, witness 1 presents a case of in-text or intradocumentary variation: variation within a witness' text (see also Schäuble and Gabler 2016;Bleeker 2017, 63). If we look at the revision site that is highlighted in the XML transcription of witness 1, we see several orders in which we can read the text: including or excluding the added text; including or excluding the deleted text. In other words, there are multiple 'paths' through the text: the textual stream diverges at the point where revision occurs, indicated by the <del> element and the <add> element. When the text is parsed, the textual content of these different paths should be considered as being on the same level: they represent multiple, co-existing readings of the text. Intradocumentary variation can become highly complex, for instance in the case of a deletion inside a deletion inside a deletion, etc. The structural variation in this example becomes manifest if we compare the two witnesses: the excerpt in witness 1 is contained by one <s> element, while the phrase in witness 2 is contained by two <s> elements. However, structural variation does not occur only across documents: when an author indicates the start of a new chapter or paragraph by inserting a metamark of some sorts, this is arguably a form of structural intradocumentary variation.
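The idea of multiple reading paths can be made concrete with a small, constructed TEI-like fragment (not the Woolf transcription, which is not reproduced here); the sketch below simply enumerates every way of reading or skipping the revision sites.

```python
# Constructed example of intradocumentary variation: a <del> and an <add> open
# up several co-existing reading paths through a single witness.
import itertools
import xml.etree.ElementTree as ET

fragment = "<s>the night was <del>very </del>dark <add>and still </add>tonight</s>"
root = ET.fromstring(fragment)
sites = [el for el in root if el.tag in ("del", "add")]

def reading(choices):
    """Build one reading path; choices[i] says whether revision site i is read."""
    parts = [root.text or ""]
    for el, keep in zip(sites, choices):
        if keep:
            parts.append(el.text or "")
        parts.append(el.tail or "")
    return "".join(parts).split()

# Every combination of reading/skipping each site is one path through the text.
# (An editor would normally pair a <del> with its <add> as a single substitution.)
for choices in itertools.product([False, True], repeat=len(sites)):
    print(choices, reading(choices))
```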
To summarise, we can distinguish different forms of textual variance. Variation can occur on the level of the text characters (linguistic or semantic variation) and on the structure of the text (sentences, paragraphs, etc.). Furthermore, we distinguish between intradocumentary variation (within one witness) and interdocumentary variation (across witnesses). Arguably all forms are relevant for textual scholarship, but taking them into account when processing and comparing texts has both technical and conceptual consequences. These consequences have been discussed extensively elsewhere (Bleeker et al. 2018) and will be briefly repeated in section 5 below. The main goal of the present article is to focus on the question of visualisation. Assuming we have a software program that compares texts in great detail, including structural information and in-witness revisions, how can we best visualise its output? First and foremost, the additional information (structural and linguistic, intradocumentary and interdocumentary) needs to be visualised in an understandable way. The visualisations can be useful for a wide range of research objectives, such as (1) finding a change in markup indicating structural revision like sentence division, (2) presenting the different paths through one witness and the possible matches between tokens from any path, (3) tracing complex revisions, like a deletion within a deletion within an addition, (4) studying patterns of revision, and so on. This begs the question: is it even possible or desirable to decide on one visualisation? Is there one ultimate visualisation that reflects the dynamic, temporal nature of the textual object(s) by demonstrating both structural and linguistic variation on an intradocumentary and interdocumentary level? The existing field of Information Visualisation can certainly offer inspiration, but simply adopting its methods and techniques will not suffice, since it deals primarily with objects which are 'self-identical, self-evident, ahistorical, and autonomous' (Drucker 2012), adjectives which could hardly be applied to literary texts.
Existing Visualisations of collation results
Let us consider the various existing visualisations of collation output and explore to what extent they address the conditions outlined above. We can distinguish roughly five types of visualisation: alignment tables, parallel segmentation, synoptic viewers, variant graphs, and phylogenetic trees or 'stemmata'. A smaller example of a collation of two fragments from Woolf's A Sketch of the Past (holograph MS-VW-SoP and typescript TS1-VW-SoP) serves as illustration of the effect of the visualisations: Witness 1 (MS-VW-SoP): with the boat train arriving, people talking loudly, chains being dropped, and the screws <del>the</del> beginning, and the steamer suddenly hooting Witness 2 (TS1-VW-SoP): with the boat train arriving; with people quarrelling outside the door; chains clanking; and the steamer giving those sudden stertorous snorts These two small fragments are transcribed in plain text format and subsequently collated with the software program CollateX. Unless indicated otherwise, the result from this collation forms the basis for the visualisation examples below.
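For readers who want to reproduce this collation step, a run along these lines is possible with the CollateX Python package; the sketch below follows the package's tutorial-style interface, but argument names and defaults may differ between versions, and the cancelled 'the' in witness 1 is omitted because plain-text input cannot carry it.

```python
# Rough sketch of collating the two fragments above with the CollateX Python
# package (pip install collatex); interface details may differ across versions.
from collatex import *  # exposes Collation and collate, as in the CollateX tutorials

collation = Collation()
collation.add_plain_witness(
    "MS-VW-SoP",
    "with the boat train arriving, people talking loudly, chains being dropped, "
    "and the screws beginning, and the steamer suddenly hooting")
collation.add_plain_witness(
    "TS1-VW-SoP",
    "with the boat train arriving; with people quarrelling outside the door; "
    "chains clanking; and the steamer giving those sudden stertorous snorts")

# Prints an ASCII alignment table; other renderings (HTML table, SVG variant
# graph) can be requested through the output/layout arguments.
print(collate(collation, layout="vertical", segmentation=False))
```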
Alignment table
An alignment table presents the text of the witnesses in linear sequence (either horizontally or vertically), making it well-suited to a study of the relationships between witnesses on a detailed level, but less so to acquire an overview of patterns in revision. Note that 'aligned tokens' are not necessarily the same as 'matching tokens': two tokens may be placed above each other because they are at the same relative position between two matches, even though they do not constitute a match. For this reason, alignment tables often have additional markup (e.g. colours) to differentiate between matches and aligned tokens. The arrangement of the tokens is also one of the advantages of an alignment table: it shows at first glance the variation between tokens at the same relative position. In other words, this representation indicates tokens which match on a semantic level, such as synonyms or fragments with similar meanings, such as 'talking loudly' and 'quarrelling outside the door' (Fig. 2).
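The distinction between matching tokens and merely aligned tokens can also be illustrated in a self-contained way; the sketch below uses Python's difflib as a stand-in aligner (it is not the algorithm used by CollateX or HyperCollate) and labels each column accordingly.

```python
# Self-contained illustration of 'match' versus 'aligned (variant)' cells,
# using difflib as a stand-in for a real collation aligner.
from difflib import SequenceMatcher

a = "and the screws beginning and the steamer suddenly hooting".split()
b = "and the steamer giving those sudden stertorous snorts".split()

for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
    cell_a, cell_b = " ".join(a[i1:i2]), " ".join(b[j1:j2])
    status = "match" if op == "equal" else "aligned (variant)"
    print(f"{cell_a!r:35} | {cell_b!r:35} | {status}")
```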
Ongoing research into the potential of an alignment table visualisation to explore intradocumentary variation (see Bleeker et al. 2017, visualisations created by Vincent Neyt) focuses on increasing the amount of information in an alignment table by incorporating intradocumentary variation in the cells. The alignment table in Fig. 3 shows that witness 1 (Wit1) contains several paths; matching tokens are displayed in red.
Synoptic viewers
A synoptic edition contains a visual representation of the collation results from the perspective of one witness, where the variants are indicated by means of a system of signs or diacritical marks. In contrast to an alignment table, a synoptic overview is better suited to examining overall patterns of variation. The following paragraphs discuss two ways of presenting textual variation synoptically: parallel segmentation and an inline apparatus. It may be clear that both are skeuomorphic in character, in the sense that they mimic the analogue examination and presentation of textual variants. This characteristic should not necessarily be considered negative, however, precisely because it is a tried and tested instrument for textual research.
Parallel segmentation
The term 'parallel segmentation' may be confusing, as it is also the name of the (TEI) encoding for a critical apparatus. In this context, parallel segmentation is used to describe the visualisation of textual variation in a side-by-side manner, often with the corresponding segments linked to one another. The quantity of online, open source tools for a parallel segmentation visualisation suggests that it is a popular way of studying textual variation (e.g. the Versioning Machine, 4 the Edition Visualisation Technology (EVT) project, 5 and the visualisation of Juxta Commons). 6 As Fig. 4 shows, parallel segmentation entails presentation of the witnesses as reading texts in separate panels which can be read vertically (per witness) or horizontally (interdocumentary variation across witnesses). Colours indicate the matching and non-matching segments.
To be clear: this parallel segmentation visualisation concerns the presentation of variance; it is not a collation method in and of itself. The segments are encoded by the editor, for instance using the TEI <app>/<lemm>/<rdg> construction to link matching segments. In contrast to the inline apparatus presentation (see 2b below), which uses a base text, parallel segmentation presents the witnesses as variations on one another. Most tools allow for an interactive visualisation in the sense that clicking on a segment in one witness highlights the corresponding segments in the other witness(es), as represented in Fig. 4.
Fig. 4 The witnesses are displayed side-by-side, with cancelled text in witness Version A represented by strikethrough, added text by green, and matching text by highlight. In this example, the collation has been carried out manually and transcribed according to the TEI Parallel Segmentation method (Schacht 2016, 'Introduction')
Critical or inline apparatus
Conventionally, an apparatus accompanies a critically established text which figures as a base text. The apparatus is made up of a set of notes containing variant readings, often recorded in some shorthand using diacritical signs, witness sigla, and some context. Variant readings encoded according to the TEI guidelines can be generated as such footnotes, or the reader can select certain readings to be displayed or ignored. Alternatively, an inline apparatus entails a synoptic visualisation of the variant readings in the form of diacritical marks inside a reading text. This kind of synoptic overview can draw the reader's attention to the places in the text that underwent heavy revisions. A classic example of a synoptic visualisation is found in the Ulysses edition (Joyce 1984-1986), a presentation format developed by Hans Walter Gabler (Fig. 5). The clear advantage of a digital synoptic edition is that the diacritical signs can be replaced with visual indications which have a lower readability threshold than diacritical marks, such as different colours or a darker shade behind the tokens that vary in other witnesses (cf. the Faust edition).
Variant graph
A variant graph is a collection of nodes and edges. It is to be read from left to right, top to bottom, following the arrows. This reading order makes it a directed acyclic graph (DAG): it can be read in one order only, without 'looping' back. In some visualisations, the text tokens are placed on the edges (e.g. Schmidt and Colomb 2009); in others, they are placed in the nodes (e.g. CollateX; Fig. 6). In contrast to the alignment table, there is no 'visual alignment' in the variant graph: matching tokens are merged. Only the variant text tokens are made explicit; witness sigla indicate which tokens belong to which witness. By following a path over nodes and edges, users can read the text of a specific witness and see where it corresponds with and diverges from other witnesses. One of the main advantages of a variant graph is that it doesn't impose one single order: in the visualisation, no path through the text is preferred over the other. The variant graph thus facilitates recording and structuring non-linear structures in manuscript texts, making it easier to visualise layers of writing without preferring one over the other. Because the variant graph is capable of including more information than for instance an alignment table, it is a useful visualisation with which to analyse the collation outcome in detail.
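The structure of a variant graph can be sketched with a few lines of code; the miniature example below (a toy data structure, not the CollateX or HyperCollate implementation) merges matching tokens into shared nodes and records witness sigla on the edges.

```python
# A simplified variant graph: matching tokens become shared nodes, variant
# tokens get witness-specific nodes, and edges carry witness sigla.
from collections import defaultdict

class VariantGraph:
    def __init__(self):
        self.nodes = {0: "START", 1: "END"}
        self.edges = defaultdict(set)      # (from_id, to_id) -> {sigla}
        self._next = 2

    def add_node(self, label):
        self.nodes[self._next] = label
        self._next += 1
        return self._next - 1

    def add_path(self, sigil, node_ids):
        # Thread one witness through the graph, from START to END.
        for src, dst in zip([0] + node_ids, node_ids + [1]):
            self.edges[(src, dst)].add(sigil)

g = VariantGraph()
the, sun = g.add_node("the"), g.add_node("sun")          # shared (merged) nodes
rose = g.add_node("rose")                                 # only in witness A
came, up = g.add_node("came"), g.add_node("up")           # only in witness B
g.add_path("A", [the, sun, rose])
g.add_path("B", [the, sun, came, up])

for (src, dst), sigla in sorted(g.edges.items()):
    print(f"{g.nodes[src]!r} -> {g.nodes[dst]!r}  [{', '.join(sorted(sigla))}]")
```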
The vertical or horizontal direction of the variant graph depends on the tool or the preference of the user. Horizontally oriented variant graphs imitate to some extent the Western reading orientation (from left to right), while variant graphs that are vertically situated appear to anticipate the reading habits of 'homo digitalis' (from top to bottom). In both cases, longer witnesses result in endless scrolling and a loss of overview. This was the reason for the TRAViz project to insert line breaks, based on the assumption that online readers prefer vertical scrolling but also like to be reminded that the text in the variant graph derives from a codex format (Jänicke et al. 2014; Fig. 7).
The variant graphs of CollateX in the figures directly above are non-interactive by design (since they are visual renderings of a collation output). However, the usefulness of interactive visualisations has been positively noted in several contributions (e.g., Andrews and Van Zundert) and projects. TRAViz, for instance, lets users interact with the graph and adjust it to match their needs and interests, and the variant graphs generated by the Stemmaweb tool set 7 allow for their nodes to be connected, input to be adjusted, and edges to be annotated with additional information about the type of variance. Such features emphasise the visualisation's double function as a means of communication and a scholarly instrument: on the one hand, it allows the user to clarify and communicate her argument about textual variation. On the other, the possibility of adjusting the visualisation and thus the representation of variation foregrounds the idea that the output of a tool is open to interpretation.
Phylogenetic trees or stemmata
One final type of visualisation is the phylogenetic tree (also known as 'stemma codicum' or 'stemmata'). Stemmata are not a collation method: they are created by the scholar or generated based on collation output like alignment tables or variant graphs. For that reason, stemmata do not directly concern the visualisation of collation output, primarily because the phylogenetic tree is used to store and explore the relationships between witnesses (and not between tokens). Nevertheless, this kind of tree provides a valuable perspective on visualising textual variation on a macro level: even at first glance, the tree conveys a good deal of information. The arrangement of the nodes within a stemma is meaningful; nodes close together in the stemma imply a high similarity between the witnesses. Each node in a tree represents a witness, and the edges which connect the nodes represent the process of copying one witness to another (a process sensitive to mistakes and thus variation). Stemmata are traditionally rooted, the witness represented as root being the 'archetype', which implies that all witnesses derive from one and the same manuscript (Fig. 8). More recently, unrooted trees have been introduced that do not assume one 'ancestor' or archetype witness and simply represent relationships between witnesses (Fig. 9). 8 A visualisation method similar to (and probably inspired by) stemmata or phylogenetic trees is the genetic graph, in which the genetic relationships between documents related to a work are modelled (see Burnard et al. 2010, §4.2; Fig. 10). Nodes represent documents; the edges may be typed to indicate the exact relationship between documents (e.g. 'influence'), and they are usually directed so as to convey the chronology of the text's development. A genetic graph is also not a direct visualisation of collation output, but a visual representation of the editor's argument about the text's development and her construction of the genetic dossier. With this overview representation, the editor may point to the existence of textual fragments like paralipomena, which were previously ignored or delegated to footnotes, critical apparatuses, or separate publications.
7 Stemmaweb brings together several tools for stemmatology: https://stemmaweb.net/ (last accessed on 2018, April 27).
The kind of macrolevel visualisations provided by stemmata or genetic graphs present the necessary overview and invite more rigorous exploration. Diagrams, graphs, or coloured squares add new perspectives to the various ways in which we look at text.
Fig. 10 Genetic graph (Burnard et al. 2010), with the nodes A to Z representing different documents in the genetic dossier of a hypothetical work
HyperCollate
HyperCollate, a newly developed collation tool at the R&D department of the Humanities Cluster of the Dutch Royal Academy of Science, examines textual variation in an inclusive way using a hypergraph model for textual variation. HyperCollate is an implementation of TAG, the data model also developed at the R&D department (Haentjens Dekker and Birnbaum 2017). A discussion of the collation tool's technical specifications is not within the scope of the present article (see Bleeker et al. 2018); for now, it suffices to know that a hypergraph differs from traditional graphs, the edges of which can connect only two nodes with each other, because the edges in a hypergraph can connect more than two nodes with one another. These 'hyperedges' connect an arbitrary set of nodes, and the nodes in turn can have multiple hyperedges. Conceptually, then, the hyperedges in the TAG model can be considered as multiple layers of markup/information on a text. The hypergraph for variation used by HyperCollate is an evolved model based on the variant graph. By treating texts as a network, HyperCollate is able to process intradocumentary variation and store multiple hierarchies in an idiomatic manner. In other words, because HyperCollate doesn't require TEI/XML transcriptions to be transformed into plain text files, TEI tags indicating revision like <del> and <add> can be used to improve the collation result. HyperCollate accordingly uses valuable intelligence of the editor expressed by markup to improve the alignment of witnesses.
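The basic hypergraph idea can be illustrated with a few lines of code; the sketch below is only a conceptual toy, not the TAG data model actually used by HyperCollate.

```python
# A minimal hypergraph over text tokens: each hyperedge (a markup layer such as
# a sentence, a deletion or an addition) spans an arbitrary set of token nodes,
# and a single token may belong to several hyperedges at once.
tokens = {0: "The", 1: "sun", 2: "rose", 3: "came", 4: "up"}

hyperedges = {
    "s":   {0, 1, 2, 3, 4},   # one sentence spanning every token
    "del": {2},               # 'rose' was deleted ...
    "add": {3, 4},            # ... and 'came up' added in its place
}

def layers_of(token_id):
    """Return every markup layer (hyperedge) that covers a given token."""
    return [name for name, members in hyperedges.items() if token_id in members]

for tid, word in tokens.items():
    print(f"{word:5} -> {layers_of(tid)}")
```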
Since the internal data model of HyperCollate is a hypergraph, the input text can be an XML file and doesn't need to be transformed into plain text. The comparison of two data-centric XML files is relatively simple, and it is even a built-in feature of the oXygen XML editor, but as explained above, a typical TEI-XML transcription of a literary text with intradocumentary variation constitutes partially ordered information. In order to process this kind of information, HyperCollate first transforms the TEI-XML witnesses into separate hypergraphs and then collates the hypergraphs. Graph-to-graph collation ensures that the input text can be processed taking into account both the textual content and the structure of the text. For each witness, HyperCollate looks at the witness' text, the different paths through the witness' text, and the structure of the witness, and subsequently compares the witnesses on all these levels. Accordingly, the output of HyperCollate contains a plethora of information. Similar to CollateX, 9 a widely used text collation tool, the output of HyperCollate could be visualised in different ways (e.g., an alignment table or a variant graph). By default, HyperCollate's output is visualised as a variant graph, primarily because a variant graph does not have a single order so it is relatively straightforward to represent the different orders of the tokens as individual paths. The question is, how (and where) to include the additional information in the visualisations? A variant graph may be more flexible regarding the token order, but the nodes and edges can only contain so much extra information, as Fig. 12 below shows.
A favourable consequence of HyperCollate is that, in case of intradocumentary variation, each path through a witness is considered equally important. This feature is in stark contrast with current approaches to intradocumentary variation, which usually entail a manual selection of one revision stage per witness (see Bleeker 2017, 110-113). By means of illustration, let us take a look at another collation of two small fragments from Woolf's Time Passes containing intradocumentary structural variation. The fragments are manually transcribed in TEI/XML and simplified for reasons of clarity. The XML files form the input of HyperCollate. Witness 1 contains an interesting addition (highlighted): Woolf added a metamark and the number '2' in the margin. The transcriber interpreted the added number as an indication that the running text should be split up and a new chapter should be started, so she tagged the number with the <head> element. 10 This means that the tokens of this witness can be ordered in two ways: excluding the addition and including the addition. Furthermore, the <head> element in witness 1 is at the same relative position as the <head> element in witness 2, so that the two headers are a match (even though their content is not). Figure 11 shows the variant graph visualisation of the output. Note that the paths through the witnesses can be read by following the witness sigla on the edges (w1, w1:add, w2); the markup <head> is represented as a 'hyperedge' 11 on the text nodes. An alternative way of representing HyperCollate's output in a variant graph is by enclosing both linguistic and structural information within the text nodes (Fig. 12).
Fig. 12 Alternative visualisation of HyperCollate output, with each node containing the XPath-like information about the place of the text in the XML tree (e.g. the path /TEI/text/div/p/s/ indicates that the ancestors of a text node are, bottom up, an <s> element, a <p> element, a <div> element, the <text> element and the <TEI> element)
The visualisations of the collation hypergraph in Figs. 11 and 12 represent the collation output of two small and simplified witnesses. It may be clear that collating two larger TEI/XML transcriptions of literary text, each containing several stages of revisions and multiple layers of markup, results in a collation hypergraph that, in its entirety, cannot be visualised in any meaningful way. At the same time, the various types of information contained by the collation hypergraph are of instrumental value to a deeper study of the textual objects. For that reason, HyperCollate offers not one specific type but rather lets the user select from a wide variety of visualisations, ranging from alignment tables to variant graphs. In selecting the output visualisation, the user decides which information she prefers to see and which information can be ignored. She may consider an alignment table if she's primarily interested in the relationships between witnesses on a microlevel, or a variant graph if an insightful overview of the various token orders is more relevant to her research. Furthermore, she may decide what markup layers she wants to see: arguably knowing that every token is part of the root element 'text' is of less concern than detecting changes in the structure of sentences. Making such decisions does require the user to have a basic knowledge of the underlying dataset and a clear idea of what she's looking for.
Requirements for visualising textual variance
This overview allows us to draw a number of conclusions regarding the visualisation of textual variation and to what extent each visualisation considers the various dimensions of the textual object. We have seen that intradocumentary variation is as yet not represented by default; the editor is required to make certain adjustments to the visualisation. Alignment tables and parallel segmentation can be extended to some extent, for instance by using colours and visualising deletions and additions. Regular variant graphs may include intradocumentary variation if the different paths through the texts are collated as separate witnesses 12 ; only HyperCollate's variant graph output includes both intra- and interdocumentary variation. Structural variation is currently only taken into account by HyperCollate and consequently only visualised in HyperCollate's variant graph. While the added value of studying this type of variation may be clear, it remains a challenge to visualise both linguistic/semantic and structural variation in an informative and clear manner. Fig. 11 may clearly convey the structural difference between witness 1 and witness 2 (i.e., the <head> element), but the raw collation output contains much more information which, if included, would probably overburden the user. A promising feature of visualisations intended to further explorations of textual variation is interactivity. One can imagine, for instance, the added value of discovering promising sites of revision through a graph representation, zooming in, and annotating the relationships between the witness nodes. There is, in short, no single ideal visualisation of text. Instead, each visualisation highlights a different aspect of textual variance or provides another perspective on text. Each perspective puts another textual characteristic before the footlights, while (ideally) making users aware of the fact that there is much more happening behind the familiar scenes. As Tanya Clement argues, focusing on one aspect can be instrumental in our understanding of text, helping the user 'get a better look at a small part of the text to learn something about the workings of the whole' (Clement 2013, §3). Indeed, it seems that multiple and interactive representations (cf. Andrews and Van Zundert 2013;Jänicke et al. 2014;Sinclair et al. 2013) are a promising direction.
Visual literacy and code criticism
The process of visualising data is a scholarly activity in line with the process of modelling; hence the resulting visualisation influences the ways in which a text can be studied. Collation output can be visualised in different ways, which raises essential questions regarding the assessment and evaluation of visualisations. The function of a digital visualisation is two-fold: on the one hand, it serves as a means of communication and on the other hand it provides an instrument of research. The communicative aspect implies that visualisation is first and foremost an affair of the scholar(s) creating the visualisations. The diversity of visualisations, each of which highlights different aspects of the text, reflects the hermeneutic aspect inherent to humanist textual research. Thus, by using visualisation to foreground textual variation, editors are able to better represent the multifocal nature of text. In order to choose an appropriate representation of collation output, then, scholars need to know what argument they want to make about their data set, and how the visualisation can support that argument by presenting and omitting certain information. Accordingly, they can estimate the value of a visualisation for a specific scholarly task and expose the inevitable bias embedded in technology.
When a visualisation is used as an instrument of study and exploration, it is vital to be critical about its workings and its (implicit) bias. This includes an awareness of which elements the visualisation highlights and, just as important, which elements are ignored. As Martyn Jessop has pointed out, humanist education often overlooks training in 'visual literacy', which can be defined as the effective use of images to explore and communicate ideas (Jessop 2008, 282). Visual literacy, then, denotes an understanding of the fact that a visualisation represents a scholarly argument. Jessop identifies four principles that facilitate the understanding of a visualisation: aims and methods, sources, transparency requirements, and documentation (Jessop 2008, 290). The documentation of a visualisation of collation output, then, could describe what research objective(s) it aims to achieve, on what witnesses it is based, and how these witnesses have been transcribed, tokenized, and aligned. 13 Another suitable rationale for critically evaluating the visualisation process is offered by the domains of 'tool criticism' or 'code criticism' (Traub and van Ossenbruggen 2015; Van Zundert and Dekker 2017, 125). Tool criticism assumes that the code base of scholarly tools reflects certain scholarly decisions and assumptions, and it raises critical questions in order to further awareness of the relationships between code and scholarly intentions. Questions include (but are not limited to) 'is documentation on the precision, recall, biases and pitfalls of the tool available', or 'is provenance data available on the way the tool manipulates the data set?' (Traub and van Ossenbruggen 2015).
Indeed, when it comes to evaluating the visualisation of automated collation results, one may well ask to what extent these witnesses and the ways in which they have been processed by the collation tool are subject to bias and interpretation. Like transcription (and any operation on text for that matter), collation is not a neutral process: it is subject to the influence of the editor. This becomes clear if we look at the different steps in the collation workflow as identified by the Gothenburg model (GM; 2009). The GM consists of five steps: tokenisation, normalisation, alignment, analysis, and visualisation. For each step, the editor is required to make decisions, e.g. 'what constitutes a token', 'do I normalise the tokens and, if so, do I present the original and the normalised tokens', or 'what is my definition of a match and how do I want to align the tokens?' As Joris Van Zundert and Ronald Haentjens Dekker emphasise, not all decisions made by collation software are easily accessible to the user, simply because they are the result of 'incredibly complex heuristics and algorithms' (Van Zundert and Dekker 2017, 123). To illustrate this, we can look at the decision tree used by HyperCollate to calculate the alignment of two simple sentences.
The graphs in Figs. 13 and 14 are complementary and show all possible decisions the alignment algorithm of HyperCollate can take in order to align the tokens of witness A and witness B, and the likely outcomes of each decision. An evident downside of such trees is that they become very large very quickly. For that reason, we see them as primarily useful for editors keen to find out more about the alignment of their complex text.
Fig. 13 The collation of witness B against witness A, with potential matches indicated in red
Fig. 14 The decision tree for collating witness B against witness A. Chosen matches are indicated in bold, discarded matches rendered as strike-through; others are potential matches. Arrow numbers indicate the number of matches discarded since the root node (this number should be as low as possible). Red leaf nodes indicate a dead end, orange leaf nodes a 'sub-optimal' match, and green leaf nodes indicate an optimal set of matches
The GM pipeline is not strictly chronological or linear. Although automated collation does start with tokenization, not every user insists on normalising the tokens, and a step can be revisited if the outcome is considered unsatisfactory or not in line with the user's expectations. Though visualisation comes last in the GM model, this article has argued that it is surely not an afterthought to collation. In fact, the visual representation of textual variance entails an additional form of information modelling: editors are compelled to give physical form to an abstract idea of textual variation which exists at that point only in the transcription and (partly) in the collation result. Using the markup to obtain a more optimal alignment, as HyperCollate does, only emphasises this point: marking up texts entails making explicit the knowledge and assumptions that would otherwise have been left implicit. Visualising the markup elements, then, implies that these assumptions, and thus a particular scholarly orientation to text, are foregrounded.
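The locus of such decisions can be made concrete by writing the first GM steps as explicit functions; the choices below (whitespace tokenisation, lowercasing, punctuation stripping, a stock sequence matcher standing in for a real aligner) are illustrative assumptions, not the behaviour of CollateX or HyperCollate.

```python
# The first Gothenburg-model steps written out explicitly, to show where
# editorial decisions enter the pipeline; every choice here is illustrative.
import re
from difflib import SequenceMatcher

def tokenise(text):
    # Decision 1: what constitutes a token? Here, whitespace-separated units.
    return text.split()

def normalise(token):
    # Decision 2: which differences are noise? Here, case and punctuation.
    return re.sub(r"[^\w&]", "", token).lower()

def align(tokens_a, tokens_b):
    # Decision 3: what counts as a match? Here, equality of normalised forms.
    norm_a = [normalise(t) for t in tokens_a]
    norm_b = [normalise(t) for t in tokens_b]
    return SequenceMatcher(None, norm_a, norm_b).get_opcodes()

print(align(tokenise("chains being dropped,"), tokenise("chains clanking;")))
```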
Conclusions
The present article investigated several methods of representing textual variation: alignment tables, synoptic viewers, and graphs. Two small textual fragments containing in-text variation and structural variation formed the example input for the alignment table and the variant graph visualisation. The fragments were transcribed in TEI/XML and subsequently collated with CollateX and HyperCollate respectively. In addition, we looked at existing visualisations of the Versioning Machine and the Diachronic Slider. These visualisations were judged on their potential to represent different types of variance in addition to the regular interdocumentary variation: intradocumentary, linguistic, and structural. Visualising these aspects of text paves the way for a deeper, more thorough, and more inclusive study of the text's dimensions. We concluded that there is currently no ideal visualisation, and that the focus should not be on creating an ideal visualisation. Instead, we propose appreciating the multitude of possible visualisations which, individually, amplify a different textual property. This requires us to appreciate what a visualisation can do for our research goals and, furthermore, to evaluate its effectiveness. To this end, methods from code criticism and visual literacy can be of aid in furthering an understanding of the digital representations of collation output as rhetorical devices. We propose evaluating the usefulness of a visualisation on the basis of the following principles:
1) Interactivity. This may range from annotating the edges of a graph, adjusting the alignment by (re)moving nodes, to alternating between macro- and micro-level explorations of variance.
2) Readability and scalability. Especially in a case of many and/or long witnesses, alignment tables and variant graphs become too intricate to read: their function becomes primarily to indicate complex revision sites.
3) Transparency of the textual model. The visualisation not only represents textual variance, but simultaneously makes clear what scholarly model is intrinsic to the collation. It needs to be clear which scholarly perspective serves as a model for transcription and representation.
4) Transparency of the code. Visualisations represent the outcome of an internal collation process which is usually not available to the general user audience. A clear, step-by-step documentation of the algorithmic process helps users understand what scholarly assumptions are present in the code, what decisions have been made, what parameters have been used, and how these assumptions, decisions, and parameters may have influenced the outcome. Decision trees may be of additional use. This applies particularly to interactive visualisations: if it's possible to adjust parameters or filters, these adjustments need to be made explicit.
Digital visualisation is sometimes regarded as an afterthought in humanities research, or even considered with a certain degree of suspicion. Some consider it a mere technical undertaking, an irksome habit of some digital humanists who recently learned to work with a flashy tool. Yet if used correctly, these flashy tools may also function as instruments of study and research, which means they should be evaluated accordingly.
Within the framework of visualising collation output, visual literacy is key. Having a critical understanding of the research potential of visualisations facilitates our research into textual variance. After all, these representational systems produce an object which we use for research purposes; we need to take seriously the ways in which they do this. In addition to communicating a scholarly argument, digital visualisations of collation output foreground textual variation. The collation tool HyperCollate facilitates the examination of a text from multiple perspectives (some unfamiliar, some inspiring, some contrasting, but all of them highlighting a particular element of interest). This freedom of choice invites scholars to reappraise prevalent notions and continue exploring the dynamic nature of text in dialogue with other disciplines. Digital visualisations, then, give us a means to take variants out of the graveyard and into an environment in which they can be fully appreciated.
|
2019-07-14T07:01:41.362Z
|
2019-06-19T00:00:00.000
|
{
"year": 2019,
"sha1": "22e9664981541e54abc5a865eaf79f5122a25ec4",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s42803-019-00012-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "5ea0a40d40471d1e3bb9fe52ea1bf41503ac4c3f",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
40462531
|
pes2o/s2orc
|
v3-fos-license
|
Enhancing Ionic Conductivity of Bulk Single Crystal Yttria-Stabilized Zirconia by Tailoring Dopant Distribution
We present an ab-initio based kinetic Monte Carlo model for ionic conductivity in single crystal yttria-stabilized zirconia. Ionic interactions are taken into account by combining density functional theory calculations and the cluster expansion method and are found to be essential in reproducing the effective activation energy observed in experiments. The model predicts that the effective energy barrier can be reduced by 0.15-0.25 eV by arranging the dopant ions into a super-lattice.
Yttria-stabilized zirconia (YSZ) is a widely used electrolyte in solid oxide fuel cells (SOFC) and oxygen sensors because of its high ionic conductivity at high temperatures [1]. Driven by the need to reduce the operating temperature of SOFC, much of the current research effort focuses on the design of new solid electrolyte materials with significantly enhanced ionic conductivity at intermediate temperatures [2,3]. The presence of free surfaces in nanoscale thin films and interfaces in heteroepitaxial structures has been found to enhance ionic conductivity [4][5][6][7][8][9]. However, the effect of dopant distribution on the ionic conductivity of bulk single-phase electrolytes has largely remained unexplored, despite the fundamental importance of dopant-vacancy interaction in ionic transport [10,11] and the possibility of tailoring dopant distributions by novel deposition techniques [12].
Atomistic simulations have the promise to become a useful design tool for new electrolyte materials, by predicting the ionic conductivity of candidate structures and elucidating the fundamental transport mechanisms [11,[13][14][15][16][17]. Unfortunately they are still limited in their length and time scales. For example, to accurately describe the long-range ionic interactions in YSZ requires density functional theory (DFT) models with relatively large supercells. The high computational cost limits the time scale of ab initio molecular dynamics simulations to picoseconds [14]. Hence, a major challenge at present is to construct a kinetic Monte Carlo (kMC) model that not only can access the macroscopic time scale [16,17], but also retains the accuracy of DFT models in describing the ionic interactions. In the pioneering kMC model for YSZ [16], ionic interactions are ignored in the metastable states, i.e., all possible states are sampled with uniform probability. While it successfully predicts a maximum in the conductivity as a function of doping concentration, the predicted temperature dependence is significantly weaker than experiments, signaling the importance of ionic interactions. The lack of ionic interaction also makes this model unsuitable to predict the effect of dopant distribution on ionic conductivity.
In this letter, we develop a kMC model for oxygen vacancy diffusion in YSZ that faithfully captures the ionic interactions. DFT calculations with supercell sizes significantly larger than previous studies [16,18] are performed to accommodate long-range interactions, and the data are used to construct a cluster expansion (CE) model. kMC simulations using this model predict an effective activation energy that agrees better with experiments than the non-interacting model. The kMC simulations further predict that the maximum conductivity is achieved when the yttrium dopant ions are distributed as [100] lines and form a 2D rectangular super-lattice in the two other directions. The effective energy barrier in this structure is lower than in the random distribution by 0.15-0.25 eV.
Ionic conduction in YSZ occurs through oxygen anion diffusion by the vacancy mechanism. The ionic conductivity is the averaged effect of many vacancy jumps and can be predicted from a kMC simulation over a sufficiently long time. The degrees of freedom in our kMC model are the positions of the oxygen vacancies, which hop on the simple cubic anion sublattice of YSZ. At each kMC step, the probability rates of all vacancy jumps to their nearest neighbor positions are calculated by j = ν_0 exp(−E_b / (k_B T)), where k_B is Boltzmann's constant, T is temperature, and E_b is the activation energy barrier for each jump. ν_0 is a trial frequency and is set to 10^13 Hz [16]. At each step, only one event is selected based on the probability rates of all possible events [19]. From a long kMC simulation, the diffusion coefficient of the vacancies' center of mass is computed from the mean-square displacement. The ionic conductivity is then computed from the Nernst-Einstein relation [27].
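The event-selection logic of such a simulation can be sketched in a few lines; the rejection-free step below is a generic illustration of the scheme described above, with placeholder barriers rather than the fitted YSZ values.

```python
# Schematic rejection-free kMC step: rates follow j = nu0 * exp(-E_b / (kB*T)),
# one event is drawn with probability proportional to its rate, and time
# advances by an exponentially distributed waiting time. Barriers are placeholders.
import math, random

KB = 8.617e-5        # Boltzmann constant, eV/K
NU0 = 1.0e13         # trial frequency, Hz (as in the text)

def rate(E_b, T):
    return NU0 * math.exp(-E_b / (KB * T))

def kmc_step(barriers, T, rng=random):
    """Pick one vacancy jump and return (chosen event index, time increment)."""
    rates = [rate(E_b, T) for E_b in barriers]
    total = sum(rates)
    r = rng.random() * total
    acc, chosen = 0.0, len(rates) - 1
    for i, j_i in enumerate(rates):
        acc += j_i
        if r <= acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total   # exponential waiting time
    return chosen, dt

# Example: six candidate jumps of a single vacancy at 1800 K.
print(kmc_step([0.58, 0.58, 1.29, 0.74, 0.85, 0.58], T=1800.0))
```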
A fundamental input to the kMC simulation is the energy barriers for vacancy jumps, E_b, which depend on the ionic configurations around the jumping vacancy. In a previous model [16], E_b is assumed to depend only on the chemical species of the two cations closest to the jumping vacancy, as shown in Fig. 1(a). Because the energy barrier of every jump equals that of the reverse jump, one can show that all metastable states in this model must have identical energy. Hence we will refer to it as the non-interacting model. Experimental and computational data have suggested that interactions play an important role in ionic conduction [10,11]. To account for interactions, we use the kinetically resolved activation (KRA) model [20,21], in which E_b = E_b^0 + f(∆E), where E_b^0 is the "kinetically resolved" barrier when the two metastable states happen to have identical energy, and ∆E is the energy difference between the two states. We take the energy barriers in the non-interacting model [16] as our E_b^0. The function f is often approximated by f(∆E) = ∆E/2 [20,21]. Here we use a slightly better approximation for the function f by assuming that the energy landscape between the two metastable states has a sinusoidal shape when ∆E = 0, and is modified by a linear term when ∆E ≠ 0 [27]. Hence our task of specifying the energy barrier E_b is reduced to an accurate description of the energy difference ∆E.
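The two barrier models mentioned above can be sketched as follows; both functions are illustrative readings of the text (the linear KRA approximation and a numerically evaluated tilted-sinusoid landscape), not the authors' exact implementation.

```python
# KRA-type barrier E_b = E_b^0 + f(dE). The common approximation is f(dE) = dE/2;
# the 'sinusoidal' variant tilts a sinusoidal landscape by a linear term and
# takes the resulting maximum numerically. Both are sketches of the idea only.
import math

def barrier_linear(E0, dE):
    # Simple KRA approximation, clamped so the barrier is never unphysical.
    return max(E0 + 0.5 * dE, 0.0, dE)

def barrier_sinusoidal(E0, dE, samples=2001):
    # Landscape g(x) = E0 * (1 - cos(2*pi*x)) / 2 + dE * x on x in [0, 1];
    # the barrier is the maximum of g measured from the initial state g(0) = 0.
    xs = [i / (samples - 1) for i in range(samples)]
    g = [E0 * (1 - math.cos(2 * math.pi * x)) / 2 + dE * x for x in xs]
    return max(max(g), 0.0)

print(barrier_linear(0.58, 0.12), round(barrier_sinusoidal(0.58, 0.12), 3))
```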
Because the ionic interactions in YSZ are long ranged, they can only be captured accurately by DFT calculations in relatively large supercells. It is impossible to perform DFT calculations for all ionic configurations sampled by the kMC simulation. Instead, we use the cluster expansion method (CEM) to limit the necessary number of DFT calculations. In CEM, every metastable state in the kMC simulation can be uniquely mapped to a spin configuration {s_i} of an Ising model [18]. The energy as a function of ionic configuration can be expressed by a cluster expansion [22], E({s_i}) = Σ_α V_α φ_α, where V_α is called the effective cluster interaction (ECI) for cluster α, and φ_α = Π_{i∈α} s_i is the cluster function involving all spin variables belonging to cluster α. The DFT calculations [23] are performed in a supercell containing 108 cation sites (Zr or Y) and 216 anion sites (O or V_O), significantly larger than in previous studies [16,18], in order to accurately account for long-range Coulombic and elastic interactions. K-point sampling is limited to the Γ-point considering the large size of the supercell. The volume and the shape of the supercell are allowed to relax together with the ionic positions. A high energy cut-off of 520 eV is used to avoid Pulay stress. Each ionic configuration takes ∼10^4 CPU-hours to be fully relaxed.
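Evaluating the cluster-expansion energy for a given configuration is straightforward once the ECIs are known; the toy sketch below uses invented clusters and ECI values, not the nine fitted ECIs of the actual model.

```python
# Cluster-expansion energy E({s_i}) = sum_alpha V_alpha * prod_{i in alpha} s_i,
# evaluated for a toy configuration; clusters and ECI values are placeholders.
from math import prod

# Spins: +1 for one species on a site (e.g. Y on a cation site), -1 for the other.
spins = {0: +1, 1: -1, 2: -1, 3: +1}

# Each cluster is (ECI in eV, tuple of site ids): a point, a pair and a triplet.
clusters = [
    (0.020, (0,)),
    (-0.010, (0, 1)),
    (0.004, (1, 2, 3)),
]

def ce_energy(spins, clusters):
    return sum(V * prod(spins[i] for i in sites) for V, sites in clusters)

print(f"E = {ce_energy(spins, clusters):+.4f} eV")
```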
The number of clusters needs to be significantly truncated for robust fitting, otherwise the cluster expansion model can be overly-adapted to the fitted data set [24]. First, we only keep the clusters in which any two spins are separated by less than 1.5a_0, where a_0 is the lattice parameter of YSZ. Second, we only keep clusters that involve up to 3 spins. Accounting for the translational and rotational symmetries, 192 independent clusters survive this truncation. For further truncation, a Monte Carlo algorithm is used to select n_c clusters out of 192 possible clusters by minimizing the cross validation score [18]. To measure the predictive power of CEM, we separate the DFT data into two sets. Set I contains 100 data points and is used to fit the ECIs. Set II contains 40 data points and is used to benchmark the CEM's predictions. The root mean square difference between the DFT energies and the CEM's predictions per cation in Set I and Set II are defined as the error of fitting and the error of prediction, respectively.
These two errors have different dependence on n_c. For example, when n_c = 97, the error of fitting is 0.0006 eV while the error of prediction is 0.012 eV. The large difference between the two errors means that the fitted CEM has entirely lost its predictive power if n_c is too large. Only when n_c ≤ 9 are both errors the same, and they decrease with increasing n_c. Hence, in this work, the optimal choice is n_c = 9 [27], where both the error of fitting and the error of prediction are 0.005 eV, as shown in Fig. 2. This error is small enough for our kMC simulations and is smaller than in a previous study [18]. To our knowledge this is the first time the predictive power of an interaction model for YSZ has been demonstrated by monitoring the error of prediction. Fig. 2(b) shows the effective binding energy between a Y ion and an oxygen vacancy predicted by the CEM model. The preference of the oxygen vacancy for the second nearest neighbor site of Y is clearly seen, consistent with previous experimental and theoretical works [16,17]. The fitted CEM allows us to compute the energy difference ∆E between the two states before and after a vacancy jump, which modifies the energy barrier E_b of the jump. Using this energy barrier model, we performed kMC simulations in 3[100] × 3[010] × 3[001] YSZ supercells in which the doping concentration varies from 5% to 13%. The ionic conductivity at each doping concentration is computed by averaging over 40 randomly generated Y distributions. As shown in Fig. 3(a), the ionic conductivity (at 1800 K) is maximum at 8 mol%, consistent with earlier experimental [25,26] and theoretical results [16,17]. Fig. 3(b) is the Arrhenius plot of ionic conductivity at 8 mol% doping concentration. The predicted activation energy is 0.74 eV at high T and 0.85 eV at low T, in much better agreement with experiments (0.85-1.0 eV) [26] than the non-interacting model (0.59 eV) [16]. The remaining difference with experiments may be due to the error in E_b^0 taken from [16]. The neglect of activation entropy in vacancy jumps does not affect the conclusion because it does not change the slope of the Arrhenius plot.
The conductivity results shown in Fig. 3 are the averaged values over 40 random Y configurations, the standard deviation over which is about 19% of the average. This indicates that the ionic conductivity is sensitive to the spatial distribution of Y cations and poses the question: what is the optimal Y distribution that maximizes ionic conductivity? To answer this question, we have performed kMC simulations for a variety of Y distributions, in which the Y cations are segregated into either spherical clusters, (001) layers, or [100] rods. The simulation results suggest two design principles that ultimately guide us to the optimal Y distribution. When all Y cations in the supercell are segregated into a spherical cluster, the ionic conductivity is actually lower than the random distribution (by 27% at 1800 K), contrary to the prediction based on the non-interacting model [15]. This is because the interaction between Y cations and oxygen vacancies, as shown in Fig. 2, attracts the vacancies next to the cluster. To diffuse over long distances and contribute to the ionic conductivity, vacancies must detach from the Y clusters. This requires overcoming a binding energy of ∼ 0.12 eV, which reduces the ionic conductivity.
Here the reduction of ionic conductivity is caused by the increase of spatial variation of the potential energy. Based on this result, we can formulate design principle I: the optimal Y distribution should minimize the energy variation of the metastable states as oxygen vacancies jump along the conduction direction. In this work, we focus on conduction along the [100] (i.e. x) direction. A promising candidate structure is to have Y cations segregated into planes, so that each (001) cation layer is either completely filled by Y, or completely filled by Zr. Due to translational invariance, the potential energy from cation-vacancy interaction remains constant after an oxygen vacancy jumps in the [100] direction [28]. The layered structure can be fabricated using thin film deposition techniques such as PLD [12].
Unfortunately, the layered structure also has a lower conductivity than the random distribution (by 59% at 1800 K). This is surprising because one might expect enhanced conductivity, as the non-interacting model would predict, due to the existence of Y-free channels. The reduction in conductivity is caused by the segregation of oxygen vacancies to the two anionic layers immediately next to the Y layer. Because there is always a first nearest neighbor (1nn) Y, vacancy diffusion in this layer experiences a high energy barrier (with E_b^0 = 1.29 eV). Given that vacancies prefer to be the second nearest neighbors (2nn) of Y, as shown in Fig. 2, it is somewhat surprising that the vacancies segregate to the nearest anionic layer. This problem is resolved by noticing that when the vacancy becomes the 1nn of two Y cations, it becomes the 2nn to four Y cations on the same cation layer. This result motivates our design principle II: the optimal Y distribution should not induce a high oxygen vacancy density in the first nearest neighbor sites of any Y cations. It turns out that the design principles I and II can be simultaneously satisfied when Y cations are segregated into [100] lines. When Y lines are well separated from each other, kMC simulations show that the oxygen vacancy density is peaked at 2nn sites surrounding the Y lines.
To enhance conductivity, we would like to pack more Y lines per unit volume in order to enhance the overall vacancy density. But when the distance between Y lines becomes too small, oxygen vacancies start to occupy 1nn sites around Y, deteriorating the ionic conductivity.
We examined a variety of 2D structures formed by [100] Y-segregated lines. The optimal structure at 1800 K is a square lattice with periodicity 1.5a_0 in the y and z directions (structure A). Interestingly, the optimal structure at 500 K is different; it is a rectangular lattice with periodicity 1.5a_0 and 2a_0 in the y and z directions, respectively (structure B). In both structures, the vacancy concentration is high in fast diffusing channels (E_b^0 = 0.58 eV) away from 1nn sites of Y, see Fig. 4(a). Fig. 4(b) plots the temperature dependence of conductivity for structures A and B. Their activation energy is ∼0.6 eV, significantly lower than for the random distribution. Compared with the random Y distribution (at 8 mol%), the ionic conductivity of structure A is enhanced by a factor of 1.35 at 1800 K, 11 at 500 K and 86 at 300 K. For structure B, the enhancement factor is 22 at 500 K and 532 at 300 K. This result provides a theoretical upper limit of ionic conductivity that can be achieved by rearranging dopants in YSZ.
In summary, we have developed an ab initio-based kMC model of vacancy diffusion in bulk YSZ that accurately accounts for the ionic interactions. The predicted ionic conductivity shows much better agreement with experiments in its temperature dependence than the non-interacting model. The model predicts a strong dependence of the ionic conductivity on the spatial distribution of dopant cations. The maximum conductivity is reached when the Y cations are arranged into a rectangular superlattice of [100] lines. Fabricating this structure is challenging, but may be feasible with novel deposition techniques. The method presented here can be readily applied to other solid electrolytes, in which the optimal dopant microstructure may differ from that in YSZ and may be easier to synthesize.
Anterograde Axonal Transport in Neuronal Homeostasis and Disease
Neurons are highly polarized cells with an elongated axon that extends far away from the cell body. To maintain their homeostasis, neurons rely extensively on axonal transport of membranous organelles and other molecular complexes. Axonal transport allows for spatio-temporal activation and modulation of numerous molecular cascades, thus playing a central role in the establishment of neuronal polarity, axonal growth and stabilization, and synapses formation. Anterograde and retrograde axonal transport are supported by various molecular motors, such as kinesins and dynein, and a complex microtubule network. In this review article, we will primarily discuss the molecular mechanisms underlying anterograde axonal transport and its role in neuronal development and maturation, including the establishment of functional synaptic connections. We will then provide an overview of the molecular and cellular perturbations that affect axonal transport and are often associated with axonal degeneration. Lastly, we will relate our current understanding of the role of axonal trafficking concerning anterograde trafficking of mRNA and its involvement in the maintenance of the axonal compartment and disease.
INTRODUCTION
From the discovery of kinesin-1 (Vale et al., 1985) and cytoplasmic dynein in the late 20th century and their initial characterization as anterograde and retrograde motors, respectively (Hirokawa et al., 1990, 1991), substantial effort has been made to decipher their role in neuronal development, connectivity, and synaptogenesis. Since neurons are highly polarized cells with a heavily arborized dendritic network and an elongated axon that can extend over a meter away from the soma, they rely extensively on efficient intracellular transport for the targeting and sorting of proteins and organelles from the soma to their neurite network, where the transfer of information between presynaptic neurons and postsynaptic cells occurs (Südhof, 2018). The somatodendritic and axonal domains have distinct traffic properties and show selectivity towards specific populations of carrier vesicles (Farías et al., 2015). Indeed, most somatodendritic vesicles fail to enter the axonal compartment at the level of the axon initial segment (AIS), a highly ordered, specialized region of the proximal axon, which acts as a barrier to the diffusion of proteins and lipids between the two compartments (Farías et al., 2015). Long-range trafficking is largely performed by several motor proteins of the kinesin superfamily and by cytoplasmic dynein (Hirokawa and Tanaka, 2015; Reck-Peterson et al., 2018). Kinesins mostly deliver their cargoes toward the periphery, while dynein moves in the opposite direction, toward the center of the cell (Figure 1A).
Intracellular transport is a fundamental mechanism underlying a variety of neuronal processes, including the establishment of cell polarity, axon growth and regeneration, synaptogenesis, and synaptic transmission and plasticity; it has therefore been studied extensively in the past decades. Although the biochemical mechanisms of molecular motor-based transport are well characterized, many of the regulatory pathways remain poorly understood, particularly in connection with pathology. It is not surprising that axonal transport perturbations are often associated with severe neurodegenerative pathologies, though whether they are the direct cause or the result of these pathologies remains an open question. Indeed, the nervous system can be affected by a variety of adult-onset neurodegenerative diseases characterized by early synaptic deficits and neurite dysfunction, a phenomenon referred to as "dying back" (Brady and Morfini, 2017). Thus, axonal homeostasis is often affected well before degenerative symptoms manifest at the level of the neuronal soma. Several pieces of evidence show a correlation between mutations of components of the transport machinery (microtubules, molecular motors, and molecular adaptors) and the genesis of neurodevelopmental and neurodegenerative diseases (Maday et al., 2014; Beijer et al., 2019; Sleigh et al., 2019). Impairment of axonal transport has also been reported in a multitude of neurological disorders that are not directly linked to mutations of proteins belonging to the transport machinery (Sleigh et al., 2019).
Some cargoes are transported along axons anterogradely, some retrogradely, and some bidirectionally. Synaptic vesicles, neurofilaments (NFs), and cytosolic proteins are examples of cargoes transported in an anterograde fashion, while signaling endosomes, autophagosomes, and injury signals are transported retrogradely (Olenick and Holzbaur, 2019). Mitochondria, certain endosomal populations, lysosomes, and mRNAs are transported bidirectionally (Olenick and Holzbaur, 2019). Adaptor proteins selectively recruit molecular motors to specific cargoes, targeting them to different transport pathways, which are often interdependent if not convergent (Jean and Kiger, 2012). Interestingly, these routes of cargo transport in axons are also exploited by external pathogens such as viruses (Taylor and Enquist, 2015). Even though the two routes are often interdependent, as previously mentioned, we will concentrate on the mechanisms of anterograde axonal transport of membrane-bound and membrane-less organelles in neuronal physiology, focusing on several key aspects of axonal growth and synaptogenesis, as well as other cellular mechanisms, such as local mRNA translation and liquid phase separation (LPS), that are likely to be fundamental actors in the regulation of axonal homeostasis and function. We will also address the links between axonal transport dysfunction and neurodegeneration, focusing on a few neurodegenerative diseases as examples of how defects in anterograde axonal transport can result in neurodegeneration. Though outside the scope of this review, an extensive wealth of evidence links neurodegeneration and retrograde axonal transport; for extensive coverage of these pathologies and their link to intracellular transport, please refer to these comprehensive reviews (Schiavo et al., 2013; De Vos and Hafezparast, 2017; Beijer et al., 2019). We will also briefly discuss the contribution of the cytoskeleton as a necessary platform for the long-range trafficking of mitochondria, which, while moving bidirectionally, need to be addressed as they represent the main source of energy for intracellular transport.
CYTOSKELETAL ELEMENTS OF AXONAL TRANSPORT
Due to their extremely polarized morphology and their status as postmitotic cells, neurons need to maintain a solid structural cytoskeleton, composed of microtubules (MTs), intermediate filaments, and actin filaments. This structure is fundamental to neuronal function, and its disruption is associated with neurodegeneration (Beijer et al., 2019).
Active axonal transport of proteins and membranous organelles takes place along MTs (Weisenberg, 1972; Desai and Mitchison, 1997), onto which molecular motors of the kinesin superfamily (Vale et al., 1985; Hirokawa et al., 1989; Lawrence et al., 2004) and cytoplasmic dynein (Reck-Peterson et al., 2018) are loaded (Figure 1A). Axonal MTs are longitudinally aligned with their growing plus-ends directed towards the axon tip; a large number of kinesins move from the MT minus- to plus-end in a processive manner, while dynein goes in the opposite direction (Howard et al., 1989; Wang et al., 2015). In addition to MTs, NFs are the most abundant cytoskeletal component in axons and control axonal diameter (Grant and Pant, 2000). NFs are formed by neurofilament light (NF-L), medium (NF-M), and heavy (NF-H) chains, except in the peripheral nervous system, where they contain peripherin as well (Grant and Pant, 2000). While kinesins and dynein are MT-associated motors, a third family of molecular motors, the myosins, relies on actin filaments (Xiao et al., 2016; Beijer et al., 2019). Interestingly, myosin Va can couple MT- and actin filament-based transport via its interaction with the kinesin heavy chain and NF-L, thus helping to regulate cargo distribution across the cytoskeleton (Cao et al., 2004; Rao et al., 2011).
Neuropathologies Related to Cytoskeletal Defects
In light of their essential structural function in axons, NFs are critical for axonal transport. NF-L in particular has been shown to regulate NF integrity and axonal transport (Yates et al., 2009). Not surprisingly, alterations of cytoskeletal elements have been described in several neurodegenerative diseases in which either cytoskeletal proteins or their adaptors/regulators are mutated (Beijer et al., 2019). Perhaps one of the best examples of such pathologies is Charcot-Marie-Tooth disease (CMT), the most common hereditary neuropathy, characterized by distal muscular atrophy and sensory loss (Züchner and Vance, 2006). CMT subtype E (CMT2E) is associated with mutations affecting the integrity of the neuronal cytoskeleton, where mutant NF-L disrupts neurofilament assembly and axonal transport (Jordanova et al., 2003; Lancaster et al., 2018), which in turn perturbs mitochondrial distribution, causing mitochondrial accumulation within cell bodies and proximal axons (Brownlees et al., 2002). A recessive nonsense mutation was identified in an early-onset CMT patient, causing a nearly total loss of NF-L mRNA and the subsequent depletion of NF-L protein in the patient's cultured neurons (Sainio et al., 2018). Mutations of different functional NF-L domains were also shown to have distinct effects on filament assembly, with the Q333P mutation leading to reduced NF dimerization (Gentil et al., 2013), while the P8L mutation of the head domain affects NF-L phosphorylation, resulting in the destabilization of NF complexes (Brownlees et al., 2002).

FIGURE 1 | Kinesin-mediated anterograde transport during axon elongation and synaptogenesis. Anterograde microtubule-dependent movements of membranous organelles and RNA granules are supported by various plus-end-directed kinesin motors. Organelles such as mitochondria, vesicles, and RNA granules are transported from the soma toward the axon tip during axonal growth and synapse formation. In the absence of motor activity, some kinesins also contribute to MT depolymerization during growth cone retraction.
NF-H mutations have also been implicated in CMT. A frameshift variant of NF-H leading to translation of the 3′ UTR has been described in families affected by CMT (Rebelo et al., 2016) and shown to result in prominent intracellular protein aggregation, affecting motor neuron viability (Rebelo et al., 2016). These aggregates are recognized by the autophagic pathway, triggering caspase-3 activation and apoptosis (Jacquier et al., 2017).
While CMT has been associated with direct mutations of cytoskeletal proteins, disruption of MTs can also occur indirectly as a consequence of mutations of partner proteins that act as MT adaptors and/or interactors. Indeed, mutations of the small heat shock proteins HSPB1 and HSPB8 cause distal hereditary motor neuropathy (dHMN) and CMT and are associated with cytoskeletal abnormalities (d'Ydewalle et al., 2011; Irobi et al., 2012; Bouhy et al., 2018). The S135F and P182L mutations of HSPB1 were shown to decrease acetylated α-tubulin abundance, severely affecting axonal transport (d'Ydewalle et al., 2011). Furthermore, the HSPB1-P182L mutation affects the assembly and transport of NFs, leading to the formation of intracellular aggregates that include NF-M (Ackerley et al., 2006).
Though the list of neuronal pathologies displaying cytoskeletal defects is constantly growing, we would like to discuss briefly two additional diseases, as an example of pathologies where alteration of cytoskeletal elements is a hallmark of the disease.
Hereditary spastic paraplegia (HSP) is a pathology that leads to axonal degeneration in the corticospinal tracts and, to a lesser extent, in the dorsal column fibers (Shribman et al., 2019). HSP offers perhaps one of the strongest examples of the correlation between defective axonal transport and neurodegeneration (Dion et al., 2009), since most of the genes implicated in HSP encode proteins engaged in intracellular trafficking. The most prevalent form of autosomal dominant HSP stems from point mutations or deletions in the SPG4 gene encoding spastin, a protein involved in MT severing (Roll-Mecak and Vale, 2008). Spastin deletion in mice resulted in defective axonal trafficking, manifested as the accumulation of organelles and NFs into focal swellings found exclusively in axonal regions exhibiting fast transitions between MT stabilization states (Tarrade et al., 2006). Furthermore, spastin mutants fail to sever MTs, leading to the mislocalization of intracellular organelles (McDermott et al., 2003). A spastin isoform has also been shown to significantly impair fast axonal transport (Solowska et al., 2008) via the activation of kinases and phosphatases that play a major role in regulating motor protein binding to MTs and cargoes (Leo et al., 2017). Alteration of MT bundling could also contribute to the disease, since spastin has been described to bundle MTs in vitro (Salinas et al., 2007).
Interestingly, NFs are used as clinical biomarkers, both in sporadic disease and in clinical trials, for several neurodegenerative diseases, including ALS (Loeffler et al., 2020). Indeed, accumulation of intermediate filament proteins, including peripherin, is a common pathological feature in both sporadic and familial ALS (Figlewicz et al., 1994; Tomkins et al., 1998; Al-Chalabi et al., 1999; Gros-Louis et al., 2004). NF-H side-arm phosphorylation has been reported to slow down the axonal transport of NFs by increasing their pausing (Ackerley et al., 2003). Alterations in the stoichiometry of NF subunits have been linked to ALS, while NF side-arm phosphorylation is induced by excitotoxic glutamate-mediated activation of JNK, p38, and CDK-p25 kinases (Bajaj and Miller, 1997; Ackerley et al., 2000, 2004). Overexpression of NF-H, NF-L, or peripherin in mice recapitulated the pathological features of the disease (Collard et al., 1995; Millecamps et al., 2006). Also, TAR DNA-binding protein 43 (TDP-43), one of the key proteins identified in the neuronal inclusions of ALS patients, can interact with the neuronal cytoskeleton (reviewed in Oberstadt et al., 2018; Hergesheimer et al., 2019), has been shown to alter the stability of NF-L mRNA when mutated (Volkening et al., 2009; Prasad et al., 2019), and impairs the trafficking and anterograde transport of messenger ribonucleoprotein (mRNP) granules (Alami et al., 2014). Furthermore, loss-of-function mutations in the tubulin alpha 4A protein (TUBA4A) that disrupt MT stability and diminish repolymerization have been documented in familial ALS cases, though their impact on axonal trafficking has not been fully elucidated yet. However, since MT stability is central to axonal trafficking, it is likely to be detrimental.
MOLECULAR DRIVERS OF ANTEROGRADE AXONAL TRANSPORT AND THEIR ROLE IN NEURODEGENERATIVE DISEASES

The Kinesin Family of Molecular Motors
A total of 45 genes organized into 15 families are associated with kinesins (also called KIFs) in the human genome (Miki et al., 2001; Lawrence et al., 2004; Hirokawa and Tanaka, 2015; Nabb et al., 2020; Table 1). Kinesin-1, kinesin-2, kinesin-3 and, to a lesser extent, kinesin-4 subfamily members are implicated in both fast (50-400 mm/day) and slow (less than 8 mm/day) axonal transport (Maday et al., 2014). Fast axonal transport traffics membranous organelles, proteins, and mRNA granules, while slow axonal transport moves MT/NF fragments and other cytosolic proteins necessary for the establishment of neuronal polarity, axon growth, and synapse formation (Hirokawa and Tanaka, 2015; Nabb et al., 2020). Plasma membrane proteins, which generally originate in the rough endoplasmic reticulum at the level of the neuronal soma, must also be delivered peripherally by specialized transport vesicles and be sorted separately, depending on their axonal or dendritic localization (Bentley and Banker, 2016; Nabb et al., 2020). Kinesin complexes are composed of a globular motor domain, which binds and moves along the MT lattice upon ATP hydrolysis (Hua et al., 1997; Schnitzer and Block, 1997; Kon et al., 2005; Wang et al., 2015), and a tail domain that contributes to motor auto-inhibition and to the recruitment of various cargoes, either directly or through interaction with intermediate scaffolding complexes (Hirokawa et al., 2010). It has been reported that a single cargo can be associated with several motor proteins, and the resulting force produced by the ratio between plus-end- and minus-end-directed motors might determine the final directionality of movement (Kural et al., 2005; Hendricks et al., 2010); however, only a few cargoes were addressed in this work, and it remains unclear whether these findings extend to other cargoes as well. The binding of cargoes to the motor complex via kinesin light chains in the soma, and their release at their final destination, often depends on phosphorylation/dephosphorylation of the motor (Horiuchi et al., 2007; Guillaud et al., 2008; Verhey and Hammond, 2009).
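To put the fast and slow transport rates above on a common footing, and to illustrate the multi-motor force-balance idea in the simplest possible terms, here is a small sketch (a toy model with made-up motor numbers and per-motor velocity, not a biophysical simulation):

# Convert axonal transport rates from mm/day to um/s.
def mm_per_day_to_um_per_s(v_mm_day):
    return v_mm_day * 1000.0 / 86400.0

for label, v in [("fast, lower bound", 50.0),
                 ("fast, upper bound", 400.0),
                 ("slow, upper bound", 8.0)]:
    print(f"{label}: {v} mm/day = {mm_per_day_to_um_per_s(v):.2f} um/s")

# Toy tug-of-war: the net cargo velocity is set by the imbalance between
# plus-end (kinesin) and minus-end (dynein) motors on the same cargo.
# The per-motor velocity of 1 um/s is an illustrative placeholder.
def net_velocity(n_kinesin, n_dynein, v_motor_um_s=1.0):
    total = n_kinesin + n_dynein
    if total == 0:
        return 0.0
    return v_motor_um_s * (n_kinesin - n_dynein) / total

print(net_velocity(3, 1))   # net anterograde movement
print(net_velocity(2, 2))   # stalled / bidirectional tug-of-war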
While the motor domain is highly conserved and well characterized in its structure and function (Sweeney and Holzbaur, 2018), the tail domain is more variable and less understood (Nabb et al., 2020). Most work on tail domain characterization has focused on kinesin-1 (KIF5A/B/C) and kinesin-3 (KIF1A) family members, which are the most studied motors responsible for anterograde transport in the axon and for which a large number of adaptor proteins mediating their binding to different populations of vesicles has been identified (Verhey et al., 2001; Setou et al., 2002; Wang and Schwarz, 2009; Fu and Holzbaur, 2014). Our knowledge of adaptor proteins for other kinesin families is less defined, and the complexity of these interactions is increased by the number of vesicle populations in neurons and the need for fine sorting, compounded by the extreme neuronal morphology. Indeed, selective anterograde transport in axons and dendrites is essential for the maintenance of neuronal function and polarity, as proteins and vesicles move in one of these compartments and are excluded from the other (Nabb et al., 2020). We will address some of the adaptor proteins involved in this sorting in the following sections, but more extensive and detailed coverage can be found in this review (Nabb et al., 2020).
The regulation of transport initiation is a critical aspect of kinesins' ability to mediate axonal transport. Indeed, free cytosolic kinesin-1 and kinesin-3 are blocked in an autoinhibited state and can only bind to MTs after a conformational change made possible by their interaction with their cargo (Guedes-Dias and Holzbaur, 2019). This binding depends on electrostatic interactions between kinesin and tubulin (Woehlke et al., 1997), and the interaction between motor and MTs seems to be stronger for kinesin-3 than for kinesin-1 (Okada and Hirokawa, 2000; Atherton et al., 2014; Soppina and Verhey, 2014; Lessard et al., 2019). The nucleotide state of MTs can also influence the binding of kinesin-3, which displays a higher affinity for GTP-like MTs (Guedes-Dias et al., 2019), while the preferences of kinesin-1 are still unclear (Nakata et al., 2011; Li et al., 2017). A well-known example of a MAP reported to inhibit the binding and motility of kinesin-1 is Tau (Dixit et al., 2008; Kellogg et al., 2018; Monroy et al., 2018). Interestingly, Tau mutations account for approximately 50% of cases of Frontotemporal Dementia and Parkinsonism linked to chromosome 17 (FTDP-17), which is characterized by progressive dementia with gradual functional decline (Siuda et al., 2014; Ikeda et al., 2019). However, a large percentage of familial FTDP-17 cases are also associated with concurrent mutation of the progranulin (GRN) gene, linked to a similar region on chromosome 17 (Forrest et al., 2018). MAP7, on the other hand, facilitates the binding of kinesin-1 to MTs via its interaction with the stalk domain (Monroy et al., 2018; Hooikaas et al., 2019). This interaction was recently shown to be important for axonal sorting of cargoes, as the MAP7D2 isoform preferentially localizes to MTs in the proximal axon region, where it recruits kinesin-1 (Pan et al., 2019).
Kinesin-Based Transport Role in Axonal Growth, Brain Wiring, and Neuronal Development
After the establishment of neuronal polarity, axonal elongation is sustained by the addition of membranes to the growing tips of neurites (Figure 1B). Indeed, plasma membrane precursors and vesicles transported by kinesin-driven anterograde axonal transport from the soma toward the growth cone are crucial to axonal development and wiring (Guedes-Dias and Holzbaur, 2019). KIF13B, for example, anterogradely transports PIP3-containing vesicles, regulating the establishment of neuronal polarity (Horiguchi et al., 2006). Knockdown of KIF13B in hippocampal neurons results in an "axonless" phenotype, and Par1b/MARK2-mediated phosphorylation of KIF13B was shown to mediate axon formation (Yoshimura et al., 2010). In PC12 cells, KIF2 deletion inhibits anterograde transport of membranous vesicles and associated receptors, negatively impacting neurite outgrowth (Morfini et al., 1997). KIF2-dependent translocation of the IGF-1 receptor stimulates membrane expansion and axonal assembly at the growth cone via exocytosis of plasmalemmal precursor vesicles in hippocampal neurons (Pfenninger et al., 2003). KIF3 and KIF4 have also been shown to transport membranous organelles through interactions with fodrin (Takeda et al., 2000) and an unidentified binding protein (Sekine et al., 1994), respectively. KIF3A mediates the transport of PAR-3 to the distal tip of the axon in hippocampal neurons, where disruption of PAR-3-KIF3A binding significantly impairs the establishment of neuronal polarity (Nishimura T. et al., 2004). Recently, anterograde axonal transport of lysosome-related organelles was shown to be critical for presynaptic biogenesis (Vukoja et al., 2018). Indeed, loss of the kinesin adaptor Arl8 was found to result in impaired delivery of essential components to the presynaptic site, leading to defects in neurotransmission (Vukoja et al., 2018).
Also, in addition to delivering plasma membrane to axon tips, axonal transport traffics cytoskeletal components and mitochondria, providing the structural framework and energy required to support axonal growth (Maday et al., 2014). In zebrafish, KIF5A transports mitochondria into sensory axons through the C-terminal interaction with the adaptors Trak1 and Miro1/2 (Campbell et al., 2014). Mutations in KIF5A significantly reduce the proportion and speed of anterogradely moving mitochondria, resulting in a deficit of axonal mitochondria, which promotes axonal degeneration. In addition to the fast axonal transport of mitochondria, KIF5A is also involved in the slow axonal transport of NFs. Indeed, NF-H, NF-M, and NF-L accumulate in the soma of peripheral sensory neurons in KIF5A inducible knock-out mice. Such somatic accumulation of neurofilament proteins results in axonal reductions, loss of large-caliber axons, and degeneration (Xia et al., 2003; Xiao et al., 2016). Interestingly, KIF5A thus participates in both fast (mitochondria) and slow (NFs) anterograde axonal transport, simultaneously contributing to the delivery of the energy and structural scaffolds necessary for the elongation and maintenance of axon growth.
Another kinesin, KIF4A, carries integrin β1 into immature axons. Indeed, it was shown that depletion of KIF4A by shRNA negatively impacts the level of integrin β1 in developing axons and reduces axon elongation in embryonic neurons (Heintz et al., 2014), highlighting the essential role of integrin transport in axonal elongation and initial wiring between immature neurons. It had been previously reported that KIF4A also acts as a regulator of neuronal survival through its interaction with and suppression of PARP1 activity in the nucleus; indeed, membrane depolarization induces Ca2+/CaMKII-mediated phosphorylation of PARP1, determining its activation after dissociation from KIF4A (Midorikawa et al., 2006). Activation of PARP1 protects mature neurons from apoptosis and allows KIF4A to translocate into the cytoplasm to participate in active transport (Midorikawa et al., 2006). Taken together, these observations support a dual function of KIF4A during neuronal development, with KIF4A promoting axonal elongation and connectivity in immature neurons while protecting mature neurons from apoptosis, thus stabilizing a functional neuronal network. KIF4 was also shown to transport anterogradely the P0 protein component of ribosomes along axons (Bisbal et al., 2009). Knockdown of KIF4 in dorsal root ganglion neurons leads to the accumulation of ribosomes in the soma and their disappearance from axons (Bisbal et al., 2009), negatively impacting axonal local protein translation.
Various cellular processes promote growth cone retraction and the degeneration of axonal collaterals and branches that fail to establish functional contacts. Kinesin-13 family members, including KIF2A and KIF2C, are important for the homeostatic regulation of neuronal connectivity and brain wiring. KIF2A in particular, while lacking detectable motor activity, acts as an MT depolymerizer in growth cones to suppress axon collaterals (Homma et al., 2003). Indeed, it has been shown that conditional knock-out of KIF2A promotes mossy fiber sprouting and dendro-axonal conversion of dentate gyrus (DG) cells, with aberrant over-extended dendrites gradually acquiring axonal properties in the DG. Thus, while lacking anterograde motor activity, KIF2A appears to be an essential regulator of neuronal connectivity and of the establishment of precise postnatal hippocampal wiring, by determining the pruning of growth cones that fail to connect to their postsynaptic targets (Figure 1B).
Axonal Trafficking During Synaptogenesis and Synaptic Transmission
In addition to the establishment and stabilization of neuronal connections, the formation and maintenance of functional synapses are also largely dependent on axonal transport mechanisms. Indeed, synaptic vesicle precursors (SVPs) are known to be transported anterogradely by members of the kinesin-3 family, such as KIF1A and KIF1B (Okada et al., 1995). In KIF1A knock-out mice, neurons accumulate SVPs in the soma and fail to establish normal synaptic connections (Yonekawa et al., 1998), while overexpression of KIF1A promotes the formation of presynaptic terminals (Kondo et al., 2012). SVPs are transported by KIF1A and KIF1B via either liprin-α or DENN/MADD scaffolding complexes (Miller et al., 2005; Niwa et al., 2008). After delivery to a presynaptic bouton, SVPs can be recycled directly in the terminal (Miller et al., 2005). KIF1A is also believed to contribute to the active transport of synaptic vesicles between neighboring presynaptic release sites, a pool of vesicles referred to as the synaptic vesicle "super pool" (Staras et al., 2010). In cultured giant presynaptic terminals (an axosomatic relay synapse in the auditory brainstem considered one of the largest mammalian excitatory synapses, where MT depolymerization significantly disrupts the fast directional transport of vesicles between neighboring release sites), KIF1A has been found to colocalize with two synaptic vesicle markers, synaptophysin and VGLUT1 (Guillaud et al., 2017). These observations suggest that, in mature synapses, KIF1A-mediated transport plays a significant role in the trafficking and delivery of SVPs and fully functional synaptic vesicles during synaptic transmission (Figure 1B). Indeed, the KIF1A homolog Unc-104 is involved in synapse maturation and synaptic transmission (Zhang et al., 2017). KIF1A and KIF1B also contribute to the anterograde transport of dense-core vesicles (DCVs) through interaction with liprin-α (Lo et al., 2011), in a manner regulated by Ca2+ (Stucchi et al., 2018) or through JNK-dependent phosphorylation of synaptotagmin-4 (Bharat et al., 2017). Interestingly, KIF1A associates with DCVs containing Chromogranin-A or BDNF, which move both anterogradely and retrogradely in axons, suggesting that KIF1A might remain attached to DCVs undergoing retrograde transport after the release of BDNF (Stucchi et al., 2018). The anterograde transport of BDNF-containing DCVs from the soma to the synapse is also mediated by KIF5 and its interaction with phosphorylated huntingtin, while their retrograde transport depends on non-phosphorylated huntingtin (Colin et al., 2008). The redundancy of DCV transport mechanisms highlights the importance of DCV targeting and accumulation in the presynaptic compartment and their putative roles in synapse maturation and homeostatic plasticity (Sorra et al., 2006; Tao et al., 2018).
Receptors and voltage-gated channels also need to be efficiently delivered to the synapse to guarantee synaptic transmission. Indeed, conditional KIF5A knock-out mice show behavioral deficits reminiscent of epilepsy, which correlate with a significant reduction in the surface expression of GABA receptors (Nakajima et al., 2012). KIF5A is reported to interact specifically with the GABA receptor-associated protein known to be involved in GABA receptor trafficking, suggesting an important role for KIF5A-mediated transport in inhibitory synaptic transmission. Additionally, the KIF5B stalk domain has been shown to interact directly with the voltage-gated sodium channel Nav1.8, and its overexpression promotes Nav1.8 accumulation and neuronal excitability in axons of DRG neurons (Su et al., 2013), suggesting that KIF5B is required for the anterograde transport and function of voltage-gated sodium channels under physiological conditions. The correlation between the increase in the transport of Nav1.8 and KIF5B in pathological conditions, however, needs further investigation (Bao, 2015), and the transport mechanisms of Nav1.8 and other sodium channels remain to be fully elucidated. KIF5B-syntabulin-mediated anterograde transport of mitochondria was also shown to be essential for synaptic maturation, basal and sustained neurotransmitter release, and short-term presynaptic plasticity in superior cervical ganglion (SCG) neurons (Ma et al., 2009). Syntabulin is a syntaxin-binding protein that links vesicles to the kinesin heavy chain and thus transports syntaxin-containing vesicles into neuronal processes; its impairment causes a reduction of mitochondria along the axon, correlating with an acceleration of synaptic depression and a slowdown of the recovery rate after synaptic vesicle depletion (Ma et al., 2009).
Neurodegenerative Diseases Linked to Kinesin Mutations
In support of their fundamental role in driving axonal transport, mutations of kinesin motors are associated with a spectrum of neurodegenerative diseases (Beijer et al., 2019; Figure 2). De novo mutations of KIF1A have been found in conjunction with cerebellar atrophy, spastic paraparesis, optic nerve atrophy, peripheral neuropathy, epilepsy and cognitive impairment (Citterio et al., 2015;Esmaeeli Nieh et al., 2015;Lee et al., 2015;Ylikallio et al., 2015;Cheon et al., 2017). Some of these mutations are critical for the structure and function of the motor domain and affect axonal transport (Klebe et al., 2012;Lee et al., 2015;Langlois et al., 2016;Samanta and Gokden, 2019). KIF1A was also found mutated in the hereditary sensory and autonomic neuropathy type II (HSANII), an autosomal-recessive disorder characterized by peripheral nerve degeneration (Rivière et al., 2011). More recently, a missense mutation in KIF1A has been shown to increase excitatory synaptic functions in hippocampal neurons and epileptic seizure-like activity in Zebrafish, indicating a direct link between disruption of KIF1A-mediated axonal transport and epileptogenesis (Guo et al., 2020).
KIF5A variants have also been implicated in neurodegenerative diseases such as CMT2, HSP, and ALS (Brenner et al., 2018; Citrigno et al., 2018; Filosto et al., 2018; Nam et al., 2018). Interestingly, the site of the mutation correlates with the clinical phenotype. Indeed, mutations in the motor or neck domain are associated with CMT2 and HSP, while a mutation of the KIF5A C-terminus and a mutation that affects splicing are linked to an intermediate, slowly progressive form of ALS (Brenner et al., 2018; Citrigno et al., 2018; Filosto et al., 2018; Nam et al., 2018). Several mutations of the KIF5A neck and motor domain leading to HSP have been characterized in detail in vitro and found to exhibit reduced ATPase activity, microtubule affinity, and gliding velocity, which affect the processivity and directionality of the motor and can result in reduced cargo flux and a consequently deficient synaptic supply (Ebbing et al., 2008; Goizet et al., 2009; Jennings et al., 2017; Dutta et al., 2018).
An autosomal dominant mutation of KIF1Bβ, Q98L, which decreases ATPase activity and motor motility, was initially reported to cause CMT2A in a limited number of pedigrees (Zhao et al., 2001). The lack of confirmation in additional families, however, cast some doubt on the relevance of the mutation (Drew et al., 2015). Recently, a novel KIF1Bβ mutation, Y1087C, was identified in connection with CMT2 (Xu et al., 2018). This mutation was shown to impair the binding between KIF1Bβ and the insulin-like growth factor 1 receptor (IGF1R), affecting IGF1R axonal transport, decreasing its exposure on the neuronal surface, and consequently negatively impacting insulin-like growth factor 1 (IGF-1) signaling, which is essential for neuronal development and survival (Xu et al., 2018). However, whether this mutation is causative of CMT2 or a polymorphism altering IGF1R trafficking is still a matter of debate, since the frequency of the Y1087C mutation is much higher than the total number of CMT2 cases.
Mitochondrial Trafficking
The homeostatic regulation of axonal growth, neuronal wiring, and synaptic transmission requires an extensive amount of energy and rapid protein turnover in axons, growth cones, and presynaptic terminals, which need to be supported by the local production of ATP and proteins along the axon and at the synapse. Axonal transport plays a key role in these phenomena. In addition to the aforementioned syntabulin and Trak1/Miro, RanBP2 (Cho et al., 2007) and FEZ1 (Ikuta et al., 2007) have also been reported to recruit KIF5B and KIF5C to mitochondria and regulate their mobility and trafficking in axons. Interestingly, abnormal co-aggregates of FEZ1 and kinesin-1 were described in the brains of mouse models of Alzheimer's disease, suggesting a perturbation of FEZ1-mediated synaptic protein delivery (Butkevich et al., 2016). The existence of several mitochondrial adaptor complexes reflects the importance of the axonal transport of mitochondria for the local production of ATP needed to sustain axonal functions (Saxton and Hollenbeck, 2012). Therefore, it is not surprising that, even in the absence of KIF5-mediated transport, a limited fraction of mitochondria is still transported by other kinesins. Indeed, KIF1Bα and KIF1C have been reported to contribute to mitochondrial transport through interaction with KBP (Nangaku et al., 1994; Wozniak et al., 2005), as has KLP6, an uncharacterized kinesin homolog that regulates both mitochondrial morphology and transport (Tanaka et al., 2011).
Transported axonal mitochondria need to remain functional to provide adequate energy support over long distances. Thus, mutations affecting the integrity of mitochondrial morphology and the dynamic balance between fission and fusion influence axonal transport (Beijer et al., 2019; Figure 2). Indeed, CMT2A, the most prominent subtype of CMT, is characterized by mutations of mitofusin 2 (MFN2), an outer mitochondrial membrane GTPase that plays a critical role in mitochondrial fusion (Verhoeven et al., 2006). MFN2 has been shown to interact with the Miro/Milton adaptor complex essential for mitochondrial mobilization along MTs; the mutant form disrupts the function of the adaptor complex, inducing mitochondrial clustering/aggregation along the axonal length (Baloh et al., 2007; Misko et al., 2010). Interestingly, both mutations in the Miro/Milton complex that mediate its interaction with MTs and NF-L mutants indirectly affect mitochondrial transport and localization (Ni et al., 2015). In addition to fusion, dysregulation of mitochondrial fission is also causative of CMT. Recessive mutations of the ganglioside-induced differentiation-associated protein 1 (GDAP1), a mitochondrial factor whose activity is dependent on the fission factors Fis1 and dynamin-related protein 1 (Drp1), determine a reduction in mitochondrial fission activity, while the dominant ones negatively impact mitochondrial fusion (Niemann et al., 2009).
Mitochondrial transport and function are also affected by alterations of the endoplasmic reticulum (ER) and its contacts with mitochondria, where Ca2+ exchange between the two organelles occurs. Indeed, disruption of the ER network has been shown to result in axonal degeneration (Yalçın et al., 2017). Mitochondrial Ca2+ uptake is required for correct intracellular signaling, homeostasis, and mitochondrial integrity and transport; therefore, mutations in Ca2+ channels also lead to mitochondrial dysfunction (Kumar et al., 2018). The integral ER membrane protein vesicle-associated membrane protein-associated protein B (VAPB), which is associated with ALS (Nishimura A. L. et al., 2004; Chen et al., 2010), interacts with the outer mitochondrial membrane, and its mutation impacts mitochondrial Ca2+ uptake and induces the formation of abnormal ER inclusions (De Vos et al., 2012). Interestingly, mutations of the ER fusion protein atlastin 3 (ATL3) have been identified in patients with hereditary sensory and autonomic neuropathy (Guelly et al., 2011; Fischer et al., 2014; Kornak et al., 2014). Defects in ATL3 result in an increased number of ER-mitochondria contact sites, augmented Ca2+ crosstalk between the two organelles, and a decreased number and motility of axonal mitochondria (Krols et al., 2019).
Mutations in tRNA synthetases, the enzymes that attach amino acids to their cognate tRNA molecules in the cytoplasm and mitochondria, affect mitochondrial function and have been associated with a number of human neurodegenerative diseases (Antonellis and Green, 2008; Spaulding et al., 2016). Indeed, dominant glycyl-tRNA synthetase (GARS) mutations have been described in inherited neuropathies such as CMT2D and dHMN with upper limb predominance (dHMN-V; Xie et al., 2007; Antonellis and Green, 2008). Interestingly, dominant GARS mutations impair neuronal mitochondrial metabolism and cause alterations of VAPB and mitochondrial calcium uptake (Boczonadi et al., 2018). While the disease does not seem to be caused by a loss of the canonical function of these enzymes (Storkebaum et al., 2009; Stum et al., 2011; Ermanoska et al., 2014), mutations, mostly of the cytosolic form of the tRNA synthetase, have been shown to result in a toxic gain of function that impairs the signaling output of different families of neurotrophic factor receptors (Stum et al., 2011; He et al., 2015; Sleigh et al., 2017a,b).
mRNA Axonal Trafficking
We have previously discussed how the correct arrangement of the cytoskeleton and the coordinated action of a cohort of molecular motors are essential for the establishment and maintenance of axonal biology. As axons depend on the delivery of proteins and organelles, the fast, local availability of proteins needed to sustain the axon's high turnover rate can also be supported by local translation. While mRNA transport and local protein translation in dendrites have been well documented, the mechanisms of axonal mRNA targeting and translation are still the subject of intense investigation. Indeed, several pieces of evidence have shown that axonally synthesized proteins support axon function, survival, and growth (Sahoo et al., 2018b).

FIGURE 2 | Schematic representation highlighting the association of RNA granule transport, neurofilaments (NFs), mitochondria, and kinesin motors with selected neurodegenerative diseases. In the case of mitochondrial defects, the mutated proteins underlying neurodegeneration are listed.
Early observations that growth cones detached from their cell bodies were still able to respond to guidance cues, in a manner dependent on calcium signaling and local protein synthesis, supported the existence of axonal translation (Campbell and Holt, 2001; Ming et al., 2002). The identity, concentration, and localization of the cue determine the extent and nature of the translational response (Brittis et al., 2002; Leung et al., 2006; Manns et al., 2012; Nédelec et al., 2012). Chemotrophic signals, for instance, are known to elicit mRNA transport into axons and growth cones. Indeed, Neurotrophin-3 (NT3) induces the targeting and translation of β-actin mRNA in growth cones, which correlates with an increase in growth cone protrusions (Zhang et al., 2001), and NGF triggers β-actin mRNA transport into axons (Willis et al., 2005). β-actin is also involved in calcium-mediated growth cone guidance, which is affected by inhibition of β-actin local synthesis or mislocalization of its mRNA (Yao et al., 2006; Welshhans and Bassell, 2011).
RNA-binding proteins (RBPs) recognize specific sequences located mostly in the 5′ and 3′ UTRs of mRNAs (emerging evidence implicates the coding region as well) and bind to kinesins or dynein to be transported into axons or dendrites; while 5′ UTR elements are often linked to translational regulation, 3′ UTR regions are essential for targeting to specific subcellular compartments (Hüttelmaier et al., 2005; Chatterjee and Pal, 2009; Merianda et al., 2013; Tushev et al., 2018). The aforementioned β-actin mRNA, for instance, is localized to growth cones by the RBP Zipcode-Binding Protein 1 (ZBP1; Yao et al., 2006). mRNAs, RBPs, and ribosomes are co-transported in large RNA granules, which have been linked to stress granules, where mRNA translation is actively repressed (Kanai et al., 2004; Sahoo et al., 2018a; Pushpalatha and Besse, 2019). These granules display anterograde and retrograde microtubule-based motor movements (Gumy et al., 2014). In addition to KIF5, KIF1Bβ might also be involved in mRNA transport, although the mechanism of interaction remains unclear (Lyons et al., 2009).
Axonal injuries are known to trigger local translation of mRNAs encoding proteins that initiate a regenerative transcriptional program in the nucleus through a retrograde signaling cascade originating from the site of injury (Hanz et al., 2003; Perlson et al., 2005; Yudin et al., 2008; Rishal and Fainzilber, 2014; Terenzio et al., 2018). Perturbation of these retrograde mechanisms can delay axonal regeneration and decrease neuronal survival (Perry et al., 2012; Sahoo et al., 2018b; Terenzio et al., 2018). Nerve injury also induces local translation of mTOR, which in turn controls the axonal synthesis of several retrograde injury signals; thus, disruption of mTOR activity decreases neuronal survival after injury (Terenzio et al., 2018).
Axonal protein synthesis also plays important roles in neurological diseases such as SMA (spinal muscular atrophy), ALS, and Alzheimer's disease. Indeed, a growing list of mRNAs and RNA-binding proteins has been described as axonally mislocalized in neurodegenerative disease (reviewed in Khalil et al., 2018; Figure 2). For example, loss of SMN significantly alters the levels of axonal mRNAs required for axonal growth and synaptic transmission (Saal et al., 2014; Khalil et al., 2018), and alterations in the local synthesis of key axonal survival proteins implicated in neurodegenerative diseases have been observed (Kar et al., 2018; Khalil et al., 2018). For instance, expression of ALS mutants of the RNA-binding protein TDP-43 decreased the mobility of axonal RNPs and reduced axonal transport in motor neurons (Alami et al., 2014), and ALS-causing TDP-43 mutations alter the axonal content of both mRNAs and miRNAs in cultured spinal motor neurons (Rotem et al., 2017). Treatment of hippocampal neurons with the amyloid peptide Aβ1-42 promotes axonal translation of Atf4 mRNA and ATF4 retrograde transport, leading to neuronal cell death (Baleriola et al., 2014). A recent study revealed mRNA translation in axons in connection with late endosomes (Cioni et al., 2019). Interestingly, Rab7a mutants, including those associated with CMT2B, negatively impacted axonal protein synthesis and impaired mitochondrial function and axonal viability (Cioni et al., 2019). This study highlights the high degree of cross-interaction between different axonal organelles and how these vesicles act as platforms for several signaling pathways, as well as for various cellular functions that had not been associated with intracellular trafficking until recently.
CONCLUSIONS AND PERSPECTIVES
The combined use of transgenic animal models, primary neuronal cultures, and neurons derived from human induced pluripotent stem cells, together with recent technological advances in proteomics, drug design, and super-resolution microscopy, has allowed the in-depth study of the molecular mechanisms underlying neurodegenerative diseases (Millecamps et al., 2006; De Vos and Hafezparast, 2017). Many key questions, however, remain open, including the precise molecular identity of the transported vesicles, whether it is subject to change along axons, and whether there are region-specific differences in organelle trafficking within the axonal compartment. Rapid advances in high-resolution live imaging in vitro and in vivo will provide a technological platform to further our knowledge of these phenomena. For example, the trafficking of membrane-less organelles such as stress and/or RNA granules is critical for the maintenance of neuronal homeostasis. The presence of mRNA granules implies that selected proteins can be locally translated in axons and synapses. Identifying which mRNAs can be transported, by which trafficking pathways, and where translation takes place is thus paramount to our understanding of axonal biology. Fortunately, several novel proteomic approaches have been designed to identify newly synthesized proteins (Forester et al., 2018; Koppel and Fainzilber, 2018; Terenzio et al., 2018; Holt et al., 2019), together with new imaging tools engineered to visualize localized mRNA and protein translation (Morisaki et al., 2016; Wu et al., 2016). These new technological developments will give us valuable insights into the cooperation between intracellular transport mechanisms and local protein synthesis in both physiological and pathological conditions.
LPS of biomolecules has also recently emerged as a fundamental mechanism underlying subcellular organization and regulation. The formation of highly condensed molecular assemblies, also known as membrane-less organelles or bio-condensates, within aqueous solutions such as the cytoplasm plays critical roles in the maintenance of neuronal functions and in neurodegeneration (Elbaum-Garfinkle, 2019). The formation of various components of mRNA and/or stress granules that are targeted to and transported in axons has also been shown to be regulated by LPS. Indeed, the Fragile X Mental Retardation Protein (FMRP) undergoes phosphorylation-dependent phase separation with RNA in a synaptic activity-dependent manner to generate membrane-less RNA-protein transport granules (Tsang et al., 2019). The TDP-43 low-complexity domain phase-separates to form cytoplasmic stress granules (Babinchak et al., 2019), and the persistence of phase-separated TDP-43 independently of stress granules can induce neuronal cell death (Gasset-Rosa et al., 2019). TDP-43-containing axonal mRNA transport granules have also been reported to display liquid-like properties (Gopal et al., 2017). Additionally, synapsin-1 has been demonstrated to phase-separate and promote synaptic vesicle clustering at the synapse, regulating the mobility of synaptic vesicles in axon terminals (Milovanovic et al., 2018). The active zone protein RIM-1 has also been shown to undergo a phase transition, which might represent the basic mechanism underlying the organization of release sites at synapses (Wu et al., 2019). Lastly, LPS of disordered proteins such as Tau in Alzheimer's disease (Ambadipudi et al., 2017; Wegmann et al., 2018), FUS/TDP-43 in ALS (Murakami et al., 2015; Patel et al., 2015; Conicella et al., 2016), and the huntingtin protein in Huntington's disease (Peskett et al., 2018) has recently been reported to be critical for their pathological aggregation and toxicity. Similar mechanisms might also be involved in the aggregation of β-amyloid precursor proteins in Parkinson's disease (Boke et al., 2016; de Gap et al., 2019) and of α-synuclein.
Although the contribution of LPS to long-range transport in neurons remains an open question, perturbations in LPS likely affect the formation of phase-separated transport granules (reviewed in Nötzel et al., 2018). A recent study reported that the long-distance trafficking of the mRNA granule/lysosome complex depends on LPS of annexin 11 and that this mechanism is critical for their axonal transport (Liao et al., 2019). Another in vitro study suggested that prolonged LPS of Tau can lead to the formation and aggregation of pathogenic Tau, a form of Tau known to affect axonal transport (Kanaan et al., 2020). Though we have only started to decipher the molecular mechanisms leading to the formation of these bio-condensates, their recruitment onto molecular motors, and their targeting to axons and synapses, the pathological aggregation of various neuronal proteins discussed above points to a plausible correlation between perturbations in protein LPS and neurodegeneration. Thus, the integrative study of transport mechanisms, local protein synthesis, and LPS is critical to reconstructing a comprehensive picture of the multiple cellular and molecular pathways that cooperatively or sequentially take place to efficiently regulate axonal functions.
AUTHOR CONTRIBUTIONS
LG, SE-A and MT participated in the design and writing of this review. MO made the figures.
FUNDING
This work was generously funded by Japan Society for the Promotion of Science (JSPS)/Kakenhi #18K16467 to LG and JSPS/Kakenhi #20K07458 to MT.
The Potential of Albanian Tourism Sector
The aim of this study is to develop a profile of Albania's hotels based on a critical analysis of the attitudes of foreign tourists visiting the country. COVID-19 negatively affected the Albanian tourism sector: 2,657,818 foreign citizens visited Albania in 2020, only 41.49% of the 2019 figure. To investigate the potential of the Albanian tourism sector, this study employs a quantitative analysis and a regression model. The results demonstrate that the tourist is a rational decision-maker, and our findings indicate that there are differences in expectations and perceptions among respondents. These differences are not significantly correlated with the respondents' gender, but in terms of education level the differences are significant for empathy, where respondents with a college degree have a higher level of expectations than respondents with higher education. Our findings highlight the practical implications of the research for hotel managers, who have to take into account that tourists are very sensitive to how well hotel staff understand their specific needs. More than before the COVID-19 pandemic, the relationship between the expectations and perceptions of tourists visiting Albania is strongly influenced by the tangible elements of the tourist package.
Introduction
Tourism is an important and rapidly growing sector within the national and international economy, boosted by the development of new tourist markets. While international competition is intensifying, the sector requires a more accurate assessment of customer expectations in order to identify any gap that may arise between them and the quality of services offered.
The aim was to develop a profile of Albania's hotels on the basis of a critical analysis of the attitudes of foreign tourists visiting the country, given that Albania is a tourist destination with access to both the Adriatic Sea (central Albania to the north) and the Ionian Sea (central Albania to the south). During Enver Hoxha's communist regime, Albania was isolated as a tourist destination; since 1990, it has become an active player in the tourist market of its region. Until 2010, Albania's tourist attractions were not well known because the country was considered a destination for adventure tourism only.
In this difficult period for humanity, everyone tries to be optimistic about the post-pandemic future. Tourism was expected to be the most affected sector of the economy, as it is the most vulnerable sector but also among the sectors generating significant income for Albanians, given that in 2019 the country was visited by 6.4 million foreign tourists who spent about 2 billion Euros [1]. Based on the trend over 2018-2020, Albania should have been visited by about 7 million foreign tourists, which means that it should have provided tourism services worth over 2.2 billion Euros. In reality, according to INSTAT's statistics [1], 2.66 million foreign nationals entered Albania in 2020, about 60% less than in 2019. Moreover, based on statistics provided by accommodation units, over 90% of tourists declared as foreigners are of Albanian origin with foreign citizenship or come from Albanian-inhabited territories such as Kosovo or North Macedonia, forming the so-called "patriotic tourism" in Albania.
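A quick arithmetic check (a sketch using only the two figures quoted above) reconciles the percentages reported in this paper: 2020 arrivals were about 41.5% of the 2019 level, which corresponds to a decline of roughly 58-60%:

# Visitor statistics as quoted in the text (INSTAT).
visitors_2019 = 6_400_000
visitors_2020 = 2_657_818

share = visitors_2020 / visitors_2019
print(f"2020 visitors were {share:.1%} of the 2019 level")
print(f"i.e., a decline of about {1 - share:.0%}")  # ~58%, consistent with "60% less"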
Previous research has also reached the conclusion that factor-cluster segmentation plays an important role in residents' perceptions of tourism satisfaction.
Tourist satisfaction has been a research topic receiving special attention even before the pandemic crisis generated by COVID-19 [5,6]. A critical analysis starts by comparing tourists' perceptions with their expectations, a comparison that leads to a positive or negative feeling, as Lewin (1938) [7] explained with his expectancy-disconfirmation theory, developed further by Oliver and Swan [8]. Much empirical research on tourist satisfaction is based on the relationship between expectations and perceptions in different countries and regions. In Europe, many studies have addressed popular tourist destinations such as Croatia, Cyprus, France, Greece, Italy, Malta, and Spain [9]. In Asia, China benefits from many quantitative and qualitative studies [10,11].
Zeithaml, Berry, and Parasuraman [12] consider that expectations are important for the assessment of customer satisfaction, but it is difficult to arrive at a consensus regarding the definition of these expectations [13] because many factors contribute to their formation: customer desires [14,15], standards of the services [16], and efficacy of the services [17]. The level of expectations depends on personal characteristics such as nationality, gender, and education level [18].
Camilleri [19] considered that the efficiency of a tourism company can be continuously improved if the customers' expectations are seriously taken into consideration because in the end there will be a positive correlation between customers' expectations and the perceived quality of tourism services. Later, Cardoso et al. [20] demonstrated that the difference between expectations and perceptions of tourists is due to a lack of communication between clients and hotel employees.
Given the factors that trigger tourists' desire for a new destination, it is important to make the tourist destination more attractive and to take demographic features into account. Following the research of Prayag et al. [21] and of Yuan and Wu [22], we underline that the relationship between expectations and perceptions depends on the quality of the tourists' experiences.
Customers have a pre-formed image of the quality of the tourism service based on their expectations. As a result, perceptions substantially contribute to the degree of tourist satisfaction. The complexity of perception as a process is well described by Moutinho [23], who stresses that stimuli (auditory, visual, tactile, olfactory, and/or taste) and demographic factors, such as those listed above, affect perception levels. Measuring the level of perception of the quality of the tourism service is very challenging, and there is a fine line separating the impact of the factors, positive or negative, from the tourist perceptions. For example, a customer at a five-star hotel brings an expectation of what constitutes an acceptable wait time; when she/he arrives at the reception to complete the check-in procedures and the wait time experienced exceeds that expectation, the extra time may be perceived as time wasted, giving rise to an emotional dimension [24][25][26].
Differences between tourists' expectations and perceptions of the service quality alter their behavior and generate different levels of attitudes towards the service received [27,28]. Developing countries such as Albania are perceived as providing a lower-quality tourist package, and tourists correspondingly lower their expectations in relation to the price paid for the service.
The Dimensions of Tourism Package
Consistent with the previous arguments, the dimensions of a tourist package are very important for customers because their level of satisfaction depends on the gap between expectations and perceptions of the service quality [29,30]. Buttle [31] considers that there is a direct relationship between service quality and customer satisfaction. Starting from a deeper analysis of the dimensions of service quality suggested by many scholars, the authors of [32] summarized the dimensions of service quality: reliability, access, responsiveness, competence, courtesy, communication, understanding the customer, credibility, security, tangibility. A few years later, Parasuraman, Zeithaml, and Berry [15] arrived at the conclusion that only five dimensions (reliability, assurance, tangibles, empathy, and responsiveness) are relevant to the measurement of service quality. However, the SERVQUAL model, developed by Parasuraman, Zeithaml, and Berry [15] has received scholarly criticism due to the direct relationship between the customer decision-making process and the perceptions of the service delivered [33][34][35][36].
The tourist is considered a rational decision-maker who follows determined steps between stated intention and final decision [37,38]. Tourists are often under the pressure of their emotions and these emotions influence the rationality of decision-making. Various researchers [39,40] have explored the causal relationship between the dimensions of the tourist package and the level of satisfaction, but they have struggled to evaluate the complexity of the decision-making process because this process implies a chain of decisions and it is influenced by personal and situational factors [41][42][43].
Our research moves away from a tourist-decisions-centered approach by taking into account the dimensions of the tourist package, and we try to integrate this decision-making process with other decisions.
Therefore, the following hypotheses were developed: Hypothesis 1 (H1). There is a direct relationship between the gender of the respondents and their expectations regarding the quality of tourism services.
Hypothesis 2 (H2). The education level of respondents influences their expectations regarding the quality of the tourism services.
Hypothesis 3 (H3). There is a direct relationship between tourists' perceptions of service standards mediated by external indicators of quality and their gender.
Hypothesis 4 (H4). The education level influences tourists' perceptions regarding the quality of the service.
Hypothesis 5 (H5). The stated importance that tourists associate with the dimensions of the package influences their level of satisfaction.
Research Methodology
The number of tourists entering Albania had been increasing steadily, from 3,513,666 in 2012 to 5,117,000 in 2017 and 6,406,038 in 2019, before falling to 2,657,818 in 2020. Albania has a single airport, and this leads to higher tariffs/fees. As a result, the flow of people traveling to Albania by airplane is very low (1,659,594 tourists in 2019 and 657,467 tourists in 2020) [1]. However, in recent years, action has been taken to license another airport in Albania to facilitate air travel, which would be reflected in a reduction in airfares and an increase in visitors.
Although Albania is situated in proximity to the Mediterranean Sea, the lack of ports, the lack of facilities for yacht owners, and the lack of facilities for anchoring cruise ships make the maritime potential not fully exploited due to poor investment policies. That is why the number of foreign tourists who choose to travel to Albania by sea [1] is very small compared to Albania's capacity and tourism potential (842,904 tourists in 2019 and 233,538 tourists in 2020).
The research for this study involved surveying foreign guests at Albanian hotels in the three- and four-star categories, located in the country's largest beach area, Durres and its surroundings. We chose this area because it is one of the most attractive tourist areas in Albania for foreign tourists. We distributed 300 questionnaires, of which 236 were correctly completed (78.67%). The data from these completed questionnaires were processed using SPSS. The questionnaire comprised two parts, informed by the variables in Hypotheses 1-5. In the first part, the questionnaire was based on the model developed by Parasuraman, Zeithaml, and Berry [15]. In the second part, the questionnaire was refined by removing items whose responses did not follow a normal distribution.
The five SERVQUAL dimensions provided the variables informing our research and for each of them, we evaluated the level of expectations, the level of perceptions, as well as the gap between expectations and perceptions, as follows: Tangibles (four items), Reliability (five items), Responsiveness (four items), Assurance (four items), and Empathy (five items). Respondents assessed the quality of tourism services on a Likert scale from 1 (Totally Disagree) to 5 (Totally Agree). Internal consistency estimates (Cronbach alpha coefficients) were 0.84.
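To make the scoring procedure concrete, a minimal sketch of how the internal consistency and the expectation-perception gap can be computed is given below; the response matrix, item names and values are hypothetical placeholders, not the actual survey data.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 236 respondents, 4 expectation and 4 perception items
# for the Tangibles dimension, scored on a 1-5 Likert scale.
rng = np.random.default_rng(0)
exp_tang = pd.DataFrame(rng.integers(1, 6, size=(236, 4)),
                        columns=[f"E_tang{i}" for i in range(1, 5)])
per_tang = pd.DataFrame(rng.integers(1, 6, size=(236, 4)),
                        columns=[f"P_tang{i}" for i in range(1, 5)])

print("alpha (expectations, Tangibles):", round(cronbach_alpha(exp_tang), 3))

# SERVQUAL-style gap score for the dimension: mean perception minus mean expectation.
gap_tangibles = per_tang.mean(axis=1) - exp_tang.mean(axis=1)
print("mean gap (P - E):", round(gap_tangibles.mean(), 3))
```

The same routine can be repeated per dimension; a negative mean gap corresponds to expectations exceeding perceptions, as reported in the results below.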
Sample
Descriptive statistics (Table 1) can be summarized as follows. There is a balance between female respondents (50.8%) and male respondents (49.2%), which is an advantage for the reliability of the results. The majority age group ranges between 31-55 years old (50.8%), and we start from the premise that these respondents have the experience and the maturity necessary to form their expectations and an accurate image of the quality of the tourism service in Albania. Regarding their education level, the majority of respondents have higher education qualifications (university and master's level: 69.5%), facilitating the analysis of the relationship between expectations and perceptions with education level as a possible mediating variable. The majority of respondents made their hotel booking online (50.8%), which indicates that they already had an experiential picture against which to assess the expected quality of the tourism service before making their choice. Respondents came from different countries, which we grouped into two categories: countries from the former communist bloc (Bosnia Herzegovina, Czech Republic, FYR Macedonia, Poland, Romania, Russia, Serbia, and Ukraine: 77.5%) and developed capitalist countries (Germany, Great Britain, and Italy: 22.5%).
The Regression Model
Null Hypothesis (H0). There is no direct relationship between the five variables of tourism service and the dependent variable related to the level of customer satisfaction. The null hypothesis was tested using a regression model, to elucidate the dimensions of tourism services in the framework of the relationship between expectations and perceptions. The regression model designed in the initial stage of the research consists of the five dimensions proposed as a basis to define the level of satisfaction among tourists visiting Albania (Table 2). The R value is 0.993, giving the statistical confidence to continue the analysis. The R² value is 0.985, meaning that 98.5% of the variation in the satisfaction of tourists in Albania is explained by the five dimensions (Tangibles, Reliability, Responsiveness, Assurance, and Empathy) applied to describe how the hotel service is provided. Based on these findings, the model is valid and robust, and we can continue to analyze the impact of the five dimensions on tourists' level of satisfaction.
In the next stage, we conducted an ANOVA test, which confirmed the overall significance of the model (F = 3035.095; df = 5; p < 0.001). The F-statistic exceeds the upper bound of the critical value band and the p-value is smaller than 0.001. Thus, the null hypothesis that there is no relationship between the dimensions and tourists' level of satisfaction is rejected.
The individual regression statistics show that all five variables, at p < 0.05, are significant for tourist respondent satisfaction levels. The variance inflation test (VIF) was run for the model and for all five dimensions, VIFs ranged from 1.046 to 1.559. The highest VIF is for Responsiveness (1.559) and is lower than the cut-off point of 10 and thus there is no risk of collinearity [44].
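A regression and collinearity check of this kind can be reproduced along the following lines; the sketch uses a hypothetical data frame whose column names and synthetic values stand in for the real survey variables, so it only illustrates the procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical data frame: one row per respondent, one score per dimension
# plus an overall satisfaction rating (names are illustrative).
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(236, 5)),
                  columns=["Tangibles", "Reliability", "Responsiveness",
                           "Assurance", "Empathy"])
df["Satisfaction"] = df.sum(axis=1) + rng.normal(scale=0.2, size=236)

X = sm.add_constant(df[["Tangibles", "Reliability", "Responsiveness",
                        "Assurance", "Empathy"]])
model = sm.OLS(df["Satisfaction"], X).fit()
print(model.summary())          # R^2, F-statistic, coefficients, p-values

# Variance inflation factors for the five predictors (constant excluded).
for i, name in enumerate(X.columns[1:], start=1):
    print(name, variance_inflation_factor(X.values, i))
```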
As we can see from the analysis of Table 3, the variable Responsiveness has the greatest impact on the satisfaction of respondents (β = 0.390, sig < 0.05), because tourists are influenced by the way and the promptness with which hotel employees offer services and information. Reliability (β = 0.358, sig < 0.05) shows that the quality of hotel services is important for tourists visiting Albania. Tourists are also influenced by tangible elements of the tourist package, such as the hotel's exterior and/or interior appearance and the way in which the staff present themselves (Tangibles: β = 0.335, sig < 0.05). Assurance (β = 0.341, sig < 0.05) shows that tourists are sensitive to their confidence in the hotel staff, as well as to the level of hotel staff knowledge regarding the quality of tourism services. Empathy (β = 0.224, sig < 0.05) has a lower level of influence on satisfaction, even though tourists expect hotel staff to understand their specific needs well. These considerations suggest that the regression model is robust.
Results and Discussions
We evaluated the differences between perceptions and expectations of tourists visiting Albania (Table 4). The greatest difference between the expectations and perceptions of the respondents regarding the quality of the tourism service is with respect to Tangibles, where there is a negative difference of 0.8138, which highlights the fact that tourists' expectations were much higher for the appearance of the physical environment and other material factors. Negative differences were also registered for the other four dimensions, leading to the conclusion that tourist expectations were higher than their perceptions (0.7599).
Our findings prove that all five dimensions registered a negative difference between the expectations of the tourists and their perception of the quality of the hotel services. Therefore, the clients were not satisfied, and their perceptions were not in line with their expectations.
Expectations and perceptions of tourists visiting Albania were then evaluated controlling for both gender (Table 5) and education level (Table 6). The mean of the respondents' expectations, by gender, reveals that females and males have broadly similar expectations regarding each of the components of the tourism service.
To validate the hypotheses regarding tourist expectations related to tourism service quality we applied the Levene test for equality of variances [45] and the t-test for equality of means.
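A minimal sketch of this two-step test, using generic statistical routines on hypothetical gender-split scores, is shown below; the sample sizes and score distributions are invented for illustration and do not reproduce the survey data.

```python
import numpy as np
from scipy import stats

# Hypothetical expectation scores for one dimension, split by gender.
rng = np.random.default_rng(2)
female = rng.normal(loc=4.3, scale=0.5, size=120)
male = rng.normal(loc=4.2, scale=0.6, size=116)

# Levene's test for equality of variances.
lev_stat, lev_p = stats.levene(female, male)

# t-test for equality of means; use Welch's correction if variances differ.
equal_var = lev_p > 0.05
t_stat, t_p = stats.ttest_ind(female, male, equal_var=equal_var)

print(f"Levene: F = {lev_stat:.3f}, p = {lev_p:.3f}")
print(f"t-test: t = {t_stat:.3f}, p = {t_p:.3f} (equal_var={equal_var})")
```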
The results indicated that for Reliability, F = 30.066, sig = 0.007 (sig ≤ 0.05), and for Assurance, F = 0.883, sig = 0.035 (sig ≤ 0.05), there are significant differences between female and male respondents in terms of expectations regarding the two dimensions of the tourism service. For the other three dimensions, Tangibles: F = 1.751, sig = 0.574 (sig > 0.05); Responsiveness: F = 14.077, sig = 0.887 (sig > 0.05); and Empathy: F = 0.082, sig = 0.242 (sig > 0.05), there are no significant differences between females and males in terms of expectations regarding these dimensions of the tourism service. Female respondents have higher expectations for the following four dimensions: Tangibles, Responsiveness, Assurance, and Empathy, while male respondents have higher expectations regarding the Reliability of service quality.
As a result, Hypothesis 1 is partially validated. Our research continued with the analysis of how the level of education influences tourists' expectations regarding the quality of tourism services ( Table 6).
The results for four of the dimensions are the following: Reliability: F = 16.976, sig = 0.000 (sig ≤ 0.05); Responsiveness: F = 3.399, sig = 0.035 (sig ≤ 0.05); Assurance: F = 33.720, sig = 0.000 (sig ≤ 0.05); and Empathy: F = 3.855, sig = 0.023 (sig ≤ 0.05). There are therefore significant differences across education levels in terms of expectations regarding these four dimensions of the tourism service. For Tangibles, F = 2.985, sig = 0.053 (sig > 0.05), there are no significant differences across education levels in terms of expectations regarding this dimension.
Taking into account that there were no significant differences for one dimension (Tangibles), for the other four dimensions (Reliability, Responsiveness, Assurance, and Empathy) we continued with multiple comparisons of tourists' expectations, by education level (Table 7). Our results indicate the following differences between the respondents, although the individual pairwise comparisons do not reach significance (sig > 0.05): Reliability-there is a positive difference between respondents with a master's qualification (they are more demanding in terms of the hotel staff's ability to perform services accurately) and respondents with university degrees (0.11424; sig = 0.052).
Responsiveness-there is a negative difference between respondents with a master's (they are less demanding in terms of the speed with which hotel staff respond to customer needs), respondents with a university degree (−0.12891; sig = 0.058), and respondents with a college degree (−0.04861, sig = 0.703).
Assurance-there is a positive difference between respondents with a master's (they are more demanding in terms of knowledge and courtesy of the hotel staff to perform personalized customer services) and respondents with a university degree (0.05078; sig = 0.298).
Empathy-there is a positive difference between respondents with a college degree (they are more demanding in terms of the ability of the hotel staff to perform personalized services and pay individual attention to customer needs) and respondents with a university degree (0.01562; sig = 0.896).
As a result, Hypothesis 2 is partially validated. Our findings indicate that there are differences in expectations among respondents and the differences are not significantly correlated with the respondents' gender. For example, women's expectations are slightly higher than men's in four of the five dimensions, but in terms of education level, the differences are significant for Empathy, where the respondents with a college degree have a higher level of expectations than respondents with higher education [46].
The analysis of Table 8 shows that the mean of respondents' perceptions, by gender, differs only slightly for the five dimensions that describe the tourist package; females and males report broadly similar perceptions of each of the components of the tourism service. To validate the hypotheses regarding the tourist perceptions related to tourism service quality, we used Levene's test for equality of variances and the t-test for equality of means.
We continued to evaluate the perceptions of respondents (by gender- Table 8 and by education level- Table 9) to compare them with the differences recorded for the expectations. The results indicated that for Tangibles, F = 0.077, sig = 0.001, and for Assurance, F = 13.728, sig = 0.000 (sig ≤ 0.05), there are significant differences between females and males in terms of perceptions regarding the two dimensions of the tourism service.
As a result, for the other three dimensions we have the following results: Reliability: F = 2.706, sig = 0.576 (sig > 0.05); Responsiveness: F = 0.692, sig = 0.745 (sig > 0.05); and Empathy: F = 7.595, sig = 0.213 (sig > 0.05). There are no significant differences between females and males in terms of perceptions regarding these dimensions of the tourism service. Female respondents have higher perceptions for the following two dimensions: Reliability and Assurance, while male respondents have higher perceptions with regard to Tangibles, Responsiveness, and Empathy of service quality.
As a result, Hypothesis 3 is partially validated. We continued the analysis of the perceptions of respondents by education level (Tables 9 and 10).
Taking into account that there were significant differences across all dimensions, we continued with multiple comparisons of tourists' perceptions by education level (Table 10).
Our findings indicated the following differences between the respondents, although the pairwise comparisons do not reach statistical significance (sig > 0.05).
Tangibles-there is a positive difference between respondents with a college degree and respondents with a university degree (0.06944; sig = 0.538) and between respondents with master's degrees and respondents with a university degree (0.04644; sig = 0.717). The respondents with a college degree and the respondents with a master's degree are more demanding in terms of the physical appearance of the services and of the environment than respondents with a university degree.
Reliability-there is a positive difference between respondents with a university degree (they are more demanding in terms of the hotel staff's ability to perform services accurately) and respondents with a college degree (0.05382; sig = 0.448).
Responsiveness-there is a positive difference between respondents with a college degree (they are more demanding in terms of the speed with which the hotel staff respond to customer needs) and respondents with a university degree (0.05729; sig = 0.448).
Assurance-there is a positive difference between respondents with a master's (they are more demanding in terms of knowledge and courtesy of the hotel staff to perform personalized customer services) and respondents with a university degree (0.14019; sig = 0.050).
Empathy-there is a positive difference between respondents with a college degree (they are more demanding in terms of the ability of the hotel staff to perform personalized services and pay individual attention to customer needs) and respondents with a master's (0.07778; sig = 0.270).
As a result, Hypothesis 4 is not validated. The set of dimensions defining the way a tourism service is provided was included in the multivariate analysis in order to achieve a sequencing of the aspects leading to a competitive advantage. Respondents stated that the Empathy dimension has a similar importance to other dimensions, but by applying the regression model, this dimension lost importance because respondents tended to overestimate the importance of Empathy in influencing the level of satisfaction. On the other hand, Responsiveness has the greatest influence on respondents' satisfaction, with higher importance than stated by tourists ( Table 11).
The comparative analysis of the declared contribution and the calculated contribution of each dimension suggests that, with the exception of Empathy (β = 0.22), respondents objectively assessed their expectations regarding the quality of the tourism service.
Although the emotional elements captured in the Empathy dimension were mentioned as being important, the level of satisfaction is, in fact, determined by the rational elements of Responsiveness (β = 0.39), Reliability (β = 0.36), Assurance (β = 0.34), and Tangibles (β = 0.34). We reached the conclusion that there are generally no significant differences between the respondents controlling for gender and education level, but there are significant differences between the declared importance and the calculated importance that respondents associate with the dimensions of the package for the level of satisfaction [47].
As a result, Hypothesis 5 is validated. Roman et al. [48] underlined the importance of cluster analysis for tourism based on factors such as the spatial diversity of tourism. Albania is a country where studies on tourist satisfaction remain largely descriptive, with few quantitative and/or qualitative analyses. Albania has high tourism potential from the point of view of natural factors, but this potential is diminished by its current forms of spatial organization.
Our findings are also consistent with the findings of Jönsson and Devonish [49] and of Hammad, Ahmad, and Papastathopoulos [50], and demonstrate that differences between female and male respondents exist both in expectations and in perceptions of service quality.
Females have higher expectations regarding Tangibles, Responsiveness, Assurance, and Empathy, while male respondents have higher expectations regarding the Reliability of service quality. Education level influences the expectations of tourists, exemplified by the fact that respondents with a master's qualification are more demanding in terms of Reliability and Assurance; the respondents with a college degree are more demanding in terms of Responsiveness and Empathy.
Females have higher perceptions of Reliability and Assurance of service quality, while males have higher expectations with regard to Tangibles, Responsiveness, and Empathy in the service quality.
The results demonstrate that the tourist is first a rational decision-maker and our findings are consistent with the findings of Gnoth [18] and Goossens [51] who analyzed the sensitive line between the rational and affective nature of tourists' decision-making process. As Gnoth [18] points out, in tourism decisions, there are many situations when the affective nature of tourists is crucial for the choice of destination.
Conclusions
The number of tourists visiting Albania has been growing year by year, with an average annual growth of almost 15%, and the period with the most tourists arriving in Albania is the third quarter of the year, which corresponds to the summer vacation period.
We arrived at the conclusion that rational dimensions have a decisive influence on the satisfaction of tourists visiting Albania; the emotional dimension (Empathy) is losing importance, but it has to be taken into account by hotel managers because they have to have well-trained staff to give customer attention 24/7/365 and to understand the specific needs of their customers. Responsiveness most influences the level of satisfaction of tourists and therefore managers would be advised to ensure that they have qualified staff to provide services and information to tourists in a timely manner.
Sustainable tourism research is an important topic in the sustainability domain and elements such as communication, methodological rigor, and integrity contribute to the practical approach to the tourism sector in crisis [52]. The social benefits of the research will be more visible if the researchers build a bridge between literature and practice. Therefore, our research fills the gaps concerning the practical research on the Albanian tourism sector taking into consideration the relationship between the expectations and perceptions of tourists.
The practical applicability of our research consists of providing managers with information on the expectations of tourists [53]. Therefore, the managers can act to reduce the weaknesses, improve the quality of tourism services, and train the staff to inspire more confidence among tourists and provide them knowledge about the opportunities that the hotel and region can offer them for a pleasant stay [54][55][56].
A limitation of our research is that we analyzed the expectations-perceptions relationship of the clients at three-and four-star hotels and we neglected the other hotel categories from the region. Additionally, the survey was conducted in English and for some respondents, English is a second language and this limitation may affect their understanding of the questions.
Future research should focus on the relationship between tourism and national economic, political, and social aspects [57], taking into account the high degree of economic and political uncertainty after the COVID-19 pandemic [58,59].
|
2021-04-21T13:12:51.844Z
|
2021-04-01T00:00:00.000
|
{
"year": 2021,
"sha1": "e292a8707eb79008dbb051f55197170aa7b94081",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/7/3928/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1c71546e654d546ace2e7ba068eb5593171ea208",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
}
|
114142130
|
pes2o/s2orc
|
v3-fos-license
|
High power RF system for transverse deflecting structure XFEL TDS INJ
The high power RF system (HPRF) is designed for RF feeding of the transverse deflecting structure of the transverse deflecting system XFEL TDS System INJ of the European X-ray Free Electron Laser. The HPRF system includes klystron, waveguide ceramic windows, directional couplers, waveguide vacuum units, spark detector and waveguide line. Operating frequency is 2997.2 MHz. Peak input power is up to 3 MW. The HPRF system has been developed, manufactured and assembled in the XFEL Injector building. The total length of the waveguide line is 55 m from the klystron at the -5 floor to the transverse deflecting structure at the -7 floor. All designed RF parameters have been obtained experimentally at low RF power level.
Design and manufacture of the HPRF system
There are three transverse deflecting systems in the X-ray Free Electron Laser (XFEL) for monitoring the longitudinal phase space and the emittance of the accelerated electron beam. The first, TDS System INJ, is located in the injector. The transverse deflecting structure is located in the injector tunnel at the -7 floor at the longitudinal coordinate z = 53 m from the cathode. There is no space for the klystron and the modulator close to the structure; therefore they are located at the -5 floor. This means the waveguide line connecting the klystron and the transverse deflecting structure is long (55 m) and has many bends. The design of the transverse deflecting system XFEL TDS INJ is shown in Fig. 1. The HPRF system includes the klystron, waveguide ceramic windows, directional couplers, waveguide vacuum units, a spark detector and the waveguide line [1,2]. The filling of the waveguide line is as follows: a) nitrogen at a pressure of up to 3 bar abs. from the klystron to window 1; b) air at atmospheric pressure from window 1 to window 2 (the second option is technical vacuum in case of break-down problems); c) ultra-high vacuum from window 2 to the transverse deflecting structure. The first, nitrogen-filled part of the waveguide line includes an H-bend with the spark detector and gas filling port, an E-bend, directional coupler 1 and window 1. The second, air-filled part of the waveguide line includes window 1, straight waveguides, E-bends, H-bends, four waveguide vacuum units for connection of the ion pumps and window 2. The third, vacuum-filled part includes window 2 and the transverse deflecting structure.
Klystron
The CPI klystron VKS-8262HS is used for RF power generation (Figure 2). Its main parameters are: 3 MW peak power, 2997.2 MHz operating frequency, 110 kV voltage, 72 A current.
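From the quoted figures one can make a rough estimate of the klystron's RF conversion efficiency; the short sketch below assumes the 110 kV and 72 A values refer to the beam voltage and beam current, which is an assumption made here for an order-of-magnitude check rather than a quoted specification.

```python
# Rough efficiency estimate from the quoted klystron parameters.
peak_rf_power = 3.0e6        # W, peak RF output
beam_voltage = 110e3         # V (assumed to be the beam voltage)
beam_current = 72.0          # A (assumed to be the beam current)

beam_power = beam_voltage * beam_current      # ~7.9 MW of pulsed beam power
efficiency = peak_rf_power / beam_power       # ~0.38

print(f"beam power : {beam_power / 1e6:.2f} MW")
print(f"efficiency : {efficiency * 100:.0f} %")
```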
Window
A special double-mode waveguide ceramic window has been designed for the TDS systems [1]. A picture of the window is shown in Figure 3. Its main parameters are: a) the electric field on the ceramic surface is 4 times lower than in the regular waveguide; b) the bandwidth is 70 MHz @ 30 dB and 40 MHz @ 40 dB.
Directional coupler
The directional coupler has been designed specially for the TDS systems. Since the filling of the directional coupler is either air or vacuum, it has been designed as a vacuum-tight unit. It includes a ceramic disk brazed to the waveguide, separating the inner volume of the directional coupler. The directional coupler monitors the forward and reflected power. Its main parameters are: coupling -65 dB, directivity 34 dB. The directional coupler is shown in Figure 4.
Waveguide vacuum unit
The waveguide vacuum unit consists of a straight waveguide with a set of transverse slots on the smaller side of the waveguide and a ConFlat flange for ion pump connection. The waveguide vacuum unit is shown in Figure 5.
Waveguide line
The waveguide line includes straight waveguides, E-bends and H-bends (see Figures 6-8). One of the H-bends is provided with a port for connection of the spark detector and with a gas filling port. All connections of the waveguide units are realized with high-vacuum DESY-type rectangular flanges with copper gaskets.
Load
A Sendust-coated waveguide load is used as a dummy load at the deflecting structure output [1] (Figure 9). The measured reflection is S11 = -47 dB.
Assembly and test of the HPRF system
The whole RF power supply system has been assembled and tested at low RF power. The assembled system is shown in Figure 9. The low-power RF test shows that the reflection from the input flange of the waveguide line with the transverse deflecting structure is S11 = -42 dB at the operating mode (Figure 10).
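The quoted return-loss figures can be translated into a linear reflection coefficient and VSWR with the usual conversion |Γ| = 10^(S11/20); the snippet below applies this generic relation and is not part of the measurement procedure itself.

```python
def reflection_stats(s11_db: float):
    """Convert a measured S11 (dB) into |Gamma|, VSWR and reflected power fraction."""
    gamma = 10 ** (s11_db / 20.0)
    vswr = (1 + gamma) / (1 - gamma)
    reflected_fraction = gamma ** 2
    return gamma, vswr, reflected_fraction

for s11 in (-47.0, -42.0):   # dummy load and full waveguide line, respectively
    g, vswr, frac = reflection_stats(s11)
    print(f"S11 = {s11:5.1f} dB -> |Gamma| = {g:.4f}, "
          f"VSWR = {vswr:.3f}, reflected power = {frac * 100:.4f} %")
```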
|
2019-04-15T13:06:48.317Z
|
2016-09-01T00:00:00.000
|
{
"year": 2016,
"sha1": "650f6dd469fbc62e11e4c622591d294cc6243930",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/747/1/012081/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "202f53301652f58a5553cbe5149796da6e370510",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Engineering"
]
}
|
115321391
|
pes2o/s2orc
|
v3-fos-license
|
Exactly solvable model of sliding in metallic glass
At low temperature, T → 0, the yield stress of a perfect crystal is equal to its so-called theoretical strength. The yield stress of non-perfect crystals is controlled by the stress threshold of dislocation mobility. A non-crystalline solid has neither an ideal structure nor gliding dislocations. Its yield stress, i.e. the stress at which macroscopic inelastic deformation starts, depends on the distribution of local critical stresses, attributed to each atomic site, at which local inelastic deformation occurs. We describe an exactly solvable model of the strength and sliding of a planar layer with an arbitrary homogeneous distribution of local critical stresses. The macroscopic stress threshold of athermal sliding is found. The kinetics of thermally activated creep of the sliding layer is described. The rate of the thermally activated sliding is tightly connected with the parameters of the low-temperature strength. The sliding activation volume scales with the applied external stress as $\sim \sigma^{-\beta}$, where $\beta < 1$. The proposed model accounts for the mechanisms and the yield stress of the low-temperature deformation of polycluster metallic glasses, since the intercluster boundaries of a polycluster metallic glass are natural sliding layers of the described type.
Introduction
The yield stress of a crystalline solid, $\sigma_Y^{cr}$, was first treated by Frenkel [1], who approximated the periodic force required to shear a perfect crystal by a sinusoidal function. He found that the maximum available stress, termed the theoretical shear strength, is $\sigma_{th} = \mu_{cr}/2\pi$, where $\mu_{cr}$ is the shear modulus of the crystal. At $\sigma_{th}$ the cohesion between atomic layers breaks down. Later, improved estimations of $\sigma_{th}$ were obtained, see e.g. [2]. It appears that a reasonable estimate for $\sigma_{th}$ is given by

$$\sigma_{th} \approx \frac{1}{10}\,\mu_{cr} \qquad (1)$$

It turned out that values of $\sigma_Y^{cr}$ measured experimentally are close to the theoretical strength (1) only for defect-free crystals, e.g. whiskers. The yield stress of real crystalline materials containing dislocations is determined by the stress threshold of dislocation mobility.
This parameter was first estimated by Peierls [3]. For metals it reads

$$\sigma_P \approx (10^{-3}\!-\!10^{-4})\,\mu_{cr} \qquad (2)$$

Equations (1) and (2) give the characteristic range of local critical stresses of a single crystal.
The low temperature yield stress of a metallic glass (MG) has been found to be also proportional to the macroscopic shear modulus [4], but the proportionality factor differs from those in (1) and (2):

$$\sigma_Y^{MG} \approx 2\times 10^{-2}\,\mu_{MG}$$

While in a crystal the macroscopic value of $\mu$ is equal to its microscopic value [5], the shear modulus in a MG is a random quantity. Its local value $\mu_i$ depends on the atomic configuration at site i. Thus $\mu_{MG}$ is a mean value. The macroscopic shear modulus of an amorphous alloy is typically up to 30% lower than the shear modulus of the crystal of the same composition [4].
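To put these estimates side by side numerically, the short sketch below evaluates them for an assumed shear modulus of 30 GPa; the numerical value of the modulus is illustrative only and is not taken from the text.

```python
mu = 30e9  # Pa, assumed shear modulus (illustrative value)

sigma_th = mu / 10                                  # theoretical strength, Eq. (1)
sigma_p_low, sigma_p_high = 1e-4 * mu, 1e-3 * mu    # dislocation-mobility threshold, Eq. (2)
sigma_y_mg = 2e-2 * mu                              # low-temperature yield stress of a MG

print(f"sigma_th      ~ {sigma_th / 1e9:.1f} GPa")
print(f"sigma_Peierls ~ {sigma_p_low / 1e6:.0f}-{sigma_p_high / 1e6:.0f} MPa")
print(f"sigma_Y(MG)   ~ {sigma_y_mg / 1e6:.0f} MPa")
```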
MGs possess noncrystalline disordered structure which can be characterized by a random potential relief with randomly distributed atoms in the potential minima [6]. To find the yield stress of MG one has to consider the problem of strength and thermally activated inelastic rearrangements of atoms within a shear layer where each atomic site is characterized by a random local critical stress which is needed for the inelastic rearrangements.
Different models of the inelastic shear strain in MG were developed. The first model of this type belongs to Argon [7][8][9]. According to Argon, at low temperatures (below 0.8 $T_g$, where $T_g$ is the glass transition temperature) MGs deform due to inelastic shear rearrangements of atomic groups composed of about 10 atoms. Later these carriers of plastic deformation were termed shear transformation zones (STZs) [10]. Taking into account that STZs are carriers of the plastic deformation, a set of equations of motion (roughly analogous to the Navier-Stokes equations for fluids) were deduced [10][11][12][13]. STZ velocity, density and orientation are dynamic variables. During deformation STZs appear and annihilate persistently. It has to be noted that in the Argon model MG contains STZs but its other structural properties are not specified; they are implicitly accounted for in the model parameters. To connect STZs with shear band formation it was assumed that the free volume [14][15][16] is properly redistributed and created in the deformed MG.
Later these ideas were utilized in similar approaches. For example, the cooperative shear model (CSM) [17] postulated that an STZ involves ~10² atoms and that the shear transformation is a cooperative process similar to the α-relaxation in liquids. In computer simulations [18] the size of STZs was estimated to be about 1.5 nm. It should be noted that CSM is focused on the initial stage of anelastic and inelastic shear transformations. Studies [19,20] revealed a strong size effect when the specimen size is less than 100 nm. These results indirectly point to structural heterogeneities of about 10 nm in size.
The polycluster model of MG structure developed in [23][24][25] is based on the idea that the majority of atoms possess "perfect" non-crystalline local order. Groups of atoms with different locally preferred configurations (subclusters) are associated in non-crystalline clusters with narrow and stable intercluster boundaries. The validity of this model is confirmed by many direct and indirect experimental investigations. Direct examination of the MG structure by means of high-resolution field emission microscopy shows that both rapidly quenched amorphous ribbons and bulk MGs obtained at relatively small cooling rates possess a fine polycluster structure with characteristic sizes of clusters and subclusters of about 10 nm and 1-3 nm, respectively [21,22]. The boundary width is comparable with the atomic size. The mean binding energy of atoms within the boundary is lower than in the cluster body by about a tenth of an eV. Since the boundary density was found to be extremely high, ~10^5-10^6 cm^{-1}, the boundaries play a dominant role in the plastic deformation of MG.
The mean value of the shear modulus of the cluster body is comparable to the shear modulus of a crystal and the strength of the cluster body is nearly equal to the theoretical strength. The local values of critical stress required for inelastic relocation of an atom within the boundary layer is a random quantity, being less than that in the cluster body. Therefore the intercluster boundaries are natural STZs. Shear transformations under applied stress appear first of all due to sliding within the boundary layers.
The problem of dislocationless sliding localized in a layer with a random distribution of local critical stresses was considered in [23][24][25]. Approximate solutions of equations derived there were obtained under the assumption that the distribution function of the local critical stresses is a spatially-homogeneous piecewise constant function. The obtained approximate solutions gave qualitative and, in some cases, reasonable quantitative description of the sliding process and allowed to predict the mechanism of the shear band formation on a qualitative level.
Later this approach was applied for the description of MG hardening during partial crystallization [26].
In this paper we consider the problem of sliding in a localized layer and the strength of MG possessing the polycluster structure. The model of [23][24][25] is solved exactly.
Model of homogeneous sliding in disordered atomic layer
In a non-metallic glass the elementary inelastic rearrangements of atomic configurations can be described as a) splitting-recombination, and b) translation of the broken bonds, Figs. 1a and 1b, correspondently. Rearrangements of the potential relief related to these inelastic deformations are shown in Figs. 1c, 1d. This picture is also applicable to MG although in this case the potential relief is much shallower, because metallic bonds are not as strong as covalent ones. In a polycluster the intercluster boundaries are regions of weak cohesion therefore here plastic deformation may occur by the sliding of one cluster over another. Figure 2a shows schematically the structure of sliding layer consisting of coincident and non-coincident sites.
The potential relief in the sliding layer is shown in Fig. 2b. Due to strong alternation of sites of different types the formation of gliding boundary dislocations is mostly suppressed.
Basic equations
When a planar layer is subject to an external shear stress $\sigma_e$ and the atoms experience only elastic displacements, the external stress is homogeneous in the layer. If some of the atoms undergo inelastic displacements, this causes a stress redistribution, i.e. stress relaxation at the places of inelastic deformation and stress concentration at the elastically deformed areas. Thus the local stresses at the sites are inhomogeneously distributed and time dependent. We assume that the local stress relaxation occurs by independent single-jump atomic rearrangements, resulting in concentration of the external stress at the non-displaced sites. The sliding velocity within the layer is a macroscopic quantity defined by the average frequency of inelastic displacements under the external stress (Eq. (5)), where $d$ is the average site displacement during the elementary relocation and $\tau_{sl}$ is the average time for displacement of all sites of the slip layer per interatomic spacing. Within a simple model of a particle in a two-level potential (see Fig. 3 and Appendix 1), the probability of a thermally activated jump under the external stress $\sigma_e$ can be expressed in terms of $\sigma_i$, the critical shear stress at which the atomic configuration goes from a site to a neighbouring site without thermal activation. The physical meaning of the parameter $\sigma_i$ is similar to the shear strength of a perfect crystal (1). Unlike the crystalline lattice, the amorphous solid is characterized by a wide spectrum of local critical stresses; we denote the distribution function of $\sigma_i$ by $g(\sigma)$. The fraction of sites that have undergone inelastic displacements as a function of time has an evident relation to the probability of local inelastic displacements: Equation (7) is an integral equation relating it to (4) and (6).
If the corresponding inequality holds true, site i becomes displaced without thermal activation in a time as short as $\sim 1/\nu_0$. Therefore the integration in (7) is carried out not from zero but from a finite lower limit. Combining (7) and (9) we obtain the master equation (10).
Athermal sliding
As the first step let us consider Eq. (10) in the athermal limit. The quantity entering Eq. (11) is a single-valued function of $x$; hence the function determined by Eq. (12) is also a single-valued function of $x$. As an illustration, the solution of Eqs. (11) and (12) is shown in Fig. 5 for the case of a trial distribution function $g(\sigma)$ depicted in Fig. 4. The solution changes in a stepwise manner with increasing $\sigma_e$, and the points $\sigma_s$ at which these steps occur are the instability points. Using (9), (11) and (12) we obtain the equation for the instability points $\sigma_s$.
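As a purely numerical illustration of the athermal limit, the sketch below assumes equal load sharing, i.e. that the stress on the non-displaced sites is amplified by 1/(1-φ) once a fraction φ of sites has yielded; this closure and the trial distribution are assumptions made here for illustration, not necessarily identical to Eqs. (9)-(12), but they reproduce the qualitative picture of a stepwise instability at a macroscopic threshold.

```python
import numpy as np

def displaced_fraction(sigma_e, g_cdf, n_iter=200):
    """Self-consistent fraction of displaced sites under equal load sharing.

    Solves phi = G(sigma_e / (1 - phi)), where G is the cumulative distribution
    of local critical stresses. Returns phi, or 1.0 if the layer slides."""
    phi = 0.0
    for _ in range(n_iter):
        local_stress = sigma_e / (1.0 - phi)
        phi_new = g_cdf(local_stress)
        if phi_new >= 1.0 - 1e-9:
            return 1.0                       # unbounded sliding: above the threshold
        if abs(phi_new - phi) < 1e-10:
            return phi_new
        phi = phi_new
    return phi

# Trial cumulative distribution of local critical stresses: uniform on [0.2, 1.0]
# (in units of the mean shear strength); an assumption for illustration only.
g_cdf = lambda s: np.clip((s - 0.2) / 0.8, 0.0, 1.0)

for s in np.linspace(0.05, 0.6, 12):
    phi = displaced_fraction(s, g_cdf)
    print(f"sigma_e = {s:.3f} -> displaced fraction = {phi:.3f}")
```

For this trial distribution the fixed point disappears slightly above sigma_e ≈ 0.31, where the displaced fraction jumps to 1, illustrating the macroscopic athermal sliding threshold discussed above.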
Thermo-activated sliding
In the low-stress regime, when $\sigma_e$ does not exceed the threshold of athermal sliding $\sigma_s^0$, Eq. (10) for the fraction of displaced sites is conveniently rewritten in the form (14). The parameter $Z_0$ is defined by relation (9).
The sliding time $\tau_{sl}$ can then be found. To solve the nonlinear integral equation (14), we first transform it to the parametric form (16). Integration of Eq. (16) gives the implicit solution (17) for the sought function; it specifies the sliding time of the inelastic shear strain over one interatomic spacing. Here the functional $Z(F)$ is defined by Eq. (15). Equations (18) and (5) then determine the rate of thermally activated sliding. The activation volume of sliding is not a constant; it depends on the external stress. Using the definition [27] $V_{act} = -k_B T\, \partial \ln \tau_{sl} / \partial \sigma_e$, we can calculate $V_{act}$ by application of Eqs. (18) and (15). The stress dependence of $V_{act}$ is shown in Fig. 8. Evidently the activation volume at low stresses is much larger than the atomic volume $v_a$ and scales with the external stress as $V_{act} \sim \sigma_e^{-\beta}$ (20). The exponent $\beta$ depends weakly on temperature and lies in the range $0.5 < \beta < 1$. According to (15), the function $Z(x)$ is given by Eq. (21); at low temperatures this expression simplifies further, with $Z_0(x)$ defined by (22).
Upon integrating (21) subject to (24) and (25) we obtain Eq. (26). Equation (26) was obtained under the assumption that the distribution $g(\sigma)$ is a smoothly varying function in the vicinity of $x_0$. Using Eqs. (21) and (23) it can be shown that this result validates relation (20) for the activation volume at low stresses.
Heterogeneous deformation of MG
When considering boundary sliding in polycrystals and polyclusters, the finite sizes of the boundary layers between triple joints of grains have to be taken into account. Boundary sliding is blocked at the triple joints. The sliding layer can propagate into the neighboring grain when the stress at the boundary edge of the triple joint exceeds a critical value $\sigma_c^*$, as shown in Fig. 9. Its edge is a one-dimensional dislocation-like defect which we call the dislocation-like edge of the sliding zone (DLESZ). In polycrystals the dislocation network and rotational motion of grains control the macroscopic plastic deformation. In polyclusters the formation of a similar dislocation network is impossible because the sliding layer possesses a structure similar to that of the intercluster boundary. Therefore a DLESZ is a carrier of inelastic deformation only while it is moving within the cluster body. When it reaches the cluster boundary, the cluster body becomes divided into two parts. In other words, the DLESZ cuts the non-crystalline grain into two parts. Initial stages of inelastic deformation of a polycluster are shown in Fig. 9. A fragment of the two-dimensional section is depicted in Fig. 9a; boundaries demarcate clusters. In Fig. 9b the initial deformation stage of the polycluster is shown. Shear transformation occurs in a boundary section of size l. The shear stress is in the plane of the boundary section.
While the cluster bodies are elastically deformed, inelastic deformation takes place in the marked boundary section. It has to be noted that inelastic rearrangements take place even at stresses below $\sigma_s^0$; in this case the fraction of inelastically deformed sites is less than 1, as follows from the equations above.
With that an overshoot on the stress-strain curve occurs; that is, the stress decreases after its maximum value is reached [4].
The second mode of polycluster plastic deformation is connected with the initiation of cracks at the triple joints, as shown in Fig. 9d. Cracks are formed if the Griffith condition is fulfilled; in this condition the relevant quantities are the stress normal to the pinning boundary layer in the triple joint, the Young modulus E of the cluster, and the surface energy and the boundary energy, respectively.
Formation of cracks accompanies the rotational motion of a cluster. Both shear transformation and crack formation play a leading part in formation of shear bands.
Discussion
In the model of dislocationless sliding formulated above we have considered an infinite sliding layer. In this form the model can also be applied to the description of the frictional force of a friction couple with known surface roughness and adhesion bonding. Protrusions on rough sliding surfaces are distributed over a wide range of sizes [29]; therefore, to calculate the sliding velocity one can again use Eq. (5) with an appropriate distribution of local critical stresses [30,31]. We assume that at small grain sizes $d_G$ the deformation mode is controlled by grain boundary sliding during grain rotations. In this case boundary reconstructions play the role of lubricant redistribution that diminishes the internal friction of the grain boundary creep. The detailed description of this mode will be given elsewhere.
Conclusions
-Master equations of the homogeneous sliding in the layer with a random microscopic potential relief have been exactly solved.
-The characteristic mechanical quantities have been defined implicitly in the form of functionals of the distribution function of local critical stresses.
-Mechanisms of low temperature plastic deformation of polycluster MGs initiated by intercluster boundary sliding have been described.
|
2019-04-12T23:52:41.241Z
|
2011-06-30T00:00:00.000
|
{
"year": 2011,
"sha1": "313bb6c011a8914e8ee7a3cf4e85137c6483741d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "313bb6c011a8914e8ee7a3cf4e85137c6483741d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
117633328
|
pes2o/s2orc
|
v3-fos-license
|
Ray-Tracing studies in a perturbed atmosphere: I- The initial value problem
We report the development of a new ray-tracing simulation tool having the potential of the full characterization of a radio link through the accurate study of the propagation path of the signal from the transmitting to the receiving antennas across a perturbed atmosphere. The ray-tracing equations are solved, with controlled accuracy, in three dimensions (3D) and the propagation characteristics are obtained using various refractive index models. The launching of the rays, the atmospheric medium and its disturbances are characterized in 3D. The novelty in the approach stems from the use of special numerical techniques dealing with so called stiff differential equations without which no solution of the ray-tracing equations is possible. Starting with a given launching angle, the solution consists of the ray trajectory, the propagation time information at each point of the path, the beam spreading, the transmitted (resp. received) power taking account of the radiation pattern and orientation of the antennas and finally, the polarization state of the beam. Some previously known results are presented for comparative purposes and new results are presented as well as some of the capabilities of the software.
I. INTRODUCTION
Multipath propagation is believed to be the major cause of data transmission impairments in terrestrial line of sight microwave radio systems. Efficient antenna design requires the understanding of the propagation of individual rays across the channel and gauging the refractive index of the various atmospheric disturbances any given ray encounters during its propagation. Adopting a refractive index model for a given disturbance arising from spatial fluctuations in humidity, pressure or temperature (these fluctuations might be temporal as well, but we shall consider, for the time being, that the propagation time occurs on a time scale much smaller than the one associated with these fluctuations), we establish the ray propagation equations and solve them with several numerical techniques having a first, fourth and sixth order accuracy. The ray tracing equations are initially solved in two dimensions bypassing the effects of small and non-linear terms as explained in section 2. Later on, we switch to 3D in order to assess the effects the small and non-linear terms have on ray propagation. Several facts emerge from this approach: • The small non-linear terms lead to a breakdown of standard integration techniques. The ray equations which constitute a system of 6 ordinary coupled non-linear differential equations become stiff. This means the integration step becomes so small (because of the presence of terms that differ by several orders of magnitude) making the integration process so slow that any progress in seeking a solution of the system is virtually stopped.
• The relation between the launching and arrival angles for a given disturbance are profoundly altered. What was previously believed to be a "good" or "bad" launching angle might have gotten its true attributes from reasons different from what is currently known.
• A very high sensitivity is observed around certain launching angles: a very small uncertainty in the launching angle can induce the ray to take a path radically different from what is normally expected.
This report is organized in the following way: In section 2, we establish the ray-tracing equations (RTE). In section 3 we describe some of the problems encountered during the solution of the RTE, namely those related to stiffness and present the algorithms to cure them (Appendix A contains a description and an example of a stiff system). In section 4 we compare our approach to previous ones and present some illustrative new cases in section 5. This section also describes the potential applications of the software and its capabilities. Section 6 discusses some possibilities for future developments. Appendix B shows how to avoid stiff differential equations in two dimensions and turn the RTE into a set of recursion relations.
II. RAY TRACING EQUATIONS
In terrestrial microwave radio systems, the range of frequencies used and in comparison the range of length scales present in the channel allow us to use a geometric (or ray) approach to electromagnetic propagation. The fundamental equation of geometrical optics is the Eikonal equation:

$$\|\nabla S\|^2 = n^2 \qquad (1)$$

where n is the local refractive index and S is the local phase of the ray. Taking the gradient of both sides of the Eikonal equation gives the second order vector propagation equation:

$$\frac{d}{ds}\left(n\,\frac{d\mathbf{R}}{ds}\right) = \nabla n \qquad (2)$$

where R is the ray position and ds is a differential displacement along the ray path, i.e. ds = ||dR||, the norm of the vector dR.
This can be rewritten as a system of two first order equations:

$$\frac{d\mathbf{R}}{ds} = \mathbf{T}, \qquad \frac{d(n\mathbf{T})}{ds} = \nabla n \qquad (3)$$

where T is a unit vector tangent to the ray path (the geometry is depicted in Fig. 1). The advantage of solving a first order system rather than a single second order system is threefold: • Stability problems are easier to handle.
• Validity of the solution is easy to monitor since one has to have for all times ||T|| = 1 providing a simple means to check the quality of the integration procedure.
• Accuracy of the solution is controlled within certain tolerance limits depending on the selected integration step.
This is discussed in detail in section 5. The refractive index function of the atmosphere is written as

$$n = 1 + 10^{-6}\,N \qquad (4)$$

where N depends on the frequency used, humidity conditions and height above the Earth ground. Several models exist for the range of frequencies and heights we are dealing with and are generally expressed in N units. Two such models are of interest, the first for a normal atmosphere and the second for a disturbed one (the Webster model); in both, k is the refractive index gradient with height h, and the atan() term of the Webster model describes a disturbance located at a height h_0, having an extent ∆h and a refractive strength ∆n. For a normal atmosphere (∆n = 0 in the Webster model) both models are linear in h (after expanding the exponential to first order). Nevertheless, their dependence solely on height does not account for the 3D nature of the atmosphere and its disturbances. Some models, like the recent one introduced by Costa [3], mimic a 3D atmospheric disturbance by multiplying the refractive index along the vertical with a Gaussian function along the horizontal perpendicular to the ray path plane. Going beyond these approaches, we introduce a full 3D profile in which p_x, p_y and p_h are the index profiles of the disturbance along the three directions in space x, y and h; N_0 is an average normal atmosphere index and k is the index gradient along the height. A profile function p(X) along direction X is built so that X_1 (resp. X_2) is the point where the hump starts growing (resp. decaying) and ∆X_1 (resp. ∆X_2) is a typical length scale for the growth (resp. decay); ∆n_x is the refractive strength of the disturbance. This model, though realistically representing a localized anisotropic disturbance in the atmosphere, is based on a separable model of the refractive index function.
While our methodology can handle any arbitrary 3D model of the refractive index, any of these refractive models have to be modified in order to take account of the curvature of the Earth by the inclusion of a term [2] equal to $10^{6}\, h/R_e$, where $R_e$ is the radius of the Earth.
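A minimal implementation of such a refractivity model (refractive index from N-units, a linear vertical gradient, and the Earth-curvature correction term mentioned above) might look as follows; the numerical values of N_0 and k are placeholders for illustration, not the values used in this study.

```python
import numpy as np

R_EARTH = 6.371e6   # m, Earth radius

def refractivity(h, N0=315.0, k=-40.0e-3, flat_earth=True):
    """Refractivity N(h) in N-units for a normal atmosphere.

    N0: surface refractivity (N-units), k: vertical gradient (N-units per m);
    both values are illustrative. The 1e6*h/R_e term folds the Earth's
    curvature into a flat-earth ray-tracing geometry."""
    N = N0 + k * h
    if flat_earth:
        N = N + 1e6 * h / R_EARTH
    return N

def refractive_index(h, **kwargs):
    """n = 1 + 1e-6 * N."""
    return 1.0 + 1e-6 * refractivity(h, **kwargs)

heights = np.array([0.0, 50.0, 100.0, 200.0])
print(refractive_index(heights))
```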
III. STIFF DIFFERENTIAL EQUATIONS ALGORITHMS
Using [4], the ray-tracing system [3] is rewritten as

$$\frac{d\mathbf{R}}{ds} = \mathbf{T}, \qquad \frac{d\mathbf{T}}{ds} = \frac{10^{-6}\left[\nabla N - (\mathbf{T}\cdot\nabla N)\,\mathbf{T}\right]}{1 + 10^{-6} N} \qquad (7)$$

Two important features appear in the RHS of the second equation in the system: • The non-linear term in T.
• The wide range of orders of magnitudes in the denominator.
These terms can be eliminated with the following procedure: replace equation [7-b] by another equation defining the curvature ρ of the ray path,

$$\frac{d\mathbf{T}}{ds} = \frac{\mathbf{U}}{\rho} \qquad (8)$$

where U is the normal to the trajectory; U is perpendicular to T and normalized, ||U|| = 1. The unknown ρ can be determined by taking the scalar product of both sides of [7-b] with U and using [8]; one gets

$$\frac{1}{\rho} = \frac{\mathbf{U}\cdot\nabla n}{n} \qquad (9)$$

Substituting [9] in [8] gives the following system:

$$\frac{d\mathbf{R}}{ds} = \mathbf{T}, \qquad \frac{d\mathbf{T}}{ds} = \frac{\mathbf{U}}{\rho} \qquad (10)$$

In general, this system is not closed because it involves U besides R and T. In two dimensions, one can close the system by invoking the orthogonality of U and T [1] through relation [11], in which x is the unit vector along the x direction. With relation [11], system [10] is now closed and can be integrated by any standard explicit integration method (predictor-corrector, Euler, Runge-Kutta, Richardson, etc.). This will be illustrated in section 4. In general, N is a function of the position vector R; when it depends only on the height, it is possible to further simplify the system and reduce it to a single scalar equation. In that case gradN is along the vertical and, if ψ is the angle T makes with the local horizontal, U, being perpendicular to T, makes the same angle with the vertical, so that [9] yields relation [12] for the curvature in terms of ψ and dN/dh. Livingston [4] has derived an equation similar to [12]: [13] is equivalent to [12] when the right sign is used. We have integrated system [10] in two dimensions and recovered typical results found in the literature, avoiding the difficulty arising from [7-b]. In the three dimensional case, one has to deal directly with system [7] with all terms retained, for, in general, the T vector no longer has to be confined to the transmitter (TX) receiver (RX) plane. In this case, all standard explicit integration schemes break down. In other words, the norm of the vector T tangent to the ray path is no longer conserved. In order to fulfill the condition ||T|| = 1, one has to take an integration step so small that the integration process is virtually stopped. This is called stiffness and an illustrative example is given in Appendix A.
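As a concrete illustration of integrating the closed 2D system [10], a minimal sketch is given below; it assumes U is obtained by rotating T by 90 degrees in the propagation plane and uses a purely height-dependent refractive index with illustrative values of surface refractivity, gradient, step size and launch angle, so it reproduces the structure of the method rather than the exact configurations studied here.

```python
import numpy as np

def n_and_dndh(h, N0=315.0, k=-40e-3, Re=6.371e6):
    """Refractive index and its vertical derivative (flat-earth correction included)."""
    n = 1.0 + 1e-6 * (N0 + k * h + 1e6 * h / Re)
    dndh = 1e-6 * k + 1.0 / Re
    return n, dndh

def rhs(state):
    """Right-hand side of the closed 2D system; state = (x, h, Tx, Th)."""
    x, h, Tx, Th = state
    n, dndh = n_and_dndh(h)
    U = np.array([-Th, Tx])          # unit normal: T rotated by +90 degrees
    inv_rho = (U[1] * dndh) / n      # 1/rho = (U . grad n) / n, grad n vertical
    return np.array([Tx, Th, inv_rho * U[0], inv_rho * U[1]])

def trace_ray(h0=100.0, angle_deg=0.1, length=50e3, ds=10.0):
    """Fourth-order Runge-Kutta integration of the ray path over 'length' metres."""
    psi = np.radians(angle_deg)
    state = np.array([0.0, h0, np.cos(psi), np.sin(psi)])
    path = [state.copy()]
    for _ in range(int(length / ds)):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * ds * k1)
        k3 = rhs(state + 0.5 * ds * k2)
        k4 = rhs(state + ds * k3)
        state = state + ds * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        path.append(state.copy())
    return np.array(path)

path = trace_ray()
T_norm = np.hypot(path[:, 2], path[:, 3])
print("max deviation of ||T|| from 1:", np.abs(T_norm - 1).max())
```

Monitoring the deviation of ||T|| from 1, as in the last two lines, is the same validity check described above for the quality of the integration.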
Stiffness can be cured with the so-called implicit integration schemes. In contrast to explicit integration schemes, where a current system value depends only on the previous ones, implicit schemes couple present and past values of the system altogether. A price to pay is an increase in CPU time, but the rewards are stability, accuracy and large integration steps. We have implemented two implicit schemes:
• Generalized Runge-Kutta (GRK) method of fourth order [5].
• Rosenbrock-Wanner (ROW) method of sixth order [6].
In the first scheme, given a system of first order ordinary differential equations (ODE):

dy/dx = f(y),                                      [14]

one builds the vectors k_i from the system values at step n-1 and evaluates the next value n of the system with:

k_i = σ f(y_{n-1} + Σ_j a_{ij} k_j),   y_n = y_{n-1} + Σ_i b_i k_i,     [15]

where σ is the integration step and the a_{ij} and b_i are coefficients depending on the order m of the integration scheme. In the Rosenbrock case, one adds to [15] the term σ (∂f/∂y) Σ_j d_{ij} k_j, where the d_{ij}'s are order dependent coefficients and (∂f/∂y) is the Jacobian of the system. The above equations are implicit since the unknown vectors k_i needed for integration step n appear on both sides of [15]. In the GRK method, only the vector function f is needed, whereas in the ROW case both f and its first order derivative (Jacobian) are needed.
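For illustration only, a single step of the simplest member of this family, the one-stage implicit midpoint rule, can be sketched as below; the fixed-point iteration shown converges only for moderately stiff problems, and a Newton solve on the stage equations is the usual choice in practice:

```python
import numpy as np

def implicit_midpoint_step(f, y, sigma, iterations=50):
    """One step of the 1-stage implicit Runge-Kutta (midpoint) rule:
        k = sigma * f(y + 0.5 * k),   y_next = y + k.
    The stage vector k appears on both sides, so it is solved for here
    by simple fixed-point iteration (illustrative; genuinely stiff
    systems require a Newton iteration instead)."""
    k = sigma * f(y)                  # explicit Euler predictor
    for _ in range(iterations):
        k = sigma * f(y + 0.5 * k)
    return y + k

# Example: one step of y' = -y starting from y = 1 with step 0.1.
print(implicit_midpoint_step(lambda y: -y, np.array([1.0]), 0.1))
```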
Both methods have been proven to perform very well up to stiffness parameters (ratio of the highest to the smallest eigenvalue of the Jacobian) as high as 10^7. Incidentally, our stiffness parameter has been observed (while testing ROW algorithms) to be generally around 10^4. We have used GRK of order 4 and ROW of order 6 because they have been extensively tested for a wide range of systems and are thoroughly documented.
IV. VALIDATION OF THE APPROACH AND COMPARISONS WITH PREVIOUS TREATMENTS
In order to validate our technique, we started with a comparison against analytically known solutions. Three models were tested: the axial gradient refractive index case, the sine-wave optical paths and the classical Luneburg lens (see, for instance, reference 7). In all three cases our results agreed very accurately with the analytical ones. We then proceeded to solve in detail a case well documented in the literature and investigated by Webster [2] for various launching angles. This model is two dimensional (2D) and extensively referred to in the literature. We use the 2D version of the system of equations [10], which is non-linear (N is a non-linear function of R and a power of U appears in [10-b]).
The integration, started by taking the values of R and T as the initial location and launching vectors, is done with first-order Euler and fourth-order Runge-Kutta methods. The TX-RX configuration and propagation conditions are the same as those given in Table 1 of Webster's [2] paper. In Fig.2 we show the various ray paths between the TX and the RX for a series of launching angles (taken with respect to the horizontal) varying from -0.25 up to 0.5 degrees. The launching angles we use are, respectively, in degrees: -0.25, -0.20, -0.15, -0.10, -0.05, 0.0, 0.10, 0.20, 0.30, 0.40, 0.50. The refractive index profile used in the study is displayed in Fig.3.
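A minimal sketch of this 2D integration (our own reconstruction, with the Webster parameters quoted in the figure captions and the Earth-curvature term of section 2 added to N) might read:

```python
import numpy as np

RE = 6.371e6  # Earth radius, metres

def N_webster(h):
    # Perturbed Webster profile [4-b] plus the curvature term 1e6*h/RE.
    return (300.0 - 39e-3 * h
            - (20.0 / np.pi) * np.arctan(12.63 * (h - 175.0) / 100.0)
            + 1e6 * h / RE)

def dNdh(h, eps=1e-2):
    # Central-difference vertical gradient of the refractivity.
    return (N_webster(h + eps) - N_webster(h - eps)) / (2 * eps)

def rhs(state):
    """System [10] in 2D: state = (x, h, Tx, Th)."""
    x, h, Tx, Th = state
    U = np.array([-Th, Tx])                        # in-plane normal ([11])
    inv_rho = Tx * dNdh(h) / (1e6 + N_webster(h))  # [9] with vertical gradN
    return np.array([Tx, Th, inv_rho * U[0], inv_rho * U[1]])

def rk4(state, ds):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * ds * k1)
    k3 = rhs(state + 0.5 * ds * k2)
    k4 = rhs(state + ds * k3)
    return state + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Launch from 125 m height at 0.2 degrees and march 60 km down-range.
psi0 = np.deg2rad(0.2)
state = np.array([0.0, 125.0, np.cos(psi0), np.sin(psi0)])
ds = 60.0                                          # metres per step
for _ in range(1000):
    state = rk4(state, ds)
# ||T|| should stay equal to 1 to high accuracy (our accuracy monitor).
print(np.hypot(state[2], state[3]))
```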
While Fig.2 is based on a first order (Euler) integration method, some changes might occur if we rather use a fourth order Runge-Kutta method. In fact, the ray paths based on either scheme show no appreciable differences and compare well with the results found earlier by Webster in the same conditions. However, some discrepancies appear for positive launching angles and are probably due to the different levels of numerical accuracy between our treatment and Webster's. Let us recall that in our case the numerical accuracy is monitored by checking the conservation of the norm of T. In these simulations, it is conserved with an error smaller than 10^{-7}.

In order to compare our results to Webster's directly, we derive, in the same fashion, recursion equations for the ray radial distance R (taken from the center of the Earth) and the angle ψ that T makes with the local horizontal. Referring to Appendix B and Fig.4, we can write the following relations:

R_2 = R_1 + ds sin ψ_1,                            [17]
ψ_2 = ψ_1 + (ds cos ψ_1)/R_1 + ds/ρ_1,             [18]

where the radius of curvature ρ_1 is given by [12] with ψ = ψ_1 and dN/dh is taken at the height R - R_e (R_e is the Earth radius). For a given step ds, one starts the set of iterations [17] and [18] with the launching radial distance R_1 and angle ψ_1. Using the same initial values as before, we retrieve almost the same ray trajectories obtained in Fig.2. The validity of our results is monitored by checking that the modulus of T stays equal to 1. Additionally, we compared our results (Euler and Runge-Kutta) to a very high accuracy integration technique based on Butcher's [8] algorithm (a seven-stage sixth-order Runge-Kutta scheme). The sixth order results are virtually identical to the fourth order's, and Fig.5 depicts the ray trajectory obtained with the different levels of accuracy under the same atmospheric and launching conditions. Incidentally, the difference between the fourth and sixth order trajectories in Fig.5 is on the order of a fraction of a millimeter.
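One step of the recursions [17]-[18], as we have reconstructed them (sign conventions are ours), can be coded directly:

```python
import numpy as np

RE = 6.371e6  # Earth radius, metres

def step_R_psi(R, psi, ds, N, dNdh):
    """One iteration of [17]-[18]:
        R2   = R1 + ds * sin(psi1)
        psi2 = psi1 + ds * cos(psi1) / R1 + ds / rho1,
    with 1/rho1 given by the scalar relation [12]."""
    inv_rho = np.cos(psi) * dNdh / (1e6 + N)
    R2 = R + ds * np.sin(psi)
    psi2 = psi + ds * np.cos(psi) / R + ds * inv_rho
    return R2, psi2

# Illustrative launch: 125 m above the surface, 0.2 degrees elevation,
# standard gradient of -39 N units/km (values are assumptions).
R, psi = RE + 125.0, np.deg2rad(0.2)
R, psi = step_R_psi(R, psi, ds=60.0, N=300.0, dNdh=-39e-3)
print(R - RE, np.rad2deg(psi))
```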
In spite of the above agreement, which is basically relative, one still has to gauge independently the accuracy of the results for a selected order and integration step. This is done with the following method: pick an order p and an integration step σ; integrate once with σ and twice with σ/2 in order to reach the same point; define a step ratio κ from the difference Δ between the two results, and monitor the value of κ for a given tolerance during integration. Ideally, we should have κ ≤ 2. In Fig.6, we display κ versus the integration step number for the first order (Euler, p=1) case as well as the Runge-Kutta 4-th order (p=4) and Butcher 6-th order (p=6) cases for a tolerance of 1 millimeter. We use exactly the same conditions as previously and a launching angle of 0.2 degrees. The figure shows clearly the superiority of the 4-th and 6-th order methods for the selected step when such a high accuracy is desired.
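A sketch of the monitor follows; the conversion of Δ into κ used here, κ = (Δ/ε)^{1/(p+1)} with ε the tolerance, is our reading of the step ratio and is an assumption, not a formula quoted from the text:

```python
import numpy as np

def step_ratio(step_fn, state, sigma, p, tol=1e-3):
    """Step-doubling accuracy monitor: advance once with sigma and twice
    with sigma/2 towards the same point, then convert the difference
    into a step ratio kappa for an order-p method. tol = 1 mm default,
    as in the text; the kappa formula is our assumption."""
    big = step_fn(state, sigma)
    small = step_fn(step_fn(state, sigma / 2), sigma / 2)
    delta = np.max(np.abs(big - small))
    return (delta / tol) ** (1.0 / (p + 1))
```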
V. ILLUSTRATIVE RESULTS AND CAPABILITIES OF THE METHODOLOGY
We move on to the description of the 3D propagation case and show, with a simple example, how we evaluate the power from the antenna radiation pattern, the beam spreading and the state of polarization. We select a coordinate system such that the TX is somewhere on the z-axis whereas the y-axis is along the TX-RX line. The vertical plane is defined by the z-axis and the TX-RX line. The beam spreading is evaluated by launching simultaneously several beams in the vertical and horizontal planes with angles differing by a small amount from those characterizing the main beam. The logarithm of the ratio of the surfaces swept by the different beams at the receiver location gives an estimate of the spreading loss.

In order to account for the TX-RX antenna radiation pattern, we simply recall that the electric field radiated by a parabolic circular aperture antenna at a point defined by its distance r from the main lobe origin and making an angle θ with the lobe axis is given by:

E(r, θ) = j E_0 (β a^2 / 2r) e^{-jβr} [2 J_1(β a sin θ) / (β a sin θ)],     [20]

where a is the aperture radius, E_0 is a reference field, β = 2π/λ with λ the wavelength used, J_1 is the Bessel function of the first kind and j = √-1. The antenna pattern is obtained after normalizing the value of |E(r, θ)|:

f(θ) = |2 J_1(β a sin θ) / (β a sin θ)|.           [21]

Alluding to our choice of axes, if the main lobe is pointing in a direction defined by the angles β, γ (in the vertical and horizontal plane respectively) and we have a ray along β′, γ′, the angle the ray makes with the main lobe axis is:

cos θ = sin β sin β′ + cos β cos β′ cos(γ - γ′).   [22]

The power (in dB) is given by 20 log_10 f(θ). The polarization state of a ray rotates, during propagation, by an angle calculated with the help of the following formula:

Φ = ∫_A^B τ ds,                                    [23]

where A and B represent the two end points of the ray trajectory; τ, the local torsion of the ray, is different from zero when the trajectory is not confined to a plane. Using the Frenet-Serret [1] formula:

dB/ds = -τ U,                                      [24]

taking the dot product with U on both sides of equation [24] and replacing the value of τ in [23], one gets:

Φ = -∫_A^B U · dB.                                 [25]

In order to evaluate the polarization rotation of the ray propagating from A to B with [25], a finite difference approximation B_n - B_{n-1} is used for the differential dB, where the subscripts refer to the integration step. The final discrete formula for the polarization angle reads:

Φ = -Σ_{n=1}^{N} U_n · (B_n - B_{n-1}),            [26]

where N is the number of integration steps between A and B.

For illustration, we treat two 3D examples. In the first case, we take a refractive index model consisting of a refractive layer of finite length along the TX-RX line. The linear extent of the layer is taken respectively as 5, 10, 15, 20 and 25 kms. Fig.7 shows the dramatic effect the extent has on the ray path. Incidentally, the refractive index model along the height is taken as the same Webster model as before and the ray launching is made in the vertical plane. In the second case, we take a refractive index model given by a Webster profile along z and a profile p_y(y) given by [5]. Moreover, we take an arbitrary 3D launching direction. The resulting 3D ray trajectory for the selected parameters listed in the corresponding caption is displayed in Fig.8.
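The pattern, the off-axis angle and the discrete polarization sum can be sketched as follows (illustrative code; the sign of the rotation is fixed by the convention we chose when reconstructing [23]-[26]):

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def pattern(theta, a, lam):
    """Normalised circular-aperture pattern f(theta) = |2 J1(u)/u|,
    u = beta * a * sin(theta), beta = 2*pi/lam (cf. [21])."""
    u = (2 * np.pi / lam) * a * np.sin(theta)
    u = np.where(np.abs(u) < 1e-12, 1e-12, u)  # avoid 0/0 at boresight
    return np.abs(2 * j1(u) / u)

def power_db(theta, a, lam):
    """Relative power in dB: 20 log10 f(theta)."""
    return 20 * np.log10(pattern(theta, a, lam))

def off_axis_angle(beta, gamma, beta_p, gamma_p):
    """Angle between the main-lobe axis (beta, gamma) and a ray direction
    (beta_p, gamma_p); elevation/azimuth pairs in radians (cf. [22])."""
    c = (np.sin(beta) * np.sin(beta_p)
         + np.cos(beta) * np.cos(beta_p) * np.cos(gamma - gamma_p))
    return np.arccos(np.clip(c, -1.0, 1.0))

def polarization_rotation(U_list, B_list):
    """Discrete polarization rotation (cf. [26]): minus the sum of
    U_n . (B_n - B_{n-1}) along the trajectory, with U the normal and
    B the binormal at each integration step."""
    return -sum(np.dot(U_list[n], B_list[n] - B_list[n - 1])
                for n in range(1, len(B_list)))
```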
VI. CONCLUSIONS AND FUTURE DEVELOPMENTS
We intend to use this technique to study the dynamics of microwave radio signals controlled by unstable atmospheric layers. The instabilities cause short error bursts lasting from many tens of microseconds to a few milliseconds [10]. Since the error bursts have a detrimental impact on communication networks [11], future digital radio systems should be made immune to the radio propagation degradations causing them. In order to develop defense strategies against the error bursts caused by atmospheric propagation instabilities, the physical characteristics of the instabilities have to be well understood. This 3D ray-tracing technique will be used to study the effects of dynamically changing atmospheric layers of limited size on microwave radio signals received simultaneously by a few parabolic antennas [12]. A propagation model simulating the recorded dynamics of received radio signals [10] will not only help in understanding the physical causes of the error bursts, but will also be used in the computer optimization of antenna designs capable of minimizing the frequency of occurrence of the propagation caused error bursts. Highly accurate numerical techniques are required since small fluctuations of the atmospheric conditions are believed to be responsible for the flat phase fluctuations impairing the digital demodulation of the received microwave radio signals.
APPENDIX A

Consider a linear system of two coupled first order ODEs, dy/dx = A y, where A is a constant 2x2 matrix with eigenvalues λ_1 and λ_2. The solution of the system is:

y(x) = C_1 v_1 e^{λ_1 x} + C_2 v_2 e^{λ_2 x},

where C_1 and C_2 are constants determined by the initial condition at x=0 and v_1, v_2 are the eigenvectors of A. In order to conform to our notation of Section 3, we define a column vector y whose components are y_1, y_2 and write the system as dy/dx = f(y) = A y. The eigenvalues of the Jacobian of the system are solutions of:

det(A - λ I) = 0,

where I is the (2x2) unit matrix; they are nothing else than λ_1 and λ_2. If one picks λ_1 = -1, λ_2 = -1000 and chooses an explicit integration method, one finds the integration step should be smaller than 2/|λ_2|, which is 0.002. This is the origin of stiffness: even though the term exp(-1000 x) contributes almost nothing to the solution for x ≥ 0, its presence alone virtually stops the integration process.
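The mechanism can be reproduced with a few lines; the diagonal matrix below is an illustrative stand-in for the system of this appendix, chosen so that its eigenvalues are exactly -1 and -1000:

```python
import numpy as np

# Stiff 2x2 linear test system y' = A y with eigenvalues -1 and -1000.
A = np.array([[-1.0, 0.0],
              [0.0, -1000.0]])

def explicit_euler(y, s):
    return y + s * (A @ y)

def implicit_euler(y, s):
    # Solve (I - s A) y_next = y; unconditionally stable for this system.
    return np.linalg.solve(np.eye(2) - s * A, y)

y_exp = np.array([1.0, 1.0])
y_imp = np.array([1.0, 1.0])
s = 0.003          # just above the explicit stability limit 2/|lambda_2|
for _ in range(200):
    y_exp = explicit_euler(y_exp, s)
    y_imp = implicit_euler(y_imp, s)
print(y_exp)       # the exp(-1000 x) mode blows up explicitly
print(y_imp)       # the implicit solution decays smoothly toward zero
```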
APPENDIX B
The geometry of propagation is shown in Fig.4. At any point along the ray trajectory the tangent vector T makes the angle ψ with the local horizontal. When the ray propagates between two nearby locations, one may write:

R_2 = R_1 + ds sin ψ_1,                            [36]

where ds = ||R_2 - R_1||. The radial distance R_1 (resp. R_2) is taken from the center of the Earth. The angle δθ between the two radial directions may be found by inspection:

sin δθ = (ds cos ψ_1)/R_2,                         [37]

which can be approximated by:

δθ ≈ (ds cos ψ_1)/R_1.                             [38]

In order to find the relation between the angles ψ_1 and ψ_2, we use the relation defining the derivative of T, dT/ds = U/ρ, in a discrete form:

T_2 = T_1 + (ds/ρ_1) U_1.                          [39]

Taking the scalar product with U_1 on both sides of the above, one gets:

U_1 · T_2 = ds/ρ_1.                                [40]

The inspection of Fig.4 provides the angle between T_2 and U_1:

U_1 · T_2 = sin(ψ_2 - ψ_1 - δθ) ≈ ψ_2 - ψ_1 - δθ.  [41]

Using the above result gives the relation sought: ψ_2 = ψ_1 + δθ + ds/ρ_1.

[11] "...chronous Digital Hierarchy compatible digital radio systems", presented at ICC 93 (Geneva).
[12] C. Tannous and J. Nigrin, "Ray-tracing studies in a perturbed atmosphere: II - The boundary value problem" (to be published).

Fig. 2: ... Equations [10] are used along with model [4-b] for a perturbed atmosphere, N = 300 + kh + (Δn/π) atan(12.63 (h - h_0)/Δh), with the same parameters as those given in Table 1 of reference 2: k = -39, Δn = -20 (both in N units), h_0 = 175 meters, Δh = 100 meters; the transmitter height is 125 meters and the TX-RX separation is 60 kms.

Fig. 5: ... cases is 0.2 degrees in the vertical plane and the model considered for the refracting layer is the same as in Figure 2. The fourth and sixth order results are virtually identical.

Fig. 6: Comparative study of the behavior of the step ratio versus step number for the Euler (1st order), the Runge-Kutta (4-th order) and the Butcher (6-th order) methods when the step is fixed to its starting value. Ideally, this ratio should always be about 2. In the first order case, the bound is violated very rapidly (upper curve), whereas it is respected until almost the end of the trajectory in the 4-th (long dashed curve) and 6-th order (short dashed curve) cases. The tolerance is 1 mm and the step used is one hundredth of the TX-RX distance.

Fig. 7: GRK (implicit, 4-th order, 3D) results for the ray trajectories when the extent of the layer is a variable. Starting with a launching angle of 0.2 degrees in the vertical plane, the layer spans, initially, the entire hop of 60 kms (lowest curve). Moving upward from the next lower curve, the layer extent (along the TX-RX line) is from 5 to 25 kms, then 5 to 20 kms, 5 to 15 kms and finally 5 to 10 kms. In all cases, the refracting layer model is the same as in Figure 2.
Managing the Interference for Down-Link in LTE Using Fractional Frequency Reuse
Long Term Evolution has developed a new radio technology called femtocell or Femto Base Station (FBS), which is well-suited to improve cellular network capacity and mobile coverage in indoor user areas. Providing additional capacity and coverage expansion through FBSs could lead to large interference in a cellular radio communication network. In this paper, we propose an efficient resource allocation scheme based on Fractional Frequency Reuse (FFR) for interference mitigation, where the entire spectrum is shared among network entities. The FFR mechanism aims to reduce co-tier and cross-tier downlink interference by allocating non-overlapping sets of bands to the user equipment at different geographic locations. The main purpose of this work is to compare two main types of FFR schemes, respectively Strict FFR and Soft Frequency Reuse, with the proposed scheme. The three types of FFR schemes were explained and evaluated with Monte-Carlo simulation based on some performance metrics, namely sum-rate, spectral efficiency, and outage probability. Simulation results showed that the impacts of the proposed scheme are significantly high in comparison with the two other methods. The proposed scheme proved to enhance spectral efficiency, reduce the outage probability, and increase the sum rate for all the users.
INTRODUCTION
Long Term Evolution (LTE) and LTE-Advanced give operators the potential to achieve higher peak data rates through systems with higher spectrum bandwidths. Work on LTE began in 2004, and an official work item began in 2006. A complete specification of LTE was developed in early 2009 (Krause). The initial deployment of LTE started in 2010 with release 8. LTE-Advanced, introduced with release 9 and beyond, started in 2011. According to the 3rd Generation Partnership Project (3GPP) specifications, LTE offers a significant improvement in spectral efficiency, latency, and multi-user flexibility compared to older mobile standards. It supports heterogeneous cellular networks, including macrocells or macro base stations (MBSs), picocells, Femto Base Stations (FBSs), and relays. The FBS was first introduced by the IEEE 802.16 SDD (System Description Document) to provide an advanced radio interface operating in licensed bands. Research showed that 66% of calls and over 85% of data services occur indoors. Some surveys showed that 43% of households and 34% of businesses experience poor indoor coverage problems (Cullen). An FBS is a very small base station that operates in a licensed spectrum to connect standard mobile devices to the service provider's network via broadband connections (such as Digital Subscriber Line (DSL), cable, or fiber). Small base stations can be put in a residential setting. Thus, the FBS allows the mobile operator to extend mobile network coverage into the home by using the consumer's internet connection; it can improve macrocell capacity and coverage simply and economically. Inefficient deployment of FBSs may lead to a degradation of the overall performance of the cellular system. One example of this performance degradation is coverage holes for indoor Macrocell User Equipments (MUEs) due to interfering transmissions by nearby FBSs. As FBSs are embedded inside a macrocell, both macro and FBSs should operate on a certain frequency. The operators need to specify the allocated frequency range for the macro and FBSs. This frequency allocation is a tedious job. A little mismanagement can lead to various levels of interference problems in a two-tier network.
The network topology of a cellular network changes when FBSs are added. Therefore, the most important challenge to the deployment of FBSs is the problem of interference. Interference could occur from Macrocell to FBS, FBS to Macrocell, or FBS to FBS. Many studies found that it is important to choose the location of FBSs carefully to have the greatest possible coverage area with the least number of FBS entities (Bennis et al., 2011). Using this approach leads to a reduction of interference to acceptable levels at a lower implementation cost. While many other studies considered transmission power as an interference mitigation technique, a dynamic power control algorithm was proposed by several researchers (Shin & Choi) to reduce the interference probability while maximizing indoor coverage availability. In contrast, a decentralized resource allocation for hybrid wireless networks was proposed by Chu et al. (2010). In this scheme, the available spectrum is divided into two separate classifications based on the time and frequency domains. All the spectral resources can be selected and utilized by the Macrocell. At the same time, only a subset of frequency bands is allowed to be selected randomly by the FBS when it wants to transmit.
In contrast, other approaches have been explored by Mahmud and Hamdi (2014). Several papers (Chang et al., 2009; Hassan & Assaad, 2009) focus on optimizing the FFR mechanism through the use of advanced methods such as graph theory (Chang et al., 2009) and convex optimization techniques (Hassan & Assaad, 2009) to maximize network performance. Additional work by Han et al. (2008) found an optimal resource allocation method to reduce Co-Channel Interference (CCI) and increase spectral efficiency. In this paper, a new frequency partitioning approach and sub-band allocation scheme is proposed to improve system performance and increase system capacity using FFR.
In the next section, we explain the challenges of mitigating interference in two-tier LTE femtocell systems and the basics of the FFR mechanism for interference management in the OFDMA-based LTE femtocell system. We then present a literature review of the main types of FFR methods. The proposed scheme is then presented, followed by the results of a comparative performance evaluation of the different FFR schemes. Finally, the conclusion is given.
INTERFERENCE MANAGEMENT AND FRACTIONAL FREQUENCY REUSE (FFR)
Due to radio resource limitations, a two-tier macrocell and FBS network has to share the frequency spectrum rather than splitting the frequency between tiers (Chu et al.). The sharing could lead to signal interference, especially in dense deployments of FBSs. This interference arises because of the duplication of resources in neighboring cells. It has the effect of degrading the service quality of the end-users. Hence, the two types of interference in two-tier Macrocell and FBS networks are: co-tier interference (Femto to Femto or macro to macro) and cross-tier interference (Femto to macro or macro to Femto). Figure 1 depicts two scenarios of downlink co-tier and cross-tier interference.
In co-tier interference, the interference occurs between elements in the same tier within a network. In this case, co-tier interference occurs between neighboring FBSs that belong to the same tier. As shown in Figure 1, both uplink (UL) and downlink (DL) interference exist. In UL interference, a Femto user equipment (FUE) interferes with another FBS, while in DL interference, an FBS interferes with another FUE.
Regarding cross-tier interference, this type of interference occurs between elements of the two tiers of the network. The interference occurs between macrocells and FBSs in different tiers, as shown in Figure 1. In UL interference, an FUE close to the macrocell base station (MBS) interferes with it, rather than an MUE, while in DL interference, an MBS close to an FUE interferes with it, rather than an FBS. The maximization of network throughput under cross-tier and co-tier interference has become a big challenge. Researchers have presented different schemes for interference mitigation in two-tier macrocell and FBS networks. These schemes considered uplink or downlink transmission. A resource allocation scheme aiming to reduce co-tier interference is discussed and evaluated by Madan et al. (2010). Chandrasekhar et al. (2009) developed an algorithm based on power control to mitigate cross-tier interference and increase system performance. Much work has been done based on cognitive radio technology, shared spectrum usage, partitioned spectrum usage, and modified FFR schemes to mitigate interference in the cellular network.
In this paper, we evaluate the main types of FFR schemes proposed for mitigating interference in the two-tier femtocell network, namely Soft FFR and Strict FFR, as well as a new FFR scheme, which is referred to as the proposed scheme. We perform a broad comparison of all these schemes, considering some performance metrics, including sum-rate, spectral efficiency, and outage probability in a two-tier LTE femtocell system.
Fractional Frequency Reuse
FFR is an interference management technique to overcome the CCI and inter-cell interference (ICI) problems. The cell is logically divided according to distance into inner and outer regions, and the different regions are allocated different frequency reuse factors (FRF). Hence, the users are differentiated as cell-center users and cell-edge ones. The cell-center region uses universal frequency reuse. However, the cell-edge zone is divided into N FFR regions, and different frequency sub-bands are allocated to each region. By doing this, the neighboring cells' edges operate at different sets of sub-bands. This technique helps to mitigate cross-tier interference.
Soft Frequency Reuse
SFR has been established as a standard technique to control CCI in cellular systems. The cell area is divided into two regions: a central region where the major frequency band is available, and a cell-edge area where only a small fraction of the spectrum is available. The spectrum dedicated to the cell edge may also be used in the central region if it is not being used at the cell edge. A lack of spectrum at the cell edge may result in much-reduced capacity in that region. This is overcome by allocating high-power carriers to the users in this region, thus improving the SINR and the capacity. Figure 2(b) represents the SFR deployment with Reuse-3 on the cell-edge zone.
SFR divides the available spectrum into three sub-bands, f1, f2, and f3, one of which is assigned to the cell-edge zone. The cell-edge regions are confined to utilize only the cell-edge band. The cell-center users have access to the bands of adjacent cell edges; consequently, the center zone is allowed to use the same sub-bands used by adjacent cell-edge users. For example, if sub-band f1 is assigned to the cell-edge zone, then the cell-center zone is allowed to use sub-bands f2 and f3. Therefore, SFR is more bandwidth efficient than Strict FFR. According to the FBS location in the macrocell coverage area, FBSs can be divided into two main categories, namely center FBSs and edge FBSs. FBSs in any center zone are not allowed to use sub-bands allocated to the cell-edge zone of the same cell. Center FBSs are allowed to use only one sub-band, whereas edge FBSs will operate on the other two sub-bands. For instance, if sub-band f1 is allocated to the edge zone, then edge FBSs will use either sub-band f2 or f3.
SFR allows the base station to use the same sub-bands used by the adjacent cell-edge users to serve the cell-center users. The dominant interfering downlinks originate from the tier-1 macrocells. Consequently, cell-center users and cell-edge users will experience interference from the first tier. Therefore, a power control factor β is introduced for cell-edge users to reduce inter-cell interference. To accomplish this, the transmit power will be P_t for users located in the center zone and β P_t for users located in the edge zone, where β ≥ 1. This significantly reduces cross-tier interference, except for users near the boundary of the center and the edge zones. However, co-tier interference would be reduced due to the low FBS power.
Strict FFR
Strict FFR is an interference management technique. It splits one cell into two concentric regions according to distance and allocates a different FRF to each region. The inner sub-cell uses universal frequency reuse. The outer sub-cell is divided into N FFR regions, and a separate frequency is allocated to each region. By doing this, the neighboring cells' edges operate on different sets of sub-channels. This technique helps in the mitigation of cross-tier interference. In Strict FFR, the available bandwidth is divided into two parts; one part, denoted by f1, is assigned to the center zone, whereas the second part is divided equally into several sub-bands according to the FRF of the edge zone. Therefore, the total number of sub-bands equals N+1. Figure 2(a) represents the Strict FFR deployment with a cell-edge reuse factor of N = 3. A reuse factor of 1 is reserved for the center zone, and with N = 3, sub-bands f2, f3, f4 are applied to the edge zone. The interference between inner and outer users is mitigated, since the cell-edge users do not share any spectrum with cell-center users. In addition, in Strict FFR, FBSs can be divided into two main categories, center FBSs and edge FBSs. FBSs are allowed to use two sub-bands per cell. Center FBSs will operate on the same sub-bands that are allocated to the cell-edge zone. Likewise, edge FBSs occupy the same sub-band used by macrocells in the center zone. For example, if sub-band f2 is allocated to the edge zone, edge FBSs will operate on sub-band f1. Only one sub-band is selected by edge FBSs, and three sub-bands are excluded to mitigate cross-tier interference.
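A minimal sketch of this assignment logic, in our own encoding (the band names and data structures are ours, not the paper's):

```python
# Strict FFR with N = 3: f1 for the macro center zone, f2-f4 for the
# three edge sectors; FBSs reuse the band of the "other" tier.
STRICT_FFR = {
    "macro_center": {"f1"},
    "macro_edge":   {"f2", "f3", "f4"},  # one sub-band per edge sector
}

def fbs_subbands(zone, macro_edge_band):
    """Bands an FBS may use under Strict FFR: center FBSs reuse the edge
    band of their own cell, edge FBSs reuse the macro center band f1."""
    if zone == "center":
        return {macro_edge_band}
    return {"f1"}

print(fbs_subbands("edge", "f2"))   # -> {'f1'}, as in the text's example
```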
However, co-tier interference between the FBSs may become severe, especially in the edge zone, since all the adjacent cell-edge FBSs use a limited number of sub-bands.
Additionally, cross-tier interference would be severe near the transition areas of the center and edge zones in a macrocell. This frequency allocation scenario between Macrocell and FBS entities comes at the expense of network spectral efficiency.
PROPOSED SCHEME
We propose frequency allocation schemes for hybrid macrocell-Femto networks by exploiting popular macrocell frequency allocation schemes. Our proposed allocation schemes enhance the coexistence of both types of networks. These proposed allocation schemes are assumed to be fixed, as they require no coordination and no signaling between macrocells and FBSs. We compare the different proposed schemes under different FBS deployment densities using metrics such as spectral efficiency, outage probability, and average network sum-rate.
In this study, we consider a network that includes a Macrocell and FBSs in the LTE system. The Macrocell is located at the center, and MUEs are uniformly distributed in the Macrocell. We also assume that a large number of FBSs are deployed and configured. The number of FBSs varies between 0 and 40, and FBSs are uniformly distributed in the Macrocell. FUEs can only be located inside the coverage area of an FBS. We finally assume that the available spectrum is shared between the Macrocell and the FBSs.
In the proposed FFR scheme, the coverage of the Macrocell is divided into two parts, namely the center zone and the edge zone, each containing three sectors. The center cells are denoted by Z1, Z2, Z3, and the edge cells are denoted by X1, X2, and X3. In order to achieve segmentation of the outer regions X1, X2, and X3, the Macrocell should use three sectorized antennas, each with a sector width of 2π/3.
The available spectrum band is separated into two equal parts. The first part is denoted by SA, and the other part is further divided into three subsets denoted by SB, SC, and SD. The center zone (Z1, Z2, and Z3) is assigned the sub-band SA with reuse factor 1, whereas in the edge zone, reuse factor 3 is used. The sub-bands SB, SC, and SD are applied in the X1, X2, and X3 regions, respectively. More specifically, as shown in Figure 4, when an FBS starts working, it estimates the received signal strength indication (RSSI) for all the sub-bands. If the RSSI value for sub-band SA is the highest, then the FBS is located in the center region. The FBS excludes not only the sub-band which is occupied by the Macrocell in the center zone, but also the one which is occupied by the Macrocell in the same sector. When the RSSI value for sub-band SA is not the highest, then the FBS is located in the edge zone. The FBS selects the sub-bands which are not occupied by the Macrocell in the same region. For instance, if an FBS is in sub-area X2, it can use only the sub-bands SA, SB, and SD, since the sub-band SC is used by the Macrocell. However, if the FBS is present in the center cell, then only sub-bands SB and SD can be used. Due to the typical features of OFDMA, the Macrocell is affected by ICI. FFR is used to mitigate that interference. To prevent interference from macrocells, FBSs utilize different sub-bands. The FBS reuses bands in the coverage of macrocells as much as possible. As an FBS has very small transmit power, the interference between macrocells and FBSs is considerably reduced. In order to increase the throughput of consumers in the edge region, a larger number of sub-bands is allocated to the FBSs in that region. In our scheme, with a decreased MBS coverage area, wider parts of the spectrum are available to select from. Therefore, co-tier interference is significantly decreased in comparison with other schemes. Additionally, cross-tier interference to an FUE may only be possible in the transition region around the cell boundary or from an MUE in the center-zone sub-area. The cross-tier interference is limited to only 2 adjacent macrocells. The received SINR of an MUE m on sub-carrier k is given by

SINR_{m,k} = P_{x,k} G_{m,x,k} / (N_o Δf + Σ_{x'} P_{x',k} G_{m,x',k} + Σ_{f∈F} P_{f,k} G_{m,f,k}),   (1)

where P_{x,k} and P_{x',k} are the transmit powers of the serving macrocell x and the neighboring macrocell x', respectively, on subcarrier k. The set of x' represents all the interfering base stations, i.e., base stations that are using the same sub-band as user m, which depends on the location of the MUEs and the specific FFR scheme used. F is the set of interfering FBSs. Here, the adjacent FBSs are defined as those FBSs which are inside a circular area of radius 60 m centered at the location of MUE m. N_o is the noise power spectral density, and Δf represents the subcarrier spacing.
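The RSSI-driven selection described above can be sketched as follows; the function name and data layout are our own illustration, not an interface from the paper:

```python
# 'rssi' maps sub-band name -> measured RSSI at the FBS (dBm);
# 'sector_band' is the edge sub-band used by the macrocell in this sector.
ALL_BANDS = {"SA", "SB", "SC", "SD"}

def select_fbs_bands(rssi, sector_band):
    if max(rssi, key=rssi.get) == "SA":
        # Center region: exclude the macro center band SA and the
        # macro edge band of the same sector.
        return ALL_BANDS - {"SA", sector_band}
    # Edge region: exclude only the band the macrocell uses there.
    return ALL_BANDS - {sector_band}

# FBS in edge sub-area X2 (macro uses SC there) -> {SA, SB, SD}
print(select_fbs_bands({"SA": -80, "SB": -70, "SC": -60, "SD": -75}, "SC"))
```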
PROPOSED MODEL
G_{m,x,k} is the channel gain between macro user m and serving macrocell x on sub-carrier k, which is dominantly affected by path loss. The path loss for outdoor propagation is modeled as (Ho and Claussen, 2007):

PL_outdoor(d) = 28 + 35 log_10(d) dB,   (2)

where d is the distance from a base station to a user in meters. The channel gain G can be expressed as

G = 10^{-PL/10}.   (3)

Similarly, G_{m,f,k} is affected by both indoor and outdoor path loss. In this case, d would be the Euclidean distance between an FBS f and the edge of the indoor wall in the direction of MUE m. The path loss for indoor propagation is modeled as

PL_indoor(d) = 38.5 + 20 log_10(d) + L_walls dB,   (4)

where the L_walls values are 7, 10, and 15 dB for light internal, internal, and external walls, respectively (Ho & Claussen). After the wall, the path loss is based on the outdoor path-loss model. The practical capacity of macro user m on sub-carrier k can be given by

C_{m,k} = Δf · log_2(1 + λ SINR_{m,k}),   (5)

where Δf represents the sub-carrier spacing and λ is a constant referring to the target bit error rate (BER), with λ = -1.5 / ln(5 BER).
For an FUE communicating with its FBS on sub-band k, the received SINR of the FUE on sub-band k is similarly given by

SINR_{f,k} = P_{f,k} G_{f,f,k} / (N_o Δf + Σ_{f'∈F'} P_{f',k} G_{f,f',k} + Σ_{x∈X} P_{x,k} G_{f,x,k}),   (6)

where F' is the set of all interfering (or adjacent) FBSs and X is the set of interfering MBSs. Here, G_{f,f,k} represents the indoor channel gain for distance d between the FUE and its serving FBS. On the other hand, G_{f,x,k} corresponds to both the indoor and outdoor path-loss models. Since the interfering signal is coming from the MBS, we include fading in the denominator. Due to the fact that the transmission radius of an interfering FBS is small, fading is not considered for indoor propagation. Again, note that only FBSs within a certain range are considered as interference sources.
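A sketch of the link-budget chain (2)-(6) follows; the LTE-style defaults for the noise density (-174 dBm/Hz) and the 15 kHz sub-carrier spacing are our assumptions, since the paper's Table 1 values are not reproduced here:

```python
import numpy as np

def pl_outdoor(d):
    """Outdoor path loss (2): 28 + 35 log10(d) dB, d in metres."""
    return 28.0 + 35.0 * np.log10(d)

def pl_indoor(d, l_walls=10.0):
    """Indoor path loss (4): 38.5 + 20 log10(d) + L_walls dB."""
    return 38.5 + 20.0 * np.log10(d) + l_walls

def gain(pl_db):
    """Channel gain (3) from a path loss expressed in dB."""
    return 10.0 ** (-pl_db / 10.0)

def sinr(p_serving, g_serving, interferers,
         n0=10 ** (-174 / 10) / 1000, df=15e3):
    """SINR in the form of (1)/(6); interferers is a list of
    (power, gain) pairs; n0 and df are assumed LTE conventions."""
    interference = sum(p * g for p, g in interferers)
    return p_serving * g_serving / (n0 * df + interference)

def capacity(sinr_val, df=15e3, ber=1e-6):
    """Practical capacity (5): df * log2(1 + lambda * SINR),
    lambda = -1.5 / ln(5 * BER)."""
    lam = -1.5 / np.log(5 * ber)
    return df * np.log2(1 + lam * sinr_val)

# MUE 300 m from a 20 W MBS with one 100 mW FBS interferer 60 m away
# (all numbers are illustrative, not the paper's Table 1 values).
s = sinr(20.0, gain(pl_outdoor(300.0)),
         [(0.1, gain(pl_indoor(60.0)))])
print(capacity(s))
```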
For evaluation, the average network sum-rate, spectral efficiency, and outage probability are considered.
For the sum-rate, the maximum achievable capacity of an FUE f on sub-carrier k is, by analogy with (5),

C_{f,k} = Δf · log_2(1 + λ SINR_{f,k}).   (7)

Accordingly, the average network sum-rate R_avg is defined as

R_avg = Σ_m Σ_k a_{m,k} C_{m,k} + Σ_f Σ_k b_{f,k} C_{f,k},   (8)

where a_{m,k} and b_{f,k} are binary sub-band assignment indicators. When a_{m,k} = 1, the kth sub-band is assigned to the mth user belonging to the MUEs; when b_{f,k} = 1, the kth sub-band is assigned to the fth user belonging to the FUEs.
The spectral efficiency is defined as the average data rate per unit spectrum. With the per-link spectral efficiencies defined as S_{m,k} = log_2(1 + SINR_{m,k}) and S_{f,k} = log_2(1 + SINR_{f,k}) for MUE m and FUE f, respectively, the average network spectral efficiency S is given by averaging these quantities over all assigned users and sub-carriers.
Outage probability affects network performance: it affects the data rate and throughput of the network. To find the outage probability, we need a threshold value. If the outage probability is small, then the throughput increases; if the throughput increases, then the data rate increases. The outage probability P_out is given by (Lee et al., 2010)

P_out = (Σ_m Σ_k δ_{m,k}) / (M K),   (9)

where δ_{m,k} indicates a failed sub-carrier assignment for user m on sub-carrier k, M is the number of users and K the number of sub-carriers. If δ_{m,k} = 1, then the SINR of that sub-carrier is under the SINR threshold (SINR_{m,k} < SINR_threshold).
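A direct rendering of this definition over a matrix of SINR samples might look like this (the exponential SINR samples below are synthetic placeholders, not simulation data from the paper):

```python
import numpy as np

def outage_probability(sinr_matrix, threshold):
    """Outage probability over an M x K SINR matrix (users x sub-carriers):
    the fraction of assignments whose SINR falls below the threshold,
    i.e. the mean of the delta_{m,k} indicators."""
    delta = sinr_matrix < threshold
    return delta.mean()

rng = np.random.default_rng(0)
sinrs = rng.exponential(scale=10.0, size=(30, 48))  # toy SINR samples
print(outage_probability(sinrs, threshold=1.0))     # ~P(SINR < 0 dB)
```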
SIMULATION PARAMETERS
Simulations are done with the assumption that the network consists of 7 macrocells. In order to provide adequate variation in the simulation environment, we varied the number of FBSs from 0 to 40. The simulated scenario is depicted in Figure 5, where a regular layout with 7 sites is designed. The dots in Figure 5 are the MUEs (red) and the FUEs (blue), which are assumed to be randomly distributed, and the different colors indicate the different sectors. We assume that the FBS cells formed are non-hexagonal and operate in closed access mode (only registered FUE devices are able to access the FBS).
All the network parameters (in Table 1) remain constant during a simulation run. To make the results accurate, the number of FBSs is increased from 0 to 40 in each simulation run. The simulation parameters used are summarized in Table 1.
When FFR is applied, the Macrocell uses a part of the frequency bands for the central zone, and the rest is given to the edge region. A two-dimensional antenna pattern is considered in the simulation. Both omnidirectional and sectorized antennas are set up on the MBS. Each station uses three sectorized antennas with 22 W transmit power for the edge region and an omnidirectional antenna with 15 W transmit power for the center zone. For all the FBSs, the transmit power is 100 mW.

Figure 5. The considered network scenario
SIMULATION RESULTS
Allocating more resources to edge users than to interior users is optimal in terms of sum-rate maximization. Therefore, it is intuitive that, if users are distributed uniformly, a smaller interior radius equates to classifying more users as cell-edge users, which provides them with the benefits of interference avoidance via FFR. Also, with the proposed scheme, the usable number of sub-bands per unit area increases when compared to the other FFR schemes, and consequently the spectral efficiency increases. As shown in Figure 6, the average spectral efficiency for both edge UEs and all UEs improves if more FBSs are deployed within the network. The average spectral efficiency of all UEs is significantly higher than that of edge UEs. In resource allocation terms, focusing on resource efficiency means optimizing the peak throughput of the cell, so the ability to allocate a sufficient number of sub-bands to users with high rate requirements can be achieved. Note that in the figure, the comparison of the spectral efficiency for the three schemes reflects a higher reuse factor being used in the cell-edge zone than in the center zone. Results showed that, among the three schemes, for cell-edge-zone UEs, the proposed scheme has higher spectral efficiency gains, coming at 41% and 49% over the Strict FFR and Soft FFR schemes, respectively. In addition, for UEs located in the center and edge zones, the average spectral-efficiency gain of the proposed scheme increases as the number of FBSs grows per macrocell coverage area, coming at 43% and 51% when compared to the Strict FFR and Soft FFR schemes, respectively.
The average sum rates of the network for the different FFR schemes are given in Figure 7. For the proposed scheme, as the number of FBSs increases, the overall average sum-rate grows, since frequency bands are reused many times. This is due to decreasing cross-tier and co-tier interference, whereby the SINR achieved by the proposed scheme is much higher than in the other schemes. However, for MUEs, the sum-rate becomes worse because of the interference introduced by FBSs. Among the three frequency reuse schemes, the proposed FFR offers the best overall average sum-rate, approximately 20-30 kbps higher than Soft FFR. In contrast, Strict FFR provides the worst performance, where the gap between Strict FFR and Soft FFR becomes wider as the number of FBSs increases. For MUEs, Strict FFR consistently has the worst performance.
However, Soft FFR outperforms the proposed FFR, as shown in Figure 8. Figure 9 shows the outage probability for the three types of FFR schemes in the two-tier LTE system. For a given SINR threshold, the proposed scheme exhibits a lower outage probability than the schemes under comparison. This is because, for the proposed scheme, inter-cell interference on cell-edge UEs is limited to two neighboring MBSs, while for the Soft FFR scheme, inter-cell interference is caused by 6 MBSs. When the SINR threshold increases, the outage probability becomes higher and approaches that of the other schemes. This means that the proposed scheme supports more users efficiently, regardless of the interference. Soft FFR and Strict FFR have almost the same outage probability, higher than the outage probability of the proposed FFR. The gap closes when the SINR threshold is higher. It may also be noted that the proposed scheme decreases the outage probability even at a lower SINR threshold. The signal-to-interference-plus-noise ratio with respect to the outage probability shows that the link performance is better.
CONCLUSIONS
LTE networks have become a rapidly growing technology in the 4th Generation (4G) cellular system, due to their high performance with respect to data rate, delay, latency, spectral efficiency, and large coverage. However, they suffer from the ICI problem, especially for cell-edge users. Different methods have been implemented to mitigate this type of interference. Interference mitigation using various fractional frequency reuse schemes is addressed in this work. We propose an FFR technique to mitigate inter-cell interference in the LTE femtocell system using fractional frequency reuse. Simulation results confirm that the proposed scheme is effective against co-tier and cross-tier interference in two-tier macrocell and Femto networks. The proposed scheme proved to enhance spectral efficiency, reduce the outage probability, and increase the sum rate for all users.
AUTHORS' NOTE
The author(s) declare(s) that there is no conflict of interest regarding the publication of this article. The authors hereby confirm that the data and the paper are free of plagiarism.
Playworlds and Narratives as a Tool of Developmental Early Childhood Education
We briefly introduce some main ideas of a project of the scientific research collective "School" (Shkola) led by academician V.V. Davydov. The collective elaborated a new project, the "Concept of Preschool Education" [9], that would better meet the developmental and educational needs of young children and create the basis for learning activity at school. The project has inspired the development of playworld pedagogy in Sweden and Finland. Now, 30 years later, attempts to design systems of developmental early childhood education try to concretize central concepts of Davydov's project. This article presents an interpretation and elaboration of the main ideas of the project in playworld pedagogy developed in Scandinavian early childhood education. We propose a systematic transition from joint adult-children play to independent, children-initiated play. Children's personality development presupposes an esthetic reaction and a contradictory unity of affect and intellect in narrative role play. We have concluded that present attempts to design new developmental early childhood education programs cannot forget the ideas of the collective from the 1990's.
Introduction
Theoretical elaboration of the problems of developmental early education started in the 1980's under the direction of V.V. Davydov. He organized a scientific research collective, "School", of 30 people for preparing a modern concept of early childhood education. The project plan of the "Concept of early childhood education" (Konceptsia doshkol'nogo vospitania) was published in 1988. A summary of the ideas and the work process was published two years later [1]. The new approach towards early childhood education formulated by V.V. Davydov's research team stimulated and initiated three projects in the 1990's: in Russia, Sweden and Finland. The projects in the three countries share an individually interpreted common cultural-historical framework, and each of them tries to solve local problems of early childhood education. Described shortly, the differences of emphasis are: creative imagination and general development in Russia, aesthetics of play in Sweden, and narrative play and learning in Finland. The project in Russia was partly motivated by the political and ideological changes, but also by the need to promote a developmentally appropriate approach and curriculum for early age children from 1 to 6 years old. The creators of the El'konin-Davydov system of school education emphasized that the program for primary school is not suitable for the education of younger children (5-6-year-old children) in preschool classes [12].
Developmental preschool education should focus on the preconditions of theoretical thinking and substantial generalizations. What are these preconditions? The concept presents the idea of future-oriented early childhood education, which focuses on the personality development of every child. V.V. Davydov interpreted personality development as intertwined with the creative imagination of the child [11]. A general requirement was the creation of a "children's world" in the preschool institution. Developmental early childhood education should introduce basic human values to children. The project plan emphasizes the parallel development of universal human abilities and the individual differences of all children. Joint play of adults and children was the main method of introducing human values to children. The project inspired two experimental early childhood education projects in Scandinavia in the 1990's. The first one focuses on the development of children's esthetic imagination in play [30] and the second on narrative learning in a playworld environment [24]. The first one was carried out in Sweden at the University of Karlstad and the second in Finland at the Kajaani university consortium.
Lindqvist's aesthetics of play was based on Vygotsky's "Psychology of Art" and his ideas on the development of play and imagination [38; 39; 42]. Both projects integrated the creative drama tradition of play guidance in early childhood education with Vygotsky's cultural-historical approach. The projects elaborated in cooperation a methodological approach starting from tales and stories as a framework for developing joint playworlds. The aim of introducing human values presented in tales and stories in esthetic form was to stimulate children's self-initiated play activity. Following the idea of the project of Davydov's team, adult providers participated in play as partners.
Several versions of playworlds have been designed and carried out in Sweden and Finland over thirty years. We have distinguished the following playworld types constructed in Finland: 1) imaginary playworlds developed by children independently (long-term peer play) [26]; 2) playworlds aiming at children's personality development (emphasis on moral issues) [19]; 3) narrative playworlds aiming at child development and creativity [3; 5]; 4) playworlds preparing the transition to school learning [21]; and 5) playworlds as learning environments for school subjects [25]. Playworld pedagogy has been integrated into master's degree studies in early childhood teacher education at the university level, and further education studies in playworld pedagogy are organized for in-service early childhood educators [22]. All playworld types can be understood as attempts to influence child development and learning.
The problem of analyzing play and development into units
There are several interpretations in the history of the cultural-historical approach of the relation between play and development. In Elkonin's [14] classic periodization model, the continuity of stages between leading activity types was explained with the help of the division of each stage into two functional parts: motivational and practical-technical. The motivational function in play was associated with a new type of human relations. A different idea about the character of leading activity in cultural development was presented in the elaboration of the general stage model by Slobodchikov and Tsukerman [36]. Vygotsky's general genetic law was taken as the basis of periodization: after the socio-cultural formation of new collective abilities starts the individual appropriation of psychological states and processes (internalization). It was supposed that different contradictions guide children's developmental efforts of attaining something new collectively or individually. Additionally, the products of leading activities in this model were interpreted using Erikson's [16] idea that the critical contradictions at each stage can never be finally resolved in a person's lifetime [44].
Vygotsky did not elaborate in his play lecture the relation between play and child development in detail. But he proposed that narrative role play (sjuzhetno-rolevaja igra) creates the zone of proximal development. The zone was defined in terms of the future challenges of children's psychological development. Instead of joint problem solving with adults or competent peers, here are listed some future potentials whose bases are formed in role play: "Action in the imaginative sphere, in imaginary situation, the creation of voluntary intentions and the formation of real-life plans, and volitional motives - all appear in play and make it the highest level of preschool development" [39, p. 96]. Compared to his other definition of the zone in problem solving situations, these domains have a longer time perspective and another systemic character, which requires elaboration of the idea of a unit. It might be better to name this zone long-term instead of proximal.
How are these phenomena created in the social relations of play and internalized as psychological new formations in the individual mind? Vygotsky's explanatory sketch starts from the genetic contradiction of play between the visual and sense fields and moves to play rules, which adopt strong affective power "forcing" the child to follow them, because the stronger affect gets its power from the emotional satisfaction play brings (Spinoza). What kind of developmental unit might Vygotsky have had in mind in this analysis? The kinship of emotional reactions in play and art forms, or play as a source of emotional reaction [30], encourages us to search for Vygotsky's analytic unit in esthetic-emotional reactions in the arts.
We may conclude that the unit of play would be analyzed using the genetic contradiction of play between the visual field and the sense field. The unit of development has, in Vygotsky's elaboration, a contradictory character. Analysis into units "must find holistic characteristics of the whole in which they are presented in a contradictory form and with help of which appearing concrete questions are tried to be solved" [40, p. 16]. In Vygotsky's play analysis we have two alternative candidates for the unit: 1) the genetic contradiction between the visual field and the sense field, or 2) affective movement in the sense field carried out 'as if' with realistic objects. Vygotsky looks for a holistic unit of verbal thinking and ends up with word meaning as the internal side of a word. Where are social relations and the co-construction of sense meaning?
A fresh attempt to solve the genetic contradiction of the play unit was made by Kravtsov and Kravtsova [27]. They proposed two simultaneous positions of the subject of play: 'outside play' and 'in play'. 'Outside play' would be visible play behavior in front of other players, while 'inside play' the child acts in the sense field of his/her imagination. The authors use the coincidence of the two positions as the criterion of play: if the positions do not coincide, the activity is not genuine play. They write: "On the basis of our analysis equal parallel positions of the subject in play activity and outside it will be the criterion of play activity" [27, p. 52]. The analysis of the two positions of the subject becomes more complicated if we apply it to the collective play of several children or a joint playworld of adults and children.
In our experimental joint playworlds of adults and children we create collective play. In vertically integrated groups (4-8 years old) preparing children for school transitions, about 30 children all participate in play with 3-4 adults (the basic team is an elementary grade teacher, a day care teacher, and a helper). Collective play based on carefully selected tales and stories starts from a problem or obstacle in the story line dramatized for the children during the playworld session.
In "Rumpeltiltskin" playworld the king visited children's class. He proudly demonstrated his gold-broidered cloak and other golden symbols of the majesty. He made a comment: "I am wondering why my wife has stopped to spin golden thread. This was the reason I took her to my wife. Perhaps her fingers are sore." Children slipped the truth: "She did not spin golden thread from straw. It was that creature "Rumpeltiltskin". The king was stunned and cried: "I'll throw her to the jail if she has lied to me! But she is the mother of my daugh-
ter. What shall I do? Children can you help me and propose what I should do? Write to me!"
In children's self-initiated play there are often 10 to 15 participants. Each player 'in role' participates in a role character and is aware that he/she is directing the character. It is possible to interpret that the me-subject is the final object of play in the social network of players. Participants have to estimate the others' sense fields on the basis of visible play actions and give feedback through their own role behavior. We have supposed the existence of two layers of 'inside' play: on the level of collective and individual subjects. This is visible in the joint play of adults and children participating in narrative play adventures. Analysis of different narrative play episodes with the participation of several children and adults revealed that, in spite of the mutually agreed theme and active participation in the construction of play events, each participating child develops his/her own play script. For example, four children decided to build a ship and sail in search of the pirates who stole the king's crown. The children start together to build the ship and sail to the sea. As soon as the ship leaves the port, each child finds a space and starts developing an own play 'subtheme'. A 4.10-year-old girl is putting her baby to sleep in a quiet corner, a 5-year-old boy starts repairing small cars on the deck of the ship, another boy, 3.4 years old, becomes a sea policeman, and a 4.6-year-old girl starts preparing a soup in a big kettle on the deck. There is little interaction between the children, and they all develop separate scripts, but as soon as the captain (a 5-year-old boy in role) announces that he can see a pirate ship approaching, all the children come together and start developing a joint play event again.
We argue that both levels of subjects, individual and collective, have to be included in the analysis and construction of play activity. This means that we have to enlarge the unit in the analysis of play. In the Slobodchikov-Tsukerman [36] model of development, a unit covers three steps: 1) interactive social play; 2) individual internalization of psychological processes; and 3) the next interactive social activity. It might be difficult to decide when a collective subject of play has occupied an 'outside' position, but the involvement of all playworld participants can be analyzed from the mutual contacts between role characters [23; 35].
The problem of contradictory unity of affect and intellect in preschool play
The principle of the unity of affect and intellect was central in Vygotsky's theory of the psychological development of the child, but the argument was derived using general inclusive logic: "thinking and affect are parts of a unified whole - human consciousness" [41, p. 251]. The methodological challenge of studying and analyzing personality lies in its specific character as an object of study. A living object (personality) cannot be studied using the methods of the natural sciences, because they destroy the object by dividing it into elements. Personality as an object of study cannot be divided - personality as a whole is the unit and a living contradiction.
Vygotsky proposed that the relation of affect and intellect changes dynamically at different ages and that each step in the development of thinking has its corresponding step in the development of affect. This trajectory of unity is connected to growing consciousness and will. Vygotsky wrote: "Things do not change from the fact that we think about them, affect and functions connected to it are changing when we become conscious of them. They form another relation to consciousness and other affect. Accordingly, the relation to the whole and its unity changes" [41, p. 251]. For us this relation is important in play at preschool age. In cultural-historical play research a special function and role in play-based development has been ascribed to the affective-motivational domain of play [15].
It is important to keep in mind that the unity of affect and intellect in Vygotsky's analysis lies between the poles of a contradiction, which children have to solve in their play construction. Vygotsky himself gave a concrete example of such a contradiction in his play lecture: between the child's pleasure in play and the pain caused by illness. But this is an example of two affects, not of affect and intellect. An example of a contradiction between children's lack of understanding of an imaginary play situation and their affective state can be found in a research project of Zaporozhets' group [29]. Insufficient cognitive capacity for understanding the imaginary play situation influenced the activation of (affective) brain functions in some of the children in this study. The affective-motivational characteristics of the play situation did not work without comprehension of the total situation in this experimental study, and these children were not able to solve the contradiction between affect and intellect.
It seems that the contradiction between two poles (in-play - outside-play position; educator role: "mama" - pedagogue) lacks the dynamics of a Socratic contradiction, in which the contradiction produces a third alternative. Davydov [11] speculated about a possible driving contradiction in children's joint construction of narrative role play. The idea, or schema (zamysl), of a future, non-existing play is a whole without details. Its driving contradiction is between the idea and the structure of play (content and theme in El'konin's analysis). The idea resembles one of Davydov's criteria of personality development (the whole before the details). We have emphasized the likeness of children's play and art forms. They both use imagination, which combines emotion and cognition. In both, affects are experienced as if they were real ones, according to Vygotsky [38]. Vygotsky's explanation of the esthetic reaction has dual aspects: 1) a work of art always depends on a conflict between content and form, and the effect is achieved when the form destroys its content; 2) an "explosion" that destroys nervous energy. He writes: "Another peculiarity of art is that - while it generates opposing affects in us - it delays the motor expression of emotions (on account of the antithetic principle) and - by making opposite impulses collide - it destroys the affect of content and form, initiating an explosive discharge of nervous energy" [38, p. 206].
Attempts to solve theoretical and methodological problems
In recent decades, attempts to develop children's play in early childhood education have focused mainly on individual play skills and motivation, in spite of Vygotsky's general genetic law of development (from the interpsychological to the intrapsychological). Even rarer are organized joint play activities of adults and children in which adults are genuine play partners and children accept them as companions. The introduction of separate play planning sessions for children [2] or story-line planning [33] has not yielded lasting results in enhancing the quality of play and its developmental impact.
There are a few interesting experimental projects which have enlarged the "standard" approach to preschool play and early childhood education. Two examples demonstrate the character of this enlargement. The "Golden Key" experimental program is not just a play enhancement attempt; it aims at changing children's institutional life as a whole. A family life model is adopted, joint happenings (sobytiya) deviating from daily routines are systematically organized, and learning activities do not follow the "school model". Another example can be found in Italy, in the "Reggio Emilia" approach, which claims that a whole city is needed for the education of a child. Here education in institutions is expanded to the city, and all citizens are educators. But what should be the core expanded educational unit?
Corsaro [8] constructed his sociology of childhood on Vygotskian ideas, emphasizing interpretive reproduction in which the innovative and creative aspects of children's participation in society are central. According to Corsaro, "Children are not just internalizing society and culture, but actively contributing to cultural production and change" [8, p. 18]. This has been formulated as the "creative dominant" or "culture-creating function" of developed childhood and the developing child in Kudryavtsev's [28] sketch of developed childhood. There is a difference in approaches, because Corsaro focuses his analysis on collectives and phenomena of peer culture; creativity and culture creation are not analyzed in terms of collective social interaction in peer cultures. In our research projects constructing children's playworlds we have tried to reinterpret Vygotsky's idea of the "unit of development".
Playworld as a method of developing children's play culture
We propose a multistage holistic process of playworld construction instead of a traditional teacher-set task. Play is not just a simple cognitive assignment, but a complex activity led by the child's genuine emotional involvement and motivation. Advanced social role play is disappearing all over the world, and children lack the experience and the skills necessary to initiate and carry out imaginary role play in a peer group or multi-age group. Attractive shared play ideas are lacking, partly because of the flood of information from the corporate 'educators' of peer culture. In the last five to ten years information technology has changed children's social interaction and family life in Scandinavia. Screen time has grown. Children use mobile phones and laptops for several hours each day. Family interaction has shrunk because adults are also hooked on modern technology. Addiction to technology starts at preschool age or earlier. A new phenomenon during school breaks is a paradox: children are together, yet each is alone, connected to a smartphone chat. Face-to-face contacts have been transformed into virtual ones, and peer culture is constructed around digital media use, which unites and separates at the same time. Popular kinderculture has a great impact on children's peer culture, and more and more education takes place through peer culture in social places other than preschool or school [37]. Playworld as a tool of developmental early childhood education might look like a paradox: the adults construct a joint imaginary world with children in order to stimulate children's own initiative and motivation. Play is children's own activity, and adults as play partners should be aware of it. According to our observations, in early education institutions most children play alone or with only one partner. The theoretical value of collective whole-group play is underestimated because one child's play is often taken as the basic unit in theoretical analyses. Adults are seldom serious play partners in children's groups. Playworlds are joint play activities of adults and children aimed at creating a children's play culture in the classroom.
Main components of playworlds
Playworlds are based on the 'narrative logic' described by Fisher [17; 18] and Bruner [6], who proposed the use of 'the narrative construal of reality'. According to Bruner, people do not only present rational, scientific arguments to each other, but tell stories about themselves and their worlds. Narrative has another important function in human development: "It is through narrative that we create and recreate selfhood, and self is a product of our telling and retelling. We are, from the start, expressions of our culture. Culture is replete with alternative narratives about what self is or might be" [7, p. 86]. Both Fisher and Bruner think that stories and storytelling are the basis of social interaction and a method of expressing and transmitting meaning and sense.
In play children use the narrative mode to construct their knowledge and understanding of the world and its phenomena. Their own interpretations and wishes are reflected in play. In play children follow narrative rationality, which is based on the consistency and credibility of the story. A consistent story has enough details, several levels, and believable characters. The meaningfulness of the play-story can be evaluated from the correspondence between role actions and the general habitus of the role characters. Children estimate the credibility of the story line of the play under construction using their own experience and familiar stories as a standard of comparison.
Playworlds are based on cultural stories - folk tales, fairy tales, or good contemporary stories reflecting human values and aspirations. Young children cannot adopt values and ideal forms without special elaboration. They have to be given aesthetic form, argued Lindqvist [30], following Vygotsky's idea of how play corresponds to the imaginary process, or the aesthetic form of the fairy tale. When joint playworlds of children and adults are constructed, two types of aesthetic forms are used as tools: 1) a lyric-musical form that can be compared with poetry, music, and dance, and 2) a dramatic-literary form that follows the trajectory of a folk tale [34]. This kind of play is like a blues scheme on which children improvise variations [31]. In our experience, aesthetic forms effectively concretize Vygotsky's basic contradiction of play between the visual field and the sense field.
Playworlds require genuine adult participation in play. The idea of adults' partnership with children in early childhood education is a challenge. Quite often educators understand their participation in children's play as that of an advisor or controller and do not accept the play roles that children propose. This is only partly true. Our studies have revealed [3] that two positions are necessary in adult play participation: adults have to be able to be genuine play partners for the children and, at the same time, play guides. This ability to capture both the adult's and the child's point of view simultaneously might be more difficult than expected. We argue that the boundary between the two positions is important in the creation of children's worlds. Davydov joked that a sure failure will follow from the selection of the wrong position.
Organization of playworlds in early childhood education
In the following we describe how we proceed in playworld construction from a joint motivating theme to children's independent self-initiated play, through intermediate stages that prepare children's 'own' play in successful pedagogical interventions. We have observed how difficult it may be for children to start a joint role play even on the basis of a well-known story plot. If children have not formed a joint idea of the play, the teacher's attempts to guide play events are in vain. Simply giving children the task of starting play after reading the fairy tale and assigning roles to them often leads to conflict [24].
We think that an emotional reaction is essential in order to wake up children's own initiative and self-initiated play. This is why we take a longer time to find a really good story, one that does not directly tell about the values and ideals behind our theme but creates a mysterious atmosphere and emotional tension on several levels. The teacher has to transmit this emotional tension to the children and demonstrate her/his own emotions. We have found oral storytelling and/or dramatization of the story to be effective methods of creating the necessary emotional tension and raising children's motivation [22]. As Zaporozhets [43] points out, dramatization of the story is necessary for some children. Sometimes we might dramatize the whole story by inviting the characters (the teacher in role) to visit the classroom and tell the story from a character's individual point of view. Repeating the story several times, emphasizing contradictory positions and individual nuances of the characters, creates dramatic collisions, which stimulate children's self-initiated play construction. Touching shared feelings can awaken and stimulate shared play ideas. Shared emotional 'perezhivanie' of a tale or story is a necessary precondition for joint self-initiated play on a theme.
The construction of a playworld proceeds through the following stages.
Stage 1. Selection of an interesting theme (fabula) for the narrative framework of a playworld. The selection is based on observation of children's free play and other joint activities, pedagogical documentation, and educational goals. The theme is selected from basic human values best suited to the child group's needs (e.g. safety and danger, helping and deceit, friendship and hate, honesty and dishonesty, etc.).
Stage 2. Giving moral and esthetic form to the theme. Classic tales and stories are used to explain and clarify the selected theme [30; 43]. A classic story raises questions and aggravates moral contradictions. In a good story the moral lesson is hidden between the lines and never told directly [38]. A good story always has dramatic collisions and attractive events to which children react. A story unites experiences: the esthetic form creates a frame, an 'imaginary world', a situation for the events and a background for play. A carefully selected story rouses emotions, motivates, and creates a safe environment for exploring scary phenomena. On the basis of children's feedback the most attractive story among the alternatives is selected to be used as the playworld framework.
Stage 3. Selection of the most attractive events and characters in a story. Children draw, write, and tell about their impressions of the story and why they like them. New events, role characters, and dramatic collisions are planned and added to the playworld adventures based on continuous evaluation of children's initiatives and play behavior after each joint weekly playworld session. New playworld elements are added by dramatizing characters and play events, staging environments, and preparing a symbolic transition to the playworld (e.g. traveling with a time machine, opening a magic door).
Stage 4. Constructing a concrete playworld environment. Environments can be constructed with minimal elements. A few hints that awaken children's imagination are enough (e.g. disorder on the children's tables can arouse a whole series of speculations). Symbolic transitional rituals (singing the adventure song, putting on the adventure t-shirt, moving to a nearby forest, etc.) move children into narrative logic, and the daily environment is interpreted differently (play substitutions!). After crossing the symbolic border, children's imagination starts to build the space anew.
Stage 5. Projects. A typical playworld project may start from children's products at stage 3. Children have proposed making imaginary animals or other props described in the story (e.g. dragons from mesh and pulp). Another type of project has been to construct specific stages for play (e.g. the caves of subgroups). A specific project was children's reinterpretation of a TV series: Pokemon figures and their adventures were transformed into softer bunny play adventures [26].
Stage 6. Self-initiated free children's play and play culture. The ultimate goal of the playworld approach in Finland and Sweden has been to stimulate the creation of children's own play culture. Our main criterion has been self-initiated children's play which continues and reflects the values and moral tensions of the joint playworld themes. Not all stages are always necessary, and self-initiated play may start early and proceed in parallel with the playworld adventures. The six stages do not always proceed linearly and strictly separated from each other. The boundaries between stages are flexible, and linear progression between them is not a must. Children's self-initiated play has sometimes started after the introduction of the story and evolved along with the new events in a playworld [20]. Playworld play can move the borders of the zone of proximal development only if children feel the play to be their own activity. This is why at some stage of a playworld free, child-initiated play on the theme is obligatory. A playworld can be understood as a tool to produce children's joint self-initiated play.
Discussion
Both Scandinavian experimental play projects, in Sweden and Finland, chose relevant ideas from the "Concept of preschool education", which became the leading ones in organizing the experimental activities. The idea of introducing basic human values to children seemed quite traditional, but the way and method appeared to be very innovative and drastic. The requirement to create a "children's world" resonated with Mouritsen's [32] concept of "children's culture". Still, the idea of joint play of adults and children was very revolutionary for the existing culture, where a child's play was always considered his/her sacred space and adults were not allowed to step into it. At the same time, these ideas revealed theoretical and methodological problems that are still not solved today, but our play projects, at least partially, addressed these issues on a practical level. We might say that playworlds are an attempt to solve theoretical and analytical problems raised by Davydov's scientific group.
Attempts to use tales and stories in early childhood education have often been understood in the West as a teaching and learning task - how children learn narratives, language, and "good" or moral behavior. Children's 'natural' interest in the narrative form stimulates the use of books and other material made for children. The idea that a story/narrative might be a starting point for children and adults to explore and experiment with basic cultural values and norms in the form of joint play was really new in the Scandinavian context. The construction of playworlds starts from traditional folk or classical stories. Tales and stories are carefully selected, because only their esthetic quality can stimulate children's motivation and self-initiated play. Dramatic collisions of the story line fire a dual emotional reaction, which is the necessary contradiction in an esthetic reaction according to Vygotsky [38]. A contradictory esthetic reaction sets children experimenting with the idea (zamysl) of self-initiated play. Children's self-initiated play always has two parts: one coming from a story and another from the child's experiences in real-life situations. In play, children try to unite these two parts by creating a simple story line; for example, an evil force (a gnome, a witch, etc.) has kidnapped the princess and a rescue team is ready to free her. The participating children choose the roles, construct play events, and develop the play script.
We argue that the realization of the 'concept' of early childhood education proposed by Davydov's team is not possible without narrative logic and children's exploration of collective self-initiated narrative role play. The project demanded the construction of 'children's worlds' as the site of early childhood education. The children's worlds of the project share characteristics with our playworlds: (1) both propose joint imaginary play of adults and children (adults as play partners in roles); (2) cultural ideals and values are mediated through tales and stories; (3) the personality development of each child is the goal of education. These general traits have been transformed into alternative teacher education programs and experimental educational practices that have been going on for over thirty years.
Davydov's team's project operates in the landscape of possibilities and is based on experts' thought experiments. The ideas are now taking the form of early education programs and materials [4]. The Scandinavian experimental projects have focused on the use of narratives and drama-pedagogical methods for developing children's play activity and on early childhood teacher education programs. An encouraging general result of the narrative approach has been the return of sociodramatic make-believe play, which many researchers have observed disappearing around the world. Groups of 20-30 children play together, analyze the problems of play characters, and help to solve them. In vertically integrated classes the age differences between children have not been a problem, because helping happens in imaginary environments. There are also several examples of children's independent self-initiated play based on the joint narrative play of adults and child groups. A promising fresh attempt to explain the transition from play to learning activity by assisting the change from narrative logic to rational logic is offered by Zuckerman's team [45].
In conclusion, it must be acknowledged that the ideas formulated in the project have been realized in practice only partially and only on a small scale. The theoretical and methodological problems have not been fully solved either. The concept of developmental early childhood education created by Davydov's scientific team remains relevant today.
Dust-scattering rings of GRB 221009A as seen by the Neil Gehrels Swift satellite: can we count them all?
We present the first results for the dust-scattering rings of GRB 221009A, coined the GRB of the century, as observed by the Neil Gehrels Swift satellite. We perform analysis of both time-resolved observations and stacked data. The former approach enables us to study the expansion of the most prominent rings, associate their origin with the prompt X-ray emission of the GRB, and determine the location of the dust layers. The stacked radial profiles increase the signal-to-noise ratio of the data and allow the detection of fainter and overlapping peaks in the angular profile. We find a total of 16 dust concentrations (with hints of even more) that span about 15 kpc in depth and could be responsible for the highly structured X-ray angular profiles. By comparing the relative scattered fluxes of the five most prominent rings we show that the layer with the largest amount of dust is located at about 0.44 kpc away from us. We finally compare the location of the dust layers with results from experiments that study the 3D structure of our Galaxy via extinction or CO radio observations, and highlight the complementarity of dust X-ray tomography to these approaches.
INTRODUCTION
Gamma-Ray Bursts (GRBs) are the most energetic transient phenomena in the Universe. The prompt phase of the burst consists of intense gamma-ray flashes, and it can last up to hundreds of seconds in the case of long-duration events. While the exact mechanism for the production of the prompt gamma-ray spectrum is still under debate, it is commonly accepted that the prompt emission is produced within a relativistic collimated plasma outflow launched by the rotating central engine (for a review see Kumar & Zhang 2015). As the plasma propagates in the interstellar medium (ISM) it sweeps up material, causing its gradual deceleration on timescales much longer than the prompt phase duration. This long-lasting emission, which is known as the afterglow, is observed over a wide range of energies (typically from X-rays to radio waves) and is thought to be produced by synchrotron radiation of relativistic electrons accelerated at the external shock wave (Rees & Mészáros 1992; Chiang & Dermer 1999). Inverse Compton scattering of low-energy photons by relativistic electrons is typically put forward to explain the recent very high-energy (E > 100 GeV) photon detections from a handful of GRB afterglows (for a review see Miceli & Nava 2022).
A very bright GRB was observed on October 9, 2022 by various instruments, including the Fermi Gamma-ray Burst Monitor (GBM) and the Large Area Telescope (LAT) (S. Lesage et al. 2022; R. Pillera et al. 2022). The Burst Alert Telescope (BAT) of the Neil Gehrels Swift satellite detected a hard X-ray transient at T_BAT = 59861.59 MJD, i.e. about an hour later than GBM (Dichiara et al. 2022). Overall, the prompt emission of GRB 221009A lasts about 330 s (S. Lesage et al. 2022). Preliminary gamma-ray light curves from KONUS-Wind (D. Frederiks et al. 2022) and AGILE (A. Ursi et al. 2022) show a precursor followed by two bright pulses (covering a period of about 100 s), and a fainter pulse starting at ∼ 200 s after the end of the bright episode. Observations of the afterglow with X-shooter at ESO's UT3 of the Very Large Telescope led to the determination of the burst's redshift z = 0.151 (de Ugarte Postigo et al. 2022). Moreover, according to de Ugarte Postigo et al. (2022) multiple spectral features caused by the ISM of the Milky Way were detected, suggesting a large column density of Galactic material along our line of sight. The extreme brightness of this event complicates detailed spectral analysis with instruments like Fermi-GBM and KONUS-Wind due to pile-up effects. Nonetheless, D. Frederiks et al. (2022) estimate the isotropic gamma-ray energy to be E_iso ∼ 2 × 10^54 erg using the GBM fluence reported by S. Lesage et al. (2022). The combination of the proximity to us and the large energy output makes this burst an extraordinary event (for comparison see Fig. 18 in Ajello et al. 2019). X-ray imaging of the afterglow with Swift-XRT captured several bright rings around the burst's position (Tiengo et al. 2022). These are formed by scattering of the X-ray burst emission by dust layers in our Galaxy in the direction of the source (for a recent review on dust scattering and absorption, see Costantini & Corrales 2022). Dust scattering rings and halos have been used to study the ISM in the direction of bright X-ray transients with modern observatories (e.g. Heinz et al. 2015; Vasilopoulos & Petropoulou 2016; Heinz et al. 2016; Beardmore et al. 2016; Jin et al. 2017, 2018, 2019; Lamer et al. 2021). While this is not the first time that dust scattered rings were observed from a GRB (see e.g. Klose 1994; Vaughan et al. 2004; Vianello et al. 2007, and references therein), the location of GRB 221009A on the sky (l = 52.96°, b = 4.32° in Galactic coordinates) and its large inferred isotropic gamma-ray energy offer a unique opportunity to study the Galactic dust via analysis of the ring structures. Here, we analyze publicly available data of Swift-XRT obtained within a few days after the GRB trigger. Our goal is to determine the location of dust layers in the line of sight to the burst by studying the temporal evolution of the dust scattered rings.
This paper is structured as follows. In Sec. 2 we outline the geometrical model used for the description of the X-ray dust rings. In Sec. 3 we present the data used for the construction of the angular X-ray surface brightness profiles, and describe the methods applied to the modelling of these profiles. We present our distance measurements in Sec. 4. We continue with a comparison of our results to those obtained from other probes of the dust content in the Galaxy, and with a discussion on dust grain properties in Sec. 5. We finally conclude in Sec. 6 with a summary of our main findings.
MODELLING OF X-RAY RINGS
Dust is ubiquitous in the interstellar space but the largest dust concentrations (dust layers) are found inside dense cold molecular clouds. X-rays can be preferentially scattered or absorbed (depending on their energy) by interstellar dust grains. In this work we are interested in the geometrical study of the ring structures formed by dust scattering. Therefore we limit our analysis to photon energies E ≥ 1 keV. We also neglect multiple X-ray scatterings by dust.
Figure 1. Schematic illustration (not to scale) of X-ray scattering by dust concentrated in layers located at different distances d from the satellite. X-ray photons emitted by the GRB, which is located at a distance d_s ≫ d, travel a distance ℓ_2 before changing their direction due to scattering by dust at d. Then, the scattered photons travel a distance ℓ_1 before reaching the detector. The scattering of X-ray photons by different dust layers that are observed with the same time delay with respect to the burst defines an ellipsoid (red dotted line) with the satellite and the source as its two focal points. The projected image is a smoothed version of the XRT data analysed in this work.

The geometrical principles of X-ray scattering by dust layers are illustrated in Fig. 1. We consider an X-ray transient occurring at time t_b and at a distance d_s. X-ray photons can be scattered at small angles by an intervening dust layer at distance d = x d_s, where x ≪ 1 for an extragalactic transient (e.g. x = 10^-5 for a transient at 300 Mpc and a dust layer at 3 kpc from us). The scattered photons will be observed with a time delay Δt with respect to the X-ray transient because of their longer path lengths, and θ is the angular size of the ring (corresponding to the ring radius). For small angles (θ ≪ 1) the time delay can be approximated (up to second order in θ) by the following expression

Δt ≈ (d_s / 2c) x θ^2 / (1 - x).    (1)

Photons scattered by the same dust layer but arriving with larger time delays will produce a ring of larger angular size. In other words, each ring produced by a single dust layer appears to expand with time.
Using the equation above, and assuming x ≪ 1, we find an expression for the time evolution of θ,

θ(Δt) ≈ [2cΔt (1 - x) / (x d_s)]^(1/2) ≈ [2cΔt / (x d_s)]^(1/2).    (3)

The surface of equal time delays is an ellipsoid with the telescope and the X-ray source placed at the two focal points. Therefore, if multiple dust clouds intersect this surface, they will produce separate rings of different angular sizes from photons arriving at the observer with the same time delay (see Fig. 1). At any given time, rings observed with smaller angular sizes are those produced by the more distant layers, and vice versa.
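To make the geometry concrete, the short sketch below evaluates this small-angle expansion law numerically; the layer distances and the two-day delay are illustrative values taken from the results discussed later in the paper, and the function name is ours.

```python
import numpy as np

C_CM_S = 2.998e10                      # speed of light [cm/s]
PC_CM = 3.086e18                       # parsec in cm
ARCMIN_PER_RAD = 180.0 / np.pi * 60.0

def ring_radius_arcmin(dt_days, d_layer_pc):
    """Ring radius theta(dt) ~ sqrt(2 c dt / d) for a dust layer at distance
    d = x*d_s with x << 1 (extragalactic source)."""
    dt_s = dt_days * 86400.0
    theta_rad = np.sqrt(2.0 * C_CM_S * dt_s / (d_layer_pc * PC_CM))
    return theta_rad * ARCMIN_PER_RAD

# Two days after the burst a layer at ~0.44 kpc produces a ~9.5 arcmin ring,
# while a layer at ~14.7 kpc gives ~1.6 arcmin.
print(ring_radius_arcmin(2.0, 440.0), ring_radius_arcmin(2.0, 14700.0))
```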
Throughout the analysis we adopt a value of z = 0.151 for the GRB redshift, which corresponds to a luminosity distance of 726.5 Mpc (or a light travel distance d_s = 585.6 Mpc) based on WMAP9 cosmological parameters (Hinshaw et al. 2013). Eqs. (1)-(3) neglect redshift corrections, since the dust scattering layers are located in the Galaxy (see also Refsdal 1966; Vaughan et al. 2004).
DATA REDUCTION AND ANALYSIS
We use data from the Neil Gehrels Swift satellite X-ray telescope (Swift-XRT, Burrows et al. 2005). These were retrieved from the Swift science data centre and analyzed using standard procedures as outlined in Evans et al. (2007, 2009). We use five XRT observations performed between MJD 59862 and 59866 with obs-id numbers 01126853004, 01126853005, 01126853006, 01126853008 and 01126853009. From the cleaned images we selected events (grade 0-12) with energies between 1 and 10 keV and barycentre-corrected times.
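As an illustration of this event selection, the sketch below filters a cleaned XRT event list with astropy; the file name is a placeholder, and the assumption that the PI column corresponds to roughly 10 eV per channel is ours and should be checked against the instrument documentation.

```python
from astropy.table import Table

# Placeholder file name for a cleaned, barycentre-corrected XRT event list.
events = Table.read("xrt_cleaned_events.fits", hdu="EVENTS")

# Assumed conversion: Swift-XRT PI channels are ~10 eV wide, so 1-10 keV
# corresponds approximately to PI channels 100-1000.
energy_kev = events["PI"] * 0.01
good = (energy_kev >= 1.0) & (energy_kev <= 10.0) & (events["GRADE"] <= 12)
selected = events[good]
print(f"kept {len(selected)} of {len(events)} events")
```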
Our analysis relies on radial profiles of X-ray photons. Determination of the source's position in the image (i.e. the actual center of the rings) is therefore crucial. Another important effect is the quite rapid expansion of the rings; their angular diameter can evolve significantly on timescales of less than a day - see Eq. (3).
We thus split observations into groups of events obtained within a time window of less than 20 ks. We end up with 10 useful subsets of data. We perform source detection and localization in each subset of Swift-XRT data, and compute the respective exposure maps. Upon correcting each data subset with the appropriate exposure map, we compute radial profiles of X-ray surface brightness (in units of counts s^-1 arcmin^-2).
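A minimal sketch of how such an exposure-corrected surface-brightness profile can be built, assuming the counts image and the exposure map are aligned numpy arrays with a known pixel scale and ring centre (all names below are ours):

```python
import numpy as np

def radial_profile(counts, expmap, center_xy, pix_arcsec, bin_arcsec=8.0):
    """Exposure-corrected radial profile in counts/s/arcmin^2."""
    y, x = np.indices(counts.shape)
    r = np.hypot(x - center_xy[0], y - center_xy[1]) * pix_arcsec      # arcsec
    edges = np.arange(0.0, r.max(), bin_arcsec)
    idx = np.digitize(r.ravel(), edges)
    pix_area_arcmin2 = (pix_arcsec / 60.0) ** 2
    profile = []
    for i in range(1, len(edges)):
        sel = idx == i
        cts = counts.ravel()[sel].sum()
        exp = expmap.ravel()[sel].mean() if sel.any() else 0.0   # mean exposure [s]
        area = sel.sum() * pix_area_arcmin2                      # annulus area [arcmin^2]
        profile.append(cts / (exp * area) if exp > 0 else 0.0)
    centers_arcmin = 0.5 * (edges[1:] + edges[:-1]) / 60.0
    return centers_arcmin, np.array(profile)
```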
Modelling of radial profiles
To model the radial profile of the X-ray surface brightness (in units of counts s^-1 arcmin^-2) we use the updated point-spread function (PSF) for Swift-XRT,

PSF(r) = W exp(-r^2 / 2σ^2) + (1 - W) [1 + (r/r_c)^2]^(-b),    (4)

where W = 0.075, σ = 7.42 arcsec, r_c = 3.72 arcsec, and b ∼ 1.31.
In the fitting procedure we leave the power-law index b free to vary and introduce an additional normalization parameter A to account for possible pile up in the detector. We also add a constant B to account for a possible contribution of the background. Each distinctive peak in the angular profile, which corresponds to a ring in the XRT image, is modelled with a Lorentzian function,

L_i(θ) = a_L,i / [1 + ((θ - b_L,i)/c_L,i)^2],    (5)

where a_L is the normalization, b_L is the position of the peak, and 2c_L is the full width at half maximum. The final fitting function applied to the angular profiles is

F(θ) = A PSF(θ) + B + Σ_{i=1}^{n} L_i(θ),    (6)

where n is the total number of peaks.
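In code, the PSF, the Lorentzian peaks, and the combined fitting function of Eq. (6) can be written as below; the parameter packing and function names are our own choices.

```python
import numpy as np

def xrt_psf(r, b, W=0.075, sigma=7.42, r_c=3.72):
    """Gaussian core plus King profile (Eq. 4); r, sigma and r_c in arcsec."""
    return W * np.exp(-r**2 / (2.0 * sigma**2)) + (1.0 - W) * (1.0 + (r / r_c)**2) ** (-b)

def lorentzian(theta, a_L, b_L, c_L):
    """Lorentzian peak (Eq. 5); the full width at half maximum is 2*c_L."""
    return a_L / (1.0 + ((theta - b_L) / c_L) ** 2)

def profile_model(theta, A, B, b, peak_params):
    """Eq. (6): scaled PSF + constant background + sum of Lorentzian rings.
    peak_params is a flat list [a_1, b_1, c_1, a_2, b_2, c_2, ...]."""
    model = A * xrt_psf(theta, b) + B
    for a_L, b_L, c_L in zip(*[iter(peak_params)] * 3):
        model += lorentzian(theta, a_L, b_L, c_L)
    return model
```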
Analysis of individual datasets
To identify significant peaks in the radial profile distribution we use an iterative process. We start with a radial profile and smooth it with a Savitzky-Golay filter to eliminate noise (Savitzky & Golay 1964). We then identify prominent maxima in the smoothed radial profile (in logarithm) above a certain threshold (i.e. 0.05 in dex) compared to local neighbouring values. As our goal is to identify prominent peaks we are conservative in the choice of the threshold level. In other words, a lower threshold would lead to a few more peaks that would be consistent with noise.
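A possible implementation of this peak search with SciPy is sketched below; the 0.05 dex threshold is the one quoted above, while the smoothing window and polynomial order are illustrative choices.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def find_primary_peaks(theta, profile, window=11, polyorder=3, prominence_dex=0.05):
    """Smooth the logarithmic profile with a Savitzky-Golay filter and keep
    local maxima that stand out by at least `prominence_dex` (in dex)."""
    log_prof = np.log10(np.clip(profile, 1e-12, None))
    smooth = savgol_filter(log_prof, window_length=window, polyorder=polyorder)
    peaks, _ = find_peaks(smooth, prominence=prominence_dex)
    return theta[peaks]
```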
We then construct a model composed of the PSF and Lorentzian functions -see Eq. (6) -centered at the locations of the identified peaks. We optimize the model to the data (without any smoothing) with a least-square algorithm. We then construct a residual plot with the values normalised over the data uncertainties. Structures in the residual plot can help us identify secondary peaks. We repeat the procedure to search for secondary peaks in the data above a 3σ level (i.e. 3 times above the errors of each point). This step is crucial since some peaks might be missed initially because they are either very close to other prominent peaks or their peak is hidden by the decay in intensity of the PSF profile, leaving only the side lobes visible.
We then use the complete model (composed of the PSF and all peaks identified so far) and fit the profiles of each dataset once again using emcee (Foreman-Mackey et al. 2013), a Python implementation of the affine-invariant Markov chain Monte Carlo (MCMC) ensemble sampler. This allows us to better estimate the uncertainties in model parameters and to explore possible degeneracies in this multi-parameter problem.
The iterative procedure described above is applied only to the first dataset with the highest photon statistics. The optimal model is then used as an initial guess for the MCMC sampling of the next dataset. All parameters are sampled from uniform distributions in linear space, except for the background B which is sampled from a uniform distribution in log-space. We produced a chain with 200 walkers that were propagated for 2500 steps each; after testing we concluded that this is an optimal number of steps for the convergence of the walkers. We also discard the first 1000 steps of each chain as burn-in.
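The MCMC step can be sketched with emcee as follows; the walker and step numbers are those quoted above, while the function names, the simple Gaussian likelihood, and the walker-seeding scheme are our own illustrative choices.

```python
import numpy as np
import emcee

def fit_profile_mcmc(theta, profile, errors, model_fn, p0,
                     nwalkers=200, nsteps=2500, nburn=1000):
    """Sample the posterior of an angular-profile model with emcee.
    model_fn(theta, params) returns the model profile; p0 is the
    least-squares solution used to seed the walkers."""
    def log_probability(params):
        model = model_fn(theta, params)
        return -0.5 * np.sum(((profile - model) / errors) ** 2)

    ndim = len(p0)
    start = p0 + 1e-4 * (np.abs(p0) + 1e-8) * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability)
    sampler.run_mcmc(start, nsteps, progress=True)
    return sampler.get_chain(discard=nburn, flat=True)
```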
We present in Fig. 2 the angular profiles for 10 individual datasets with the MCMC fitting results overlaid, and list the optimal model parameters for the Lorentzians in Table A1. In the first angular profile we clearly identify 5 prominent peaks. The fourth ring can be described by two Lorentzian functions. However, we neglect this substructure since these two distinct components are not observed in the following datasets. As time progresses the rings are expected to grow apart, thus allowing us to see more structure in the angular profiles, i.e. secondary rings - see e.g. the bump appearing in the lower panels of Fig. 2 at smaller angular distances than the first ring. Meanwhile other rings, like the fifth one, can move outside the field of view of the CCD camera as they expand. It is also possible that some of the dust scattering rings disappear as their intensity fades or due to changes in the ISM properties, as each snapshot maps dust scattering at different locations. The spread in the modelled angular profiles becomes larger around peaks at large angular distances, where the statistical errors become larger (see e.g. the last panel from the left in the top row of Fig. 2). This spread is also suggestive of the presence of substructure in the outer rings. A complementary stacking analysis of the XRT data, which is presented in the next section, can help us search for such features in the combined angular profile.
Stacking analysis of all data
An alternative method to identify dust echoes is to stack all XRT images in order to increase the signal to noise. However, this is not as simple as adding the images, because of the dynamic nature of the problem. Assuming each and every photon above 1 keV was scattered once in an intervening dust layer, we can scale its position on the image to an arbitrary time based on the expansion law of Eq. (3) and the time the photon was recorded. We define the position of each photon in the image using polar coordinates (r, φ) centered at the location of the GRB. Using the time of arrival of each event we re-scale the r coordinate to r_rs = r (Δt_rs/Δt_event)^(1/2), where Δt_rs is the reference time for the re-scaled stacked image and Δt_event is the time delay between the detection of the photon and the burst. As an indicative example we select Δt_rs = 2 d and use the GBM trigger time as the reference time for the GRB, i.e. 13:16:59.99 UT on October 9, 2022. The stacking procedure increases the signal to noise in the outer regions, thus enabling us to extend the radial profiles up to a radius of ∼ 25 arcmin, as illustrated in Fig. 3. We also use an adaptive binning for the stacked angular profile, with denser sampling for the inner part (i.e. ∼4 versus ∼20 arcsec), for a clearer presentation. After correcting the stacked radial profiles using the individual exposure maps of each snapshot, we follow the same procedure described in the previous section to identify features that could be related to X-ray rings. The analysis of the stacked image, which is shown in Fig. 4, leads to the identification of 16 Lorentzians (see dashed lines in Fig. 3) that will be discussed further in the following section. A model based on Eq. (6) was fitted to the radial profiles with a similar procedure as the one described in the previous section, so all parameters quoted are based on the MCMC modelling.

Figure 2. Time evolution of the angular X-ray surface brightness profile constructed using X-ray photons with E ≥ 1 keV. For each observation the optimal model (solid red curve), and its decomposition into the various components (dashed blue and orange lines), is overplotted. The peaks of the most prominent (primary) rings identified in the observations are indicated with numbers in each panel. The fourth ring is fitted with two Lorentzians only in the first dataset, since these could not be securely identified in the following datasets. The grey shaded band in each panel indicates the 68 per cent confidence interval.
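The per-photon re-scaling described above amounts to one line of array arithmetic; the sketch below assumes event arrival times and radii are given as numpy arrays in consistent units.

```python
import numpy as np

def rescale_radii(r, t_event_s, t_burst_s, dt_ref_days=2.0):
    """Re-scale each photon's radial coordinate to a common reference delay,
    r_rs = r * sqrt(dt_ref / dt_event), following the sqrt(dt) expansion law."""
    dt_event = t_event_s - t_burst_s            # seconds since the burst
    dt_ref = dt_ref_days * 86400.0
    return r * np.sqrt(dt_ref / dt_event)
```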
LOCALIZATION OF DUST LAYERS
We fit the temporal evolution of the angular radii of the five most prominent rings identified in individual XRT images (Fig. 2) using emcee and the expansion law of Eq. (3). The statistical uncertainties of the Lorentzian centers (see Table A1) typically underestimate the uncertainty introduced by our model selection (e.g. PSF with 4 or 5 Lorentzians) and the poor knowledge of priors. Thus, when modelling the ring expansion, we add a term ln f to the likelihood function to account for the systematic scatter and noise not included in the statistical uncertainties of the estimated angular radii (for a similar application see Karaferias et al. 2022),

ln L = -(1/2) Σ_i [ (θ_i - θ_model,i)^2 / s_i^2 + ln(2π s_i^2) ].    (7)

Here, the total variance is defined as

s_i^2 = σ_i^2 + exp(2 ln f) θ_model,i^2,    (8)

where σ_i are the errors of the Lorentzian centers b_L,i. Our optimal expansion model for each ring is shown in Fig. 5 (see coloured lines), the corner plot with the posterior distributions of all layers is presented in Fig. A1, and the dust layer distances are listed in Table 1. The derived time of the burst is t_b = MJD 59861.53 ± 0.02, which is about one and a half hours earlier than the BAT trigger time T_BAT = 59861.59 MJD and consistent within errors with the GBM trigger time T_GBM = 59861.55 MJD (S. Lesage et al. 2022). Therefore, the rings imaged by XRT are produced by X-rays emitted in the prompt phase of the GRB and scattered by dust in our Galaxy. This demonstrates that X-ray photons with energies down to 1 keV are produced during the prompt phase of GRB 221009A, even though they could not be detected by BAT and XRT simultaneously with GBM. Extension of the MeV gamma-ray spectrum to soft X-rays is a common prediction of radiative models, but the prompt X-ray fluence depends on the model details (see, e.g., Rudolph et al. 2022, for lepto-hadronic radiative models of GRB 221009A).
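A sketch of the corresponding log-likelihood with the extra jitter term is given below; the parameter packing is ours, and the expansion model simply follows Eq. (3) with the burst time and the layer distance d = x d_s as free parameters.

```python
import numpy as np

C_CM_S = 2.998e10
PC_CM = 3.086e18
ARCMIN_PER_RAD = 180.0 / np.pi * 60.0

def theta_model_arcmin(t_days, t_burst_days, d_layer_pc):
    """Expansion law of Eq. (3) for a layer at distance d (x << 1)."""
    dt_s = np.clip(t_days - t_burst_days, 1e-6, None) * 86400.0
    return np.sqrt(2.0 * C_CM_S * dt_s / (d_layer_pc * PC_CM)) * ARCMIN_PER_RAD

def log_likelihood(params, t_days, theta_obs, theta_err):
    """Gaussian likelihood with a fractional jitter term ln f (Eqs. 7-8)."""
    t_burst, d_pc, log_f = params
    model = theta_model_arcmin(t_days, t_burst, d_pc)
    s2 = theta_err**2 + np.exp(2.0 * log_f) * model**2
    return -0.5 * np.sum((theta_obs - model) ** 2 / s2 + np.log(2.0 * np.pi * s2))
```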
In regard to the stacked analysis, we have demonstrated that by appropriate rescaling of the XRT images we can preserve the information on the peak locations and increase the signal to noise, enabling us to identify more structure in the data. For example, several features that appear only in a few snapshots (see Fig. 2) are enhanced in the stacked profiles. In Fig. 3 we can identify at least 8 prominent humps, with one of them being clearly double peaked (composed of peaks #3 and #4) and some of them being quite broad (i.e. #9, #10 and #12). The angular sizes of all identified peaks and the distances of the corresponding dust scattering locations are summarized in Table 2. If we consider that the sizes of the rings are just a projection effect, we need to use the estimated distances in order to ascertain whether two nearby rings may be associated with the same dust layer and appear as separate due to inhomogeneities in the dust distribution of a single cloud. In fact, the four innermost rings that appear to overlap the most in the angular profile are those that are physically the most detached, since the relevant dust layers are located at distances of about 14.7 kpc, 9.07 kpc, 4.4 kpc and 3.4 kpc. Thus, they cannot be associated with the same production site.
The innermost peak of the stacked data is also seen in individual snapshots (see e.g. the last two panels in the bottom row of Fig. 2), but its structure is not reminiscent of an extended halo. To check if these innermost peaks follow the √Δt expansion law, we performed an additional fit to the last 4 individual datasets by adding two more Lorentzian functions. However, the Lorentzian centers do not seem to follow the expansion law. Given that our results are limited by the Swift/XRT angular resolution, the origin of these features should be revisited with a follow-up analysis of Chandra data.
Another interesting feature is seen in the residual plot of Fig. 3 close to the locations of peaks #8, #9 and #10. First, the residual structure around #8 indicates multiple peaks that are not resolved. Second, large residuals are found before and after peaks #8 and #10, respectively. These residuals are caused by the widths of the Lorentzian profiles used to describe peaks #8 and #10, which lead to excess model emission over the data. Clearly the mathematical description could be improved by inserting two more Lorentzian lines. Higher resolution instruments like Chandra could potentially identify more peaks in this range of angles, which would correspond to layer distances between 0.4 and 0.7 kpc. We finally note that the outermost rings translate to layers at distances of only 74 pc. This is intriguing and highlights the power of X-ray tomography in providing distance measurements to dust layers even in regions of the Galaxy that cannot be mapped as accurately by other techniques.
Comparison with other probes of dust
The dust content in our Galaxy is typically studied via the reddening of starlight and CO emission from cold gas, while dust-scattering rings offer a new dimension to the above. After estimating the location of the dust layers we can compare their positions with the Galactic extinction profile along the line of sight due to dust attenuation. We first use the data from the Bayestar19 3D maps, i.e. the latest version of the dust map based on Gaia, Pan-STARRS 1, and 2MASS data (Green et al. 2015, 2019). Given the probabilistic nature of the maps we extract 1000 random samples for the direction of our source and estimate the median and 68 per cent confidence range for the differential reddening value. We note that the output values of the 3D map are given in arbitrary units; we refer the reader to Green et al. (2015, 2019) for a description of the conversion to E(B - V) or extinction A in a specific pass band. We also extract the mean extinction (at the reference wavelength of 5495 Å) along the direction of the burst from Sale et al. (2014), who derived the 3D map of extinction in the northern Galactic plane (|b| < 5°) using IPHAS DR2 photometry. The IPHAS map provides cumulative extinction values, which for the direction of the system correspond to about 4 magnitudes (up to a distance of ∼ 6 kpc where the results are trustworthy). The extinction can also be used as a proxy for the hydrogen column density according to N_H = 2.21 × 10^21 A_V cm^-2, assuming solar metallicity (Güver & Özel 2009). The estimated column density is N_H ∼ 0.9 × 10^22 cm^-2 (assuming A_0 ≈ A_V). Both extinction maps discussed so far have low resolution at smaller distances (within 1 kpc). Therefore, to obtain a better picture of the local extinction profile we use the updated Gaia-2MASS 3D maps of Galactic interstellar dust (Lallement et al. 2022), which are available via the G-TOMO online tool on the EXPLORE website (https://explore-platform.eu).

Figure 6. Posterior distribution of reddening based on the Bayestar19 3D extinction maps (Green et al. 2015, 2019). Bottom panel: Posterior distribution of the mean extinction at 5495 Å based on IPHAS photometry (Sale et al. 2014). The reliability range of the extinction estimates is indicated with grey shaded areas. Vertical lines indicate the location of the dust layers found in the stacked data (top and middle panels) or in the individual data (bottom panel).

Figure 7. Velocity-integrated spatial CO sky map, with brighter colours corresponding to larger values (Dame et al. 2001). The inset plot shows a zoom-in version of the central image. A circle with an angular size of 12 arcmin marks the location of GRB 221009A; its size is comparable to the radius of the observed rings and to the CO map resolution (i.e. pixel size).
The results are shown in Fig. 6, where the vertical lines indicate the locations of the dust layers derived from the analysis of individual XRT datasets (bottom panel) and of the stacked image (top and middle panels). There is some agreement between the inferred distances for the nearby layers (≲ 1 kpc) and the positions of larger A_0 (and thus N_H) values. Estimates of the amount of dust from extinction measurements are limited to smaller distances, since the number of stars and the accuracy of the photometry decrease as we move to the outskirts of the Galaxy. For instance, the extinction estimates from IPHAS are not trustworthy beyond ∼ 6 kpc (see shaded regions in the panels of Fig. 6). Meanwhile, X-ray scattering by dust closer to us produces rings with larger angular sizes that are more difficult to detect due to e.g. lower intensity. Overall, performing an X-ray tomography of the Galaxy via dust scattering echoes favours the detection of layers at larger distances (the scattering angle is smaller and the ring intensity larger), thus complementing photometric techniques for dust mapping.
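For reference, a query of the Bayestar19 reddening along the burst's line of sight can be sketched with the dustmaps Python package (which must be installed together with its data files); the distance grid and the query modes below are illustrative.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from dustmaps.bayestar import BayestarQuery

# Line of sight of GRB 221009A, (l, b) = (52.96, 4.32) deg, sampled in distance.
distances = np.logspace(-1, 1, 200) * u.kpc
coords = SkyCoord(l=52.96 * u.deg, b=4.32 * u.deg,
                  distance=distances, frame="galactic")

bayestar = BayestarQuery(version="bayestar2019")
reddening_median = bayestar(coords, mode="median")   # cumulative reddening (map units)
reddening_draw = bayestar(coords, mode="random_sample")
differential = np.diff(reddening_median)             # differential reddening profile
```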
To better visualize the direction of the source compared to the Galactic plane, we show in Fig. 7 its location in the sky on top of the velocity-integrated spatial CO map (Dame et al. 2001). The map provides radial velocities that could be de-projected and translated into distances. However, this is far from an easy task, and it does not always result in a unique solution for the distance of the CO emitting gas, but can yield instead a near and a far distance solution. Rice et al. (2016) used a dendrogram-based decomposition of the Dame et al. (2001) survey and constructed a catalog of 1064 massive molecular clouds throughout the Galactic plane. These massive cold clouds are another tracer of dust concentrations in our Galaxy. In Fig. 8 we project the catalog of the molecular clouds (blue points) onto an illustration of the Milky Way and compare those with the dust layers as inferred from the rings at distances of ∼1.03, 1.18, 1.96, 3.44, 4.40, 9.07 and 14.7 kpc (magenta points). We did not identify any dust layers between 5 and 9 kpc through the ring analysis, which agrees with the paucity in the molecular cloud distribution and the gap between the Sagittarius and Perseus spiral arms (Fig. 8). We note that the molecular clouds are confined to the Galactic plane (|b| ≲ 2°) with radii of the order of 100 pc, while our line of sight probes dust distributed above the plane (b = 4.32°). Even though a direct connection of the cloud and layer distributions cannot be made, it is plausible that the dust extending above the plane follows a similar distribution to the one probed by the clouds.
Scattered X-ray intensity
The evolution of the X-ray scattered intensity with time (or angular size) is associated with the dust grain properties. The X-ray flux of a ring with angular size θ, which is produced by scattering of an infinitesimally short-duration burst of X-rays with fluence S_X(E) by dust at distance x_i, can be written as (for details see Vasilopoulos & Petropoulou 2016) where C_i is a normalization constant that depends on the metallicity and mass density of dust in layer i and is of order unity for typical parameters (e.g. Vasilopoulos & Petropoulou 2016) and N_d,i is the dust column density of the i-th layer. The integral of the differential scattering cross section, which is modelled using the Rayleigh-Gans approximation (e.g. Mauche & Gorenstein 1986), is performed over a power-law grain size distribution with slope q (Mathis et al. 1977); here ã is the grain size in µm and Θ is the typical angular size of a ring produced via scattering of 1 keV photons on grains with radius 0.1 µm. For photon energies E > 1 keV, as those considered in this paper, Eq. (9) is valid for ã ≲ 1. Most photons in the analyzed XRT images have energies between 1 and 2 keV. We therefore integrate the flux given by Eq. (9) over this narrow band and perform a qualitative comparison to the scattered fluxes derived from the optimal angular-profile models of the rings (see Fig. 2). We model the prompt X-ray fluence as S_X(E) ∝ (E/E_pk)^(-Γ+1), where E_pk = 1060 keV is the observed peak energy of the prompt spectrum as estimated from KONUS-WIND (D. Frederiks et al. 2022) and Γ = 3/2 is the photon index of the prompt GRB spectrum, assuming a fast-cooling synchrotron spectrum extending down to 1 keV (Rudolph et al. 2022).
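Purely as a schematic illustration of how such a model behaves, the sketch below integrates a Gaussian-cutoff, Rayleigh-Gans-like cross section (forward-scattering weight assumed to scale as ã^6, cutoff angle assumed to scale as 1/(E ã)) over a power-law grain-size distribution. The functional form, the reference angle theta0, and all parameter values are our assumptions for illustration only and are not the exact Eq. (9).

```python
import numpy as np

def relative_scattered_flux(theta_arcmin, q=4.0, a_min=0.03, a_max=0.3,
                            e_kev=1.5, theta0=10.0):
    """Schematic ring-flux shape: integrate a Gaussian-cutoff cross section
    over a power-law grain-size distribution a^-q between a_min and a_max (um).
    theta0 is an assumed cutoff angle (arcmin) for 1 keV photons and 0.1 um grains."""
    a = np.logspace(np.log10(a_min), np.log10(a_max), 300)        # grain sizes [um]
    cutoff = theta0 / (e_kev * (a / 0.1))                          # assumed Theta(E, a)
    theta = np.atleast_1d(theta_arcmin)[:, None]
    integrand = a[None, :] ** (6.0 - q) * np.exp(-(theta / cutoff[None, :]) ** 2)
    flux = np.trapz(integrand, a, axis=1)
    return flux / flux[0]

theta_grid = np.linspace(1.0, 20.0, 40)
shape = relative_scattered_flux(theta_grid)   # relative flux vs ring size
```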
The theoretical expectations for indicative dust parameters are shown in Fig. 9. In all cases, we assume a power-law size distribution of grains with slope q = 4 extending from a_min to a_max. Solid lines correspond to a_max = 0.3 µm, a_min = a_max/10, dashed lines to a_max = 0.1 µm, a_min = a_max/10, and dotted lines to a_max = 0.3 µm, a_min = a_max/3. We do not determine the normalization parameter for each dust layer, C̃_i = C_i N_d,i, as we are interested in the relative ratio of the fluxes. Even without fitting the model to the data we can draw some useful conclusions. First, the maximum grain size cannot be much smaller than 0.3 µm. For example, F_sc(θ) would be almost constant for θ ≲ 10 arcmin if a_max = 0.1 µm, in contradiction to the data (see dashed lines). The smooth turnover of F_sc(θ) is related to the exponential cutoff in the scattering cross section (see Eq. (9)), and occurs approximately at Θ(Ē, a_max), which is 2.5 arcmin for a mean photon energy Ē = 1.5 keV and a_max = 0.3 µm - see Eq. (10). Second, the minimum grain size cannot be easily constrained because of the small dynamic range of the ring angular sizes. In general, the scattered X-ray flux follows a power law in angle, with a slope depending on q, and an extent determined roughly by Θ(Ē, a_max) and Θ(Ē, a_min) - see e.g. green and red solid lines. As a_min approaches a_max, however, the power-law segment of F_sc(θ) becomes shorter, till the point that we start seeing the exponential cutoff of the scattering cross section for grains of typical size a_min ∼ a_max (compare solid and dotted lines). Grain distributions with a_min ≪ a_max or a_min ∼ a_max are compatible with the data for rings I, II, and V. In fact, the scattered flux of the fifth ring would be better described by a model of grains with similar size instead of an extended power-law distribution (compare purple solid and dotted lines). Third, grain distributions with q ∼ 3.5-4 and a_min ≪ a_max can produce the observed power-law decline of the scattered flux with angular size for rings III and V. Lastly, the relative normalizations for the dust layers are C̃_I : C̃_II : C̃_III : C̃_IV : C̃_V = 1 : 0.15 : 0.9 : 1.4 : 0.3.

Figure 8. Location of massive molecular clouds (blue circles) in the Galactic plane as obtained from CO measurements (Rice et al. 2016). The size of the markers corresponds to the actual cloud size. The direction of GRB 221009A is marked with a magenta dashed line. The most prominent dust layers at distances of ∼1.03, 1.18, 1.96, 3.44, 4.40, 9.07 and 14.7 kpc are marked with magenta circles, and Arabic numbers corresponding to the rings 7, 6, 5, 4, 3, 2, 1, respectively (see Table 2). The Sun's location is marked with a yellow circle. The background illustration of the Milky Way reflects the Galactic structure [Image credit: NASA/JPL-Caltech/ESO/R. Hurt].
The relative normalizations can be used to order the dust scattering production sites in terms of increasing optical depth or amount of dust contained in each layer, with the fourth layer (at 0.44 kpc) being the one with the largest dust content.

Figure 9. Scattered flux of X-ray rings (in arbitrary units) integrated over the angular extent and plotted as a function of the angular size θ (coloured symbols). Theoretical expectations based on the simplest grain model are overplotted for indicative parameter values: a_max = 0.3 µm, a_min = a_max/10 (solid lines), a_max = 0.1 µm, a_min = a_max/10 (dashed lines), and a_max = 0.3 µm, a_min = a_max/3 (dotted lines). In all cases, q = 4. Theoretical curves (for each parameter set) are normalized to the same value at 1 arcmin.
Prompt X-ray scattering by dust in the GRB host galaxy can also be imprinted in the X-ray afterglow emission (e.g. Klose 1998; Shao & Dai 2007). For instance, the strong hard-to-soft evolution of the X-ray emission observed in the afterglow of the ultra-long GRB 130925A could be explained by this phenomenon (Evans et al. 2014). The X-ray echoes of GRB 221009A are instead produced via scattering of prompt X-ray photons by dust in our Galaxy, as demonstrated in Sec. 4. Still, spectral softening with time is also expected. However, the X-ray afterglow of GRB 221009A shows no evidence for strong spectral evolution, with a photon index close to -2 for about two decades in time (https://www.swift.ac.uk/burst_analyser/01126853/). In the small-angle scattering approximation, the scattered flux shows a shallow decline with time, i.e. ∝ t^(-1/4) - see e.g. Eq. (3) in Shao & Dai (2007). A steeper decline approaching t^(-2) is expected after t ≳ 1.6 × 10^5 s (E/1 keV)^(-2) (a/0.1 µm)^(-2) (d/100 pc) (1 + z_s). Therefore, a transition from a shallow decay to a steeper decline in the X-ray scattered flux would be expected somewhere between 6.5 × 10^4 s and ∼ 1.5 × 10^6 s for layers at distances between 0.4 kpc and 9.6 kpc, respectively. The XRT light curve shows no evidence of such a transition, and its flux decays almost as a single power law (with slope ∼ -1.6) for t ≳ 10^4 s after the GBM trigger. Comparison of dust-scattering models to the XRT afterglow light curve might help to constrain the dust column density of each layer and estimate the contribution of the scattered flux to the intrinsic nonthermal emission from the GRB blast wave.
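A small numerical check of this transition-time scaling is given below; the function simply implements the quoted relation, and the grain size and layer distances inserted in the example are illustrative choices.

```python
def dust_echo_break_time(e_kev=1.0, a_um=0.3, d_pc=440.0, z_s=0.151):
    """Transition time (s) after which the dust-scattered flux steepens,
    t ~ 1.6e5 s (E/1 keV)^-2 (a/0.1 um)^-2 (d/100 pc) (1 + z_s)."""
    return 1.6e5 * e_kev**-2 * (a_um / 0.1) ** -2 * (d_pc / 100.0) * (1.0 + z_s)

for d_pc in (440.0, 9600.0):     # layers at ~0.4 and ~9.6 kpc
    print(d_pc, dust_echo_break_time(d_pc=d_pc))
```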
CONCLUSIONS
In this paper we have analyzed publicly available Swift-XRT data that were obtained within a few days after the detection of GRB 221009A. We constructed angular profiles of photons with energies above 1 keV from individual XRT images, and identified the most prominent peaks. By modelling their temporal evolution over the course of several days we were able to determine the time of the X-ray burst and the distances of five intervening dust layers. Complementary analysis of the stacked XRT image (scaled to a reference time of two days after the burst) revealed a richer angular structure with 16 peaks, owing to the increased photon statistics. The main conclusions of our work are the following: • The expansion of the five most prominent peaks in the time-resolved angular profiles yields the time of the X-ray burst, which is consistent with the GBM trigger (i.e. the prompt X-ray spectrum should extend down to 1 keV).
• Analysis of the stacked image reveals extra features and increases the number of potential dust concentrations along the line of sight to at least 16, spanning from 0.07 kpc to 15 kpc. This is the largest distance range probed by X-ray scattering echoes so far.
• Locations of dust layers are generally consistent with local maxima of the radial extinction profile, while the absence of dust layers between 5 and 9 kpc coincides with the gap between the Sagittarius and Perseus spiral arms.
• The evolution of the scattered X-ray flux (for the five most prominent rings) with angular size is consistent with scattering by dust grains having a power-law size distribution with slope q ∼ 3.5–4 and a maximum grain size of 0.3 µm. For the closest layer to us, the minimum grain size could be comparable to a_max.

This paper has been typeset from a TeX/LaTeX file prepared by the author.
Brain Organoids to Study SARS-CoV-2 Infection of the Developing CNS
Early reports from Wuhan suggested that 36% of COVID-19 patients show neurological symptoms; later European studies showed as much as 60%, and cases of viral encephalitis have been reported. This suggests that the virus might be neurotropic under circumstances that are not yet understood, as is well established for other coronaviruses. Many questions remain with regard to the current pandemic, including the influence of SARS-CoV-2 on the developing brain. In order to understand why some patients develop such symptoms and others do not, and whether the developing brain might be more susceptible than its adult counterpart, we addressed the infectability of the central nervous system (CNS). Reports that the ACE2 receptor, critical for virus entry into lung cells, is found in different neurons support this expectation. We employed a human induced pluripotent stem cell (iPSC)-derived BrainSphere model. A short-term infection of the BrainSpheres with SARS-CoV-2 led to infection of a fraction of neural cells, with replication of the virus evident at 72 hpi. Virus particles were found in the neuronal cell bodies, extending into apparent neurite structures. PCR measurements corroborated the replication of the virus, suggesting at least a tenfold increase in virus copies per total RNA. Immature and more mature cultures were compared: 12-week BrainSpheres were more sensitive to infection than 5-week ones, suggesting that maturation processes (such as synaptogenesis and network formation) might render them more sensitive to the infection. These findings were supported by others in similar brain organoid models. These recent findings will be summarized to understand the advantages and limitations of brain organoids in infectious diseases, in particular for the developing nervous system, as brain organoids mimic embryonic stages of development.
SOC04-07
Evaluating new approach methodologies for consumer-based risk assessments: challenges and future perspectives
Unilever, Bedford, UK
Using the risk assessment of 0.1% coumarin in a face cream and body lotion as an exemplar case study, we recently demonstrated how new approach methodologies (NAMs) can be applied in Next Generation Risk Assessment (NGRA) to assess the safety of consumer product ingredients. While this study helps build confidence in the use of NAMs for consumer-based risk assessments, there is an ongoing need to demonstrate that these approaches can be used to define low-risk consumer exposures for a wider range of chemicals and scenarios. To that end, we are evaluating a potential toolbox for systemic toxicity, which comprises several NAMs for characterising bioactivity (high-throughput transcriptomics, in vitro cellular stress [Hatherell et al., 2020] and the Eurofins SafetyScreen44® panel), together with computational models for estimating relevant human exposures (physiologically based kinetic (PBK) modelling [Moxon et al., 2020], skin penetration and free concentration models). These tools can be combined to estimate a margin of safety (MoS) for a given chemical exposure. In this presentation we will discuss the overall strategy for evaluating the toolbox, namely to generate data for at least forty chemical-exposure scenarios that are either known to be associated with adverse systemic toxicity effects or known to present a low risk to humans. These data will be used to develop a Bayesian statistical model for characterising uncertainties in the MoS distribution, which in turn could be used to identify appropriate low-risk exposures as part of an overall safety assessment for novel consumer ingredients. Preliminary toolbox results for twelve chemical-exposure scenarios (generated as part of a pilot study) will be used to illustrate the overall concepts and future perspectives of the work.
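To illustrate the arithmetic at the heart of this toolbox, the sketch below computes a margin of safety as the ratio of an in vitro point of departure to a PBK-modelled exposure concentration. The variable names and numbers are invented placeholders; in the real workflow both inputs come from the NAMs described above.

```python
# Hypothetical inputs for a single chemical-exposure scenario.
pod_uM = 12.0      # lowest in vitro point of departure (e.g. transcriptomics), uM
cmax_uM = 0.045    # PBK-modelled plasma Cmax for the consumer exposure, uM

mos = pod_uM / cmax_uM
print(f"Margin of safety = {mos:.0f}")  # a larger MoS is more supportive of low risk

# In the Bayesian extension described in the text, both inputs would be
# distributions rather than point values, yielding an MoS distribution.
```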
… in lungs, kidneys and brain, was used as a reference compound to test this approach.

Cytotoxicity after paraquat (PQ) exposure (24/48 h) was determined by MTT or ATP assay for each system. The models were then exposed to 2-4 concentrations (at least one below the IC10 value for cytotoxicity) for 24 or 48 h. Samples were collected for TempO-Seq™ analysis with a set of 3565 probes. Raw data were subjected to probe and sample filtering and were carefully quality controlled. Differentially expressed genes (DEG) were detected using the DESeq2 R package. Pathway analysis using the full list of DEG per model and the ConsensusPathDB database showed disruption of "Oxidative stress induced gene expression via Nrf2" markers, as expected, since this is the well-known mechanism of action of PQ. Furthermore, two other pathways previously implicated in PQ toxicity, "ESR-mediated signaling" and "Photodynamic therapy-induced unfolded protein response", were also deregulated. Genes belonging to these pathways, such as MAFF (MAF bZIP transcription factor F), PPP1R15A (protein phosphatase 1 regulatory subunit 15A), ATF4 (activating transcription factor 4) and GDF15 (growth differentiation factor 15), showed various levels of expression in the distinct models, suggesting a cell type- or organ-specific ability to respond to PQ exposure.
This strategy allowed us to identify known mechanisms of PQ toxicity, even though we used a restricted, cost-effective number of probes in the TempO-Seq analysis. The main advantages of this strategy are that it assesses chemical toxicity on multiple organs in parallel, exclusively in human cells, and on cell-type- or organ-specific models derived from the same donors, eliminating interspecies and genetic-background biases and allowing a better evaluation of the differential sensitivity of the diverse organs. Furthermore, although we focused on the common mechanisms of action of PQ, this strategy would at the same time allow organ-specific toxicity testing by using an increased number of probes for the TempO-Seq analyses. In conclusion, we believe this strategy will contribute to the further improvement of chemical risk assessment for human health.
Single-cell mutational profiling enhances the clinical evaluation of AML MRD
Acute myeloid leukemia (AML) is an aggressive neoplasm characterized by multiple molecular abnormalities often occurring in a complex combination of related subclones. AML is primarily a disease of the elderly and affects ~20 000 adults annually in the United States. Patients are usually treated with aggressive induction chemotherapy followed by either consolidation chemotherapy or allogeneic hematopoietic cell transplantation. After induction, the majority of patients achieve a complete remission (CR) as defined by normalization of peripheral blood counts with <5% blasts detected in the bone marrow. However, patients often relapse, with resistant disease resulting in a 28.3% 5-year overall survival (OS). Increasing evidence suggests that minimal/measurable residual disease (MRD), which is considered persistent leukemia below the 5% threshold seen by morphologic evaluation, is an independent risk factor for relapse and could therefore guide disease management.
Introduction
Acute myeloid leukemia (AML) is an aggressive neoplasm characterized by multiple molecular abnormalities often occurring in a complex combination of related subclones. 1,2 AML is primarily a disease of the elderly and affects ~20 000 adults annually in the United States. Patients are usually treated with aggressive induction chemotherapy followed by either consolidation chemotherapy or allogeneic hematopoietic cell transplantation. After induction, the majority of patients achieve a complete remission (CR) as defined by normalization of peripheral blood counts with <5% blasts detected in the bone marrow. However, patients often relapse, with resistant disease resulting in a 28.3% 5-year overall survival (OS). 3 Increasing evidence suggests that minimal/measurable residual disease (MRD), which is considered persistent leukemia below the 5% threshold seen by morphologic evaluation, is an independent risk factor for relapse and could therefore guide disease management. 4 MRD can be assessed by using multiparameter flow cytometry or molecular assays. Flow cytometry-based assays use leukemia-associated aberrant immunophenotypes to detect MRD; they can be highly variable and operator dependent. Regardless, immunophenotype-based MRD detected at a 0.1% threshold has been associated with significantly shorter relapse-free survival (RFS) and OS. 5,6 Of interest, most cases of AML contain genetic mutations that can serve as clonal markers for MRD. Molecular techniques including real-time quantitative polymerase chain reaction (PCR) and next-generation sequencing (NGS) can therefore provide more specific assays for MRD detection. Furthermore, because the mutational landscape of AML is diverse, NGS can identify personalized MRD markers for potentially all AML cases by using a panel of AML-associated mutations. Large cohort studies using variant allele frequency (VAF) cutoffs between 0.02% and 2.5% have identified an association between MRD detection and RFS and OS. [7][8][9] One complication of NGS MRD detection is that common mutations in DNMT3A, TET2, and ASXL1 may occur in preleukemic clonal hematopoiesis that persists in remission but does not reflect relapse-causing leukemic cells. Indeed, exclusion of common mutations associated with clonal hematopoiesis can enhance the detection of clinically relevant MRD and the predictive power for RFS. 9 In addition, bulk NGS is unable to resolve clonal architecture, particularly with rare variants detected in remission, which can impair the ability to identify relapse-causing MRD. Moreover, characterizing changes in clonal heterogeneity or diversity is important for studying tumor evolution and its association with treatment resistance or relapse. 10 To address these limitations, we used single-cell sequencing (SCS) to evaluate the clonal dynamics of AML from diagnosis to remission to relapse. We defined clones as cells containing the same mutations, and MRD as clones observed at remission that expand into the dominant clone at relapse. SCS was not only able to recapitulate bulk sequencing VAFs but was also able to determine the clonal architecture at each time point, providing insight into the clinical relevance of cooccurring clonal mutations. Indeed, SCS detected and quantified both pre-leukemic clonal hematopoiesis clones and frankly leukemic clones that eventually dominated at relapse.
We observed complex patterns of clonal heterogeneity and evolution that may predispose patients to relapse after undergoing conventional chemotherapy and/or allogeneic hematopoietic cell transplantation. Our findings provide preliminary clinical validation of the utility of high throughput SCS for MRD evaluation.
Patients and cell samples
Human AML samples were obtained from patients at the Stanford Medical Center with informed consent, according to Institutional Review Board-approved protocols (Stanford Institutional Review Board Nos. 18329 and 6453). Collection occurred between 2011 and 2015, with samples of bone marrow and peripheral blood obtained from 14 patients with de novo AML, aged 22 to 71 years. Mononuclear cells were isolated from patient samples by using Ficoll separation (GE Healthcare Life Sciences) and cryopreserved in liquid nitrogen in 90% fetal bovine serum and 10% dimethyl sulfoxide. Analysis was performed on freshly thawed cells. To be included in the analysis, patients had to either have achieved CR or CR with incomplete hematologic recovery defined according to the 2017 European LeukemiaNet (ELN) guidelines. 11 All patients were treated with an anthracycline-and cytarabine-containing induction regimen.
Targeted NGS of leukemia-associated mutations
Targeted amplicon sequencing was performed as previously described on select cases. 12 VAF was defined as: (mutant read no.) / (wild-type read no. + mutant read no.). Read counts and primer pairs are available on request. Each locus was sequenced to >500-fold coverage for >99% of assays.
Single-cell sequencing

SCS was performed by using Mission Bio's Tapestri AML platform, which assesses hotspot mutations in human AML (supplemental Figure 4), according to the manufacturer's protocol. Briefly, cryopreserved bone marrow aspirates or peripheral blood mononuclear cells were thawed and counted before loading ~150 000 cells onto the Tapestri microfluidic cartridge. Cells were emulsified with lysis reagent and incubated at 50°C before thermal inactivation of the protease. The emulsion containing the lysates from protease-treated single cells was then microfluidically combined with targeted gene-specific primers, PCR reagents, and hydrogel beads carrying cell-identifying molecular barcodes using the Tapestri instrument and cartridge. After generation of this second, PCR-ready emulsion, molecular barcodes were photocleavably released from the hydrogels with UV exposure, and the emulsion was thermocycled to incorporate the barcode identifiers into amplified DNA from the targeted genomic loci. The emulsions were then broken by using perfluoro-1-octanol, and the aqueous fraction was diluted in water and collected for DNA purification with SPRI beads (Beckman Coulter). Sample indexes and Illumina adaptor sequences were then added via a 10-cycle PCR reaction, and the amplified material underwent SPRI purification a second time.
After the second PCR and SPRI purification, full-length amplicons were ready for quantification and sequencing. Libraries were analyzed on a DNA 1000 assay chip with a Bioanalyzer (Agilent Technologies) and sequenced on an Illumina MiSeq with 150 bp paired-end chemistry. A single sequencing run was performed for each barcoded single-cell library prepared with our microfluidic workflow. A 5% ratio of PhiX DNA was used in the sequencing runs. Sequencing data were processed by using Mission Bio's Tapestri Pipeline (adapter trimming with Cutadapt, sequence alignment to the human reference genome hg19 [GRCh37.p13], barcode demultiplexing, and cell-based genotype calling using GATK/HaplotypeCaller). Data were analyzed by using Mission Bio's Tapestri Insights software package and R software (R Foundation for Statistical Computing). In detail, the following quality metrics were used to filter for high-quality cells and variants: genotype quality score (default >30), reads per cell per amplicon (>10), mutant genotype VAF (>20%), germline variants as confirmed according to the ClinVar database (false), and variants mutated <1% in all samples in a series (diagnosis, remission, and relapse). 13 These filters affect different parameters such as variant quality score, read depth per variant per cell, and limit of detection. Only variants with clinical implications known from databases (ClinVar and dbSNP) or verified by previous bulk NGS sequencing were selected to identify groups of cells that can be aggregated as arising from a single clone. The number of clones can vary depending on parameter selections during filtering. In all the selected clones, variants assigned a heterozygous genotype must have a VAF between 40% and 60%, <1% to be called wild-type, and >95% to be called homozygous. The allele dropout (ADO) rate was estimated with data generated from ADO amplicons. ADO amplicons span polymorphic regions of the genome with a minor allele frequency of 50% (i.e., heterozygous variants). If at least 3 of 10 ADO amplicons were called heterozygous in at least 75% of all cells, the average fraction of cells with homozygous calls (reference or mutant) represents the ADO rate. In addition, during secondary analysis, potential ADO clones were identified and removed by using variant-specific performance metrics, including reads per cell and genotype quality scores.
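To make the effect of these thresholds concrete, here is a hedged sketch of the filtering logic applied to a toy cell-by-variant table. The column names and example values are invented; the actual filtering is performed inside Tapestri Insights, so this is only a conceptual re-implementation in pandas.

```python
import pandas as pd

# Hypothetical long-format table: one row per (cell, variant) genotype call.
calls = pd.DataFrame({
    "cell": ["c1", "c1", "c2", "c2"],
    "variant": ["NPM1_W288fs", "FLT3_D835Y"] * 2,
    "gq": [99, 45, 12, 88],        # genotype quality score
    "reads": [120, 35, 8, 60],     # reads per cell per amplicon
    "vaf": [0.52, 0.48, 0.02, 0.55],
    "genotype": ["het", "het", "wt", "het"],
})

# Cell/variant-level quality filters mirroring the thresholds in the text.
good = calls[(calls.gq > 30) & (calls.reads > 10)]
good = good[(good.genotype == "wt") | (good.vaf > 0.20)]  # mutant calls need VAF > 20%

# Variant-level filter: drop variants mutated in <1% of retained cells.
n_cells = good.cell.nunique()
mut_frac = good[good.genotype != "wt"].groupby("variant").cell.nunique() / n_cells
kept_variants = mut_frac[mut_frac >= 0.01].index
filtered = good[good.variant.isin(kept_variants)]
print(filtered)
```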
Genomic landscape and mutation cooccurrence analysis
Data were initially analyzed by using the Tapestri Insights software package, which grouped cells with unique mutations. After filtering for high-quality variants, groups of cells with unique mutations were labeled as a distinct clone, creating 97 clones across all 38 samples. These clones were subsequently analyzed as individual observations, or "patients," and the mutation landscape across all clones was evaluated by using maftools R package. Cooccurrence analysis was performed among these clones, and results were compared with perpatient VAFs inferred from SCS and bulk NGS from The Cancer Genome Atlas (TCGA). 14 Only mutations identified from the targeted sequencing panel were included. Spearman correlation coefficients were determined by using the corrplot R package and displayed with adjusted P values in terms of the false discovery rate.
Mutation order analysis
The clonality patterns for each sample (n = 38) were determined by using previously described methods. 15 Ancestral (d1) and descendant (d2) driver mutations were identified for each sample, and an edge between each such driver pair was constructed (outgoing for d1 and ingoing for d2). In- and out-degrees for each driver event were counted, and hypothesis testing was performed by applying 2-tailed binomial tests to infer whether a driver event was early (greater number of out- vs in-degrees). Q values were determined in terms of the false discovery rate to account for multiple hypothesis testing using the qvalue R package.
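The in/out-degree test lends itself to a compact implementation. The sketch below assumes a pooled list of ancestor-to-descendant driver pairs (invented here for illustration) and applies a two-sided binomial test per gene, mirroring the procedure described above; the original analysis was done in R with FDR-adjusted q values.

```python
from collections import Counter
from scipy.stats import binomtest

# Hypothetical ancestor -> descendant driver pairs pooled across samples.
edges = [("DNMT3A", "NPM1"), ("DNMT3A", "FLT3"), ("NPM1", "FLT3"),
         ("IDH2", "NPM1"), ("NPM1", "KRAS"), ("DNMT3A", "IDH2")]

out_deg, in_deg = Counter(), Counter()
for ancestor, descendant in edges:
    out_deg[ancestor] += 1   # outgoing edge for the earlier driver
    in_deg[descendant] += 1  # ingoing edge for the later driver

for gene in set(out_deg) | set(in_deg):
    k, n = out_deg[gene], out_deg[gene] + in_deg[gene]
    p = binomtest(k, n, p=0.5, alternative="two-sided").pvalue
    label = "early" if k > n - k else "late"
    print(f"{gene}: out={k}, in={n - k}, P={p:.3f} -> tentatively {label}")
```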
Clonal diversity and evolution analysis
Shannon, Simpson, Menhinick, Margalef, and richness indices were determined by using the clonal composition of each sample. Values at diagnosis and remission, and changes in values from diagnosis to remission, were calculated for each patient. Random forest regression against RFS identified the Menhinick richness index at diagnosis and the change in the Simpson diversity index as the most influential features. Cox proportional hazards models were fit by using the Menhinick index at diagnosis, change in Simpson index, age, sex, and 2017 ELN risk category against RFS. The significance of each variable was assessed by using the Wald statistic. Median values for richness and change in diversity were used as split points for patient classification. End points (e.g., RFS) were defined according to standard criteria. The Kaplan-Meier method and log-rank test were used for unadjusted analyses of time-to-event end points. Analysis was performed by using the vegan, randomForest, randomForestExplainer, tree, survival, and survminer R packages.
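For concreteness, these ecological indices reduce to short formulas over a vector of per-clone cell counts. The sketch below is a minimal Python rendering of the quantities named above (the original analysis used the vegan R package); the Simpson index is written in its Gini-Simpson form, which is one of several common conventions.

```python
import numpy as np

def diversity_indices(counts):
    """Richness/diversity metrics for a vector of per-clone cell counts."""
    counts = np.asarray([c for c in counts if c > 0], dtype=float)
    n = counts.sum()   # total cells
    s = counts.size    # number of distinct clones (richness)
    p = counts / n     # clone frequencies
    return {
        "richness": s,
        "shannon": -np.sum(p * np.log(p)),
        "simpson": 1.0 - np.sum(p**2),   # Gini-Simpson form
        "menhinick": s / np.sqrt(n),
        "margalef": (s - 1) / np.log(n),
    }

# Example: a diagnosis sample with one dominant clone vs an evenly mixed sample.
print(diversity_indices([900, 50, 30, 20]))
print(diversity_indices([250, 250, 250, 250]))
```

With equal clone sizes the Simpson and Shannon values are maximal for a given richness, matching the intuition in the Results that a more even clonal composition is "more diverse".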
Statistical methods
Statistical comparisons were performed by using the Wilcoxon rank sum test for continuous variables and Fisher's exact test for categorical variables, unless otherwise noted.
Results
Fourteen patients with de novo AML who achieved a CR after combination induction chemotherapy were consented for institutional tissue banking and included in our cohort. Consecutive peripheral blood or bone marrow samples were sequenced at diagnosis, remission, and relapse for 10 relapsed patients and at diagnosis and remission for 4 nonrelapsed patients (n = 38 samples). SCS was performed by using a microfluidic, droplet-based platform (Tapestri). 16 The Tapestri AML panel included hotspot regions of 19 recurrently mutated genes (supplemental Figure 4), and the study cohort was selected for those patients with mutations in these genes as determined by bulk NGS. A total of 310 737 cells were sequenced (average 8177 prefiltered cells per sample), with an average of 2829 reads per cell (supplemental Table 1). Cell capture rates were between 5% and 10%; however, the microdroplet cell capture process is not selective or size dependent, and should not introduce biases in resolving the underlying clonal architecture. 17 The limit of detection of the platform is 0.1%, which has been reported with cell line spike-in experiments prepared at different ratios. 13 Baseline patient characteristics are provided in Table 1, and additional clinical information is provided in supplemental Table 2. Targeted SCS identified additional variants that were not detected by bulk NGS but was unable to detect variants in ASXL1 for 3 patients. A majority of patient samples contained an NPM1 mutation (23 of 38 [60.5%]), which was identified in 51% of clones across all patient samples. FLT3 mutations were identified in 31.6% (12 of 38) of patient samples and were present in 26% of all clones. The mutational frequencies were generally similar to those identified from the TCGA cohort (supplemental Figure 1). 14 The mutation landscape from SCS is illustrated in Figure 1A for all unique clones identified (n = 97) across all patient samples.
Using SCS data, we accurately resolved the clonal dynamics during treatment and studied the timing of mutations in driver genes by analyzing their cooccurrence patterns. We systematically annotated ancestor-descendant relationships for each pair of mutations that cooccurred in at least 1% of sequenced cells in a sample. Mutations were classified as occurring early or late by comparing frequencies of ancestral and dependent mutations, and mutation order was inferred with statistical significance by using the binomial test (as discussed in "Methods"). This analysis identified DNMT3A and IDH2 as early mutations, although there were cases in which IDH2 seemed to have been acquired after NPM1 (Figure 1B). Variants in NPM1 and FLT3 were acquired at intermediate stages, whereas mutations in RAS and KIT were predominantly late acquisitions. SCS provided increased power to identify cooccurring mutations compared with inference from bulk VAFs and with NGS results from the TCGA cohort (Figure 1C-E). For patients who eventually relapsed, there was a greater number of clonal cooccurring variants at diagnosis (2 vs 1; P = .01) (Figure 1F).
In comparing the results of serial specimen SCS from individual patients, we identified clonal mutations in remission for 8 of 10 patients who eventually relapsed (80%) and for 3 of 4 patients who never relapsed (75%) (Table 1). Furthermore, the predominant clone detected at relapse was identified in 4 of 9 evaluable remission samples (44%) for patients with relapsed disease. Thus, SCS detected relapse-causing MRD in primary patient specimens at a higher frequency than published results using NGS (64 of 340 [19%]), albeit in a smaller cohort. 9 The lower limit of detection by SCS was similar to published results using the Tapestri platform. 17 We have previously reported on the heterogeneity of relapsed AML, showing multiple distinct patterns of clonal evolution, including relapse with the predominant clone from diagnosis, with a minor subclone from diagnosis, or with further clonal evolution. 18 Individual cases from the current cohort also display diverse patterns of clonal evolution, with patterns similar to our initial findings. Case SU067 (Figure 2A) revealed a clonal switch from a predominant NPM1/PTPN11 mutant clone at diagnosis to one with NPM1 and WT1 mutations at relapse. The relapse-initiating clone was detected at 0.24% (10 of 4136 cells) in remission. From our previous analysis of this case, we were not able to detect this variant at remission using targeted NGS, indicating the potential utility of SCS for MRD detection. Case SU291 (Figure 2B) revealed a clonal switch from an NPM1/IDH2 mutant clone to an NPM1/IDH1/FLT3-tyrosine kinase domain cooccurring clone. SCS of remission samples at days 26 and 359 did not detect any MRD. Case SU320 (Figure 2C) revealed that the dominant IDH2/NPM1/KRAS mutant clone at relapse was a minor clone at diagnosis, and day 30 SCS did not detect any MRD. Case SU353 (Figure 2D) relapsed with the same major clone identified at diagnosis, and sequencing of the remission sample identified MRD at 1.3% (82 of 6442 cells). Our previous analysis also revealed that this patient had MRD as measured by targeted amplicon sequencing of the flow-sorted CD34+ compartment at remission. Although this patient was in hematologic CR, the high level of MRD was prognostic, as the patient relapsed early, at day 62. Case SU654 (Figure 2E) also relapsed with the same dominant clones from diagnosis. Sequencing of the remission sample identified MRD at 0.12% (10 of 7840 cells), as well as clonal mutations in DNMT3A and IDH1. In contrast to patient SU353, patient SU654 relapsed later, on day 248. Case SU674 (Figure 2F) relapsed with a minor clone from diagnosis, and no MRD was detected according to SCS of the remission sample.
SU320, SU654, and SU674 illustrate cases in which pre-leukemic clonal hematopoiesis can be distinguished from MRD by using SCS. Figure 2G-H illustrates 2 cases of patients who remained in remission. In case SU564, only one mutation (in KIT) was detected at diagnosis, although it is likely that other mutations not covered by the Tapestri panel were present. Regardless, the KIT mutation was measured at 0.08% (6 of 7179 cells) at remission. In case SU290, there were 12 unique clones identified at diagnosis, and the GATA2 mutation was measured at 0.02% (2 of 8726 cells) in the remission sample by using SCS. Of note, we observed AML clones at remission for 2 cases that continue in remission, SU380 and SU218 (supplemental Figure 2), which was likely due to early sampling. The remaining cases are illustrated in supplemental Figure 2. Altogether, these cases illustrate the wide diversity of clonal architecture and responses to treatment observed in patients with AML, and the utility of SCS for directly identifying MRD in some patients who subsequently relapse.
Clonal evolution in AML can be influenced by both anti-leukemia therapies and the microenvironment; however, AML evolution during treatment and its clinical relevance are not completely understood. Modeling clonal heterogeneity using richness and diversity metrics can provide insights into cancer evolution and its association with treatment resistance and disease relapse. 10 Richness indices such as the Menhinick index quantify the number of different clones in a sample. Diversity indices such as the Simpson index account not only for the number of observed clones but also for the relative abundance of each clone: a sample with equal clonal frequencies would be considered more diverse than a sample with the same total number of clones but a single dominant clone. To evaluate the importance of clonal richness and diversity in AML, we characterized the cellular composition of patient samples at diagnosis and remission by using standard ecosystem metrics (Figure 3A; supplemental Figure 3). Among these metrics, random forest regression against RFS identified the Menhinick richness index at diagnosis and the change in the Simpson diversity index from diagnosis to remission as the 2 most influential features.
Cox proportional hazards analysis identified the change in the Simpson diversity index as the measurement most significantly associated with RFS (hazard ratio, 0.077; P = .03), compared with the Menhinick richness index, age, 2017 ELN risk category, and sex (Figure 3B). The 2017 ELN molecular risk stratification 11 was determined by using mutation data from both SCS and bulk NGS. Stratifying patients based on changes in clonal diversity (supplemental Table 4) showed an RFS benefit for patients who had a greater decrease in diversity at remission (median not reached vs 224 days; P = .008) (Figure 3C). These findings suggest that although patients with AML have a similar degree of clonal richness at diagnosis, a greater decrease in leukemia diversity at remission may be associated with longer RFS. Stability or an increase in AML diversity may therefore be a measure of leukemia fitness and treatment resistance. Overall, this analysis is limited by small patient numbers, the size of the genetic panel, and possible ADO; however, the results suggest that clonal diversity and mutation cooccurrence are clinically relevant in AML.
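A minimal sketch of this survival comparison, using the Python lifelines package in place of the survival and survminer R packages used in the study; the per-patient table below is invented toy data, and the small penalizer is added only to keep the toy fit stable.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: RFS, event flag, and diversity metrics.
df = pd.DataFrame({
    "rfs_days": [62, 248, 400, 150, 700, 90, 510, 224],
    "relapsed": [1, 1, 0, 1, 0, 1, 0, 1],
    "delta_simpson": [-0.05, -0.55, -0.60, -0.02, -0.70, 0.01, -0.10, -0.08],
    "menhinick_dx": [0.9, 1.1, 1.0, 0.8, 1.2, 0.7, 1.0, 0.9],
})

# Cox model: all non-duration/event columns are treated as covariates.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="rfs_days", event_col="relapsed")
cph.print_summary()

# Split at the median change in Simpson diversity and compare RFS curves.
split = df.delta_simpson <= df.delta_simpson.median()
res = logrank_test(df.rfs_days[split], df.rfs_days[~split],
                   event_observed_A=df.relapsed[split],
                   event_observed_B=df.relapsed[~split])
print(f"log-rank P = {res.p_value:.3f}")
```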
Discussion
Over the past 30 years, significant advances have been made in defining the prognosis of AML patients based on clinicopathologic features, cytogenetic aberrations, and somatic mutations. Increasingly, the heterogeneity of AML at the molecular level has become apparent. Although molecular and cytogenetic profiling continues to provide the framework for risk stratification used to guide management of AML, 11 there has been inconsistency in the NGS-based classification systems used in clinical practice. Here we show that SCS of AML samples at diagnosis, remission, and relapse allowed for quantification of cooccurring mutation variants, differentiation of pre-leukemic clonal hematopoiesis from relapse-causing clones, identification of clinically relevant MRD, and investigation of evolutionary trajectories during treatment.
In our data set, persistence of clones with multiple variants during remission was associated with increased risk of relapse. This finding is similar to previously published work using bulk NGS assays with a sensitivity of 0.2%, which showed that persistence of ≥2 lesions was associated with significantly reduced leukemia-free survival and OS. 19 This finding raises the possibility that identification of complex or multiple clones during remission increases the risk of resistant disease and future relapse. Multiple groups are exploring personalized digital droplet PCR assays for MRD tracking to leverage this finding. [20][21][22] However, SCS offers the opportunity to avoid the need for a personalized approach, while also allowing for identification of de novo or previously undetectable somatic variants. SCS also provides direct quantification of clonal diversity, and modeling clonal evolution may be relevant for understanding AML outcomes (Figure 3). Additional research is needed to verify these observations in a larger cohort of patients.
Our results suggest a possible increased sensitivity of SCS compared with NGS for identifying persistent mutations, with 80% of relapsed cases having ≥1 mutation identified at the remission time point. Previous studies have reported 40% to 51.4% of patients with persistent somatic variants at the time of remission according to bulk NGS sequencing. 8,9 This is in part due to the higher limit of detection of previously implemented NGS techniques, which usually detected VAFs down to 2%.
Considerable interest exists regarding the use of MRD status in AML to help inform escalation or de-escalation therapeutic strategies (e.g., initiating or intensifying treatment of patients with MRD to lessen the risk of relapse). Furthermore, assessment of MRD status as a surrogate end point for clinical trials is also being actively explored. SCS may add to this field by allowing for unequivocal resolution of clonal structure at the time of remission, as well as for identification of previously difficult-to-detect emerging resistant clones. 17 This approach may allow for better risk stratification and lead to proactive treatment of persistent or emerging treatment-resistant clones (Figure 3D). We also note that single-cell analysis can provide unambiguous resolution of persistent pre-leukemic clonal hematopoiesis from leukemic clones at remission. In addition, our observations support the need for serial sampling to assess MRD, as cases SU380 and SU218 had detectable variants at remission but remained in remission. In fact, the recent consensus document by the ELN MRD working party recommends serial measurements of MRD during treatment. 4 Current limitations of SCS for MRD detection include limited single-cell throughput, relatively small panel size, ADO, and the inability to multiplex DNA with other analytes. Addressing these limitations may further improve the limit of detection and specificity, allowing routine application of this technology in clinical research and practice. Already, decreasing costs of sequencing have allowed for expansion of the current panel from 19 genes (50 amplicons) to 47 genes (330 amplicons) in the next iteration. In addition, multi-omics capabilities of the current platform have been shown, possibly allowing for further characterization of MRD clones beyond just the DNA mutational signature. 23 Finally, the recent ELN MRD Working Party recommendations suggest that at least 10 000 cells, if not upwards of 500 000 cells, are needed to accurately detect MRD below the 0.01% threshold. 4 Improvements in microfluidics technology have increased throughput to >50 000 cells, 24 theoretically decreasing the limit of detection into the range of most error-corrected NGS and digital droplet PCR technologies. 25 However, significant improvements in SCS throughput are still needed to accurately quantify clinical MRD.
In conclusion, SCS-based evaluation of MRD during CR may allow for identification of AML patients at high risk for relapse. It specifically enabled the differentiation of pre-leukemic clonal hematopoiesis from leukemic clones responsible for relapse. In addition, greater clonal complexity was associated with reduced elimination of all malignant clones with standard chemotherapy regimens. This observation was associated with a higher risk of resistant clones persisting and eventually causing clinical relapse. Based on these results, SCS MRD assessment may be useful for informing treatment decisions in first remission and for following clonal evolution during and after conventional therapy in AML.
Definition of the Gene Content of the Human Genome: The Need for Deep Experimental Verification
Based on the analysis of the drafts of the human genome sequence, it is being speculated that our species may possess an unexpectedly low number of genes. The quality of the drafts, the impossibility of accurate gene prediction and the lack of sufficient transcript sequence data, however, render such speculations very premature. The complexity of human gene structure requires additional and extensive experimental verification of transcripts that may result in major revisions of these early estimates of the number of human genes.
Introduction
Of all the justifications for sequencing the human genome, the identification of the complete set of human genes is probably the most compelling. Certainly, the gene content is the facet of the genome of widest interest to academic scientists and corporate research organizations alike. In this regard, there have been strong statements made about the gene content of the human genome, particularly in the press, following the completion of the draft human genome sequence. The general trend has been to draw attention to the conclusion that the human genome contains a surprisingly small number of genes, not significantly removed from the number of genes present in the genomes of the lower eukaryotes that have been sequenced. The published manuscripts [18,29] describing this milestone in the evolution of science are somewhat more cautious, however, and the truth is that at the present time we have no real idea of the number of human genes, let alone what they encode and how they function. It is ironic that the essential completion of human genome sequencing, with the enormous investments of time and money that this has entailed, has not led to the most eagerly awaited portion of the information that it contains: the identification of human genes. The reason for this is that, although the human genome sequence is essential for the accurate description and cataloging of human genes, it is not sufficient. Human genes are highly complex structures, and as yet we are not able to predict their presence with any certainty by inspection of genomic DNA sequence. Rather, this absolutely requires direct experimental evidence in the form of transcript sequencing.
Identification of genes within prokaryotic and eukaryotic genome sequences
The paradigm of whole-genome sequencing as a route to determining gene complement has proved robust in the context of prokaryotic organisms [6,12]. Bacterial genomes are highly compact, suggesting strong selective pressure to reduce genome size. Genes sit head to tail with one another and, crucially for gene hunters, are uninterrupted by introns. The standard procedure for gene identification is therefore to first identify open reading frames with an algorithm such as Glimmer [8]. These ORFs are subsequently annotated, or assigned putative function, on the basis of comparison with known genes or proteins from other organisms using programs such as BLAST [2].
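As a toy illustration of this two-step procedure, the sketch below scans both strands of a sequence for ATG-initiated ORFs above a length cutoff; real gene finders such as Glimmer add probabilistic models of coding potential on top of this, and the BLAST annotation step is indicated only by a comment. The 100-codon cutoff is an arbitrary illustrative threshold of the kind discussed next.

```python
STOPS = {"TAA", "TAG", "TGA"}

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def orfs(seq, min_codons=100):
    """Yield (strand, frame, start, end) for ATG-initiated ORFs above a cutoff."""
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for frame in range(3):
            start = None
            for i in range(frame, len(s) - 2, 3):
                codon = s[i:i + 3]
                if start is None and codon == "ATG":
                    start = i
                elif start is not None and codon in STOPS:
                    if (i - start) // 3 >= min_codons:
                        yield strand, frame, start, i + 3
                    start = None

genome = "ATG" + "GCT" * 120 + "TAA"   # synthetic example sequence
for hit in orfs(genome, min_codons=100):
    print(hit)  # each hit would then be annotated, e.g. by a BLAST search
```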
Even for bacterial genomes this is not foolproof, however, as an arbitrary lower limit has to be imposed on the size of ORFs taken as real, close to the sizes that can be expected to occur by chance within noncoding DNA sequences. This is a serious limitation in the absence of any similarity between the putative ORF and known genes and proteins. Nevertheless, the identification of approximately one gene per kb of genome sequence has been possible for all bacteria for which the genome sequence is publicly available. The confidence level is high due to the combined evidence of long open reading frames and similarity with previously defined genes. Thus, although there may be some error in the precise definition of the initiation codon, in general there is no need for further confirmation of gene structure by transcript sequencing or microarray experimentation. In addition, it should be remembered that this kind of gene identification is based on complete, high-quality sequences that contain no gaps and in which all ambiguities have been resolved.
There are two fundamental differences between prokaryotic and eukaryotic gene structure that complicate the identification of eukaryotic genes within genome sequence. The first is that the relative proportion of the genome occupied by genes is considerably smaller in eukaryotes. Although there is approximately one gene per kb in bacterial genomes, there is only about one per 100 kb in the human genome [18,29]. Thus we are dealing with a structure in which the genes are two orders of magnitude more widely spread. Far more important, however, is that eukaryotic genes are fragmented into exons separated by intervening introns. Thus the first step in gene identification, that of putative ORF detection, is not possible in the context of the human genome. This is the fundamental problem of human gene identification based on genomic sequence alone. Indeed, this problem is more acute in the human genome than in the other eukaryotic genomes sequenced, owing to the significantly greater sizes of human introns [18,29].
By aligning previously sequenced, complete cDNAs with human genomic sequence the general characteristics of human gene structure have been outlined. The comparison of these data with other eukaryotic genomes shows that the average overall length of coding regions in C. elegans (the worm), D. melanogaster (the fly) and for H. sapiens (human) is 1311, 1497 and 1340 bp respectively [18]. In addition, it reveals that in all three organisms internal exons are generally between 50 and 200 bp with the average exon sizes for the worm and for human being 218 bp and 147 bp respectively. On the other hand, intron sizes are found to be significantly larger in human. In the worm and the fly the averages are 267 bp and 487 bp respectively while in human the average is roughly ten times greater, 3300 bp [18]. Moreover, the variation in intron size is markedly greater in human. This highly dispersed and variable nature of human genes makes them simply impossible to detect with any accuracy by simple inspection of the human genome, even aided by the most sophisticated algorithms produced to date. Thus, when all is said and done, the sequencing of the human genome did not lead to gene identification, as was the expectation. Genes previously defined by cDNA sequencing could be aligned to the genome, allowing their precise mapping and the definition of intron-exon structure, but no new genes could be identified from genome sequence alone.
Gene identification in the human genome drafts
Although gene discovery did not feature in the completion of the draft genome sequence, both the International Human Genome Sequencing Consortium (IHGSC) and Celera projects catalogued the positions of those genes that are already well defined, by comparison of high-quality, full-length mRNA sequences with the draft genome [18,29]. This allowed the intron-exon boundaries of the corresponding genes to be defined for the first time in many cases. Importantly, however, this exercise also permitted the suitability of the draft genome for novel gene identification to be assessed.
In both projects the RefSeq database [22] was used as the source of high-quality full-length transcript sequences. RefSeq is a carefully, manually curated, non-redundant data set that contains most genes for which a reliable full-length mRNA sequence is available [22]. At the time of the genome annotation, RefSeq contained 10 271 human mRNAs. When these were compared to the IHGSC draft, it was found that 92% of the RefSeq sequences showed high-stringency alignment over at least some portion of their length, 85% could be aligned over at least half of their length, but in only 52% could an essentially complete alignment be achieved [18]. Thus almost half of known genes were only partially represented in the genome sequence, demonstrating the rudimentary nature of the draft genome sequence and hence its present unsuitability as a basis for novel gene identification. Even if it were possible to accurately predict genes based on sequence data alone, the draft at the time of publication is arguably simply too fragmented to make this a worthwhile exercise. Indeed, on examining the 10 largest genes in RefSeq, Aach et al. found that only six had both ends in the same contig of the human genome assembly, two genes had ends in different contigs, and the remaining two had only one end that could be found within the genome sequence [1].
In the case of the Celera [29] sequence it was possible to identify only 6538 of the genes corresponding to the RefSeq sequences on the basis of a match against the genome for at least 50% of their length with >92% identity. Again, this rather small number reveals the extent of fragmentation and error in the sequence and the difficulty therefore of using it for novel gene prediction.
Thus in both cases we have to take the highly fragmented nature of the draft sequences into account when assessing estimates of gene numbers. Clearly over the coming months an essentially finished sequence will become available that will circumvent this problem. The question then remains as to how to identify genes in this high quality sequence.
Human gene prediction
Despite the shortcomings of the draft sequence as a source for gene discovery, efforts were made by the IHGSC in this direction by building what they term an initial gene index (IGI) [18]. This was produced firstly by using the Ensembl system that involves a prediction program together with confirmatory evidence from ESTs, proteins, protein motifs and sequences from other organisms. In addition, a second approach was taken whereby attempts were made to extend EST and mRNA matches using statistical approaches. As a result of these studies a total of 31 778 protein predictions were made of which 14 882 represented known genes [18].
The limitations of this approach were assessed by comparison with newly discovered genes arising from independent work that were not used in the gene prediction effort. Of 31 such genes, only 19 (68%) were represented in the predictions. Furthermore, of each predicted gene, an average of 79% was detected [18]. In a less direct but larger-scale approach, a set of 15 294 full-length mouse cDNAs was examined, and again only 69% showed any similarity with the predicted human genes [17,18]. Moreover, of 817 mouse hypothetical transcripts for which there were no corresponding human genes in the RefSeq, Human Unigene or Ensembl databases, only 174 perfectly matched GenScan predictions, and 322 sequences did not hit any exons predicted by GenScan. The remaining 311 showed partial matches because GenScan did not predict one or more exons [17]. Although detailed calculations of sensitivity, fragmentation and prediction rates are tempting, the best conclusion is that these approaches are so inexact that there is little point in extrapolating from such theoretical exercises to the number, structure or function of human genes.
A combination of predictions and comparisons with proven transcripts was also utilized in order to identify human genes within the Celera draft [29]. In addition to the genes identified on the basis of RefSeq comparisons, a further 11 226 genes were predicted using a novel system named 'Otto' that attempts to reproduce, in an automated way, the kind of assessment of transcript evidence that a human annotator undertakes. In addition, 8619 genes were detected on the basis of at least two confirmatory lines of evidence (ESTs, protein, mouse genome matches) for separate de novo gene predictions. This latter number increased to 21 350 if only one line of confirmatory evidence was taken as sufficient [29]. Thus the overall numbers that result from this analysis are very similar to those obtained from the IHGSC project [18]. Although the kind of detailed assessment of the limitations of the predictions provided in the IHGSC paper was not presented, the numbers provided suggest that a similar level of accuracy and completeness is probable. Indeed, in this regard, the data of Aach et al. indicate that the two draft sequences are of similar quality as judged by sequence gaps, continuity, consistency between the two sequences, and patterns of DNA-binding protein motifs [1].
Both studies leave us at a very preliminary stage, as judged by the similarity of the numbers of genes found and by the documented inaccuracy of the methodology utilized, as detailed in the IHGSC manuscript. This lack of precision in the prediction of human genes has been amply documented elsewhere, and the data in the genome papers are entirely consistent with the overall position of this field [5,13].
Estimates of the number of human genes
The final overall estimates of the number of genes in the human genome are 30-40 000 in the case of the IHGSC and 26-38 000 in the case of Celera [18,29]. These estimates were made despite the shortcomings outlined above. This does not mean that the estimates are wrong, only that it is too early to be sure. They are consistent with extrapolations of gene numbers from the published chromosome 21 and 22 sequences [10,14]. In addition, the human genome papers quote recent independent estimates as supporting evidence for these low numbers. One of these papers calculates the gene number by comparing the number of known genes and ESTs and arrives at estimates of approximately 34 000 [11]. The known genes used were those for which we have a full-length mRNA or those annotated on chromosome 22. The estimate depends on these sets being representative of all genes, particularly in terms of expression level. At least in terms of the full-length mRNAs this is clearly not the case, and thus the assessment may be flawed. For example, if we take Unigene cluster size as a rough estimate of expression level, we find 38 789 clusters in Unigene Build 128 composed of two to 10 sequences (representing rarely expressed genes), of which 1985 (5.1%) contain a full-length cDNA, whereas there are 4572 clusters of 100 or more sequences (representing highly expressed genes), of which 4249 (92.9%) contain a full-length cDNA (unpublished observations).
The other paper involves comparison between human and Tetraodon nigroviridis (a pufferfish) DNA as the basis of exon identification [7]. This estimate arrives at the similar number of 28-34 000 genes. Again, however, this estimate crucially relies on the relatedness of the fully characterized human genes and pufferfish sequences reflecting that of the yet to be defined human genes and pufferfish sequences. It should be noted that a companion paper of those cited above that simply depended on the very careful clustering of available EST sequences came to the conclusion that there are in the range of 120 000 human genes [21]. This paper did not have the benefit of the human genome sequence to aid clustering and may certainly have overestimated gene number due to the complexity of alternative splicing and polyadenylation. Nevertheless, it serves to show how essentially the same data can lead to very different conclusions when analyzed in different ways using different assumptions.
The need for further transcript sequencing
In the closing sections of their paper the Celera team admit: 'As was true at the beginning of genome sequencing, ultimately it will be necessary to measure mRNA in specific cell types to demonstrate the presence of a gene' [29]. We wholeheartedly agree with this statement. A pervasive view is that the sequencing of the genome of other species may also be a strategy for gene identification [3,7]. Certainly, comparison with organisms at an appropriate evolutionary distance is a valuable way of identifying probable genes. The more genomes there are to compare the better such predictions will be. We believe, however, that this will never substitute for transcript sequencing due to the difficulty in identifying the exact start and stop of each exon not to mention the added value of alternative splicing and expression patterns that transcript sequencing provides.
That is not to say that transcript sequencing is without its shortcomings. Firstly, it is clear that the amount of transcript data that will be required to find all human genes will be enormous. At the time of the annotation of the draft human sequences, around 10 000 putative full-length sequences were available, and on the order of 3 million ESTs. This was clearly woefully deficient given the huge uncertainties in finding human genes that are alluded to above. It may ultimately be necessary to obtain the full sequence of at least one example of every transcript in every cell type and at all developmental stages to identify all genes (the attraction of gene prediction is that these daunting requirements are circumvented). In addition, it will be necessary to cover each gene several times in different tissues in order to identify splicing alternatives that are often tissue specific. One approach to this multiple coverage is to adopt a high-throughput approach to transcript sequencing in a shotgun-like format. This can now be effectively achieved using a combination of 3′ and 5′ EST sequencing together with our own Open Reading Frame EST (ORESTES) approach that tags the central portions of transcripts [9]. ORESTES is also a more realistic approach for searching for rare, tissue-specific transcripts. The ORESTES methodology strongly normalizes and uses only minimal amounts of mRNA, permitting such surveying to be contemplated. One could contemplate using ORESTES to provide the initial evidence of a transcript, followed by a planned experimental strategy such as cDNA library screening or RACE to find the rest of the transcript. Alternative splices could then be sought by RT-PCR analyses.
The other principal problem with transcript sequencing is its technical difficulty, which is significantly greater than that of genome sequencing, particularly in relation to template preparation. In this regard, trace amounts of genomic DNA are often incorporated into both ORESTES and conventional ESTs. Thus, careful analysis has to be undertaken, and confirmatory evidence, such as the presence of a splice site or the generation of the same putative transcript fragment from distinct libraries, must always be sought.
We take the view that a combination of exhaustive transcript sequencing together with the availability of a high-quality genome sequence is an absolute requirement for the compilation of a meaningful human gene catalogue. The first steps in this direction are now possible by careful and complete cross-analysis between the transcript and genomic sequence data. Such mapping exercises give an idea of the complexity of the situation and the necessity of an extensive investment in further experimental analysis. Figure 1 shows an example in which we have mapped all available transcript data to a region of the X chromosome. The example shows three regions of clustered ESTs. We suspect that those in the middle comprise a gene, since a putative full-length sequence has been generated. Such sequences have not been generated for the other two clusters, however. At the present moment, based on careful analysis of the 3′ sequences and the likelihood that they represent authentic poly-A tails, we predict that the left-hand cluster corresponds to a single gene, while that on the right actually represents two distinct, but closely positioned, genes.
Further, in relation to the complexity of the relationship between transcript structure and genome sequence, several situations well documented in the literature are pertinent. Firstly, there is the question of the generation of so-called antisense transcripts. It is well known that many genes are transcribed in both directions, producing antisense transcripts that appear to play an important regulatory role [28]. However, there are many examples where these antisense transcripts also contain ORFs and are indeed translated [4,20,23,24,25]. These should thus be considered, for all intents and purposes, distinct genes that would be difficult ever to predict without transcript, and eventually protein, analysis. In addition, there are intriguing examples of distinct genes being located within the introns of other genes [15,16,19,27]. Again, this is a very difficult situation to predict without transcript data.
We are in the process of systematically closing EST clusters to generate full-length transcript sequences by making predictions from the genome mapping of the ESTs followed by RT-PCR experimentation. This is a powerful approach that permits the examination of the genome piece by piece and does not require the essentially chance generation of a full-length transcript from a cDNA library to confirm the structure of a gene. In the lead-up to this project we have already compared the gene annotation on chromosome 22 with our own prediction of transcribed regions based on ORESTES sequences that have been generated in the FAPESP/LICR Human Cancer Genome Project currently being concluded in Brazil [26]. This project has generated in excess of one million human ESTs from the central regions of expressed human genes. When we mapped those from amongst our first 250 000 sequences that correspond to human chromosome 22, we were able to identify, based on stringent criteria, a further 219 regions not described in the original annotation. We believe, but have not yet established, that these may correspond to more than 100 novel genes on this chromosome alone [26].
|
2018-04-03T05:50:34.068Z
|
2001-06-01T00:00:00.000
|
{
"year": 2001,
"sha1": "979ba94fcade93a9aa93b3d65266d12f6dbc8966",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ijg/2001/273853.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ed1f94fc43e8d34a775c5094706fa27730007837",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
56072125
|
pes2o/s2orc
|
v3-fos-license
|
Learning in balance: Using oscillatory EEG biomarkers of attention, motivation and vigilance to interpret game-based learning
Motivated by the link between play and learning, proposed in the literature to have a neurobiological basis, we study the electroencephalogram and associated psychophysiology of "learning game" players. Forty-five players were tested for topic comprehension by a questionnaire administered before and after solo playing of the game Peacemaker (Impact Games 2007), during which electroencephalography and other physiological signals were measured. Play lasted for one hour, with a break at half time. We used the Bloom taxonomy to distinguish levels of difficulty in demonstrated learning, with the first five levels assigned to fixed questions, and "gain" scores to measure the actual value of demonstrated learning. We present the analysis of the physiological signals recorded during game play and their relationship to learning scores. Main effects related to biomarkers of vigilance and motivation, including decreased delta power and relatively balanced fronto-hemispheric alpha power, predicted learning at the analysed Bloom levels. Results suggest multiple physiological dispositions that support on-task learning styles, and highlight the utility of the psychophysiological method for interpreting game-based learning evaluations.
PUBLIC INTEREST STATEMENT
We present a neuro-cognitive study of the way that people learn during game play. The electroencephalogram of players was measured while playing Peacemaker (Impact Games 2007), a serious game designed to teach a balanced view of the Israel-Palestine conflict. The brain imaging data was analysed, and related to the learning outcomes measured by a knowledge test presented before and after playing. The results show that there are multiple physiological dispositions that support on-task learning styles. Dealing with a complex stimulus environment such as this game, the most successful strategy seems to be one of balance: of balance between the brain's hemispheres and between activation and dissociation.
Introduction
Recent developments in technology enhanced learning (TEL) have opened new formats where students can "learn by doing" (Squire, 2006) in a virtual environment: defined as a serious game. However, despite the growing consensus (see background) that such games can provide learning given the correct marriage of game and pedagogical design, the field still lacks some key lines of evidence as to how this type of learning happens. Such evidence includes the learners' subjective and objective experience and its relationship to learning; the exploration of this relationship using psychophysiological methods is our main motivation.
In the fields of game research (Cowley, Charles, Black, & Hickey, 2008), learning theory (Gee, 2003) and even animal behaviour (Groos, 1898), it has been observed that learning and play are intrinsically (though not necessarily) linked. The type of learning that occurs depends on the form of play. One productive line of research has suggested that action video games promote improved basic cognitive functioning (Bavelier, Green, Pouget, & Schrater, 2012), facilitating faster learning in general. Game-based pedagogy can also induce other forms of learning. In his analysis of mammalian functional neurology, Panksepp (1998, p. 294) suggests that we possess dedicated and spontaneously developing "play neurocircuitry", with many non-social functions connected to forms of learning: play increases physical fitness, skilful tool use, and the ability to innovate and think creatively. The latter ability, like other higher cognitive abilities such as social skills, is particularly valued in adult and corporate training. Training such soft skills requires a different style of learning game, and research on the physiology of learning in such games is much less well served than the study of action games' effect on basic cognitive functions.
To study the psychophysiological underpinning of learning that occurs in such "soft skills" or knowledge-focused games, we must examine learners' tonic brain activity, not only event-related activity as is usually studied. We present results from an experiment designed to address this need, motivated by the opportunity to build greater understanding of how serious game play can induce learning. We focus mainly on the electroencephalogram (EEG), notably frequency band power and asymmetry, which relate to such important aspects of learning as vigilance and motivation.
Our results suggest that tonic EEG measures of oscillatory brain waves may serve as a predictor of topic learning. The paper also contributes to the evaluation of game-based learning (GBL) and TEL, and the growing field of applied neuroscience in education and learning, by demonstrating the application of psychophysiological methodology.
In the next section, we describe the relevant state of the art on educational games and measurement of players by psychophysiological methods. The Methods section then details the study methodology, including sub-sections to outline the experiment procedure, the relevant aspects of the test game, the assessment questions and our chosen psychophysiological methods. Section 4 details our results; Section 5 discusses our interpretation and future directions, and Section 6 presents the final conclusions.
Background and state of the art
It has been claimed that learning is almost always part of play (Sutton-Smith, 1997). Games generally involve skills (even games of chance can be played more or less skilfully by odds recognition) and Koster (2005), among others, claims that building repertoires of nested skills is the heart of game play progression. Indeed, it is known that skill learning literally has a transformative effect on the player (Scholz, Klein, Behrens, & Johansen-Berg, 2009). Thus, if the delivery is good enough and the games are effective, the serious game player stands to gain a lot.
The efficacy of educational games has been debated and supported in studies for over three decades (Egenfeldt-Nielsen, 2006;Guillén-Nieto & Aleson-Carbonell, 2012;Malone, 1981;O'Neil, Wainess, & Baker, 2005). Kirschner and Clark (2006) claim that discovery, problem-based, experiential and enquiry-based techniques are the main tools of games. Habgood and Ainsworth (2011) argue that intrinsic motivation is required to make effective serious games. Sung and Hwang (2013)'s study supports the value of collaborative learning in games.
Research to develop a supportive theory for general media and psychophysiology has advanced in recent years, partly due to the reduced cost and improved reliability of psychophysiological measurement equipment. This advance in theory and methods has been reflected in a small but growing number of studies on the psychophysiology of learning in serious games (Cowley, Fantato, Jennett, Ruskov, & Ravaja, 2014; Pope & Palsson, 2001; Wamsley, Tucker, Payne, & Stickgold, 2010; Wang, Sourina, & Nguyen, 2011).
Psychophysiological methods
The psychophysiological method views the mind as being more comprehensible if its physical substrate is considered, in structural and functional terms (Cacioppo, Tassinary, & Berntson, 2000). The method involves using physiological signals, such as scalp potentials, respiration or electrodermal activity, to study psychological phenomena including frustration, mental stress/cognitive overload, approach/withdrawal motivation and attentional processes (Harmon-Jones, 2003;van Oyen Witvliet & Vrana, 1995). The value of psychophysiological methods for TEL evaluation is that the participant cannot give deliberately inaccurate physical signals and the acquisition of signals is non-intrusive, freeing the participant's attention onto the TEL task. Psychophysiology-enabled evaluation can then take place alongside other forms, such as self-report, entirely consistently. These attributes can potentially improve on-task attention during the protocol and reduce the measurement impact of participants' reactivity to being observed (e.g. often termed the "Hawthorne effect" in experiment or clinical trial settings).
A thorough review of psychophysiological methods for game-based experiments can be found in Kivikangas et al. (2011). As suggested there, psychophysiological measurements provide an innovative method for assessing player experiences by indexing emotional, motivational and cognitive responses to entertainment, education, therapy or other types of games (Mandryk & Atkins, 2007;Ravaja, Saari, Salminen, Laarni, & Kallinen, 2006). We next explain how, from existing literature, we derived a psychophysiological approach to studying GBL.
Psychophysiology and GBL
Tonic values of psychophysiological signals can be used to index various cognitive and emotional processes that can contribute to learning. For example, Chaouachi, Jraidi, and Frasson (2011) examined how EEG recordings obtained during learning tasks could index various aspects of a learner's state. We use tonic signals to fit the type of GBL we study: because learning in this type of game happens over long time periods, with players who can construct concepts from non-linear relationships in the data they are presented with, a more straightforward event-related approach to signal analysis would be less appropriate.
Learning in a broad sense requires environmentally prompted adaptation between states of sustained, focused attention and reflection and internalisation (Clark & Harrelson, 2002). More specifically, EEG should show changes in its feature profile dependent on this adaptation. Based on this, the variation in learning performance across a group can be modelled by features of individual EEG, which will predict the outcome if any group level causal relationship exists between brain oscillations and learning.
Cognitive performance
All frequency bands are also individually linked to signs of performance (i.e. learning prerequisites). For example, it has been observed that posterior alpha desynchronisation accompanies cognitive tasks (Klimesch, 1999).
Similarly, power in the β and γ bands is well known to vary in relation to task demands (Palomäki, Kivikangas, Alafuzoff, Hakala, & Krause, 2012); indeed, event-related synchronisation of the higher frequency ranges of the EEG can be a powerful tool for analysing cognitive processing (Krause, 2000). In prior work, β band power has been associated with phase-synchronisation of remote areas of attention networks (Gross et al., 2004), while the ratio of β to θ power has been suggested as an index of task engagement (Kamzanova, 2011). γ has been conceptualised as a selectively distributed parallel processing system (Başar, Başar-Eroğlu, Karakaş, & Schürmann, 1999), representing a universal code of central nervous system communication.
Other results suggest that δ power can be an active component during learning. Mathewson et al. (2012) show that for a "complex video game" (Space Fortress), δ activity from 250 to 600 ms after an important event was positively associated with game-score indexed learning rate. Karakaş and colleagues stated that δ is an integral component of task-relevant responding, "Delta response thus represents cognitive efforts that involve stimulus-matching and decision with respect to the response to be made" (Karakaş, Erzengin, & Başar, 2000, p. 48). Yet, because these findings are related to event-related paradigms, their applicability to tonic-data analysis remains unclear.
In work on attention deficit disorder, tonic δ wave power has been associated with inattentive states (Markovska-Simoska & Pop-Jordanova, 2010). A combined functional magnetic resonance imaging and EEG study (Jann, Kottlow, Dierks, Boesch, & Koenig, 2010) has shown that the resting state networks associated with higher cognitive functions such as self-reflection, working memory and language all displayed a positive association with higher EEG frequency bands, while being negatively related to delta and theta. Knyazev's literature review on δ oscillations gives an explanation from an evolutionary perspective (Knyazev, 2012).
Vigilance
Further features available from EEG band powers include the vigilance model (Roth, 1961). Vigilance states with established EEG indices range from relaxed wakefulness, marked by posterior α, to sleep onset marked by the occurrence of K-complexes and sleep spindles. Low-voltage EEG, meaning increased δ and θ activity, is observed as vigilance wavers between low and wakeful and thus provides an indicator of baseline likelihood of task engagement (Minkwitz et al., 2011). Vigilance regulation, maintaining a task-appropriate level of attention and arousal, is the core feature of learning. It is worth noting that vigilance is not simply a scale of activation from awake to asleep, but also the readiness to deploy directed attention, which may change levels while the individual remains at the same level of arousal.
Motivation and arousal
Another EEG feature derived from band power is hemispheric asymmetry. The asymmetry between left and right fronto-hemispheric α power may signify motivational states, according to the model of Davidson and others (Harmon-Jones, 2003). Relatively greater right frontal activation is associated with withdrawal motivation, and relatively greater left frontal activation with approach motivation. Source localisation of frontal asymmetry in the alpha frequency band (i.e. the index of frontal asymmetry in EEG studies) has indicated that it reflects activity in the dorsal prefrontal cortex (Pizzagalli, Sherwood, Henriques, & Davidson, 2005). This area is primarily known for integration of sensory and mnemonic information and the regulation of intellectual function and action, which are key aspects of conceptual learning.
An optimal arousal level has been proposed to facilitate learning (Baldi & Bucherelli, 2005; Sage & Bennett, 1973), and indeed it is important to contextualise EEG signals by the arousal level of the individual. Arousal is most often measured with EDA (or skin conductance level; also sometimes called galvanic skin response) (Bradley, 2000; Dawson, Schell, & Filion, 2000), so EDA is an often-used physiological measure for studying digital gaming experiences (Mandryk & Atkins, 2007; Staude-Müller, Bliesener, & Luthman, 2008). The neural control of eccrine sweat glands, the basis of EDA, predominantly belongs to the sympathetic nervous system, which non-consciously regulates the mobilisation of the human body for action (Dawson et al., 2000).
Hypotheses
Following these models of band powers (Jann et al., 2010;Minkwitz et al., 2011), our first hypothesis is H1: lower vigilance/task engagement as indexed by relatively greater low-frequency (delta or theta band) EEG activity will predict worse learning performance as indexed by assessed scores from pre-to post-learning tests.
We consider that those in a low vigilance state should not evince approach motivation, so that we propose hypothesis H2: poor learning performance will be predicted by relatively higher right frontal hemisphere asymmetry accompanied by increased low-frequency EEG.
Additionally, since approach motivation in the context of learning suggests the probability of task-related synchronisation (i.e. deployment of neural resources), we propose H2a: high learning performance will be predicted by relatively greater left frontal asymmetry, especially when beta or gamma synchronisation is high.
The physiological activation entrained by this neural activation should also show in participants' arousal, so we propose H2b: high learning performance will be predicted by relatively greater left frontal asymmetry especially when EDA is high.
Design
In our experiment, we wished to relate tonic physiological data to the learning outcomes of serious game players. Participants were recruited to play one hour of the Peacemaker serious game (Impact Games 2007), which aims to teach the player about the nature and causes of the Israel-Palestine conflict and has been quite successful (Burak, Keylor, & Sweeney, 2005). We tested learning outcomes using questionnaires delivered before and after play, and analysed these outcomes with respect to the psychophysiological state of the learner during play. With this approach, we aim to track the interplay between the players' physiology and the learning outcomes from GBL. Assessment of learning was controlled by splitting participants into two conditions, where the second condition had a mid-play period of discursive reflection in groups of two-three. Differences observed between groups demonstrate that their learning outcomes were not simply the result of test repetition, but that the inter-test intervention of game playing had an effect.
Participants
Recruitment of participants was conducted by advertising the study over student internet mailing lists. Potential volunteers were asked to respond "yes" or "no" if they had some prior exposure to the topic of the learning game: personal connections to Israel or Palestine or significant prior knowledge of the subject matter. These responses were used as exclusion criteria to prevent bias in the learning process.
A total of 45 participants (16 females, 29 males) volunteered in exchange for non-remunerable department store vouchers. Of the 45 participants, data-sets from 10 were excluded during the analysis due to corruption of the EEG data by artefacts, so that the final sample was 35 (15 females). In accordance with the declaration of Helsinki, participants were thoroughly briefed on the purpose and procedure of the study; each signed a written informed consent prior to the experiment. Participants were also reminded that they could withdraw from the study at any time without fearing negative consequences. As the study did not concern medical research, it required, in accordance with Finnish law, no formal ethical approval from the Ethics Review Board of Aalto University. Before testing, extra background information was obtained by means of a short questionnaire. Participants were mostly Finnish students or graduates, all non-native English speakers aged from 19 to 32 years (mean M = 24.7, standard deviation SD = 3.6), and had an average level of computer-game playing frequency (on a scale of "1: Not a lot" to "5: A lot", M = 3, SD = 1).
Procedure
The experiment procedure was divided into six main phases, as shown in Figure 1. First, participants answered 41 questions concerning the Israel-Palestine crisis, which took an hour (M = 56.2 min, SD = 18.7)-a time that did not significantly vary between conditions (t(43) = .85, ns.).
The second phase consisted of attachment of psychophysiological sensors (see details below). Each participant was seated in an electrically shielded laboratory for impedance inspection and game-play. This process took, on average, 72 min (SD = 32).
Next in phase 3, the participants were seated in front of computers and played a game tutorial (M = 7.4 min, SD = 1.7) and the first of two 30-min gaming sessions. For condition 2, participants played alone, physically and in the game.
The two game sessions were broken by phase 4. For condition 1, this consisted only of answering two quick experiential self-report questionnaires on mood and performance (not analysed herein). Condition 2 differed from condition 1 by the presence of a reflection period during phase 4: the players were brought into a group to participate in a guided discourse reflecting on their game experience, in addition to completing the self-reports. This discussion was the only point at which participants in condition 2 were not visually and aurally isolated from each other, so as to create a similar playing experience in both conditions. The lead experimenter directed the discussion period, so that it remained on topic, encouraging free discussion. In phase 5, the second 30-min game session was played. The monitoring equipment was removed, and total time attached to electrodes was M = 102 min, SD = 12. The sixth and final phase of the experiment was to answer the 41 questions a second time, taking on average 33.7 min (SD = 12.6) again without significant difference in time taken (t(43) = .95, ns.).
Proxy game
The Peacemaker serious game, shown in Figure 2, was designed to teach a peace-oriented perspective on the Israel-Palestine conflict. For a thorough study on the interaction effects between psychosocial personalities of players and their performance in Peacemaker, see Gonzalez and Czlonka (2010). It is a point-and-click strategy game, where the player acts as a regional leader and must choose how to react to the (deteriorating) situation, deploying more or less peaceful options from diplomacy and cultural outreach to police and military intervention.
Play is oriented around strategic management of conflict, taking governmental actions as shown by the menu on the left. Conflict is modelled by factions/stakeholders who each have approval ratings for the player-information can be obtained by clicking on a faction's icon. "Spontaneous" events are reported as news (marked on the screenshot by reticules), which drive the game narrative, and as player approval ratings with a particular faction vary, these events become more or less critical (in the screenshot, crisis is indicated by the colour of the reticule). Events and player actions are combined to drive approval ratings-winning is defined as achieving 100/100 on both Israeli and Palestinian ratings (see bottom left), while losing happens after scoring −50/100 on either.
Thus, players are expected to learn a new and more subtle perspective on the Israel-Palestine situation, as well as insights into the requirements of stakeholder management in a potential conflict scenario, and the capacity for dynamic decision-making (Gonzalez & Czlonka, 2010). The Peacemaker game supports these requirements; in fact, its benefit as a learning tool has caused it to be internationally used. 1 Thus, the fit to the TARGET requirements was good: Peacemaker may be played in a short duration without extensive pre-training, and imparts valuable insights into conflict resolution even in a short duration.
Questions and assessment
To assess learning, we chose a pre-post-test design using questionnaires with quantifiable accuracy. Certain criteria apply to such designs. The questions must be answered pre-game, so they could not reference too specifically the content in the game, but must be answered again post-game and also be able to elicit the participant's learning of the topic represented by that content. The questions also need to address all the (Bloom) levels of learning which the game provides scope for. The Bloom taxonomy of learning levels (Anderson, Krathwohl, & Bloom, 2001) describes the difficulty of attaining a particular level of learning-the levels themselves being represented (in Bloom's system) by descriptions of the kinds of content one would produce to show attainment of such learning.
The 47 questions (including four open questions) were generated by the authors mining the content of the game (accessed from the spreadsheets that store the textual game content). Questions were thus all designed to tap the knowledge which could be learnt from the game, and constrained to be valid by the method of sourcing from the game material. We assigned Bloom levels based on complexity of interactions between content in the question itself and the acceptable answers to the question. For instance, first-order interactions exist between a question such as "What is the religious capital of Israel?" and the answer "Jerusalem", which would place this question at the first Bloom level.
In the Appendix, Table A1 outlines the relationship between types of questions, the Bloom level assigned to them and the game data or experience which the question addresses-it also lists the number of such questions asked. Also in the Appendix are a sample list of questions and details of the assessment protocols, for readers with greater interest in the educational aspect of the study. Here, we describe the assessment sufficiently to understand the DV used in the analysis. Assessment protocols were developed for each Bloom level; for level 1-5, the protocols gave comparable scores and were thus combined to a final learning score, while a separate score was derived for Bloom level 6 open questions.
Open questions required a more qualitative approach, whose final quantification was not comparable on an interval scale to the level 1-5 questions. Unfortunately, the level 6 results did not have very high variance, since the majority of participants could not be considered to have demonstrated this high level of learning in their answers and therefore had a level 6 score of zero. The inter-rater reliability for the level 6 questions was also not good, mostly less than .4 "poor to fair agreement"; thus, level 6 results are not included in the analysis below.
For the first five Bloom levels, we derived a "correct" answer from the game documentation and data mining of empirical records (logs) of games played, i.e. a "truth" value in relation to each question was established by studying what the game had shown the players. Using these answers, we assessed fixed-choice responses by scoring the difference between the subject's first and second response with respect to how much more accurate (or inaccurate) they became, i.e. gain scores. Normalised gain scores were considered a non-prejudicial approach with high flexibility, in that the gain could be readily transformed for weighting or data exploration, as advised by Lord, French, and Crow (2009, p. 22).
Before summation to a final learning score, gain scores for each question were weighted by the Bloom level rating of the question associated (giving more weight to questions that theoretically indicated a higher level of learning) and then normalised. We used a non-linear weighting scheme to reflect the relatively greater importance of higher levels of Bloom learning; for example, higher level learning can be considered of parametrically greater importance than lower level learning, because mastery at each level requires mastery at all the lower levels first. Thus, the weights [1,2,4,8,16] were applied to levels 1-5 (for a rationale on linear vs. non-linear weighting, see Gribble, Meyer, and Jones [2003]).
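As a concrete illustration of how weighted gain scores could be combined into a single learning score, consider the sketch below; the per-question gains and the normalisation by the maximum attainable weighted score are illustrative assumptions rather than the study's exact procedure.

```python
# Non-linear Bloom-level weights for levels 1-5, as described in the text.
BLOOM_WEIGHTS = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16}

def learning_score(question_gains):
    """question_gains: iterable of (bloom_level, gain) pairs, one per question.

    Normalising by the maximum attainable weighted score is an assumption made
    for this example; the study only states that scores were normalised.
    """
    weighted = sum(BLOOM_WEIGHTS[level] * gain for level, gain in question_gains)
    max_possible = sum(BLOOM_WEIGHTS[level] for level, _ in question_gains)
    return weighted / max_possible

# A correct change on one level-5 question outweighs two level-1 regressions.
print(learning_score([(5, 1), (1, -1), (1, -1)]))  # (16 - 1 - 1) / 18 ≈ 0.78
```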
Psychophysiological data acquisition and pre-processing
For data acquisition, we used the Varioport-ARM multi-amplifier biosignal-recording device (Becker Meditech). We recorded the psychophysiological signals EEG, electrooculogram (EOG), EDA and respiration. EEG was recorded from six Ag/AgCl electrodes on a cloth cap following the 10-20 system (Niedermeyer, 2005, p. 140) at F3, F4, C3, C4, P3 and P4. AFz was used as ground and the reference montage was linked to ear clips. For eye-blink/saccade artefact correction, EOG was recorded by bipolar Ag/AgCl electrodes placed ~2 cm above and below the left eye for vertical saccade, and ~1 cm from the outer canthi for horizontal saccade. EDA was recorded from the proximal phalanges of the index and middle fingers of the non-dominant hand. Respiration was recorded using an adjustable belt transducer placed around the chest. All channels were recorded at a sampling rate of 1000 Hz and down-sampled online where appropriate. Impedance testing was carried out to ensure less than 5 kΩ resistance, and 8 min of baseline were recorded. For pre-processing, Variograf software was used to "read and reconstruct" binary data into vpd format files, from which separate software was used for each signal.
The EDA signal was pre-processed using the Ledalab (v 3.43) toolbox for Matlab 2010b in batch mode: the signal was down-sampled to 16 Hz and filtered using a Butterworth low-pass filter with cut-off 5 Hz and order eight. Then, the signal was divided into phasic and tonic components using the non-negative deconvolution (NND) method (Benedek & Kaernbach, 2010).
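An assumed Python equivalent of the down-sampling and low-pass filtering step might look like the sketch below; the study itself used Ledalab in Matlab, the polyphase resampling choice is ours, and the non-negative deconvolution into tonic and phasic components is not shown.

```python
import numpy as np
from fractions import Fraction
from scipy.signal import butter, sosfiltfilt, resample_poly

def preprocess_eda(eda_raw, fs_in=1000, fs_out=16, cutoff_hz=5, order=8):
    # Down-sample from 1000 Hz to 16 Hz (rational resampling, 16/1000 = 2/125).
    ratio = Fraction(fs_out, fs_in)
    eda_ds = resample_poly(eda_raw, up=ratio.numerator, down=ratio.denominator)
    # Zero-phase Butterworth low-pass, cut-off 5 Hz, order eight.
    sos = butter(order, cutoff_hz, btype="low", fs=fs_out, output="sos")
    return sosfiltfilt(sos, eda_ds)

# Example with a synthetic one-minute recording.
eda = np.random.default_rng(0).normal(5.0, 0.1, size=60 * 1000)
print(preprocess_eda(eda).shape)  # (960,) samples at 16 Hz
```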
For EEG analysis, Brain Vision Analyser v1.05.005 (BVA) was used to pre-process the vpd files in eight steps. We first applied Butterworth zero-phase filters, with time constant .3 s and 12 dB/octave roll-off, at a high pass of .5 Hz and a notch of 50 Hz. Second, pulse artefacts from heart rate interference were detected and corrected using BVA's MRI algorithm, taking the R-peak latency from the ECG channel with an average over 10 pulse intervals. The third step was ocular artefact correction using Gratton and Coles' algorithm (Gratton, Coles, & Donchin, 1983) with input from EOG channels commonly referenced with the EEG. The fourth step was segmentation into 1-s epochs (extracted from the trials of interest), followed fifth by BVA's Artefact Rejection algorithm testing 100 ms intervals for minimum/maximum amplitude of ±200 µV and lowest allowed activity (maximum minus minimum) of .5 µV. Sixth was Fast Fourier Transform power density calculation over 1-s epochs, with 10% Hanning window and resolution .5 Hz. In the seventh step, we corrected for myogenic noise, i.e. artefacts from gross motor interference by the participant, including jaw clenching and head scratching. Due to the low channel count, blind source separation methods were unsuitable for this correction; instead, the power regression method was used, which Davidson described initially and again validated more recently with others (McMenamin, Shackman, Maxwell, Greischar, & Davidson, 2009). The regression method was implemented in Brain Vision Analyser v1.05 by the authors, and compares power density between the alpha band and the high-frequency band 70-85 Hz. Finally, the eighth step was feature selection, described below.
• Power in the five EEG frequency bands δ, θ, α, β and γ was obtained from the mean of the six recording electrodes and band-pass filtered with settings as described above.
• Frontal (F) asymmetry of EEG was derived by taking the natural logarithm of the ratio of mean alpha power at F3 to mean alpha power at F4, that is, ln(α:F3 ÷ α:F4). With odd-numbered electrodes on the left-hand side of the head, this equation implies that relatively greater left asymmetry is denoted by positive numbers (i.e. α:F3 > α:F4 implies α:F3 ÷ α:F4 > 1, and ln maps [1, ∞) to non-negative values). A minimal sketch of how these spectral features can be computed is given after this list.
• Tonic EDA was obtained by the NND method as explained above.
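As referenced in the list above, a minimal sketch of how these spectral features could be computed from cleaned 1-s epochs follows. The band limits, the plain Hann taper (standing in for BVA's 10% Hanning window) and the use of NumPy's FFT instead of the Brain Vision Analyser pipeline are assumptions made purely for illustration.

```python
import numpy as np

# Assumed band limits in Hz; the study's exact boundaries are not restated here.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power(epoch, fs, lo, hi):
    """Mean spectral power of one 1-s epoch between lo and hi Hz."""
    win = np.hanning(len(epoch))
    psd = np.abs(np.fft.rfft(epoch * win)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def frontal_alpha_asymmetry(f3_epoch, f4_epoch, fs):
    """ln(alpha power at F3 / alpha power at F4); positive means relatively greater left alpha power."""
    lo, hi = BANDS["alpha"]
    return np.log(band_power(f3_epoch, fs, lo, hi) / band_power(f4_epoch, fs, lo, hi))

# Example with two synthetic 1-s epochs sampled at 1000 Hz.
rng = np.random.default_rng(1)
print(frontal_alpha_asymmetry(rng.normal(size=1000), rng.normal(size=1000), fs=1000))
```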
Statistical analysis
To obtain IVs for statistical modelling, mean values of each feature were derived from 1 min epochs across the playing periods, giving a data-set with 60 rows per participant and one column per IV, DV or factor. This "tonic" approach allowed us to test for relationships that hold across trial duration but are not specific to individual, potentially non-repeating events. Data was examined to check for distribution characteristics. To achieve approximate normality, we rectified the data with an additional constant to achieve minimum bound of 1.0 and calculated the z-scores. After excluding any rows that had a z-score greater than 2.58 (i.e. any outliers plus the most extreme 1% of the distribution), data was transformed by taking the square root. With all data ≥1.0, this transform preserved relative values while helping to correct skew. Although the data was still not normal by Kolmogorov-Smirnov tests, this was not unusual for large data sets according to Field (2009, p. 139), whose visual criterion (histogram-to-normal curve matching) and z-score criterion (95% < 1.96, less than 1% > 2.58) were used to judge that the data showed a good approximation to normal.
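A rough sketch of this transformation chain (shift to a minimum bound of 1.0, z-score screening at |z| > 2.58, then a square-root transform) might look as follows; applying it to a whole feature column at once, rather than within participants, is an assumption of the example.

```python
import numpy as np

def transform_feature(x, z_cut=2.58):
    """Shift to min 1.0, drop values with |z| > z_cut, then square-root transform."""
    x = np.asarray(x, dtype=float)
    x = x - x.min() + 1.0                    # rectify so the minimum bound is 1.0
    z = (x - x.mean()) / x.std(ddof=1)       # z-scores for outlier screening
    kept = x[np.abs(z) <= z_cut]             # exclude the most extreme observations
    return np.sqrt(kept)                     # sqrt preserves relative order for values >= 1.0

print(transform_feature([3.2, 4.1, 5.0, 4.4, 25.0]).round(2))
```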
The generalised estimating equations (GEE) procedure in SPSS was used to test all hypotheses, to support a repeated measures model over the 60-epoch rows. We specified participant ID as the "Subject" variable and trial number and minute as the within-subject variables. On the basis of the "quasi-likelihood under independence model criterion", we specified autoregressive as the structure of the working correlation matrix. We specified a normal distribution with identity as the link function. DV was gainExp, and IVs were epoched features of the physiological signals, as mentioned in the previous section: δ, θ, α, β and γ band power; frontal asymmetry; and tonic EDA.
Due to the natural variation between individual physiologies, psychophysiological data must always be baseline-corrected before analysis. This is done by adding one extra factor to each model for each IV, corresponding to the mean value of pre-play baseline measurement of the signal for that IV. The final factor in the models reported is Condition, which was added to all models as a control. Although the analysis resulted in multiple tests, multiple comparison testing was not performed because the comparisons were planned and the IVs based on band powers are not independent.
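For readers who prefer to see the model written out, an analogous specification can be sketched with Python's statsmodels; the column names (gainExp, delta_task, delta_base, condition, subject, minute), the input file and the data frame layout are assumptions that mirror the description above, not the authors' actual SPSS syntax.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.genmod.cov_struct import Autoregressive
from statsmodels.genmod.families import Gaussian

# Hypothetical file: one row per participant x 1-min epoch, columns named as assumed above.
df = pd.read_csv("epoch_features.csv")

# GEE with normal distribution / identity link, autoregressive working correlation
# within participants, and baseline band power plus Condition included as controls.
model = smf.gee(
    "gainExp ~ delta_task + delta_base + C(condition)",
    groups="subject",
    data=df,
    time=df["minute"].to_numpy(),
    cov_struct=Autoregressive(),
    family=Gaussian(),
)
print(model.fit().summary())
```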
GEEs are an extension of the generalised linear model; they were first introduced by Liang and Zeger (1986), and Ballinger (2004) provides a more complete introduction to GEEs for longitudinal data analysis. GEEs allow relaxation of many of the assumptions of traditional regression methods such as normality and homoscedasticity, and provide unbiased estimation of population-averaged regression coefficients despite possible misspecification of the correlation structure. Where psychophysiology is modelled in several variables, the usual assumption of independent observations would be violated. Unless the model accounts for the "within" correlation, the result may inflate the Type II error; thus, GEEs suit the analysis of time series psychophysiological data well.
Results
The gainExp variable had M = 12 and SD = 12, ranging from −14 to 42. To illustrate that significant learning did occur, we performed a t-test on gainExp against a mean of 0, t(44) = 6.9, p < .001. Furthermore, we compared the learning scores for each condition to show that learning outcomes were not independent of the inter-test intervention. Scores were significantly different between condition 1 and condition 2, by independent samples t-test, t(43) = 2.8, p < .01, with the direction of difference favouring condition 1. Thus, the group who did not have a mid-play reflection session had a higher gainExp score; the effect was reported in more detail elsewhere.
The statistically significant psychophysiological results are summarised in Table 1, showing each physiological variable (IV) that predicted learning (DV) and associated statistics.
To explore the relationship between EEG band power and learning, we modelled gainExp scores with band power, by specifying one GEE for every EEG band. Covariates were the main effects of baseline band power and task-level band power (power during game play). Supporting H1, task-level δ-band power was negatively associated with gainExp scores, B = −.001, SE = .0003, Wald χ²(df = 1) = 3.9, p < .05. Thus, a potential indicator of reduced attentiveness and vigilance tended to increase as learning performance decreased.
The relationship in our sample between learning and power density in each EEG band is shown in Figure 3 below, where the top row is the grand average scalp-distributed power density of high-scoring players (median split on gainExp) and the bottom row is the grand average of low-scoring players. Grand averages were derived from regression-corrected data in BVA. One can clearly see the difference between scoring levels, especially in δ, as high scorers have low frontal power and low scorers have high frontal power.
Moving to the construct of motivation, the claim of H2a was supported for both EEG bands: to wit, that greater relative left frontal asymmetry accompanied by β or γ synchronisation would predict higher learning scores. F-asymmetry × β-band power significantly predicted gainExp, B = .001, SE = .0004, Wald χ²(df = 1) = 5.4, p < .05, while F-asymmetry × γ-band power also predicted gainExp, B = .000, SE = .0001, Wald χ²(df = 1) = 4.7, p < .05. Each of these results used a separate GEE model with main effects of baseline F-asymmetry, baseline band power, task-level F-asymmetry and task-level band power, all as covariates, and the task-level F-asymmetry × task-level band power interaction.
(Figure 3 notes: Row A shows high-scoring players (by median split); Row B shows low-scoring players. Scale is normalised between zero and one.)

To explore the role β and γ play in the interaction, we made a graphical examination of the levels of the interactions (see Figure 4). Each panel displays two levels of F-asymmetry on the abscissa, split at quartile 1; mean gainExp is on the ordinate; two levels of each IV are depicted by the ◊ (diamond) and • (ball) symbols. β is split at the median and γ is split at quartile 1. Thus, Figure 4 panels A and B show that the effect of motivation on learning scores, for instance approach motivation indexed by relatively greater left frontal asymmetry, may be modulated by task-related neural synchronisation. We can also see this in Figure 3, where high scorers also show greater right frontal beta power than low scorers, mirroring Figure 4 panel A.
Finally, H2b claimed that motivation is a natural concomitant of physiological arousal, indexed by EDA. A link between them and learning performance was supported by the result of a GEE model with main effects of baseline F-asymmetry, baseline EDA, task-level F-asymmetry and task-level EDA, all as covariates, and the task-level F-asymmetry × task-level EDA interaction, which predicted gainExp with marginal significance, B = .001, SE = .0003, Wald χ²(df = 1) = 3.9, p = .05. To examine this interaction, again we performed visual analysis of the variables and selected the most informative panel to display in Figure 4, panel C, where the abscissa shows the median split of F-asymmetry and EDA is split at quartile 1. Panel C shows that the effect of frontal asymmetry was modulated by the relative arousal of the participants.
Discussion
In this study, we examined how psychophysiological indices of attention, arousal, vigilance and motivation during playing of a serious game help to clarify the players' likelihood to learn declarative knowledge.
The δ vs. gainExp result is novel for the type of learning measured, but in terms of interpretation, the role of δ oscillations described in the literature is not at all simple. However, it is relevant that results, which suggest that δ is linked to learning, have come from event-related analyses, whereas tonic studies of δ waves such as ours have tended to suggest that excess δ is a sign of inattention (Markovska-Simoska & Pop-Jordanova, 2010) or low vigilance (Minkwitz et al., 2011).
Fronto-hemispheric asymmetry and learning
The interaction of F-asymmetry with both β and γ-band power predicted learning, in both cases with positive relation. The graphical investigations of the relationship between F-asymmetry and gainExp showed that it is usually positive: mean learning scores are higher when left frontal power is relatively greater; this is regardless of the value of interacting variables-except at the levels shown in panels A and C in Figure 4. These two panels show the modulating influence of certain levels of β and EDA wherein participants with relatively right frontal power scored better. These were the only circumstances in which the relationship between F-asymmetry and gainExp is negative. The positive relationship is not a main effect (the GEE model F-asymmetry vs. gainExp was not significant), but it provides the background against which to consider the three interactions involving F-asymmetry.
Frontal asymmetry is suggested to index motivation, with relatively greater left activation signifying approach motivation and vice versa. Thus also of note is the range of F-asymmetry: it peaks at −.5, suggesting that right frontal power was dominant over left and, in general, motivation was more "withdraw" than "approach".
Figure 4. Three interactions involving F-asymmetry shown in panels: A-β, B-γ and C-EDA.
Notes: The mean of gainExp is on the ordinate, and error bars are 95% CI. F-asymmetry is split at quartile 1 in panels A and B; it is median split in panel C. β is median split; γ and EDA are split at quartile 1. Lower and upper split portions are shown by ◊ and • symbols, respectively.
Figure 4 panel C shows that the overall positive relationship of F-asymmetry vs. gainExp is strongly reversed for the lowest quartile of EDA. And in the lower half of F-asymmetry, i.e. stronger withdrawal motivation, the two arousal levels show large differences in learning score. The combination illustrates a group of participants who were probably not well focused: perhaps due to boredom and fidgeting. From another perspective, when highly aroused it paid to be more approach motivated; when less aroused, the opposite was true.
Panel A shows F-asymmetry split at the lowest quartile and β split at the median; low β implies low scores when withdrawal motivation is strongest, but for participants with more balanced motivation, their scores with low β band power exceed those with higher power. Similarly for F-asymmetry × γ, there is an adjustment of the effect of low band power when F-asymmetry is more balanced. In both cases, this adjustment is more evident when these upper bands contain less power; the effect of F-asymmetry on learning is greater-it pays more to be in the middle "neutral" motivational state. When β or γ contain more power, the hemispheric power distribution is less relevant.
We stated that F-asymmetry was predominantly right, indicating withdrawal motivation. The participants' self-reports were generally positive (valence was 12% above neutral; positive affect was 16% above neutral), which warrants a closer look at this issue and at the possible interpretations of F-asymmetry. In a separate report on the same study, we showed how decreased mental workload (i.e. cognitive efficiency) and positive affect predict increased learning. Rotenberg and Arshavsky (1997) showed that a mental imagination task can increase right hemisphere activity. Relevant to this is the fact that the Peacemaker game is an abstract simulator; in other words, it simulates a scenario but does not show explicit representations of the actors or events contained there; rather, it evokes these in the player's imagination using icons, news reports and narrative. Gable and Harmon-Jones (2008) state that the intensity of motivation determines the focal range of attention: low-intensity motivation, whether approach or withdrawal, results in broader attention.
Taking all observations into account, we suggest that the F-asymmetry result shows that playing the game induced a more imaginative cognitive approach characterised by greater right hemisphere activation. Figure 4 suggests that the highest learning scores were obtained by those who had either a) low-arousal and -withdrawal motivations or b) reduced high-frequency band power and more balanced motivation. Taking Gable and Harmon-Jones (2008) into account, the latter group b) suggests that lower intensity motivation-and thus broader attention-is positively modulated by lower β and γ, which, as indices of integrative attention networks, might indicate the benefit of reduced distractibility. Meanwhile, the former group a) appears to be a balance between intensity of motivation and level of arousal; especially recalling that those with relatively greater right F-asymmetry performed better when their high-frequency band power was higher, this group appears to represent those who were on-task and focused. The cognitive efficiency interpretation we earlier proposed supports these explanations. 2 Prior results on asymmetry mostly arise from classical event-related protocols, contrasting with our experiment. It is natural that if local hemispheric regions support distinct functions, then the more varied are the range of functions in a protocol, the more both hemispheres must be activated (McGilchrist, 2009, p. 26). Thus, we might say: the participants who displayed more task engagement in Peacemaker's continuous information integrating protocol were more likely to use their whole frontal cortex and evidence more balanced mean power.
The asymmetry results also seem to link to the vigilance result because the lowest scorers had the highest withdrawal motivation rating and highest delta values, suggesting their withdrawing and lack of vigilance sprang from the same source-perhaps one engendered the other, or both were engendered by dissociative mood.
General issues and future work
The results show us that various measures of the physiology can be predictive of learning, as measured by a self-report questionnaire. There are naturally several caveats as follows.
The learning measure itself must be understood as an imperfect and limited measure, because it is not possible to design a reasonable-length questionnaire to cover all things that can be learnt in a serious game. In the light of this, our claims should not be interpreted as over-reaching.
The seven protocol phases described were designed to help achieve a measurable learning outcome. Orientation of the participants to the topic by the pre-test was a concern: the long period of distraction during sensor attachment may have partially addressed it. The game session length was maximised with respect to the overall length of the experiment and the other periods, to enable a better chance of learning by prolonged exposure. There was an impetus to minimise the total time of the learning exercises to reduce the discomfort of wearing the sensors. Nevertheless, we used a total playing time consistent with that used in other Peacemaker studies (Gonzalez & Czlonka, 2010), where reasonable learning results were reported.
The complexity of the results, with many interactions, hints that one should not expect a simple linear relationship between learning and a given psychophysiological construct. It may be valid to use a single-trial analysis to look for such relationships, and some evidence suggests that such analyses can cluster events in the game play around stable and significant psychophysiological reactions . In future work, the study of these event reactions could give further insights into GBL.
In terms of experiment design, it would be ideal to increase the sample size. N = 35 is small compared to most learning studies; however, it is more than the usual sample size for psychophysiological experiments. Since our main focus is on the psychophysiological method, N = 35 is sufficient for reporting existing results. It is also apparent to the authors that repeated sessions of the same protocol would permit a more thorough analysis, while changing games every session to avoid practice effects.
EEG was used to characterise and measure attention, with the ATT index and others. However, the proper measurement of attention should include behavioural measures as dependent variables. Unfortunately, these could not be explicitly included in our protocol task, as it was dedicated to learning. Nevertheless, we can assume with some confidence that such constructs as attention are included in the final performance scores from the game and questionnaire.
Conclusions
We reported on a study of the psychophysiological correlates of learning in serious games. The learning test instrument was assessed in its two parts, a set of fixed-format questions and a number of open questions, all on the topic of the game. The significant results apply only to the fixed-format questions, mainly because many participants did not display Bloom level 6 learning.
In summary, we found that participants who displayed less δ-band power and had an elevated RR and ATT index, and those with more balanced F-asymmetry, were more likely to score highly. Some exceptions exist, such as that for the highest levels of RR, it can be beneficial to have increased δ-band power, or that those with low arousal performed better when F-asymmetry was more imbalanced. The implication of these results is that participants' learning styles are sub-served by differential activation patterns of the physiology. It may be useful to consider this result in designing similar games and their pedagogical application.
By dint of the detailed picture they presented, the psychophysiological methods used show their usefulness for experience analysis, which can be considered a bonus in the context of studies in the TEL field; perspectives on this argument from a similar study were also presented in Cowley et al. (2014).

Notes
1. See for instance http://gaming.wikia.com/wiki/PeaceMaker_(video_game) and also http://phe.rockefeller.edu/docs/PeresCenterPressRelase.pdf.
2. There is an interesting link between these conclusions and the seminal work of Malone (1981), who observed that learning games worked best when evoking "curiosity" and "fantasy".
3. Competing but equally correct answers are not what was initially listed in the game documentation (which gave the original basis for forming the question), but were proven to be equally valid by empirical means (mining the game log files of participants
Quantitative questions
Below, we list a sample of the quantitative questions. The answer to the question is listed directly below it. Following that are the assumptions behind the question; these include any assumption supporting the validity of the answer, plus the necessary condition for the question to work in the experiment, i.e. how the player learns the information.

These weights were derived from the responses of the AI to actions corresponding to those named, in the games played by test participants. We have estimated as follows: a = 3, b = 4, c = 1, d = 2, e = 3, f = 5, g = 1, h = 4, i = 2.

(assumptions) We assume the correctness of the answer based on observation/play. Player can infer from observing relevant variables while trying this strategy - BLOOM 4
11. Which of the following regional countries share a border with the state of Israel?

In the Israeli-Palestine conflict, as in the game, it is often the case that a particular action or policy by a leader will be disapproved by one side as much as it is approved by the other. This is known as the zero-sum effect. Now rate each sequence for how well it would please both sides at the same time. The score indicates how pleased both sides are after all the actions in the sequence are done. So a score of 1 counts as "really displeases one or both sides" and a score of 5 counts as "really pleases both sides".

Only c) should be ticked, and the ratings given (x, y) are evaluated by (x − y) × w, where w is given below.

(assumptions) We assume the correctness of the answer based on observation/play. Each strategy was tested three times. Player can infer from observing relevant variables while playing - BLOOM 5
19. Of all the interested parties (represented in the game as groups and leaders), [____________] are most opposed to your plans (i.e. have the lowest approval of you in the game).

(answer) Militants - the name of any one should suffice, e.g. Hamas. This question should only be scored if Q32 was answered correctly.
Assessment protocol
Note: gain scores are potentially negative-if answers go from right to wrong, they are given negative points. However, this "negative learning" score can be treated as zero in post-processing, achieving the same effect as an initial assumption of no negative learning in an exploratory analysis.
For questions (of level 1-5) that requested specific information but allowed open answers (free text input), we defined a synonymy set, that is, a set of answers which could legitimately be given in lieu of the "correct" answers.
Rating questions were assessed by a formula (explained below) that preserved the magnitude of the subject's response preference without giving an arbitrary "truth" value to the rating item.
All level 1-5 questions thus obtained a gain score. These were then weighted. Initially, weights were the product of the gain score and the number of the Bloom level, which gives a linear increase in importance over Bloom levels. Yet the "learning value" of the Bloom levels is not defined in a scalar sense, only as ordinals, so there is more than one option supported by theory for weighting each level. For instance, the importance of learning at higher levels could be considered parametrically greater than lower levels (because mastery at each level is considered to require mastery at all the lower levels first): applying this changes the weight values from linear scaling [1,2,3,4,5,6] to exponential scaling [1,2,4,8,16,32].
○ 1st and 2nd response are the same = 0 points.
○ 2nd response is correct and 1st response is not = 1 point.
○ 1st response is correct and 2nd response is not = −1 point.
○ For every response that is the same both times = 0 points.
○ For every correct response in 2nd answer (that is not in 1st answer) = 1 point.
○ Every correct response in 1st answer (that is not in 2nd answer) = −1 point.
○ Every incorrect response in 1st answer (that is not in 2nd answer) = 1 point.
○ Every incorrect response in 2nd answer (that is not in 1st answer) = −1 point.
• For single answer "open" questions (e.g. Q19)-the right answer, or a synonym, or a competing but equally correct answer, 3 is in 2nd response but not in 1st response = 1 point.
• For multi-answer "open" questions (e.g. Q13)-every correct answer, or a synonym, or a competing but equally correct answer 1 , in 2nd response that is not in 1st response = 1 point.
• Rating questions (e.g. Q10) are assessed by an objective formula: (y − x) × w, where x, y and w are defined as follows (refer also to question 10 above).
○ So, for example, in this one rating-type question we have these nine items, with a weight attached (either −1, 0, 1) which was derived from the data of game players by asking, for each rating item, what was the reaction in the variable of interest after the action that is cited in the rating item (in question 10, the variable of interest is the relationship between Israeli and Palestinian leaders, defined by a scalar in the game).
Thus, we do not pre-judge what score the rating should be, but rather only whether the action associated with the rating was positive, negative or neutral (with respect to the question asked). This is defined by our weights w.
By subtracting the first score from the second, we get a magnitude and a sign. Say in item 10.a (with weight −1), the subject responds first with 4, second with 2. Then, the calculation would be (2 − 4) × −1 = 2. The subject has downgraded his rating of that action (which was defined as a bad action for the purpose of building trust, based on the data), from more positive (4) to more negative (2), so his score is +2, preserving the magnitude of the change. If he had answered in the opposite way, first 2 and second 4, he would be upgrading his estimate of the quality of the (bad) action, and thus would get a score of (4 − 2) × −1 = −2. Thus, we preserve magnitude without giving an ad hoc "true" value to the rating item. (A small code sketch of this scoring rule is given after the list below.)
• The procedure for assessing open questions is detailed in the next section.
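As referenced above, a small sketch of the rating-question scoring rule, using the example weight for item 10.a, could look like this; the function name is an illustrative choice.

```python
def rating_gain(first, second, weight):
    """Gain for one rating item: (second - first) * weight, with weight in {-1, 0, 1}."""
    return (second - first) * weight

# Item 10.a carries weight -1 (a 'bad' action for the purpose of building trust).
print(rating_gain(4, 2, -1))  # +2: the subject downgraded a bad action
print(rating_gain(2, 4, -1))  # -2: the subject upgraded a bad action
```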
Open question assessment
From the 41 questions, 6 were open questions of the form: "What is your understanding of [a topic]?" or "Describe why you [responded to the antecedent quantitative question as they did]?" These open questions were analysed separately, since they were not held to be immediately comparable to the quantitative questions in terms of scoring. They represented opportunity for wider contemplation when answering and thus enabled responses that might (or might not) be evaluated as containing Bloom's "level 6" knowledge.
Healthcare professionals’ perspectives towards the digitalisation of paediatric growth hormone therapies: expert panels in Italy and Korea
Introduction: To analyse the perspectives of healthcare professionals (HCPs) regarding the acceptance of digital health solutions for growth hormone (GH) deficiency care. This study identified factors impacting HCPs' intent to use and recommend digital solutions supporting recombinant-human growth hormone (r-hGH) therapy in Italy and Korea with a use case of a connected drug delivery system (Aluetta® with Smartdot™) integrated in a platform for GH treatment support (the Growzen™ digital health ecosystem). Methods: Participatory workshops were conducted in Rome, Italy, and Seoul, Korea, to collect the perspectives of 22 HCPs on various predefined topics. HCPs were divided into two teams, each moderated by a facilitator. The workshops progressed in five phases: introduction of the project and experts, capturing views on the current context of digitalisation, perceived usefulness and ease of use of Aluetta® with Smartdot™, exploration of the perception of health technology evolution, and combined team recommendations. Data shared by HCPs on technology acceptance were independently analysed using thematic analysis, and relevant findings were shared and validated with experts. Results: HCPs from both Italy and Korea perceived Aluetta® with Smartdot™ and the Growzen™ based digital health ecosystem as user-friendly, intuitive, and easy-to-use solutions. These solutions can result in increased adherence, a cost-effective healthcare system, and medication self-management. Although technology adoption and readiness may vary across countries, it was agreed that using digital solutions tailored to the needs of users may help in data-driven clinical decisions and strengthen HCP-patient relationships. Conclusion: HCPs' perspectives on the digitalisation of paediatric GH therapies suggested that digital solutions enable automatic, real-time injection data transmission to support adherence monitoring and evidence-based therapy, strengthen HCP-patient relationships, and empower patients throughout the GH treatment process.
Introduction
Digital health technologies and the use of connected devices are progressing rapidly, becoming an integral element of healthcare delivery (1, 2). Digital health technologies have paved the way for enhanced patient care and management of chronic conditions, especially with the advent of connected devices that facilitate the capture of objective data about patients (2). The global 5-year Easypod Connect Observational Study suggests that connected digital devices can significantly improve patient outcomes for recombinant human growth hormone (r-hGH) therapy in children with growth failure (3, 4), thereby enhancing patient adherence. They also improve therapeutic monitoring and the patient support provided by healthcare professionals (HCPs) (5-7), which is a key step in the patient's care pathway (8, 9). Adoption of digital health solutions by HCPs is correlated with desired outcomes, warranting the need to understand HCPs' attitudes towards prescribing their use (9). HCPs remain at the forefront of creating awareness, motivating, and providing family-centered, personalised care and management; hence, the adoption of digital solutions requires participatory assessment of their perspectives (5-7). Understanding the factors associated with HCPs' willingness to prescribe digital health solutions to patients is important (6, 8, 10-14). However, limited information exists regarding barriers and enablers for the use of digital health ecosystems in long-term paediatric care (15).

An example of a digital health ecosystem supporting the monitoring and self-management of patients with growth hormone deficiency (GHD) is the Growzen™ digital health ecosystem. This solution includes Aluetta® with Smartdot™, a novel digitally connected, reusable, multi-dose injection pen device for administering r-hGH (Saizen®, Merck KGaA, Darmstadt, Germany). Incorporating a smart attachment for data transmission, this innovative adherence sensor-based device combines the ease of use of the Aluetta® manual pen with advanced capabilities, and its integration with a digital health ecosystem empowers HCPs with remote monitoring of patient adherence, enabling timely intervention and decision-making (16). This ecosystem currently includes Aluetta® with Smartdot™, Growzen™ Buddy (a mobile app for patients and caregivers to guide them through growth hormone (GH) therapy), and the Growzen™ Connect healthcare professional platform (used by HCPs to track treatment adherence and outcomes for GH patients) (Figure 1).

To assess the potential impact of digitalisation on the willingness of HCPs to integrate the connected GH injection pen into their clinical practice, participatory workshops involving expert panels were conducted in Italy and Korea in 2022 to capture a broader view of the acceptance of the solution across diverse healthcare ecosystems. This study aimed to understand the extent of acceptability, and the corresponding perspectives, in countries with different levels of readiness and cultural acceptance for a digital healthcare ecosystem enabled by the use of the connected injection pen for r-hGH. This qualitative study explored current attitudes towards the digitalisation of r-hGH therapy in the two countries through panel discussions, analysed HCPs' perceptions regarding the potential acceptance of the connected device compared with non-connected alternatives (e.g., pen-and-paper adherence diaries), and assessed factors affecting HCPs' intent to use and integrate digital health solutions supporting r-hGH therapy in clinical practice.
Experts and locations
Participatory workshops were conducted in Italy and Korea to explore the perceptions of two expert panels on the acceptance of connected devices and technological evolution, considering Aluetta® with Smartdot™ within the Growzen™ digital health ecosystem as an example of a digital solution. The workshops lasted 4 h each and were conducted on 25 November 2022 in Rome, Italy, with eight HCPs (five paediatric endocrinologists and three endocrinologists) and on 2 December 2022 in Seoul, Republic of Korea, with 14 paediatric endocrinologists. HCPs with experience in paediatric/transition/adult GHD treatment participated in the panels regardless of their previous digital health experience. Adult endocrinologists in Italy were asked to provide their opinion on patient care focussed on the transition of patients with childhood-onset GHD to adult care.
Workshop structure, activities, and materials
Experts in each workshop were grouped into two teams, balanced by professional expertise, age, and sex, to perform several activities independently. Each team was moderated by a facilitator with experience in participatory methods. The two teams began together in the same room for introductions and an explanation of the phases and tasks of the workshop, then moved to separate rooms to capture their perceptions, and finally reconvened for conclusive discussions and recommendations. Data from all experts were collated for qualitative analysis.

Sticky cards representing two contexts were provided to the experts to identify factors and share their opinions on various predefined topics. The first context was technology acceptance encompassing self-administration, wherein patients administered the therapy themselves while caregivers looked after their health and managed their treatments. The other context was therapy administered by caregivers, wherein caregivers took overall charge of health and therapy management because paediatric patients were not autonomous enough to be responsible for their own treatment.

A description of the Aluetta® with Smartdot™ device within the Growzen™ digital health ecosystem was provided in an introductory video. Additionally, experts had the opportunity to see and touch the device during the session. During the workshop, experts were asked to provide their opinions on Aluetta® with Smartdot™ orally, and the session was audio recorded to complement the notes from the moderators. Experts were prompted by various predefined topics based on their clinical experience. Each workshop progressed in five phases (Figure 2).

[Figure 2: Workshop structure.]

The first phase comprised an introduction of the project and experts, with a description of the workshop structure and the concrete tasks and activities to be performed. In the second phase, views on the current context of digitalisation were captured, and the experts provided opinions and comments on several predefined topics such as the importance of treatment adherence, the perceived usefulness of collecting patients' adherence data, current methods used to collect adherence data, the use of digital health tools with a focus on HCPs' experience in using these solutions in their daily clinical practice, and perspectives on patients' attitudes towards the use of digital health tools. Experts identified factors related to three entities (patients/caregivers, healthcare centres, and HCPs) in the template and the relationships between these entities (care services, facilitating conditions, and HCP-patient relationships).
The third phase assessed the perceived utility and ease of use of the Aluetta® with Smartdot™ device (as an example of a digital health solution) considering both of the defined contexts. Following the concrete instructions provided by the moderator, preceded by the introductory video of the device, experts discussed and identified relevant issues, strengths, and weaknesses of the digital device in the context of GHD management in their respective countries. The experts were provided with predefined study cases, namely, new pen users and Aluetta® pen users (i.e., without the Smartdot™ attachment) with a lack of adherence, to guide the discussions. A set of predefined topics was pursued that included relevant issues such as ergonomics, perceived ease to configure the device, perceived ease to use the device, perceived learnability, perceived ease to teach the configuration process, flexibility (removing Smartdot™), perceived usefulness for HCPs, perceived usefulness for patients, potential adoption for each study case, appropriateness for each study case, and potential risks associated with the use of the digital device.

[Figure 1: The Growzen™ digital health ecosystem. Aluetta® with Smartdot™ provides growth hormone injections to patients and transmits real-time injection data. Growzen™ Buddy is a mobile app for patients and caregivers to guide them through growth hormone therapy. The Growzen™ Connect healthcare professional platform is used by healthcare professionals to track treatment adherence and outcomes for growth hormone patients. HCPs, healthcare professionals.]

In phases 2 and 3, predefined templates were given to the HCPs along with a set of sticky cards representing predefined topics to facilitate the activity. In the fourth phase, HCPs' perceptions of health technology evolution in paediatric/transition/adult GHD care were explored. Three scenarios representing different technological generations were introduced by the moderator. The first scenario represented the use of a pen without any digital capability and a manual diary to collect adherence data (non-digital alternative), whereas the second scenario consisted of a pen without any digital capability and a mobile app to register adherence data through manual inputs (partially digital alternative). The third scenario represented the use of Aluetta® with Smartdot™ integrated in the Growzen™ digital health ecosystem to collect adherence data (fully digital alternative). Templates used in this phase represented these three scenarios and included topics related to the corresponding activity, focussed on the adherence data collection process, the potential impacts of each scenario on daily practice, and patient self-management considering both contexts. Experts were asked to identify the strengths and weaknesses of the scenarios regarding the discussed topic. Additionally, the study cases defined in the previous phase were used to guide the discussions.

In the fifth and final phase, all experts were combined in one room for team recommendations. Each team briefly presented the identified factors and discussed them along with the most relevant findings on the use and recommendation of digital health solutions, in particular Aluetta® with Smartdot™, in the current healthcare setting reported during the previous activities. The moderator asked experts to describe their opinions about the relevance of the factors and summarised the conclusions reached in each session.
Data collection and analysis
The participatory workshop sessions were audio-recorded and reviewed by the facilitators. Thereafter, relevant comments were transcribed, and information from the facilitators' notes and the text included in the predefined templates was collated. The data collected in this study were evaluated using a qualitative approach similar to thematic analysis. Relevant findings were shared and validated with the experts.

All procedures performed in this study were in accordance with European and national ethical guidelines, the European Code of Conduct for Integrity in Research, the Universal Declaration of Human Rights, and the Declaration of Helsinki. HCPs were informed about the research topic and procedures before joining the expert panel. The experts provided their opinions based on their experience on this topic and were not the main subjects of the study. The experts' opinions included as quotes were pseudonymised. No sensitive information was used or collected, and the contributions of the expert panel had no impact on others.
ORR reviewed all collected data, coded them, and defined themes, after which all authors reviewed the proposed themes and refined them until consensus was reached.
Results
The results presented here describe HCPs' perspectives towards the digitalisation of growth hormone therapies and do not originate from scientific data. Overall findings suggest that each country's health system and socio-economic and educational landscape may vary, shaping HCPs' perspectives on the utility of digital health solutions and making them difficult to transfer seamlessly from one setting to another. Four major themes were identified for presenting the data analysis: 1) understanding the context of digital transformation, 2) relevant digital health design considerations, 3) perceived benefits and risks of using digital solutions for adherence monitoring, and 4) perceived usefulness and ease of use of Aluetta® with Smartdot™ and the Growzen™ digital health ecosystem.
Understanding the context of digital transformation
In this theme, experts' comments expressing their perspectives on how their organisations were supporting the use of digital health solutions, including adherence monitoring, were recorded. Stakeholders' perspectives on the use of novel solutions, along with the current strategies used to manage growth disorders, were also considered.
Organisational and technical support
This subtheme collected experts' comments about support provided by organisations, healthcare institutions, and other entities for the use of digital health solutions towards management of growth disorders.
In both Korea and Italy, experts agreed on the importance of monitoring adherence to GH treatment. However, experts in Italy perceived that the use of digital health solutions to monitor adherence was not a priority for the Italian healthcare system owing to the lack of involvement of health institutions. The potential benefits of such solutions were not considered in their evaluation. They mentioned that adherence is not often used in cost-effectiveness analyses.

In Italy, experts pointed to the pharmacoeconomic approach the healthcare system takes when making decisions. The Korean panel also reported economic aspects, such as the fact that some GH treatments were financed by insurance companies, which may lead to higher adherence, as suggested by academic reports. In these circumstances, monitoring adherence to treatment was mandatory, and caregivers were required to provide adherence data to physicians.

"In case of continuous glucose monitoring it is mandatory that a certain percentage of adherence is observed in order to receive reimbursement (insurance coverage) from the government. So, caregivers are obliged to show us the data and doctors also check it thoroughly and make data entry." [Korea]

While Korean HCPs could perceive the use of digital health solutions as an additional task to perform within their already short visit times, they mentioned extrinsic motivation, such as the payment of fees, as a relevant factor, whereas the Italian panel emphasised the role of healthcare institutions in convincing and supporting HCPs in integrating these solutions (additional bureaucracy, training, clear guidelines, etc.). Both panels agreed on the need for training and some external services to support them in their tasks (for example, patient support programmes).

"I would prefer a course. It's not difficult, but I would put the need for targeted education" [Italy]
Perspectives on the use of a digital health solution
The Korean panel considered Korean society to be highly digitalised and therefore thought that HCPs may have a positive outlook towards the use of digital health solutions. They discussed the need to tailor digital solutions to patients and caregivers and emphasised that these solutions should be user-friendly.

"Nowadays, most people are very used to smartphones and digital applications. So, I don't think that there will be any significant barriers in using this kind of application. But making it user-friendly, I think, is very important." [Korea]

Conversely, Italian experts reported that some HCPs might not be interested in the collected adherence data. Consequently, they may not promote digital health solutions, perceiving an increased workload and a waste of time. However, they agreed that having trustworthy data and accessing them through a usable platform would be beneficial for patient management.

"Because this trust in having a useful device for patient management, and the attitude of trust even in regard to rapidly usable data." [Italy]

Both Korean and Italian panels highlighted the need to integrate digital health solutions into clinical practice. Additionally, the experts commented that this integration should be performed gradually, allowing patients/caregivers to use their preferred tools for adherence monitoring. Although both panels expressed the need for actionable recommendations based on the collected data, the Korean panel preferred a brief paper summary to be used in their visits despite the highly digitalised context, whereas the Italian panel agreed that they would like to receive this feedback and these recommendations through a digital platform such as the Growzen™ Connect HCP platform.

"Perhaps it is not used in daily life, because there is not a moment that comes to mind during the visit to say I'm going to check this thing." [Italy]
Regarding patients'/caregivers' perspectives on the use of a digital health solution to support GHD management, the Italian panel focussed on individual characteristics such as age, digital literacy, and response to external control. Although the Korean panel also identified some individual characteristics, they focussed on technical aspects; for example, they commented on the importance of encouraging patients to continue using digital health solutions over a longer duration.

Both Korean and Italian experts agreed that patients/caregivers should be made aware of the potential benefits of using a digital health solution for the management of the condition. This was considered a key strategy for convincing patients/caregivers to adopt digital health solutions.

"What do you think about users' attitude towards using the mobile APP? Initially reluctant, it is important to motivate them to use and explain the importance of adherence." [Italy]
Managing growth disorders
Both countries reported several strategies to monitor patient adherence. Most Korean experts commented that they asked patients or caregivers about adherence or the number of missed injections. Similarly, the current adherence data collection strategies in Italy were based on collecting subjective data, leading to unreliable datasets. HCPs also reported concerns about the accuracy of data collected through non-digital entries.

"It is well known that adherence is related to treatment efficacy, and most caregivers understand that their children need regular injections to be able to expect a good treatment outcome. This then leads to the question of 'how can we improve adherence?'" [Korea]

"In the case of written data (e.g. diary), they are difficult to analyse, require enormous expenditure of time on the part of the doctor, risk of error and lack of objectivity of the data" [Italy]

Relevant digital health design considerations
Interesting functionalities of a digital solution for GHD management
Tailoring the content of the digital health solution was considered an interesting feature to support GHD management. Although both panels agreed on most of the important factors, the Italian panel emphasised the sociocultural level of patients/caregivers, whereas the Korean panel highlighted the character of the individual.

"We think a lot about the hardware factor but also in terms of the software, we should think about what the contents of the reminder would be and even if the contents are the same, what the nuance is going to be and how to make it more encouraging, whether emojis would be used, etc. All these would be very important factors to consider. Also, we should decide whether it would target the children or the caregivers, and tailor for each patient group." [Korea]

"Instead, for the rest, it is a last point. I would put it's easy. Transmission, data reliability, ability to maintain data over time. So, these are the three most important things. Cultural, socio-cultural reasons" [Italy]

Both panels agreed on offering support in the management of GHD through digital health solutions, with the Korean panel contributing some technical aspects, such as the use of video or artificial intelligence tools (chatbots). They also highlighted the desire to provide just-in-time support.

"So that's why I think a chatbot function would be useful because the patients can ask questions to it or ask what to do in certain situations. I used to think that all these questions in a chatbot were answered by human but later, I found out that it wasn't. So, if such algorithm is developed, this will lessen the workload of HCPs and also enable caregivers to obtain accurate information at the same time." [Korea]

Reminders and motivational messages were considered relevant functionalities by both panels; however, the Korean panel discussed more technical details such as frequency and content.

"While not many actually do as told, I think it still helps them to take their medications regularly. So such alarm or a tool that allows us to check (compliance) would be ideal." [Korea]

"Interestingly, if the patient does not take the dose for several days, he sends an alert" [Italy]

Both Korean and Italian experts agreed that providing feedback to patients/caregivers was a relevant functionality. In this regard, both panels identified several technical aspects, with the Korean panel highlighting some advanced features, such as the inclusion of gamification elements. Additionally, this panel pointed out reward schemes and discussed this topic at length.

"We say positive feedback. Rewards and positive feedback. The patient sees the smiley faces, he feels good. Quality of life, he is happy" [Italy]

Perceived benefits and risks of using digital health solutions for adherence monitoring
Healthcare system level
From the point of view of the healthcare system, a few differences were found between the two countries. Although both the Korean and Italian panels considered resource optimisation and cost reduction as potential benefits of automatic adherence data collection, the Italian panel perceived that these data could lead to fewer laboratory tests and less drug wastage, whereas the Korean panel focussed on financing issues, especially for insurance-financed treatments.

"Healthcare system. Less drug waste, therefore economic impact. Who cares then, the only thing, then they would be able to put in more visits if the visits last less, because we are in bad shape. Use number and times because you make fewer visits" [Italy]

"Regarding insurance, …, in the case of continuous glucose monitoring, the policy is that the government won't cover the cost for those with a compliance level of under a certain level. So also with GHD, if the adherence is a lot less than expected from accumulated data, and yet a lot of patients are receiving reimbursement, this might enable changes in reimbursement paradigm." [Korea]

The Italian panel felt that automatic adherence data collection would allow them to optimise motivational strategies to encourage patients/caregivers to comply with GHD treatment. Additionally, the panel identified the potential risk of using these data when measurements are not accurate (e.g., dose detection and fake injections).

"From a therapeutic point of view, I see the consequences only in a positive sense. Obviously, the therapy, maximized clinical efficacy, that's right. The middle ground: patient support is needed mostly from the HCP and from the national health service. The middle ground is the crisis between the two. Malaise for mistakes, possible? It's like that, in the sense, it's not accessible to everyone, because it's an application. Sharing, how can the patient perceive it? As a task to which he has to obey, let's say, an added task, a boredom of having to mark things down? A further burden is the last thing, the fact that if the data is not correct there is a risk." [Italy]
HCP level
In terms of potential benefits for decision-making, both Italian and Korean HCPs reported that having automatically collected data would allow them to apply more personalised and just-in-time interventions.

"In this system, would there be a way for us to see patients with bad adherence and give feedback? Usually, those who are good are not the ones that we have to worry about, and the purpose of this (system) is to find those who are in need of help. So, if we could find ways to provide feedback to encourage those with bad adherence, such as providing happy calls, it might be helpful" [Korea]

The Italian panel perceived some potential inequities arising from HCPs' lack of digital skills, lack of training, or the lack of non-digital alternatives. Both panels agreed that checking the collected data could increase workload. The Korean panel had a more negative perception, with some experts reporting that HCPs could feel guilty for not being able to check the collected data as expected by patients/caregivers. Although the Italian panel agreed, they felt that HCPs might even feel more confident because they could use more accurate and reliable data to optimise patient treatments.

"However, regarding the third question of whether HCPs will welcome using this data, we are already experiencing a high workload and it will immediately be recognised as a burden. So, I think there should be more reward for HCPs than just being able to better take care of their patients. For example, there is a code for a medical fee for provision of CGMs. Likewise, if they could provide a separate code for growth assessment of children receiving GHD treatment, that could be a reward for us for monitoring these patients more closely and going the extra mile for analysis" [Korea]

"Because we usually see GHD patients quite frequently for more than 5 years, GHD patients are one of the patient groups that we usually have good rapport with. So, I don't think we've had much issues in terms of sharing data and on the contrary, I think I often felt bad about not giving them enough feedback on the data provided" [Korea]

"This here data accuracy, reliable, absolutely certain, objective" [Italy]
Patient/caregiver level
Both Korean and Italian experts agreed that the use of these digital health solutions helps in prescribing evidence-based therapies, which could lead to patient/caregiver empowerment. They linked the tracking of adherence data to high levels of empowerment.

"Scenario 3 (the use of Aluetta® with Smartdot™ and the connected digital ecosystem to gather adherence data) the patient's empowerment and self-efficacy improve markedly. Minimum effort, because the patient has to make a minimum of effort to improve the results. Empowerment when the therapy is followed correctly and the fact that the doctor interacts in real time remotely." [Italy]

"I agree. Of course, there will be a lot more ways and items to be able to link and observe growth curve on the app, which will be beneficial in terms of patient empowerment" [Korea]

Both Korean and Italian experts believed that automated adherence data collection could influence patient/caregiver motivation and promote adherence. Korean experts agreed that the feedback features would impact the ability of patients/caregivers to self-monitor their GHD and, therefore, improve their self-management. Similarly, Italian experts agreed that the use of digital health solutions can facilitate timely feedback. Some experts reported that digital solutions can maximise the opportunities to provide feedback to patients and caregivers. These experts linked feedback to high levels of motivation and satisfaction among children and their families.

"We have written about the importance of effectiveness in terms of growth, which returns a fairly evident result that brings satisfaction to both the family and the child himself" [Italy]

"So, in clinic, regarding health management from the caregiver's perspective, I think it could give them an impression that their physician is paying attention and caring for them" [Korea]

Regarding the potential risks associated with the use of digital health solutions, the Italian panel reported that a lack of digital literacy could lead to health inequities. However, the Korean panel considered that a non-digital alternative should be implemented, allowing patients/caregivers to choose their preferred option. Additionally, some of the Korean experts commented on the possibility of patients feeling controlled and on potential data privacy issues causing reluctance in data sharing.
"But even so, patients may prefer the notebook, because using the app would mean disclosing a lot of their private information, which may act as a resistance factor.Writing the values down in a notebook might feel like they are keeping a secret to themselves, while using an app automatically means that the data is accessible by others, which I think will be associated with resistance.So, we need to take into consideration the privacy issue and there should be a way to protect that" [Korea]
HCP-patient relationship
All experts agreed that automated adherence data collection provides them with accurate and reliable data that they can use to communicate with their patients/caregivers. Automated collection could increase trust between HCPs and patients/caregivers. However, the Korean panel reported that more reliable and accurate data could lead to conflicts between patients and caregivers, which would negatively impact the HCP-patient relationship. They reported that some patients may also be reluctant to come for further visits if HCPs determined that they were lying.

"As repeatedly mentioned, from management perspective, availability of objective data allows us to build further trust with caregivers. When such tool was not available, I had a diabetes patient whose adherence was really bad despite constantly being told that she needs to improve. So, I gave her an ultimatum by saying that I'd have to transfer her to a different clinic, and she started crying and said that I never encouraged her by saying that she had also been good" [Korea]

The Italian panel provided mixed opinions. Some experts commented that these objective data could provoke negative feelings of control or intrusiveness among some patients. Conversely, others felt that the HCP-patient relationship may improve because patients/caregivers may feel that HCPs are taking care of them.
"There are two points: trust and improvement of the doctorpatient relationship and surveillance" [Italy] 3.4 Perceived usefulness and ease of use of Aluetta ® with Smartdot ™ and the Growzen ™ ecosystem Experts from both countries agreed that Aluetta ® with Smartdot ™ had a user-friendly format for transforming a pen into a digital health solution.They did not perceive any changes in terms of weight when the Smartdot ™ accessory was attached to the Aluetta ® pen, making it suitable for use by children.
"Considering the size of the pen, which is substantial, I think they've done their best with the technologies available to minimise the cap size" [Korea] Although some Korean experts commented on the desirability of incorporating some form of feedback to indicate that the coupling process has been successfully completed, most of the Italian experts reported that they missed receiving feedback and preferred to receive audio feedback.
"It would take something like click, which gives the feeling that it engages, in my opinion.It would take a shot when you put it" [Italy] Between the two countries, the main difference was highlighted in charging.The Italian panel expressed concerns about battery life, fearing that forgetting to charge the device could result in data loss.Conversely, the Korean panel emphasised the technical aspects of charging, expecting a more advanced process like wireless charging to make it easier and prevent data loss.
"Lastly, it should be easy to charge the device, for example, adopting a wireless charging system that will automatically charge the device once it's placed and stored in the case, rather than having to charge every two weeks.So, these are the five suggestions" [Korea] Apart from these features, the HCPs from both countries also shared their comments on the perceived ease of use of Aluetta ® with Smartdot ™ , as summarised in Table 1, and the Growzen ™ digital health ecosystem.The Growzen ™ Buddy patient app was considered as an easy-to-use application by experts from both countries.
"The application, what do we think?It is objectively easy; you immediately understand how to use it;" [Italy] "Regarding how to attract the existing Aluetta ® users, one of the biggest barriers will be how fast they get used to the application.In other words, we have to minimise time wasting that may arise from the app" [Korea] The use of the Growzen ™ Buddy patient app was perceived to be easy for healthcare professionals to teach and for patients/ caregivers to learn.
"It is easy to explain to an assistant/patient how to configure and associate the device with the APP" [Italy] "Is the setup easy to learn?It's easy, also because you don't need to update" [Italy] "I think such app would really help.If we provide thorough explanation at first, it should be useful for patients and caregivers" [Korea] Regarding appearance, both Italian and Korean experts found the graphical user interface of the application to be user-friendly and appropriate for patients/caregivers. "The colours, the icons, the combination of data; navigation is easy and intuitive, is it clear and understandable?I would say that these 5 are perhaps the most interesting.Even the reminder isn't bad, but there's not much to say about the reminder.Does the mobile app help users gain more insight into their adherence behaviour?Yes, of course, that's why it's made, so I'd say it's the most fitting" [Italy] "So maybe the users will be interested in the initial phase, because it kind of looks like a game, but in order to maintain the momentum, then we would need to provide some kind of reward.And while it may depend on the character of the patients, those who show good growth would probably have a high satisfaction level with the application" [Korea] Experts from both the Korean and Italian groups commented that the Growzen ™ Buddy patient app implemented several feedback strategies that would positively impact patients in their adherence.
"Is feedback useful when it has been configured?Very helpful" [Italy] "The mobile app helps users to get more information about their joining behaviour.The most important of all" [Italy] "In a sense that this device is about recording the treatment history, I think it could be meaningful for them to see the record of treatment history, And I think this will be positive impact on caregivers" [Korea] Italian experts agreed that the Growzen ™ Connect HCP platform can be used to engage and discuss reports with patients/caregivers. "The platform is our stuff.APP is something mobile, SMS is mobile, we can put the platform just www.growzenconnect.com.We put it on reports, it is consulted only by the healthcare provider, therefore only by us.But it is something that obviously serves the patient's purposes, therefore something digital in any case."[Italy] TABLE 1 Summary of experts' comments on the perceived usefulness and ease of use of Aluetta ® with Smartdot ™ .
Component Topic
Summary of comments from Italy
Summary of comments from Korea
Aluetta ® with Smartdot ™
Location of components
The administration button of Aluetta ® with Smartdot ™ was easier to use and more ergonomic, improving the user experience.
The device was manageable and its dimensions made it suitable for use by both adults and children.
Pairing and configuration
Aluetta ® with Smartdot ™ configuration in the Growzen ™ Buddy patient app was easier to do than the new smartphone configuration.
The pairing process was found to be similar to that used in other current Bluetooth ® devices.
Ease of use Aluetta ® with Smartdot™ was quite similar to other electronic devices, and people who were familiar with it could easily use it.
Aluetta ® with Smartdot ™ improved the usability of the pen.
Aluetta ® with Smartdot™ mounted was lightweight and perfectly suitable for children.
Reliability and accuracy -
Aluetta ® with Smartdot ™ may be helpful in analysing the cause of non-adherence and objectively verify patients with poor adherence by documentation.
Target users -
Aluetta ® with Smartdot ™ will likely be used by new patients and existing adherent patients.
Perceived risks
Aluetta ® with Smartdot ™ may have issues with connection due to refrigeration.
Discussion
This study comprehensively explored HCPs' perspectives on the adoption of digital health solutions and the acceptance of a digital device ecosystem across Korea and Italy. Understanding the nuances of these perspectives is indispensable for developing strategies to overcome the challenges and leverage the opportunities presented by the ongoing digital transformation in healthcare. Although HCPs appreciate the potential of digital health solutions to improve patient engagement and, hence, clinical outcomes, the participatory workshops revealed several aspects of how this digital transformation is impacting treatment options and the need for digital literacy for successful implementation (Figure 3).
The method employed to conduct the participatory workshops facilitated the collection of perspectives from experts at universities and hospitals across Korea and Italy, who shared clinically valuable and understandable technology acceptance information on potential barriers to, and facilitators of, the use of Aluetta® with Smartdot™, the Growzen™ Buddy patient app, and the Growzen™ Connect HCP platform. The qualitative analysis compared the opinions of the experts across the two groups, Italy and Korea, and a researcher reviewed all themes and the comments included in them. It was observed that the healthcare systems of the two countries differ. As digitalisation is well accepted in Korea, given the technical readiness and awareness among patients, HCPs' adaptation to digital health solutions could be positive. Conversely, the Italian national health system had limited human and technical healthcare resources to support the GH digital health ecosystem, and patients'/caregivers' adoption of digital health solutions varied depending on an individual's characteristics, skills, or motivation level. With the spectrum of options available to patients and caregivers, the choice of digital health solution may impact adherence. Therefore, it is crucial to consider the specific needs and preferences of patients, caregivers, and HCPs and to include features that could support patients/caregivers in managing their conditions in both countries. The analysis revealed some of the risks and benefits associated with the use of digital solutions for adherence monitoring, such as accessibility of adherence data, data-driven clinical decisions, visibility of results, and strengthened HCP-patient relationships. HCPs from both countries perceived Aluetta® with Smartdot™ as an excellent digital health solution for GH therapy that can create scientific evidence on the relationship between adherence and efficacy.

Over many years, digital devices for diagnosis, treatment administration, and monitoring have evolved with technological advances in mobile connected health, artificial intelligence, digital patient support programmes, telemedicine, and gamification using virtual and augmented reality (3). To date, the perspectives of HCPs towards digital health solutions have not been studied for paediatric GH therapies using injector pens. This article presents first-of-its-kind insights that emphasise the benefits of the digital ecosystem, the constraints on HCPs, and the need to address important aspects related to the acceptance of such technology upgrades. Many of the statements from the clinicians reinforced the adherence support described by the World Health Organization, which includes elements such as literacy and support (17). Multiple clinicians highlighted the importance of having more adherence data to improve clinical practice and research. This feedback is congruent with recent reviews on the use of sensors to monitor adherence (18, 19). However, the use of connected sensors for adherence can affect the cost-effectiveness of the treatment, which is not always quantified and recognised by healthcare systems. The value of adherence data appears to be clearly linked to the provision of visualisations that facilitate condition management, including the visualisation of data in both the mobile application and the interface for doctors. There are emerging initiatives on creating standards for adherence reporting that also mention such needs; however, more research is required in that area (20). In the case of connected injection pens, it is essential to consider that the user interfaces encompass not only the connected pen but also the mobile application used in the pairing process to link Aluetta® with Smartdot™ with the Growzen™ Buddy patient app.
All HCPs highlighted that a connected injector device such as Aluetta® with Smartdot™ in the Growzen™ ecosystem can help personalise care by enabling patient empowerment and clinical decision-making. Aluetta® with Smartdot™ was considered easy to use, easy to learn and teach, ergonomically suitable for both children and adults, comfortable to transport, robust, easy to charge, and easy to pair with other devices, thereby providing a better administration experience. From the HCPs' perspective, the Growzen™ Buddy patient app would be easy to use and easy to learn, and the feedback provided by the application would be valuable for motivating patients. The Growzen™ Connect HCP platform was considered useful for data analysis by HCPs and for promoting discussion with patients/caregivers. Furthermore, the following aspects were considered actionable: 1) healthcare systems need to include adherence monitoring as part of the pharmacoeconomic models considered by payors; 2) training on the use of adherence data derived from connected devices should be promoted to both clinicians and patients; 3) easy-to-use platforms that support HCPs in data analysis should be accessible, including alerts when events requiring attention occur, such as actionable recommendations when a lack of adherence is detected or predicted; 4) the possibility of prediction tools based on newly captured data should be explored to bring about a positive impact on research; 5) the digital literacy and privacy concerns experienced by some users should be addressed, and the potential negative impact of digital health on health disparities should be reduced; and 6) best practices for incorporating such sources of data into the provision of care should be studied, especially considering the impact on clinicians' time.

One of the challenges observed in the true adoption of a digital health ecosystem is the long-term engagement of HCPs and patients/caregivers with digital health applications and devices. Often, owing to limitations of time or digital literacy, sustained engagement with technology poses a challenge. Sensor-based devices such as Aluetta® with Smartdot™ present an alternative communication platform that is essential for engaging patients/caregivers and developing user-centered solutions for the treatment and management of GHD (21).

The participatory workshops in these two countries examined the perspectives of a small group of experts over a short period. Further studies are required to determine the extent of digital health solution adoption among HCPs and patients/caregivers. Furthermore, as technology progresses and evolves, some of the desired features discussed may be incorporated, and HCPs' recommendations may change. Although these perspectives may not be universal, they do help in the development of an individualised approach to GH treatment.
Conclusion
HCPs are one of the foremost stakeholders in the implementation of digital health solutions. Our participatory workshops helped capture meaningful insights from them as experts. The main findings highlighted that experts perceived Aluetta® with Smartdot™ within the Growzen™ digital health ecosystem as a user-friendly, intuitive, and easy-to-use digital health solution. Aluetta® with Smartdot™ enabled automatic, real-time injection data transmission to support adherence monitoring and data-driven treatment decisions, thereby helping to understand the reasons for suboptimal response or adherence issues with GHD therapy. The availability of unbiased, reliable, and accurate data transmitted by the device would be beneficial and would help generate new evidence-based knowledge to support GHD therapy, strengthen patient-HCP relationships, and empower patients throughout the treatment process. The findings from these workshops can further contribute novel insights to enable HCPs to better adopt and prescribe digital health solutions as part of routine care and to support researchers with new clinically relevant datasets for better management of GHD.
FIGURE 3 Patient-centric digital health solutions.
Retrospective study to identify risk factors for chronic kidney disease in children with congenital solitary functioning kidney detected by neonatal renal ultrasound screening
Abstract To evaluate the prognostic significance of factors frequently associated with a reduction in renal mass, such as prematurity, low birth weight, and congenital anomalies of the kidney and urinary tract (CAKUT), in patients with a solitary functioning kidney (SFK), and to investigate signs of early renal injury due to glomerular hyperfiltration damage or dysplasia in the remaining kidney. Retrospective observational study of congenital SFK diagnosed and followed at a tertiary care hospital over a period of 10 years in which 32,900 newborns underwent routine neonatal abdominal ultrasound screening. We analyzed age at diagnosis, sex, gestational age, anthropometric measurements at birth, and prenatal and neonatal ultrasound findings, in addition to follow-up data corresponding to imaging findings (ultrasound, micturating cystourethrography, and dimercaptosuccinic acid renal scintigraphy), ipsilateral CAKUT, compensatory hypertrophy, and renal injury in the form of albuminuria, blood pressure, and estimated glomerular filtration rate (eGFR). In total, 128 congenital SFK cases were detected (1 per 257 live births). Of these, 117 (91.4%) were diagnosed by neonatal ultrasound screening, and 44.5% of these had been previously identified by prenatal ultrasound. Neonatal ultrasound had a specificity of 100% and a sensitivity of 92.1%. Forty-five patients (35.2%) had ipsilateral CAKUT, and the most common type was urinary collecting system anomalies (75.5%). Over a median follow-up of 6.3 years (1-10 years), compensatory renal hypertrophy was observed in 81 patients (63.7%), most of whom had ipsilateral CAKUT (76.1% vs 56.6% of patients without ipsilateral CAKUT). Albuminuria and hypertension were observed in 3.12% and 5% of patients, respectively, and both were associated with ipsilateral CAKUT (P < .05). In addition, 75% of albuminuria cases (P = .031), 83.3% of hypertension cases (P = .004), and 100% of decreased eGFR cases (P = .031) were significantly associated with CAKUT (renal parenchymal anomaly category), with the presence or absence of CAKUT being the strongest predictor of GFR. Neonatal ultrasound screening is useful for the early diagnosis of SFK. The presence of ipsilateral CAKUT should be evaluated in all patients with SFK, as congenital anomalies of the renal parenchyma are associated with a poorer prognosis. Because morbidity from CAKUT may not develop until adulthood, patients should be closely followed throughout life.
Introduction
Children with a solitary functioning kidney (SFK) have at least a 30% reduction in renal mass. According to the Brenner hyperfiltration theory, a reduced number of nephrons could cause hemodynamic changes in the remaining glomeruli, leading to glomerular hypertension and an increased glomerular filtration rate (GFR). [1,2] Although these changes seem to reflect a positive adaptive response, patients with unilateral renal agenesis and a normal contralateral kidney are at an increased risk of proteinuria, hypertension, and renal insufficiency and hence require close long-term follow-up and strategies to preserve optimal function in the remnant kidney.
It has been shown that nephrogenesis ends in the 36th week of gestation [3] and that no new nephrons are formed in the postnatal period. [4] The number of glomerular units per kidney varies widely in humans, with figures ranging from 200,000 to over 2,500,000. [5,6] As prematurity interrupts nephrogenesis, premature newborns may have a decreased nephron number. Moreover, intrauterine growth restriction can cause fetal reprogramming, which could have a profound effect on the development of the kidneys and other organs. [7] In one study of intrauterine growth retardation, a reduced nephron number at birth in low birth weight (BW) infants was identified as an indicator of altered kidney development. [4] SFK is an important subgroup of congenital anomalies of the kidney and urinary tract (CAKUT), [8] which are the main causes of chronic kidney disease (CKD) in childhood. [9] CAKUT are classified into 3 groups: renal parenchyma anomalies, migration and fusion anomalies, and urinary collecting system anomalies. During nephrogenesis, reciprocal inductive interactions controlled by a gene regulatory network occur between the metanephric mesenchyme and the ureteric bud. [10] Disruption of this complex network of interactions results in a wide spectrum of renal and urinary tract malformations, explaining why about 40% of children with SFK have associated congenital kidney anomalies. A low nephron number or impaired nephron function due to alterations during nephrogenesis places patients with CAKUT at high risk of stage 2 to 5 CKD. Other studies have found more favorable outcomes, the opposite of what might be expected according to the Brenner hypothesis. [11,12] Follow-up studies of children with SFK are scarce, and most have been conducted in children who have undergone unilateral nephrectomy, [11-14] that is, in children without congenital SFK. In addition, evidence suggests that prognosis may vary depending on whether the patient has congenital or acquired SFK. [15] We performed a retrospective study of congenital SFK cases diagnosed at our hospital over a period of 10 years to investigate the potential prognostic impact of factors associated with additional renal mass reduction.
Study design
This retrospective observational study was conducted to evaluate cases of SFK diagnosed in newborns at a tertiary university hospital over a period of 10 years (January 2007-December 2016). The study protocol was approved by the Research Ethics Committee of Santiago-Lugo (2017/576). Written informed consent was obtained from the parents or legal guardians of all the patients included.
SFK was diagnosed by routine prenatal ultrasound performed at 12, 20, and 32 weeks gestational age (GA) and/or by routine neonatal ultrasound performed by neonatologists within the first 7 days of life as part of the hospital's neonatal screening program. All diagnoses are confirmed by renal scintigraphy, and patients are followed up to the age of 18 years by the hospital's pediatric nephrology unit, which is the reference unit for the area.
Population
Of the 32,900 newborns evaluated during the 10-year study period, 128 were diagnosed with congenital SFK and followed at our unit.
Methods
Recumbent length was measured with a length board, and weight was measured using a manual baby scale; in all cases, these measurements were made by personnel specialized in these patients. Body mass index (BMI) was calculated as weight (kg)/height² (m²) and classified as normal weight (3rd-85th percentile), overweight (>85th-95th percentile), or obesity (>95th percentile) according to Hernández's charts. [20] BP was measured using oscillometric devices, and high BP was confirmed by auscultation and/or ambulatory BP monitoring.
Albumin concentrations in first-morning urine were measured using bromocresol green. Serum creatinine was analyzed using the kinetic Jaffe method, and eGFR was estimated using the Schwartz formula.

Renal ultrasound was performed using a 2.5-MHz probe (GE Voluson Expert 730 Ultrasound System; GE Healthcare, Spain). Micturating cystourethrography was performed using the Optima XR646 digital radiography system (GE Healthcare), and results were classified according to the grading system presented in the International Reflux Study in Children. [21] DMSA renal scintigraphy was performed using a Brivo NM615 collimator (GE Healthcare), and results were evaluated on the scale formulated by Goodrich. [22]
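For concreteness, a minimal sketch in R of the two calculations above. The coefficient 0.413 is the commonly used bedside Schwartz value and is an assumption here, since the text cites only "the Schwartz formula"; the input values are invented:

```r
# BMI = weight (kg) / height^2 (m^2)
bmi <- function(weight_kg, height_m) weight_kg / height_m^2

# Bedside Schwartz eGFR in mL/min/1.73 m^2 (k = 0.413 assumed)
schwartz_egfr <- function(height_cm, creatinine_mg_dl, k = 0.413) {
  k * height_cm / creatinine_mg_dl
}

bmi(20, 1.10)             # ~16.5 kg/m^2
schwartz_egfr(110, 0.45)  # ~101 mL/min/1.73 m^2
```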
Statistical analysis
Statistical analysis was performed using R, version 3.4.0 (R Core Team, 2017). To evaluate the significance of differences between qualitative variables, we used Fisher's exact test, and the Benjamini-Hochberg correction was applied to adjust the P-values. Only P-values under .05 were considered significant. We also used a stepwise AIC (Akaike information criterion)-based regression method to identify risk factors for renal injury.
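The original analysis was run in R 3.4.0; the sketch below reproduces the same first steps (Fisher's exact test on 2x2 tables followed by Benjamini-Hochberg adjustment) in Python with hypothetical counts, purely to illustrate the procedure.

from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# Hypothetical 2x2 tables: rows = ipsilateral CAKUT yes/no,
# columns = outcome present/absent
tables = {
    "hypertension": [[6, 39], [0, 83]],
    "albuminuria": [[4, 41], [0, 83]],
}
raw_p = {name: fisher_exact(tab)[1] for name, tab in tables.items()}
reject, p_adj, _, _ = multipletests(list(raw_p.values()), alpha=0.05, method="fdr_bh")
for (name, p), q, sig in zip(raw_p.items(), p_adj, reject):
    print(f"{name}: P = {p:.4f}, BH-adjusted P = {q:.4f}, significant = {sig}")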
Results
The 128 congenital cases of SFK detected corresponded to an overall incidence of 1 case per 257 live births. The most common SFK phenotype was a full-term (106/128, 82.8%) newborn male (82/128, 64%) of appropriate BW for GA (93/128, 73%) without a family history of kidney disease (94/128, 73.4%) or associated ipsilateral CAKUT (83/128, 65%). The kidney diseases detected in the family were nephrolithiasis (15.6%), renal agenesis (6%), and renal hypoplasia, vesicoureteral reflux, and glomerular disease (5%). Fifty-seven (44.5%) of the 128 cases of SFK were detected by prenatal renal ultrasound and confirmed by neonatal abdominal ultrasound, and 60 (46.8%) were detected by routine neonatal ultrasound screening. The remaining 11 cases (8.6%) were diagnosed during the ultrasound investigation of other underlying diseases in the postnatal period. Eight of these cases were detected within the first year of life. All diagnoses of SFK were confirmed by renal scintigraphy. The functioning kidney was the right kidney in 60.9% of cases. Seventy-five (58%) of the 128 patients had renal agenesis and 53 (42%) had MCKD. All of the patients in whom SFK was diagnosed outside the neonatal period had MCKD (Table 1).
Overall, neonatal ultrasound had a specificity of 100% and a sensitivity of 92.1%. Sensitivity was 100% for renal agenesis and aplasia and 82.8% for MCKD. The respective positive and negative predictive values were 100% and 99.9%.
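The screening-test figures quoted above follow from a standard 2x2 confusion matrix; the sketch below shows the computation with made-up counts rather than the study's actual data.

def screening_metrics(tp, fp, fn, tn):
    # Sensitivity, specificity, positive and negative predictive values
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

print(screening_metrics(tp=90, fp=5, fn=10, tn=895))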
Forty-five patients (35.1%) had ipsilateral CAKUT, which corresponded to urinary collecting system anomalies in 75.5% of cases. Vesicoureteral reflux was the most common malformation, present in 21.8% of the population.
Median follow-up was 6.3 years (range 1-10 years). Compensatory renal hypertrophy was observed in 81 patients (63.7%). The rates were similar following analysis by GA (57.7% for preterm newborns and 66.6% for full-term newborns) and BW (60.8% for low BW and 65% for appropriate BW). Compensatory renal hypertrophy was more common in patients with ipsilateral CAKUT (76.1% vs 56.6% in patients without ipsilateral CAKUT), but the differences were not significant (Table 2).
Albuminuria or microalbuminuria was detected in 4 patients (3.12%), all of whom were full-term newborns with an appropriate BW and ipsilateral CAKUT (P = .023). Six patients (5%) had hypertension, and all of them had ipsilateral CAKUT (P = .0051) and a BMI in the range of obesity. Three patients had decreased eGFR (stage III or higher), and 2 of these had associated ipsilateral CAKUT (P = n.s.) (Fig. 1).
We used a stepwise regression method based on AIC to identify the strongest predictors of GFR, systolic and diastolic pressure, and albuminuria among the following variables: age at diagnosis, gestational age, sex, weight, BMI, CAKUT, and compensatory renal hypertrophy. We found that the strongest predictor of GFR was the presence or absence of CAKUT (Adj R² = 0.03, P = .029). The best-fitting model for systolic and diastolic pressure was the combination of BMI and CAKUT (systolic: Adj R² = 0.156, BMI P value = 6.3e-05, CAKUT P value = .0098; diastolic: Adj R² = 0.064, BMI P value = .021, CAKUT P value = .034). For albuminuria, the strongest predictors were age at diagnosis and CAKUT (Adj R² = 0.11, age at diagnosis P value = .0036, CAKUT P value = .0094).

Table 1. Characteristics of 128 cases of solitary functioning kidney diagnosed over a 10-year period at a hospital with routine neonatal ultrasound screening.
Discussion
We analyzed clinical, biochemical, and imaging data corresponding to 128 patients with SFK born in a tertiary care hospital to investigate potential prognostic factors and assess the importance of early diagnosis. All the patients underwent routine prenatal and neonatal ultrasound screening and were followed for a median of 6.3 years. The incidence of SFK over the 10-year study period was 1 case per 257 live births, which is higher than rates reported elsewhere. In a systematic review of unilateral renal agenesis, Westland et al [23] calculated an incidence of 1 case per 2000 births, while in an analysis of data from 13 European registries, Winding et al [24] found an overall rate of 1 case of MCKD per 2427 births. It is difficult to estimate the incidence of SFK in the general population, since many patients remain asymptomatic into adulthood. Prenatal ultrasound screening detected just 45% of subsequently confirmed cases of SFK in our series, providing further evidence that prenatal detection rates are generally low. [25] One reason proposed to explain these low detection rates is that ultrasound features of the adrenal glands or intestinal loops during gestation can mimic renal tissue, causing confusion and misdiagnosis. [26] Although neonatal ultrasound detected SFK in the vast majority of patients in our series, it missed 8.6% of cases, all of which were MCKD. We believe that this is probably because it is easier to detect a missing kidney than anomalies in the renal parenchyma.
According to the hyperfiltration theory, [1,2] children with SFK have a high risk of developing hypertension, albuminuria, and reduced GFR in the long term. Although this theory is accepted worldwide, it has only been demonstrated in experimental animal models, as it is not yet possible to count nephrons in vivo or to measure single-nephron GFR in humans. Nevertheless, patients with a single kidney constitute a human model of a 50% reduction in renal mass. It has been shown that compensatory renal function in solitary kidneys reaches a GFR of 75% of the estimated total value for both kidneys, [27][28][29] indirectly demonstrating hyperfiltration in the remaining glomeruli. Compensatory growth in response to the loss of a contralateral kidney also indirectly supports the compensatory hyperfiltration theory, with reports of solitary kidneys reaching up to 180% of the volume of a healthy kidney. [30] Since the consequences of compensatory hyperfiltration and final kidney size may be related to the number of functional nephrons, it may be necessary to identify risk factors for a reduced nephron number. In the present study we assessed whether factors that predispose to a reduced nephron number, [31] such as prematurity, low BW for GA, and CAKUT, might be associated with changes in SFK size. We found that 63.7% of patients developed compensatory hypertrophy, and the rates were similar when the data were analyzed by GA and BW. Compensatory hypertrophy, however, was more common in patients with ipsilateral CAKUT. One study of ex vivo human samples showed that glomerular size increased with a decreasing number of nephrons; accordingly, an adequate number of nephrons may protect against glomerular hypertrophy. [32] Since the number of nephrons in a single human kidney can vary from 200,000 to over 2,500,000, [5,6] it could be hypothesized that children with a single kidney without ipsilateral CAKUT might have an adequate number of nephrons capable of assuming the excess functional load, in addition to showing less compensatory hypertrophy. On the other hand, one possible explanation for why ipsilateral CAKUT is more often associated with compensatory hypertrophy is that, even in the single kidney, some of the nephrons may have developmental defects, further reducing the number of functional nephrons. Although in our observational study we found that compensatory renal hypertrophy was more common in patients with ipsilateral CAKUT than in those without CAKUT, large longitudinal follow-up studies are needed to explore clinical outcomes in patients with different types of SFK. The incidence of albuminuria, hypertension, and decreased eGFR over the median follow-up period of 6.3 years (3%, 5%, and 1.6%, respectively) is consistent with previous reports. [33,34] This low overall incidence can probably be explained by the fact that manifestations of renal injury in SFK tend to increase with age, [25,35] with most cases occurring after 25 years of follow-up, i.e., in adulthood. [29][30][31][32][33][34][35][36] It should be noted, however, that all the patients with proteinuria, hypertension, and decreased GFR in our series had associated CAKUT (mostly renal parenchymal anomalies), and patients with SFK and ipsilateral CAKUT have been found to have a higher incidence of renal injury and earlier manifestation of symptoms. [25,31]
It should also be noted that manifestations of CKD were less common in patients with other types of CAKUT (urinary collecting system anomalies in most cases), indicating that not all congenital abnormalities affecting the kidneys and urinary tract have the same impact on prognosis and that renal parenchymal anomalies appear to be associated with a higher risk of early CKD. Finally, it is worth mentioning that preterm and/or low BW newborns who display more rapid growth on weight charts have been found to have an increased risk of developing metabolic syndrome later in life. [37] In our series, 40% of hypertensive patients were preterm newborns and had a BMI in the range of obesity, strongly suggesting that weight management should be an integral part of follow-up.
The main limitations of this study are its retrospective design, the relatively short follow-up for a slowly progressing disease, which means it does not clearly reflect the impact of a reduced nephron number, and the fact that not all cases had the same follow-up time. However, the study draws on neonatal screening of a large number of cases, allowing early identification of the risk factors associated with kidney damage in SFK. Detection of CAKUT during childhood should be followed by lifetime monitoring of the patient, using age-appropriate guidelines. [38] Attention to risk factors and renoprotection may maintain adequate renal function throughout life.
In conclusion, we consider that universal neonatal ultrasound screening by highly qualified staff may be an extremely useful tool for the early diagnosis of SFK. Early detection of associated risk factors, in particular other types of CAKUT, is also important since they can predispose to CKD. Close follow-up of patients with SFK and associated ipsilateral CAKUT, in particular renal parenchymal anomalies, is also necessary for the early detection of microalbuminuria, hypertension, and decreased eGFR. Whether due to dysplasia or glomerular hyperfiltration damage in the remnant kidney, renal injury in the form of hypertension and/or proteinuria was observed in up to 5% of children with congenital SFK in our series. Finally, early initiation of CKD treatment is essential to slow progression to end-stage renal failure.
|
2018-08-14T20:37:27.083Z
|
2018-08-01T00:00:00.000
|
{
"year": 2018,
"sha1": "6c935d5658f263f00bdceb2842e1d8e946d323c1",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/md.0000000000011819",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6c935d5658f263f00bdceb2842e1d8e946d323c1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
234676218
|
pes2o/s2orc
|
v3-fos-license
|
EFFECTS OF A TEACHER TRAINING INTERVENTION ON TEACHERS’ AND STUDENTS’ MOTIVATION TO PHYSICAL EDUCATION CLASS
The present study evaluated the effects of a teacher training intervention, based on Self-Determination Theory, on teachers' and students' motivation in physical education class. This is a pre-post quasi-experimental study with 4 physical education teachers and 611 students from four public schools. A handbook was developed and teacher training sessions were conducted. Statistical analysis consisted of paired t-tests and general linear model repeated measures to assess teachers' self-determined motivation, and linear mixed effects regression to evaluate students' motivation. A significant increase in teachers' and students' motivation scores was observed after the intervention. Among teachers, we verified an increase in self-determined motivation. Among students, there were significant time-by-group interactions in Extrinsic Motivation Identified Regulation (F=5.6), Extrinsic Motivation External Regulation (F=7.41), Amotivation (F=5.32), and Self-determined Motivation (F=4.87). Also, Intrinsic Motivation significantly declined with age for boys (β = -0.151) and girls (β = -0.121), as did Extrinsic Motivation Introjected Regulation for girls (β = -0.141). Training sessions can support teachers in planning lessons, resulting in increased teacher and student motivation in physical education classes. However, this strategy was not enough to improve intrinsic motivation during the investigated period.
Introduction
Providing physical education (PE) classes in schools is an important strategy for children's and adolescents' development and may offer opportunities to engage in more physical activity, with psychological and social gains 1 . Moreover, a favorable school environment and positive experiences in physical education class could promote healthy lifestyles and encourage other structured activities for the regular practice of physical activity 1,2 . However, despite this knowledge, participation in PE classes in Brazil is low. The National Survey of School Child Health (PeNSE) showed that only 37.3% of students attend the PE classes given twice a week, even though the classes are compulsory 3 .
Students reported that infrastructure-related problems, school administration, and the sports-centered class content are some of the reasons for not participating in PE classes 4 . Another reason, not mentioned in these surveys, is low student motivation. Motivation is recognized as an important factor related to meaningful student engagement and participation in PE classes 5,6 . The Self-Determination Theory has been used as a theoretical tool to understand motivation by assessing the intensity and direction of behavior towards teaching and the practice of physical activity at school 7 .
According to Deci and Ryan 7 , motivation variations are represented within a self-determination continuum, which includes: intrinsic motivation (interest, enjoyment, inherent satisfaction); Extrinsic Motivation Integrated Regulation (congruence, awareness, synthesis with self); Extrinsic Motivation Identified Regulation (personal importance, conscious valuing); Extrinsic Motivation Introjected Regulation (self-control, ego-involvement, internal rewards and punishments); Extrinsic Motivation External Regulation (compliance, external rewards and punishments); and amotivation (non-intentional, non-valuing, incompetence, lack of control). The empirical literature contains few studies that assess extrinsic motivation of the integrated regulation type in adolescents; this regulation is seen more often in adults, possibly because of an underdeveloped sense of the self in adolescents 8,9 . Self-determined forms of extrinsic motivation (identified and integrated regulation) have been combined with intrinsic regulation to form autonomous motivation 10 . Autonomous motivation occurs when people feel identified with the value of the activities and have integrated these internalizations into their own sense of self 11 . Autonomous motivation is expected to lead to many positive outcomes such as long-term persistence, healthier behavior, and more effective performance 11 . In contrast, controlled motivation consists of the non-self-determined types of extrinsic motivation, including introjected regulation (i.e., acting to avoid guilt or gain pride) and external regulation (i.e., acting to satisfy an external contingency) 7 .
Studies based on the Self-Determination Theory conducted in educational environments suggest that teacher motivation can influence student motivation through the creation of an optimally motivating learning environment, which increases class attendance, concentration, and the effort to perform the physical education activities 12,13 . Furthermore, teachers who teach in a way that increases self-determined motivation can increase opportunities for students to be motivated 14,15 . Also, some studies demonstrated that autonomous motivation is associated with higher levels of self-reported physical activity, both during 6 and outside the PE class 16 .
Interventions that use Self-Determination Theory have been implemented in schools with a variety of objectives, but most have focused on increasing physical activity 17,18 . Intervention studies focusing on the motivation of physical education teachers and students, especially adolescents, are scarce. Therefore, based on the principles of the Self-Determination Theory, the present study aims to analyze the effects of a teacher training intervention on teachers' and students' motivation in physical education class.
Study design and participants
This quasi-experimental study was conducted in Recife, located in the Northeast region of Brazil. The flow chart in Figure 1, as recommended by the TREND statement 19 , describes the different phases and design of the study. Schools had to meet all of the following inclusion criteria: having full-time physical education teachers and an appropriate environment and materials to conduct physical education classes. Six public high schools met the eligibility criteria and were invited to participate in the study. Four indicated agreement and were accepted into the study. Four teachers (one at each school) and 611 students (all classes) were assessed at baseline at the beginning of the school year (February 2012). The post-test was completed following the intervention (June 2012). The research protocol was approved by the Human Research Ethics Committee of the Cancer Hospital of Pernambuco under protocol number 33/2011 and CAAE 0027.0.447.000-11. The teachers and students signed an informed consent form before joining the study. Students aged less than 18 years joined the study after their parents or guardians signed an informed consent form.
Intervention
Four physical education teachers (one at each school) received the intervention, which was designed to offer teacher training. All classes at each school were taught by a single teacher. The principal researcher drew up a handbook focused on the importance of the teacher in the teaching process and on how to organize PE activities based on content selection and a methodology following the principles of the Self-Determination Theory (basic psychological needs). The researcher is an expert physical education teacher with experience in leading trainings for physical education teachers. The teachers were provided with this handbook before the training workshop. Four individual training workshops (March to May) were conducted in each school. These trainings were delivered by a member of the research team. Each session lasted approximately 45 minutes: 10 minutes for discussing the handbook content, 15 minutes for the teachers to share their practices, and 20 minutes to plan the class.
The first session addressed the importance of physical activity, its health benefits, and the role of physical education in promoting physical activity. Additionally, we presented the results of studies on the reasons students skip physical education classes. The second session began with a presentation of the concepts of basic psychological needs. We also discussed the importance of the teacher in the teaching process to improve learning and increase student motivation. The characteristics and role of motivated teachers were also particularly emphasized. During the third session, we discussed with the teachers how to organize physical education based on content selection and teaching style. Proposals for student assessment during class were also presented. In the last session, we suggested content to help each teacher use various teaching strategies in their classes. The emphasis during training was to show the importance of physical education for the students' development and health, to value the teacher's role in the development of a quality class, and to stress the importance of interacting with the students. All teachers participated in all individual sessions.
The evaluation of the teachers' knowledge about the teaching of physical education to high school students, lesson planning, content, and teaching styles used in the classroom was obtained through a form (Appendix 1) with 15 open-ended questions (e.g., How often do you plan your classes? How do you teach? How do you evaluate your classes?). In addition, at least two lessons from each teacher were observed by the researchers before and after the intervention. The teachers' motivational profile at work was assessed by the Work Motivation Inventory. This scale was used to assess the teachers' motivation before and after the intervention. The scale was created by Blais et al. 21 and has acceptable validity and accuracy. This instrument contains 24 items subdivided into six motivational dimensions, each containing four questions: Intrinsic Motivation (IM), Extrinsic Motivation Integrated Regulation (EMInR), Extrinsic Motivation Identified Regulation (EMIdR), Extrinsic Motivation Introjected Regulation (EMIjR), Extrinsic Motivation External Regulation (EMER), and Amotivation (AMOT). The instrument has an initial question, "Why do you teach?", followed by 24 7-point Likert-type items: 1 - does not correspond in any way; 2 - corresponds very little; 3 - corresponds a little; 4 - corresponds moderately; 5 - corresponds well; 6 - corresponds very well; 7 - corresponds completely/exactly. Specifically, each subscale score was multiplied by an assigned weight according to its position on the self-determination continuum, and the product scores were then added together to form a self-determination score. Self-determination was scored with the following weights, as suggested by Taylor, Ntoumanis, and Standage 22 (2008): 3 (three) for Intrinsic Motivation, 2 (two) for Integrated Regulation, 1 (one) for Identified Regulation, -1 (negative one) for Introjected Regulation, -2 (negative two) for External Regulation, and -3 (negative three) for Amotivation. The scale was translated into Brazilian Portuguese and culturally adapted according to the standards proposed by Reichenheim and Moraes 23 .
The Perceived Locus of Causality Questionnaire, developed by Goudas, Biddle, and Fox 24 , assessed students' motivation to participate in physical education classes. This instrument was translated into Brazilian Portuguese and culturally adapted by Tenório et al. 25 . The scale is subdivided into five dimensions: intrinsic motivation, extrinsic motivation identified regulation, extrinsic motivation introjected regulation, extrinsic motivation external regulation, and amotivation (Cronbach's alpha: 0.71 to 0.79). Each dimension consists of four items, totaling 20. We calculated a self-determination score to reflect the students' self-determination, using the same process used to calculate teacher motivation. Items were rated on a 7-point Likert scale as follows: 1 - fully disagree; 2 - disagree very much; 3 - generally disagree; 4 - do not agree nor disagree; 5 - generally agree; 6 - agree very much; 7 - fully agree. The scale was scored as recommended by Vallerand 9 and Taylor and Ntoumanis 26 with the following weights: 2 (two) for intrinsic motivation; 1 (one) for extrinsic motivation identified regulation; -1 (negative one) for the average of extrinsic motivation introjected and external regulation; and -2 (negative two) for amotivation.
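The weighted scoring just described for teachers and students can be summarized in a short sketch; the subscale names and example values are illustrative only, and each input is the mean item score (1-7) of the corresponding subscale.

# Teacher index (Work Motivation Inventory): weights 3, 2, 1, -1, -2, -3
TEACHER_WEIGHTS = {"IM": 3, "EMInR": 2, "EMIdR": 1, "EMIjR": -1, "EMER": -2, "AMOT": -3}

def teacher_self_determination(subscale_means):
    return sum(TEACHER_WEIGHTS[k] * subscale_means[k] for k in TEACHER_WEIGHTS)

# Student index (Perceived Locus of Causality): weights 2, 1,
# -1 for the mean of introjected and external regulation, -2 for amotivation
def student_self_determination(im, identified, introjected, external, amotivation):
    return 2 * im + identified - (introjected + external) / 2 - 2 * amotivation

print(teacher_self_determination({"IM": 6, "EMInR": 5, "EMIdR": 5, "EMIjR": 3, "EMER": 2, "AMOT": 1}))
print(student_self_determination(im=5.5, identified=5.0, introjected=3.0, external=2.5, amotivation=1.5))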
Data Analysis
Data analysis was performed with the Statistical Package for the Social Sciences (SPSS) version 17.0. Paired t-tests compared the teachers' and students' motivation scale means before and after the intervention. Differences in students' self-determined motivation stratified by school were assessed by general linear model repeated measures ANOVA, and effect sizes (Cohen's d) were calculated. Effect sizes were interpreted as small (0.20 to 0.49), medium (0.50 to 0.79), and large (≥0.80) 27 . Linear mixed effects regression analysis stratified by sex was employed to assess the differences in students' motivation (self-determination continuum) before and after the intervention. Each model was adjusted for time, socioeconomic status, and age. The skewness and kurtosis values of the students' motivation constructs are available (Appendix 2). The significance level was set at 5% (p<0.05).
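As a minimal sketch of the pre/post comparison, the snippet below runs a paired t-test and computes Cohen's d for paired samples on invented scores; note that d can also be defined using the pooled pre/post standard deviation, so the convention here is only one common choice.

import numpy as np
from scipy.stats import ttest_rel

pre = np.array([6.2, 7.9, 9.4, 10.7])     # illustrative scores before the intervention
post = np.array([8.1, 10.3, 10.9, 12.6])  # illustrative scores after the intervention

t_stat, p_value = ttest_rel(post, pre)
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)  # effect size based on the paired differences
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")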
Results
All teachers (3 males and 1 female) had a specialization degree. The overall students' mean age was 16.39 (1.15) years, and 56.4% were females. Table 1 shows the socioeconomic and demographic characteristics of the teachers and students by school.
Knowledge about teaching physical education to high school students

The baseline data indicate that all teachers were planning all classes only once a year. The PE class content consisted essentially of sports, and the teachers did not provide much choice or many opportunities for students' initiatives. After the intervention, the teachers changed the class contents: all teachers began including general exercises and games. Teachers from Schools 3 and 4 also included dance classes. The teachers also tried to improve their relationship with the students by instructing them more carefully during the class and offering them feedback with regard to their performance.
Motivation
Table 2 shows the mean self-determined motivation score, overall and by teacher, before and after the intervention. The teachers' combined score increased by 2.09 points after the intervention. Individually, teachers 2 and 4 presented the highest increases in motivation score. Table 3 shows the students' motivation by school before and after the intervention. The linear mixed effects regression analysis was performed stratified by sex (Tables 4 and 5). Significant group and group-by-time interactions were found. For boys, Intrinsic Motivation, Extrinsic Motivation Identified Regulation, and Extrinsic Motivation External Regulation scores were significantly higher over time. For girls, results indicate all scores were higher over time except for Extrinsic Motivation External Regulation and Amotivation. Also, Intrinsic Motivation significantly declined with age for boys and girls (i.e., time-based differences), as did Extrinsic Motivation Introjected Regulation for girls.
Discussion
This study assessed the teachers' and students' motivation in PE classes pre and post intervention. A significant increase in teachers' and students' motivation scores was observed after the intervention. We also found a statistically significant main effect of time on Extrinsic Motivation Identified Regulation and Self-determined Motivation. There were significant time-by-group interactions in Extrinsic Motivation Identified Regulation, Extrinsic Motivation External Regulation, Amotivation, and Self-determined Motivation.
Few studies have assessed the motivation of PE teachers at work 28 . A study conducted with 204 United Kingdom physical education teachers that also used the Work Motivation Inventory found a mean work motivation score of 8.62 22 , very similar to the present score before the intervention. Among the 4 teachers, we observed a higher mean self-determined motivation score for the two older teachers (ages 46 and 51), who had graduated longer ago and had more work experience, than for the two younger teachers (ages 32 and 34), a trend that continued after the intervention. A possible explanation for this finding is that experience acquired during class and perceived challenges in conducting activities may encourage teachers to adopt and adapt teaching strategies, possibly increasing their motivation.
All teachers increased their self-determined motivation score after the intervention. This change could be associated with knowledge acquired during training and with awareness of their competence and autonomy to plan their classes, factors that affect motivation. The importance of encouraging teacher motivation is recognized because motivation is related to teaching methods 29 .
The results of studies conducted in schools using the Self-Determination Theory indicate that motivated teachers are more open to changing methods and teaching contents and that this can influence student motivation and participation in class 14,30 . This premise was realized during an intervention in France, with three PE teachers and 185 students to test the effects of a training program based on motivation and teaching style. The teachers managed to improve their teaching style, and the students were receptive to these changes, becoming more satisfied, motivated, and self-determined, and participating more in class 15 .
The teacher training was conducted individually in our intervention program. As a result, it was possible to deepen the discussion of the contents of the course materials and of the difficulties faced every day while teaching PE. Strategies such as collective discussion and study groups, in both in-person and online learning formats, are suggested when it comes to applying the same training to large groups of teachers.
During the training, the teachers stated that the meetings had helped in bringing them up to date with specific knowledge regarding the teaching of physical education and in exchanging experiences. They also said that practices like this were either unusual or infrequent at the school. This information has enabled us to realize the importance of recognizing and valuing teachers' input as a way to make them feel an integral part of the school and, as a result, more motivated to work.
Regarding students' motivation, we found different effect sizes by school. The students from Schools 3 and 4 had the largest changes in all motivation scores. We observed an improvement in autonomous motivation (self-determined motivation, extrinsic motivation identified regulation) and a decrease in controlled motivation (extrinsic motivation external regulation and amotivation). One explanation for this increase is that changes in teacher motivation may have had a positive impact on students' motivation, mainly in schools with the lowest motivation scores. The motivation of students from School 2 did not change significantly after the intervention, and this school showed the smallest intervention effect.
We also analyzed the students' motivation stratified by sex and found some differences between girls and boys. For boys, Intrinsic Motivation, Extrinsic Motivation Identified Regulation, and Extrinsic Motivation External Regulation scores were statistically significantly higher over time. For girls, results indicate all scores were higher over time except for Extrinsic Motivation External Regulation and Amotivation. Egli et al. 31 have shown differences in motivational regulations by sex: males tended to be more motivated by intrinsic factors, whereas females were more motivated by extrinsic factors. However, a study with British secondary school students found no significant difference in any of the motivational regulations by sex 32 .
We observed that the traditional sports-based curriculum may have changed, as the teachers included other content such as dance, games, and exercise. The girls may have gained more self-competence, engagement, and motivation in physical education class. However, for boys the reverse was true: the lack of competition and the different content could have demotivated them. The challenge is how to structure class activities in order to engage and motivate all students. One strategy would be providing some level of autonomy, in which students can make choices related to the physical education class content. Providing a wider selection of content increases the likelihood that students will find something they like that will keep them physically engaged and motivated.
Some studies also used the Perceived Locus of Causality Questionnaire to assess self-determined motivation in students but found different motivation and motivation dimension scores. An assessment of 787 British students found a mean general self-determination score of 7.51, higher than the present score 26 . A study that compared self-determined motivation dimension scores of United Kingdom and Hong Kong students found higher scores in the latter 33 , but still lower than those of the present study. On the other hand, a study conducted in northwest England with 428 students found mean scores similar to the present scores 5 . The disparities may be explained by the cultural and environmental differences between Brazil and the developed countries. Additionally, the organization and structure of PE classes vary by country.
Identification and assessment of students' motivation dimensions are important because understanding the direction of motivational behavior will help to implement strategies that increase motivation. More motivated students learn more and use the teachings throughout life. Hence, motivated students in physical education classes tend to participate more in physical activities away from the school environment, contributing to a healthier life style 34 . One should aim to develop intrinsic motivation since this is one of the most important predictors of the intention to practice physical activities and sports; it is also associated with better learning and socialization 3 .
Few studies assessed the relationship between physical education teacher and student motivation. Taylor and Ntoumanis 26 did not find a significant relationship between these two variables in a study of 51 physical education teachers and their 787 British students. The authors blamed the absence of association between teacher and student motivation on the small number of teachers in their sample and warned that the results should be interpreted with caution.
The present results indicate the importance of providing teacher training courses regularly, focusing on teaching styles and allowing teachers to use this knowledge to feel more competent, autonomous, motivated, and ready to create conditions that motivate their students. Other studies indicate the importance of PE teachers' professional development training in changing teachers' teaching behavior 35,36 . Van den Berghe et al. 37 show that teacher behavior related to support for students' basic psychological needs can influence student behavior in physical education classes.
Our intervention showed that the training was effective, since it brought about changes in the motivation levels of teachers and students, especially for girls and for students from Schools 3 and 4. We believe the training piqued the teachers' interest in the search for new knowledge and created an opportunity to share their practice. However, this strategy was not enough to improve intrinsic motivation during the observed period. An intrinsically motivated individual endorses an activity because it is interesting, challenging, and enjoyable, and is more likely to be autonomously motivated. Previous research in the context of PE has shown that autonomous motivation is associated with a number of positive outcomes, including increased engagement 38 , concentration, and better grades 39 .
Our results suggest that the training intervention increased teacher motivation and, apparently, student motivation as well. However, the results should be interpreted with caution because of some study limitations, such as the short intervention period, the small number of teachers and schools, and the absence of a control group, which can jeopardize external validity. Also, we suggest using the Multidimensional Work Motivation Scale 40 to measure teacher motivation in future studies, because its psychometric properties have been tested and it has been adapted to the Brazilian population.
Conclusion
These results suggest that teacher training can lead to some improvements in teacher and student motivation. It is important to promote continued teacher training to improve or update knowledge related to lesson planning, class organization, how activities are developed in class, and the clarity and quality of teacher feedback, among other aspects. Therefore, if the teaching environment is well structured, students and teachers can benefit from these investments. Consequently, more motivated students could participate more in physical education classes and physical activity. It would be interesting for future intervention studies to evaluate over a longer period and to assess other variables related to motivation (e.g., psychological need satisfaction) as well as physical activity.
|
2021-05-17T00:02:52.153Z
|
2020-11-11T00:00:00.000
|
{
"year": 2020,
"sha1": "25525612e77f30ba9bb89a569ea684967b6acfb7",
"oa_license": "CCBYNC",
"oa_url": "https://www.periodicos.uem.br/ojs/index.php/RevEducFis/article/download/47914/751375150986",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "85095db23560a7caf99fde7f01a4a1e0f4b8edb6",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
212725319
|
pes2o/s2orc
|
v3-fos-license
|
Enhanced energy harvesting using time-delayed feedback control from random rotational environment
Motivated by improving the performance of a bi-stable vibration energy harvester (VEH) from the viewpoint of vibration control, time-delayed feedback control of displacement and velocity is constructively introduced into an electromechanical coupled VEH mounted on a rotational automobile tire, which is subject to colored noise and a periodic excitation. Using the improved stochastic averaging procedure based on energy-dependent frequency, the expressions of the stationary probability density (SPD) and signal-to-noise ratio (SNR) are obtained analytically. Then, the effect of time-delayed feedback control on the stationary response and stochastic resonance (SR) of the delay-controlled VEH is explored in detail theoretically. Results show that both noise-induced SR and delay-induced SR can occur. Time delay is able not only to enhance the SR behavior but also to weaken it. Furthermore, a larger negative feedback gain of displacement and a larger positive feedback gain of velocity are more beneficial for the VEH. An interesting finding is that the optimal combination of time delays in maximizing the harvested performance, such as the harvested power, the output RMS voltage, and the power conversion efficiency, is almost perfectly consistent with that in maximizing the SNR. Compared with the uncontrolled VEH, the delay-controlled VEH can achieve a desirable optimization of harvested energy by choosing the appropriate combination of time delays and feedback gains.
Introduction
A vibration energy harvester (VEH) can utilize ambient vibration as an energy source to harvest electrical power, relying on a piezoelectric cantilever and the direct piezoelectric effect for vibration-to-electricity conversion. Most current solutions for enhancing harvested energy focus on resonance behavior, in which the vibration frequency coincides with the natural frequency of the VEH [1][2][3][4]. Traditionally, the linear VEH is the common design type for simplicity [5]. Nevertheless, the linear design suffers from a critical disadvantage in terms of frequency bandwidth, and ambient vibration sources mostly have a wide bandwidth. Consequently, the narrow bandwidth of the linear design makes the linear VEH insufficient to harvest energy from ambient vibration with a wider spectrum. A solution to these difficulties lies in the nonlinear bi-stable design for VEHs, which has been widely applied and can enhance efficiency by strengthening the interaction between the ambient excitation and the VEH response [6][7][8][9][10]. For instance, He et al. [10] demonstrated that the nonlinear bi-stable VEH can harvest electric power efficiently through the effect of nonlinearity. Although nonlinearity has proved to be a better solution for improving the performance of VEHs, it is still insufficient for many applications with higher electrical consumption requirements.
One promising technique for enhancing harvested energy is the use of a control method, which can improve the interaction between the ambient excitation and the piezoelectric cantilever vibration [11,12]. From the viewpoint of vibration control, time-delayed feedback control can be adopted to adjust nonlinear vibration and optimize system performance [13][14][15][16][17]. Time delay commonly exists in the feedback process of a controlled electromechanical system, and it may have a significant impact on the system response. Jin et al. [14,15] demonstrated, in the study of a Duffing oscillator with delayed state feedback, that an appropriate choice of time delay and feedback gain can enhance the control performance of dynamical systems. Yang et al. [16] studied an SD oscillator with stiffness nonlinearities under time-delayed control and found that time delay can not only enhance the control performance but also suppress the vibration. Furthermore, Yang et al. [17] also found, in a novel hybrid energy harvester with time delay, that time-delayed feedback control can enhance the stochastic resonance phenomenon, leading to a large response and a high output power. Owing to these advantages, time-delayed feedback control has become a very popular solution in the field of VEHs for improving energy harvesting effectiveness [17][18][19]. Therefore, the time-delayed feedback control of displacement and velocity is considered in this paper, and its efficiency is explored in depth for an electromechanical coupled bi-stable VEH mounted on a rotational automobile tire.
After being mounted on a rotational automobile tire, the VEH can be excited autonomously by the periodic force caused by the gravity of the mass and is also inevitably disturbed by the random road excitation [20][21][22]. In this study, the random road excitation (i.e., road irregularity) is modeled as colored noise and generated by passing a white noise through a linear first-order filter [22][23][24]. To the authors' knowledge, the stochastic dynamics of a bi-stable VEH with time-delayed feedback control driven by colored noise and a periodic excitation has received little attention.
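A minimal sketch of generating such colored noise numerically is shown below: Gaussian white noise is passed through a linear first-order (Ornstein-Uhlenbeck) filter using an Euler-Maruyama step. The scaling of the noise intensity D and correlation time used here is one common convention and is an assumption rather than the paper's exact definition.

import numpy as np

def colored_noise(n_steps, dt, D, tau_c, seed=0):
    # First-order filter: d(xi) = -(xi / tau_c) dt + (sqrt(2 D) / tau_c) dW
    rng = np.random.default_rng(seed)
    xi = np.zeros(n_steps)
    for i in range(1, n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        xi[i] = xi[i - 1] - (xi[i - 1] / tau_c) * dt + (np.sqrt(2.0 * D) / tau_c) * dW
    return xi

noise = colored_noise(n_steps=100_000, dt=1e-3, D=0.005, tau_c=0.1)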
Despite the challenges associated with the strong nonlinearity and the complexity of the noise-induced stochastic dynamical behavior in bi-stable systems, it has been demonstrated [22,25,26] that the improved stochastic averaging procedure based on energy-dependent frequency is efficient for describing the Brownian motion in the two potential wells. There are three different periodic motions in the two potential wells according to the energy level, i.e., vibrating in the right-side potential well, vibrating in the left-side potential well, or jumping from one well to the other. Zhu et al. [26] first proposed the improved stochastic averaging method adopting the variable natural frequency and period of the system corresponding to the different energy levels. Later, the improved stochastic averaging procedure based on energy-dependent frequency became widely used in the study of nonlinear systems, covering mono- and multi-stable systems [22,27,28].
The purpose of this paper is to study the stochastic dynamics of an electromechanical coupled bi-stable VEH, which is mounted on a rotational automobile tire and subjected to random road excitation. The contributions of time-delayed feedback control to enhancing the energy harvesting are discussed. The paper is organized as follows: In Sect. 2, the mathematical model of the delay-controlled electromechanical VEH, driven by colored noise and periodic excitation, is established, and the equivalent uncoupled system is derived by introducing the harmonic transformation and integrating the voltage equation. In Sect. 3, the joint SPD, the effective generalized potential function, and the mean output power are obtained analytically by applying the improved stochastic averaging procedure. Then, the effects of time delay on the stochastic stationary response of the delay-controlled VEH are analyzed. Meanwhile, Monte Carlo simulation (MCS) results are also given to verify the validity of the proposed theoretical method. In Sect. 4, the analytical expression of the SNR is obtained, and then the effect of time delay on the delay-controlled SR is explored theoretically. Meanwhile, the output RMS voltage and the power conversion efficiency affected by time delay are also analyzed in detail to evaluate the delay-controlled optimization induced by SR. Finally, some specific conclusions are drawn in Sect. 5.

The model of a delay-controlled electromechanical coupled VEH mounted on a rotational automobile tire is given in Fig. 1; it extends the designs of Refs. [21,22] by considering time-delayed feedback control. The VEH consists of a mechanical oscillator coupled to an electrical circuit, for vibration-to-electricity conversion by the piezoelectric mechanism, and a time-delayed feedback controller. In the case of the rotational automotive tire, the system can be autonomously driven by the periodic excitation, caused by the gravity of mass M, and by the road irregularity, which is considered as colored noise here. The controller is designed as a time-delayed feedback control of displacement and velocity. Thus, the dimensionless coupled system with time-delayed feedback control can be expressed as Eq. (1), in which D and c denote the noise intensity and the correlation time of the colored noise, respectively. The symmetric quartic potential function U0(X) contains the dimensionless linear and cubic stiffness coefficients (denoted by subscripts 1 and 3, respectively) and has two stable equilibria and one unstable saddle point.
By introducing the generalized harmonic function, the system displacement and the system velocity can be expressed approximately in harmonic form. For the purpose of uncoupling Eq. (1), the dimensionless voltage equation can be integrated and then transformed; the exponentially decaying transient term can be neglected because only the long-time stationary response of the system is of concern.
After the harmonic transformation, a derivation similar to that of the preceding equation can be carried out.
Consequently, by substituting Eq. (8) into Eq. (7) and neglecting the exponential decay term, the voltage can be obtained in closed form. Thus, the equivalent uncoupled system can be obtained by substituting Eqs. (9) and (6). From Eq. (10), one can get the quartic potential function and the total energy function. The energy-dependent frequency ω(H) of Eq. (11) can be obtained by applying the iteration method; a similar calculation process can be found in Ref. [22]. Figure 2 shows the variation of the potential U(X) in the absence of harmonic excitation, from Eq. (12), and of the potential well depth ΔU for different values of the control parameters (the feedback gains and the time delays τ1 and τ2). It can be seen that U(X) is a double-well symmetric potential function of the displacement X (see Figs. 2(a) and 2(d)) and varies periodically with increasing τ2, as shown in Fig. 2(a). For the time-delayed velocity feedback control, as shown in Fig. 2(b), with the increase of the velocity feedback gain the well depth ΔU increases in some ranges of τ2 while it decreases in other ranges. This phenomenon can also be found in Fig. 2(c) for the time-delayed displacement feedback control. These results imply that the effects of the feedback gains on the potential function mainly depend on the chosen values of the time delays. For fixed τ1 = 1.3 and τ2 = 0.5, we find that the feedback gains of displacement and velocity can change not only the well depth but also the well spacing, as depicted in Fig. 2(d).
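For intuition about the well depth discussed above, the following sketch computes the barrier height of the uncontrolled symmetric quartic potential, assumed here to have the standard bi-stable form U0(X) = -η1 X²/2 + η3 X⁴/4 (the delay-dependent corrections of the effective potential are omitted); the parameter values are arbitrary.

import numpy as np

def well_depth(eta1, eta3):
    # Stable equilibria at X = +/- sqrt(eta1/eta3); saddle at X = 0
    x_min = np.sqrt(eta1 / eta3)
    u_min = -eta1 * x_min**2 / 2 + eta3 * x_min**4 / 4
    return -u_min  # barrier height relative to U0(0) = 0, equal to eta1**2 / (4 * eta3)

print(well_depth(eta1=1.0, eta3=1.0))  # 0.25 for these arbitrary values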
Delay-controlled stochastic stationary response
The equivalent uncoupled system (10) can be replaced by two first-order differential equations for X(t) and H(t), in which the angular brackets with subscript t denote time averaging. Note that, owing to the symmetry of the right-side and left-side potential wells in the absence of harmonic excitation, the corresponding averaged quantities for the two wells coincide.
Accordingly, we can obtain the SPD of the total energy from the Itô equation (16), as given in Eq. (20), where N0 is a normalization constant.
One can then obtain the joint SPD of the equivalent uncoupled system (10), given in Eq. (22). From the SPD in Eq. (22), the effective generalized potential function of the system displacement and velocity can be derived. According to Eqs. (9) and (22), one can obtain the mean-square voltage and then the mean output power, Eq. (25). According to the analysis of the effects of time delay on the potential function (12) in the absence of harmonic excitation, the time-delayed feedback control may be of great significance for the performance optimization of the energy harvester. Thus, the effects of time-delayed feedback control on the output power of the VEH (1) in the absence of harmonic excitation are analyzed with the main system parameters fixed. First of all, the theoretical results obtained by the stochastic averaging method are compared with the numerical results obtained by MCS of the original system Eq. (1) in order to verify the theoretical results (22) and (25), as displayed in Fig. 3. The joint SPD in Fig. 3(a) and the mean output power E[P] denoted by the solid line in Fig. 3(d) are the analytical results determined by Eqs. (22) and (25), respectively. The corresponding numerical results are shown in Figs. 3(b) and 3(d) with circle symbols. The analytical and numerical results are found to be in very good agreement, and the comparison of the controlled and uncontrolled cases is presented in Fig. 3(d).
Obviously, the output power in the controlled case is superior to that in the uncontrolled case. This indicates that the harvested power of the delay-controlled energy harvester can be enhanced by choosing an appropriate combination of time delays and feedback gains. The effects of the time delays and feedback gains on the output power are further illustrated in Fig. 4 (see Figs. 4(a) and 4(b)). Meanwhile, for the controlled system with displacement feedback, a larger negative displacement feedback gain yields markedly more output power E[P] around τ1 = 0.7.
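The Monte Carlo verification mentioned above can be sketched as follows. The snippet integrates a generic bi-stable Duffing-type oscillator with delayed displacement and velocity feedback, harmonic forcing, and colored noise by an Euler-Maruyama scheme with a delay buffer; it is not the paper's exact Eq. (1) (the piezoelectric coupling is omitted), and every parameter value is an assumption chosen only to demonstrate the scheme.

import numpy as np

def simulate(T=200.0, dt=1e-3, eta1=1.0, eta3=1.0, damping=0.1,
             gain_x=-0.01, gain_v=0.01, tau1=0.6, tau2=2.5,
             f0=0.1, omega=1.0, D=0.005, tau_c=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    d1, d2 = int(tau1 / dt), int(tau2 / dt)        # delays measured in time steps
    x, v = np.zeros(n), np.zeros(n)
    x[0], xi = 1.0, 0.0                            # start in one potential well
    for i in range(n - 1):
        x_d = x[i - d1] if i >= d1 else x[0]       # delayed displacement
        v_d = v[i - d2] if i >= d2 else 0.0        # delayed velocity
        accel = (eta1 * x[i] - eta3 * x[i] ** 3    # bi-stable restoring force
                 - damping * v[i]
                 + gain_x * x_d + gain_v * v_d     # delayed feedback control
                 + f0 * np.cos(omega * i * dt) + xi)
        x[i + 1] = x[i] + v[i] * dt
        v[i + 1] = v[i] + accel * dt
        dW = rng.normal(0.0, np.sqrt(dt))
        xi += -(xi / tau_c) * dt + (np.sqrt(2.0 * D) / tau_c) * dW
    return x, v

x, v = simulate()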
Delay-controlled stochastic resonance
From the above analysis, the time delay plays a constructive role in enhancing the output power of the controlled system without the harmonic excitation. For the purpose of harvesting electric energy from the external environment more effectively, it is attractive to install the delay-controlled energy harvester on a rotational automobile tire so that it can be excited autonomously by the periodic excitation caused by the gravity of the mass. Experiments [20,21] have validated this design, showing that an energy harvester can effectively scavenge much more electric energy from the rotational environment by using the SR phenomenon induced by the rotational automobile tire. Accordingly, in this section we mainly focus on the effect of time delay on the delay-controlled VEH under the rotational environment by observing the output SNR, which characterizes the SR phenomenon quantitatively. Meanwhile, the output RMS (root mean square) voltage and the power conversion efficiency affected by time delay are also analyzed to evaluate the delay-controlled optimization; the SNR is defined in Eq. (26). In order to obtain the analytical expression of the SNR, the Langevin equation of the equivalent uncoupled system (10) can be expressed as a two-variable system, where the subscripts s+, s-, and u in Eq. (28) denote the two stable states and the unstable state, respectively. Subsequently, by using the definition of the mean first-passage time and the steepest descent method, the exact expression of the transition rate R out of the stable state Xs can be obtained. Expanding the sinusoidal forcing term, preserving the first nontrivial order in Eq. (31), and then substituting Eq. (35) into Eq. (26), the analytical expression of the SNR of the delay-controlled system can finally be obtained. In Fig. 5(a), as the corresponding feedback gain increases, the peak value of the SNR changes slightly while the position of the peak gradually moves toward larger noise intensity. Conversely, the position of the SNR peak moves toward smaller noise intensity with increasing feedback gain for fixed τ2 = 0.5 (see Fig. 5(b)). In these two subplots, if the noise intensity D of the weak noise is fixed at 0.005 (displayed with the red lines), we find that the SNR increases monotonically with a decreasing displacement feedback gain and an increasing velocity feedback gain. These results once again indicate that a negative displacement feedback gain and a positive velocity feedback gain are more beneficial for the SR behavior of the controlled system in the case of weak noise. In Figs. 5(c) and 5(d), for fixed D = 0.005 (see the red lines), the output SNR exhibits non-monotonic behavior with the variation of the time delays τ1 and τ2, which indicates that τ1 and τ2 are able not only to enhance the SR behavior but also to weaken it for the delay-controlled system.

Subsequently, the joint effect of τ1 and τ2 on the output SNR is presented in Fig. 5(e). We find that there exist multiple peaks in the output SNR, which indicates that the SR behavior of the controlled system can be optimized by choosing a suitable combination of time delays (τ1, τ2), e.g., (τ1, τ2) = (0.6, 2.5). On the contrary, if the time delays are chosen at a minimum of the SNR, they can also seriously weaken the SR behavior. An interesting finding is that the optimal combination of (τ1, τ2) in maximizing the SNR is almost perfectly consistent with that in maximizing the mean output power.
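For a numerical counterpart to the analytical SNR, one common discrete-spectrum estimate takes the power of the simulated response at the driving frequency and divides it by the local background level of the spectrum; the sketch below uses this convention, which may differ from the paper's definition in Eq. (26).

import numpy as np
from scipy.signal import periodogram

def estimate_snr(x, dt, drive_freq_hz, half_window=5):
    freqs, pxx = periodogram(x, fs=1.0 / dt)
    k = int(np.argmin(np.abs(freqs - drive_freq_hz)))         # bin closest to the drive frequency
    lo, hi = max(k - half_window, 1), k + half_window + 1
    background = np.median(np.r_[pxx[lo:k], pxx[k + 1:hi]])   # local noise floor around the peak
    return pxx[k] / background

# Example, using the record x from the simulation sketch above
# (drive angular frequency omega = 1.0 rad/s corresponds to 1/(2*pi) Hz):
# snr = estimate_snr(x, dt=1e-3, drive_freq_hz=1.0 / (2.0 * np.pi))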
Sequentially, the joint effect of 1 and 2 on the output SNR is present in Fig. 5(e). We find that there exist multi peaks in the output SNR, which indicates that the SR behavior of the controlled system can be optimized by choosing the suitable combination of time delay 12 ( , ) , e.g. 12 ( , ) (0.6, 2.5) On the contrary, if chosen at the minimum of SNR, time delay can also weaken the SR behavior seriously. Interesting finding is that the optimal combination of 12 ( , ) in maximizing SNR is almost perfectly consistent with that in maximizing the mean output Furthermore, according to the above study about the effect of time delay on the output SNR, the comparisons 13 of delay-controlled case and uncontrolled case for the VEH in the output RMS voltage rms V and the power conversion efficiency % are also presented in Fig. 6. For the delay-controlled case, an optimal combination of 12 ( , ) is chosen as (0.6, 2.5) , and the feedback gains ( , ) are set as ( 0.01, 0.01) − . Under the time-delayed feedback control of displacement and velocity, as shown in Figs. 6(a) and 6(b), the controlled system exhibits better performance in both rms V and % compared with the uncontrolled case, especially for the weak noise.
Figures 6(c) and 6(d) show the comparisons between the fully controlled case and the case controlled by displacement feedback only. As shown in Fig. 6(c), the output SNRs in both cases exhibit multiple SR peaks as the time delay τ1 increases, which indicates that delay-induced SR occurs. In addition, the SNR curves keep the same non-monotonic trend, i.e., the positions of τ1 that maximize the SNR are the same. A corresponding behavior can be seen in Fig. 6(d).
Conclusions
In this paper, with the purpose of optimizing the energy harvesting performance of a VEH from the viewpoint of vibration control, time-delayed feedback control of displacement and velocity is constructively introduced into an electromechanical coupled VEH mounted on a rotational automobile tire, which is driven by colored noise and periodic excitation. By applying the improved stochastic averaging procedure based on the energy-dependent frequency, the effects of time-delayed feedback control on the stochastic stationary response and the SR of the bi-stable VEH are discussed theoretically. Meanwhile, the output RMS voltage and the power conversion efficiency affected by the time delays are also analyzed in detail to evaluate the delay-controlled optimization. The main conclusions are as follows:

1) The theoretical results for the SPD and SNR obtained by the improved stochastic averaging procedure are well verified by the numerical results through MCS.

2) Both the mean output power E[P] and the SR behavior of the delay-controlled VEH can be optimized by choosing the appropriate combination of time delays and feedback gains. A larger negative feedback gain of displacement and a larger positive feedback gain of velocity are more beneficial to the harvested power and the SR behavior of the controlled VEH. The time delays τ1 and τ2 can induce non-monotonic behavior and multiple peaks in the mean output power and the SNR. An interesting finding is that the optimal combination of time delays in maximizing the SNR is almost perfectly consistent with that in maximizing the mean output power.

3) Both noise-induced SR and delay-induced SR occur. Time delay is able not only to enhance the SR behavior but also to weaken it for the delay-controlled VEH. Moreover, the output RMS voltage Vrms and the power conversion efficiency reach their local maxima at the same time delays that maximize the SNR, so that they can be optimized by the delay-induced SR behavior.

4) Compared with the uncontrolled VEH, the controlled VEH exhibits better performance in the output RMS voltage and the power conversion efficiency, especially for weak noise. All comparisons indicate that the delay-controlled VEH can achieve a desirable optimization in harvesting energy by choosing the appropriate combination of time delays under the rotational environment, which is of great practical significance for optimizing the performance of VEHs.
Comparison of macroscopic examination, routine Gram stains, and routine subcultures in the initial detection of positive blood cultures.
Blood was cultured in two vacuum bottles containing Columbia broth with sodium polyanethol sulfonate and CO2. Filtered air was admitted to one bottle, and the bottles were incubated at 35 C until growth was detected or for a maximum of 7 days. Bottles were examined daily for macroscopic growth. Gram stains were made routinely on the 1st, 4th, and 7th days, and samples were routinely subcultured to sheep blood agar (incubated in GasPak jar) and chocolate agar (incubated in CO2) on the 1st and 4th days of incubation. Of 1,127 positive blood cultures, 65% were first detected by macroscopic examination, 23% were first detected by Gram stain, and 12% were first detected only by subculture.
There are many methods recommended for the routine culture and examination of blood samples. There is agreement that blood cultures should be observed at least daily for macroscopic growth, but suggestions as to the need for routine Gram stains and blind subcultures vary from author to author. We are not aware of any published report comparing the efficacy of these procedures in the initial detection of positive blood cultures. Therefore, a comparative study was carried out to assess the value of the three approaches to detection of initial microbial growth in blood cultures.
MATERIALS AND METHODS

Blood cultures were obtained from patients in the University of Minnesota hospitals (approximately 800 beds) and were processed in the Diagnostic Microbiology Laboratory, which receives about 700 blood cultures per month.
Blood was cultured in two vacuum bottles containing 100 ml of Columbia broth with 0.03% sodium polyanethol sulfonate and 10% CO2 (B-D Division of BioQuest). The blood was drawn by physicians, and the amount inoculated into each bottle varied from a few drops to approximately 10 ml. When the bottles were received in the laboratory, filtered air was admitted to one bottle by using a blood collection set (B-D Division of BioQuest); the collection set was removed from the bottle before incubation. The other bottle was considered to be anaerobic. Penicillinase (Difco) was added when indicated. The blood cultures were incubated at 35 C for 7 days or until growth was noted. Cultures from patients with suspected bacterial endocarditis or brucellosis were held for 2 to 3 weeks.
Cultures were examined macroscopically for growth in the morning and afternoon on the 1st day of incubation and in the morning of each day thereafter. Cultures that appeared positive were Gram stained immediately, and subcultures were made according to the types of organisms seen.
Gram stains were performed on all bottles that appeared macroscopically negative on the 1st, 4th, and 7th days of incubation. Blind subcultures were also made on the 1st and 4th days to a sheep blood agar plate (incubated anaerobically) and to a chocolate agar plate (incubated in CO2). Subculture plates were held for 2 days before they were discarded as negative.
Each procedure was performed in the routine laboratory by a total of 13 microbiology technologists on a rotation basis.
RESULTS
The method of first detection of growth is shown in Table 1. There were a total of 7,357 blood cultures examined over a period of 10.5 months, and 1,127 were positive. Of these, 734 (65%) were first detected by macroscopic examination. Table 2 shows the day on which cultures were noted to be positive by the three methods of detection. Forty-seven percent of those first detected by macroscopic examination were found on the 1st day. Of those first detected by Gram stain, 49% were found on the 1st day, 28% were found on the 4th day, and 23% were found on the 7th day. Of the positive cultures first detected by subculture, 76% were detected on the 1st day and 24% were detected on the 4th day. One hundred twenty-five positive cultures were not apparent macroscopically on the 1st day, and 106 positive cultures were not detected by Gram stain on the 1st day, nor were 33 positives detected by Gram stain on the 4th day.
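The tabulations behind Tables 1 and 2 amount to cross-counting each positive culture by its first-detection method and day. A minimal sketch of such a tally (the record layout and the example entries are invented for illustration and are not the study's data):

```python
from collections import Counter

# Each positive culture recorded as (first_detection_method, day_detected);
# placeholder entries only, not the study's data.
records = [("macroscopic", 1), ("gram_stain", 4), ("subculture", 1),
           ("macroscopic", 2), ("gram_stain", 1)]

by_method = Counter(method for method, _ in records)
by_method_day = Counter(records)

total = len(records)
for method, count in by_method.items():
    print(f"{method}: {count} ({100 * count / total:.0f}% of positives)")
for (method, day), count in sorted(by_method_day.items()):
    print(f"  {method}, day {day}: {count}")
```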
Of the 1,127 positive blood cultures, 467 (41.4%) were detected on the 1st day either macroscopically or by Gram stain.
The numbers and types of organisms isolated, along with the mean times for detection (by all methods), are shown in Table 3. Of all the organisms isolated, Haemophilus influenzae, H. parainfluenzae, Moraxella sp., and Neisseria gonorrhoeae were detected first only by subculture. These organisms were never detected first by macroscopic examination or Gram stain, although approximately one-half of the Haemophilus cultures appeared macroscopically positive subsequent to subculture.
Of the Pseudomonas aeruginosa isolated, only one-third were detected first macroscopically, one-third were detected first by Gram stain, and the remaining one-third were detected first only by subculture.
Anaerobic organisms were almost always detected either by macroscopic examination or Gram stain. Only four strains of Bacteroides were first detected on the anaerobic subculture plate. The organisms detected first by the 7th-day Gram stain included Propionibacterium acnes, Candida, Corynebacterium, Peptococcus, Pseudomonas, Staphylococcus epidermidis, and Torulopsis glabrata, although some strains of these bacteria were also detected by the other methods.
DISCUSSION

The data presented indicate that, for optimal speed in detection and identification of organisms from positive blood cultures, both routine Gram stains and blind subcultures should be performed in addition to daily visual inspection of cultures. If, in addition to macroscopic inspection, only Gram stains were done, there would have been a delay in 12% of the cultures. If only subcultures had been performed, 23% of the positive reports would have been delayed at least 1 day. One might make the point that results of the subcultures themselves were delayed by 1 day and that the culture in some cases may have been positive macroscopically the next day; however, even though this may be true, at the time of reading the subcultures a more definite identification of the organism could be given to the physician rather than just its Gram stain morphology. Subcultures were especially important in the more rapid detection of Haemophilus, since these organisms were all detected first only by this means. Both Gram stains and subcultures were also valuable in the more rapid detection of Pseudomonas, as two-thirds of those isolated were detected first only by Gram stain or subculture. Our experience with Pseudomonas bears out the study by Slotnick and Sacks (3), who stated that visible growth or Gram stains alone are not sufficient to detect the presence of Pseudomonas in blood culture media.
Although there is no question about the importance of a Gram stain to detect positive blood cultures on the 1st day, the value of Gram stains on the 4th day in relation to the amount of work involved and the clinical importance might be questioned. In this study, approximately 6% of the positives were first detected by Gram stains on the 4th day. Individual judgments would have to be made as to whether detection of the positive on the 4th day would be that much more important than detection by subculture the following day.
The blood cultures in this study were incubated for a maximum of 7 days, except in cases of suspected brucellosis or endocarditis. This incubation period was based on the results of previous unpublished studies in our laboratory which demonstrated the rarity of isolation of clinically significant organisms after 1 week of incubation. Effersoe (1) has also shown that incubation for longer than 7 days is not necessary, especially if "control" Gram stain and subcultures are performed.
It was not the intent of this study to assess the overall rapidity of organism detection. However, the information in Table 3 does allow for comparison with other recently published studies (2, 4) on this subject. On the basis of these comparisons, we feel that the spacing of the procedures evaluated in our study is appropriate and practical for the clinical laboratory.
Can Clouds replace Grids? Will Clouds replace Grids?
The world's largest scientific machine – comprising dual 27km circular proton accelerators cooled to 1.9 K and located some 100m underground – currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared "open" and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability – as seen by the experiments, as opposed to that measured by the official tools – still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently "Cloud Computing" – in terms of pay-per-use fabric provisioning – has emerged as a potentially viable alternative but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments – where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies that are required by the Computing Models of the experiments, approaches 1 Exabyte – we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues – such as those related to demanding database and data management needs – but also sociological aspects, which cannot be ignored, neither in terms of funding nor in the wider context of the essential but often overlooked role of science in society, education and economy.
1. Introduction
In order to process and analyze the data from the world's largest scientific machine, a worldwide grid service - the Worldwide LHC Computing Grid (LCG) [1] - has been established, building on two main production infrastructures: those of the Open Science Grid (OSG) [2] in the Americas, and the Enabling Grids for E-sciencE (EGEE) [3] Grid in Europe and elsewhere. The machine itself - the Large Hadron Collider (LHC) - is situated some 100m underground beneath the French-Swiss border near Geneva, Switzerland and supports four major collaborations and their associated detectors: ATLAS, CMS, ALICE and LHCb.
Even after several levels of reduction, some 15PB of data will be produced per year at rates to persistent storage of up to 1.5GB/s - the LHC itself having an expected operating lifetime of some 10 - 15 years. These data will be analyzed by scientists at close to two hundred and fifty institutes worldwide using the distributed services that form the Worldwide LHC Computing Grid (WLCG) [4][5][6]. Depending on the computing models of the various experiments, additional data copies are made at the various institutes, giving a total data sample well in excess of 500PB and possibly exceeding 1EB.
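A rough back-of-the-envelope check of these figures (the copy multiplier is inferred from the quoted totals rather than stated explicitly in the text):

$$15\ \mathrm{PB/yr} \times (10\text{--}15)\ \mathrm{yr} \approx 150\text{--}225\ \mathrm{PB}, \qquad (150\text{--}225)\ \mathrm{PB} \times (3\text{--}4)\ \text{copies} \approx 0.5\text{--}1\ \mathrm{EB},$$

consistent with the quoted total of well over 500PB and possibly 1EB.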
Figure 1 - First Beam Event Seen in the ATLAS Detector

Running a service where the user expectation is for support 24x7, with extremely rapid problem determination and resolution targets, is already a challenge. When this is extended to a large number of rather loosely coupled sites, the majority of which support multiple disciplines - often with conflicting requirements but always with local constraints - this becomes a major or even "grand" challenge. That this model works at the scale required by the LHC experiments - literally around the world and around the clock - is a valuable vindication of the Grid computing paradigm.
Figure 2 - Jobs per month by LHC virtual organisation

However, even after many years of preparation - including the use of well-proven techniques for the design, implementation, deployment and operation of reliable services - the operational costs are still too high to be sustained in the long term. This translates to significant user frustration and even disillusionment. On the positive side, however, the amount of application support that is required compares well with that of some alternate models, such as those based on supercomputers. The costs involved with such solutions are way beyond the means of the funding agencies involved, nor are they necessarily well adapted to the "embarrassingly parallel" nature of the types of data processing and analysis that typify the High Energy Physics (HEP) domain.
Figure 3 - The ATLAS Detector

This makes HEP an obvious test-case for cloud computing models and indeed a number of feasibility studies have already been performed. The purpose of this paper is to explore the potential use of clouds against a highly ambitious target: not simply whether it is possible on paper - or even in practice - to run applications that typify our environment, but whether it would be possible and affordable to deliver a level of service equivalent to - or even higher than - that available today using Grid solutions. In addition to analyzing the technical challenges involved, the "hidden benefits" of Grid computing, namely in terms of the positive feedback provided - both scientifically and culturally - to the local institutes and communities that provide resources to the Grid, and hence to their funding agencies who are thus hopefully motivated to continue or even increase their level of investment, are also compared. Finally, based on the wide experience gained by sharing Grid solutions with a large range of disciplines, we try to generalize these findings to make some statements regarding the benefits and weaknesses of these competing - or possibly simply complementary - models.
2. Motivation
There is a wide range of applications that require significant computational and storage resources - often beyond what can conveniently be provided at a single site. These applications can be broadly categorized as provisioned - meaning that the resources are needed more or less continuously for a period similar to, or exceeding, the usable lifetime of the necessary hardware; scheduled - where the resources are required for shorter periods of time and the results are not necessarily time critical (but higher than for the following category); opportunistic - where there is no urgent time pressure, but any available resources can be readily soaked up. Reasons why the resources cannot easily be provided at a single site include those of funding, where international communities are under pressure to spend funds locally at institutes that are part of the collaboration, as well as those of power and cooling - increasingly a problem with high energy prices and concerns over greenhouse gases.
Whilst Grid computing can claim significant successes in handling the needs of these communities and their applications, the entry threshold - both for new applications as well as additional sites / service providers - is still considered too high and is an impediment to their wide-scale adoption. Nevertheless, one cannot deny the importance of many of the applications currently investigating or using Grid technologies, including drug research, disaster response and prediction, as well as major scientific research areas, typified by High Energy Physics and CERN's Large Hadron Collider programme, amongst many others.
Currently, adapting an existing application to the Grid environment is a non-trivial exercise that requires an in-depth understanding not only of the Grid computing paradigm but also of the computing model of the application in question.
The successful demonstration of a straightforward recipe for moving a wide range of applications - from simple to the most demanding - to Cloud environments would be a significant boost for this technology and could open the door to truly ubiquitous computing. This would be similar to the stage when the Web burst out of the research arena and use by a few initiates to its current state as a tool used by virtually everyone as part of their everyday work and leisure. However, the benefits can be expected to be much greater - given that there is essentially unlimited freedom in the type of algorithms and volumes of data that can be processed.
3. Service Targets
There are two distinct views of the service targets for WLCG: those specified up-front in a Memorandum of Understanding [7] (MoU) - signed by the funding agencies that provide the resources to the Grid - and the "expectations" of the experiments. We have seen a significant mismatch between these two views and have attempted to reconcile them into a single set of achievable and measurable targets.
The basic underlying principle is not to "guarantee" perfect services, but to focus on specific failure modes, limit them where possible, and ensure sufficient redundancy is built in at the required levels to allow "automatic" recovery from failures - e.g. using buffers and queues of sufficient size that are automatically drained once the corresponding service is re-established. Nevertheless, the targets remain high, specified both in service availability measured on an annual basis as well as the time to respond when necessary.
Criticality of Service / Impact of degradation or loss:
- Very high: interruption of these services affects online data-taking operations or stops any offline operations
- High: interruption of these services seriously perturbs offline computing operations
- Moderate: interruption of these services perturbs software development and part of computing operations

Service readiness: Disk, CPU, Database, Network requirements defined? Monitoring criteria described? Problem determination procedure documented? Support chain defined (2nd/3rd level)? Backup/restore procedure defined?
Site readiness: Suitable hardware used? Monitoring implemented? Test environment exists? Problem determination procedure implemented? Automatic configuration implemented? Backup procedures implemented and tested?

As currently defined, a very small number of incidents are sufficient to bring a site below its availability target. In order to bridge the gap between these two potentially conflicting views - and building on the above mentioned industry-standard techniques - we observe relatively infrequent breaks of service: either those that are directly user-visible or those that cannot be smoothed over using the buffering and other mechanisms mentioned. We have put in place mechanisms whereby appropriately privileged members of the user communities can raise alarms in these cases - supplementing the automatic monitoring that may not pick up all error conditions - that can be used 24x7 to alert the support teams at a given site.
Whilst these mechanisms are used relatively infrequently - around once per month in the most intense periods of activity - the number of situations where a major service or site is either degraded or unavailable for prolonged periods of time with respect to the targets defined in the MoU is still far too high - sometimes several times per week. Most of these failures fall into a small number of categories:
- Power and cooling: failures in a site's infrastructure typically have major consequences - the site is down for many hours. Whilst complete protection against such problems is unlikely to be affordable, definition and testing of recovery procedures could be improved - e.g. ensuring the order in which services are restarted is well understood and adhered to, and making sure that the necessary infrastructure - redundant power supplies, network connections and so forth - is such as to maximize protection and minimize the duration of any outages;
- Configuration issues: required configuration changes are often communicated in a variety of (unsuitable) formats, with numerous transcription (and even interpretation) steps, all sources of potential errors;
- Database and data management services: the real killers. For our data intensive applications, these typically render a site or even region unusable.
The above table (a detailed description of all of the acronyms is not relevant here but can be found in [5]) emphasizes the importance of database and data management services: the most and second most critical services required at the Tier0 and Tier1 sites are either database related, data management related, or in most cases both. These service targets are complemented by more specific requirements from the experiments. The tables below list those for the CMS experiment. A site can be in one of the following 3 states:
1. COMMISSIONED: daily rules satisfied during the last 2 days, or during the last day and at least 5 days in the last 7
2. WARNING: daily rules not satisfied in the last day but satisfied during at least 5 days in the last 7
3. UNCOMMISSIONED: daily rules satisfied for less than 5 days in the last 7
The purpose of these rules is to ensure as many sites as possible stay in commissioned status and to allow for a fast recovery when problems start to occur; a minimal code sketch of this classification is given after Figure 5. The following figure shows a historical snapshot of CMS Tier2 sites for the specified time-window.
Figure 5 - Status of CMS Links
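The commissioning rules above are simple enough to state as code. A minimal sketch (the state names and thresholds are as given in the text; the function name and input format are illustrative):

```python
from typing import Sequence

def site_status(daily_ok: Sequence[bool]) -> str:
    """Classify a site from its daily commissioning-rule results,
    oldest first and most recent day last, following the three
    states described in the text."""
    last7 = list(daily_ok)[-7:]
    ok_in_last7 = sum(last7)
    if all(last7[-2:]) or (last7[-1] and ok_in_last7 >= 5):
        return "COMMISSIONED"
    if (not last7[-1]) and ok_in_last7 >= 5:
        return "WARNING"
    return "UNCOMMISSIONED"

# Passed 5 of the last 7 days but failed yesterday -> WARNING
print(site_status([True, True, True, False, True, True, False]))
```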
In principle, grids - like clouds - should offer sufficient redundancy that the failure of some fraction of the overall system can be tolerated with little or preferably no service impact. This is not, unfortunately, true of all computing models in use in HEP, in which for reasons of both geography and funding specific dependencies exist between different sites - both nationally and internationally. Furthermore, sites have well defined functional roles in the overall data processing and analysis chain which mean that they cannot always be replaced by any other - although sometimes by one or more specific sites. This is not a weakness of the underlying model but simply a further requirement from the application domain - the proposed solution must also work given the requirements and constraints from the possibly sub-optimal computing model involved. The main responsibilities of the first 3 tiers are given below:
- Tier0 (CERN): safe keeping of RAW data (first copy); first pass reconstruction; distribution of RAW data and reconstruction output (Event Summary Data or ESD) to the Tier1s; reprocessing of data during LHC down-times;
- Tier1: safe keeping of a proportional share of RAW and reconstructed data; large-scale reprocessing and safe keeping of the corresponding output; distribution of data products to the Tier2s and safe keeping of a share of the simulated data produced at these Tier2s;
- Tier2: handling analysis requirements and a proportional share of simulated event production and reconstruction.
In the considerations below, we will discuss not only whether the cloud paradigm could be used to solve all aspects of LHC computing but also whether it could be used for the roles provided by one or more tiers or for specific functional blocks (e.g. analysis, simulation, re-processing etc.).
4. The data is the challenge

Whilst there is little doubt that for applications that involve relatively small amounts of data and/or data rates the cloud computing model is almost immediately technically viable, this is one of the largest areas of concern for our application domain. Specific issues include:
- Long-term data curation: if this is the responsibility of "the user", a significant amount of infrastructure and associated support is required to store and periodically migrate data between old and new technologies over long periods of time - problems familiar to those involved with large scale (much more than 1PB) data archives;
- Data placement and access: although we have been relatively successful in defining standard interfaces to a reasonably wide range of storage system implementations, rather fine-grained control on data placement and data access has been necessary to obtain the necessary performance and isolation of the various activities - both between and within virtual organizations;
- Data transfer: possibly a curiosity of the computing models involved and strongly coupled to the specific roles of the sites that make up the WLCG infrastructure - bulk data currently needs to be transferred at high rates in pseudo real-time between sites. Would this be simplified or eliminated using a cloud-based solution? Figure 6 below shows the percentage of file transfers that are successful on the first attempt. It is clearly much lower than desirable, resulting in wasted network bandwidth and extra load on the storage services, which in turn has a negative effect on other activities;
- Database applications: behind essentially all data management applications, even if a variety of technologies are used - often at a single site. Again, deep knowledge of the hardware configuration and physical implementation are currently required to get an acceptable level of service.
It is perhaps unfair to compare a solution that has evolved over around a decade, with many teething problems and a number of major outstanding issues, with an alternative, and to impose that a well-established computing model must be supported without change. On the other hand, targets that are relatively independent of the implementation can be defined in terms of availability, service level, computational and data requirements. It may well be that on balance the technical and managerial advantages outweigh any as yet to be found drawbacks. This would leave unavoidable issues such as cost, together with the sociological and other "spin-off" benefits.
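The cost of a low first-attempt success rate can be quantified with an idealized retry model that the paper does not spell out: assuming independent attempts, each succeeding with the same probability p, the number of attempts per successful transfer is geometric with mean 1/p, as in the sketch below.

```python
def transfer_overhead(p_first_attempt):
    """Idealized retry model: independent attempts with per-attempt
    success probability p give a geometric number of attempts per
    successful transfer (mean 1/p); a fraction (1 - p) of all
    attempts, and hence of the traffic, is wasted."""
    mean_attempts = 1.0 / p_first_attempt
    wasted_fraction = 1.0 - p_first_attempt
    return mean_attempts, wasted_fraction

# A 50% first-attempt success rate doubles the traffic per useful transfer:
print(transfer_overhead(0.5))  # (2.0, 0.5)
```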
5. Operations Costs
The operations costs of a large-scale grid infrastructure are rarely reported, and even when this is done it typically refers to the generic infrastructure and not to the total costs of operating the computing infrastructure of a large-scale collaboration. At neither level are the costs negligible: the European Grid Initiative Design Study estimates the total number of full time equivalents for operations-related activities across all National Grid Initiatives to be broadly in the 200-400 range - with an extremely modest 5(!) people providing overall coordination (compared to 15-20 in the EGEE project, currently in its 3rd and presumably final phase). The operations effort required for a single large virtual organization, such as the ATLAS experiment - the largest LHC collaboration - is almost certainly in excess of 100. A typical WLCG "Grid Deployment Board" - the monthly meeting working on the corresponding issues - also involves around 100 local and remote participants, whereas a "WLCG Collaboration workshop" can attract closer to 300 - mainly site administrators and other support staff. These costs are not always easy to report accurately as they are often covered - at least in part - by doctoral students, post-doctoral fellows and other "dark effort". However, any objective comparison between different solutions must include the total cost of ownership and not just a somewhat arbitrary subset.
6. Grid-Based Petascale Production is Reality
Despite the remaining rough edges to the service, as well as the undeniably high operational costs, the success of building a petascale (using the loose definition of 100,000 cores) world-wide distributed production facility using several independently managed and funded major grid infrastructures - of which the two main components, EGEE and OSG, are built out of O(100) sites - must be considered large. The system has been in production mode - with steady improvement in reliability over time - since at least 2005. This includes formal capacity planning, scheduling of interventions - the majority of which can be performed with zero user-visible downtime - and regular reviews of availability and performance metrics. A service capable of meeting the evolving requirements of the LHC experiments must continue for at least the usable life of the accelerator itself plus an additional few years for the main analysis of the data to be completed. Including foreseen accelerator and detector upgrades, this probably means until around 2030! Whether grids survive this long is somewhat academic - major changes in IT are inevitable on this timescale and adapting to (or rather benefiting from) these advances is required. How this can be done in a non-disruptive manner is certainly a challenge, but it is worth recalling that the experiments at LEP - the previous collider in the same tunnel - started in an almost purely mainframe environment (IBM, Cray, large VAXclusters and some Apollo workstations) and moved first to farms of powerful Unix workstations (HP, SGI, Sun, IBM, ...) and finally PCs running Linux. This was done without interruption to on-going data taking, reprocessing and analysis, but obviously not without major work. Much more recently, several hundred TB of data from several experiments went through a "triple migration" - a change of backend tape media, a new persistency solution and a corresponding re-write of the offline software - a major effort involving many months of design and testing and an equivalent period for the data migration itself. (The total effort was estimated at ~1FTE/100TB of data migrated.) These examples give us confidence that we are able to adapt to major changes of technology that are simply inevitable for projects with lifetimes measured in decades.
7. Towards a Cloud Computing Challenge
Over the past few years a series of "service challenges" has been carried out to ramp up to the level required to support data taking and processing at the LHC. This culminated in 2008 in a so-called "Common Computing Readiness Challenge" (CCRC'08) - aimed at showing that the computing infrastructure was ready to meet the needs of all supported experiments at all sites that support them. Given the large number of changes foreseen prior to data taking in 2009, a further "CCRC'09" is scheduled for 2 months prior to data taking in that year. (This will be a rather different event from the 2008 challenge, relying on on-going production activities, rather than scheduled tests, to generate the necessary workload. Where possible, overlap of inter-VO, as well as intra-VO, activities will be arranged to show that the system can handle the combined workloads satisfactorily.) An important - indeed necessary - feature of these challenges has been metrics that are agreed up-front and are reported on regularly to assess our overall state and progress. Whilst it is unlikely that in the immediate future a challenge on an equivalent scale could be performed using a cloud environment, such a demonstration is called for - possibly at progressively increasing scale - if the community is to be convinced of the validity and even advantages of such an approach. The obvious area in which to start is that of simulation - a compute-dominated process with relatively little I/O needs. Furthermore, in the existing computing models data curation is not provided at the Tier2 sites - where such work typically but not exclusively takes place. Thus, the practice of storing output data at a (Tier1) site that does provide such services is well established. The primary question that should be answered by such a study is: can cloud computing offer compute resources for low I/O applications, including services for retrieval of output data for long-term data storage "outside" of the cloud environment, in a manner that is sufficiently performant as well as cost-competitive with those typically offered today by Universities and smaller institutes? To perform such a study, access to the equivalent of several hundred to a few thousand cores for a minimum of some weeks would be required. There is little doubt that such a study would be successful from a technical point of view, but would it also be competitive or even cheaper in terms of total cost of ownership? The requirements for such a study have been oversimplified - e.g. the need for access to book-keeping systems and other database applications and a secure authentication mechanism for the output storage - but it would make a valuable first step. If the results were not at least in the same ball-park in terms of the agreed criteria, there would be little motivation for further studies.
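For scale, taking 1,000 cores and four weeks as representative mid-range values of the request quoted above gives

$$1{,}000\ \mathrm{cores} \times 28\ \mathrm{days} \times 24\ \mathrm{h/day} \approx 6.7 \times 10^{5}$$

core-hours, so any prospective provider's per-core-hour price multiplies straight through to give the cost scale of the proposed simulation study.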
Rather than loop through the various functional blocks that are mapped to the various tiers described above, further tests could be defined in terms of database and data management functionality - presumably both more generic as well as more immediately understandable to other disciplines. These could be characterized in terms of the number of concurrent streams, the type and frequency of access (sequential, random, rare, frequent) and equivalent criteria for database applications. These are unlikely to be trivial exercises but the potential benefit is large - one example being the ability of a cloud-based service to adapt to significant changes in needs, such as pre-conference surges that can typically not be accommodated by provisioned resources that do not have enough headroom for such peaks, often synchronized across multiple activities, both within and across multiple virtual organizations.

8. Data Grids and Computational Clouds - Friends or Foes?
The possibility of Grid computing taking off in a manner somehow analogous to that of the Web has often been debated. A potential stumbling block has always been cost, and subscription models analogous to those of mobile phone network providers have been suggested. In reality, access to the Web is often not "free" - there may not be an explicit charge for Internet access in many companies and institutes - and without the Internet the Web would have little useful meaning. However, for most people Internet access is through a subscription service, which may itself be bundled with others, such as "free" national or even international phone calls, access to numerous TV channels and other such services.
A more concrete differentiator is the "closed" environment currently offered as "Clouds" - it may be clear how one purchases services but not how one contributes computational and storage resources in the manner that a site can "join" an existing Grid. A purely computational Grid - loosely quantified as one that provides no long term data storage facilities or curation - is perhaps the most obvious competitor of Clouds. Assuming such facilities are shared as described above between provisioned, scheduled and opportunistic use, a more important distinction could - again - be in the level of data management and database services that are provided.
A fundamental principle of our grid deployment model has been to specify the interfaces but not the implementation. This has allowed sites to accommodate local requirements and constraints whilst still providing interoperable services. It has, however, resulted in a much higher degree of complexity and in less pooling of experience and techniques than could otherwise have been the case. This is illustrated when the strategies for two of the key components - databases and data management - are compared. The main database services at the Tier0 and Tier1 sites (at least for ATLAS - the largest VO) have been established using a single technology (Oracle) with common deployment and operational models. Data management services - whilst accessed through a common interface (SRM) - are implemented in numerous different variations. Even when the same software solution is used, the deployment model differs widely and it has proven hard to share experience. The table below shows the diversity in terms of front-end storage solutions: in the case of dCache not only are multiple releases deployed but also the backend tape-based mass storage system (both hardware and software) varies from site to site - creating additional complexity. There is little doubt that the cost of providing such services as well as the achieved service level suffers as a result - even if it is more "politically correct". Any evolution or successor of these services would benefit from learning from these experiences.
9. Grids versus Clouds - Sociological Factors
For many years an oft-levelled criticism of HEP has been the "brain-drain" effect from Universities and other institutes to large central facilities such as CERN. Although distributed computing has been in place since before the previous generation of experiments at the LEP collider - formerly housed in the same 27km tunnel as the LHC today - scientists at the host laboratory had very different possibilities to those at regional centres or local institutes. Not only does the grid devolve extremely important activities to the Tier1 and Tier2 sites, but the key question of equal access to all of the data is essentially solved. This brings with it the positive feedback effect mentioned above, which is so important that it probably outweighs even a (small) cost advantage - still to be proven - in favour of non-grid models.
10. Is the Gain worth the Pain?
It should be clear from the above that some of the major service problems associated with today's production grid environment could be avoided by adopting a simpler deployment model: fewer sites, less diversity but also less flexibility. However, much of the funding that we depend on would not be readily available unless it was spent - as now - primarily locally. On the other hand - and in the absence of any large-scale data-intensive tests - it is unclear whether a cloud solution could meet today's technical requirements. A middle route is perhaps required, whereby grid service providers learn from the difficulties and costs of providing reliable but often heterogeneous services, as well as from the advantages in terms of service level, possibly at the cost of some flexibility, through a more homogeneous approach. Alternatively, some of the peak load could perhaps be more efficiently and cost effectively handled by cloud computing, leaving strongly data-related issues to the communities that own them and are therefore presumably highly motivated to solve them. For CERN, answers to these questions are highly relevant - projections show that we will run out of power and cooling in the existing computer centre on a time-scale that precludes building a new one on the CERN site (for obvious reasons, priority has been given in recent years to the completion of the LHC machine). Overflow capacity may be available in a partner site to tide us through: do we have the time to perform a sufficiently large scale demonstration of a cloud-based solution to obviate such a move? Is there a provider sufficiently confident of their solution that they are willing to step up to this challenge? There have been no takers so far and time is running out - at least for this real-life exabyte-scale test-case. In the meantime our focus is on greatly improving the stability and usability of our storage services, not only to handle on-going production activity with acceptably low operational costs, but also to prepare for the large-scale data-intensive end-user analysis that will come with the first real data from the world's largest scientific machine.
11. Conclusions

After many years of research and development followed by production deployment and usage by many VOs, worldwide Grids that satisfy the criteria in Ian Foster's "grid checklist" [8] are a reality.
There is significant interest in longer-term sustainable infrastructures that are compatible with the current funding models, and work on the definition of the functions of and funding for such systems is now underway. Using a very simple classification of Grid applications, we have briefly explored how the corresponding communities could share common infrastructures to their mutual benefit. A major challenge for the immediate future is the containment of the operational and support costs of Grids, as well as reducing the difficulties in supporting new communities and their applications. These and other issues are being considered by a design study for a long term e-infrastructure [9]. Cloud computing may well be the next step in the long road from extremely limited computing - as typified by the infamous Thomas J. Watson 1943 quote "I think there is a world market for maybe five computers" - to a world of truly ubiquitous computing (which does not mean free). It is clear that the applications described in this document may represent today's "lunatic fringe", but history has repeatedly shown that such needs typically become main-stream within only a few years. We have outlined a number of large-scale production tests that would need to be performed in order to assess clouds as complementary or even replacement technology for the grid-based solutions in use today, although data-related issues remain a concern. Finally, we have raised a number of non-technical, non-financial concerns that must nevertheless be taken into account - particularly by large-scale research communities that rely on various funding sources and must - for their continued existence - show value to those that ultimately support them: sometimes a private individual or organization but often the tax-payer.
Table 1 - Extract of Service Targets (Tier0)
Table 3 - Service Readiness Checklist
Table 5 - Targets for Tier0 Services
Deviations of the Lepton Mapping Matrix from the Harrison-Perkins-Scott Form
We propose a simple set of hypotheses governing the deviations of the leptonic mapping matrix from the Harrison-Perkins-Scott (HPS) form. These deviations are supposed to arise entirely from a perturbation of the mass matrix in the charged lepton sector. The perturbing matrix is assumed to be purely imaginary (thus maximally $T$-violating) and to have a strength in energy scale no greater (but perhaps smaller) than the muon mass. As we shall show, it then follows that the absolute value of the mapping matrix elements pertaining to the tau lepton deviate by no more than $O((m_\mu/m_\tau)^2) \simeq 3.5 \times 10^{-3}$ from their HPS values. Assuming that $(m_\mu/m_\tau)^2 $ can be neglected, we derive two simple constraints on the four parameters $\theta_{12}$, $\theta_{23}$, $\theta_{31}$, and $\delta$ of the mapping matrix. These constraints are independent of the details of the imaginary $T$-violating perturbation of the charged lepton mass matrix. We also show that the $e$ and $\mu$ parts of the mapping matrix have a definite form governed by two parameters $\alpha$ and $\beta$; any deviation of order $m_\mu/m_\tau $ can be accommodated by adjusting these two parameters.
Introduction
The last decade has seen a well-defined situation take form with respect to neutrino oscillations. The lepton mapping matrix is at least approximately described by the "tribimaximal" formula of Harrison, Perkins and Scott [1], and the differences of squared neutrino masses are known to order of magnitude.
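For reference, the tribimaximal (HPS) pattern fixes the mapping matrix up to sign and phase conventions; one common choice (the paper's own convention is not fully recoverable from this extract) is

$$U_{\mathrm{HPS}} = \begin{pmatrix} \sqrt{2/3} & 1/\sqrt{3} & 0 \\ -1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2} \\ 1/\sqrt{6} & -1/\sqrt{3} & 1/\sqrt{2} \end{pmatrix},$$

corresponding to $\sin^2\theta_{12} = 1/3$, $\sin^2\theta_{23} = 1/2$ and $\theta_{31} = 0$.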
The data on the mapping angles are so far consistent with the HPS values, but best fits suggest some small deviations. There is as yet no information on the $T$-violating phase angle.
With respect to the mapping angles, the task of theoretical model construction has been sorting itself into two directions: one is to devise a natural way [2] in which the HPS formula can arise as a zeroth approximation, and the other is to propose a perturbative mechanism [3] that gives rise to deviations. This paper confines itself to the second task.
In a recent paper [4], we suggested that $T$-violation in both quarks and leptons could arise from the coupling of the Dirac matrix $i\gamma_4\gamma_5$ with an undiscovered particle (called the timeon) of large mass. For leptons, it was proposed that the coupling occurs only for the charged leptons, and that without it the mapping matrix would be exactly of the Harrison-Perkins-Scott (HPS) form. Both assumptions are also made in this paper. As we shall see, many of the results of the timeon paper can be derived without the additional assumptions that the bare mass of the electron is zero and that the $T$-violating coupling acts only on one vector in the flavor space.
The hypotheses proposed in this paper are thus a weaker subset of those in [4]; these are
(i) the left-handed charged leptons are eigenstates of a hermitian matrix;
(ii) this matrix is of the form $L_0 + iL_1$, where $L_0$ and $L_1$ are real;
(iii) the strength of $L_1$ is of the order of the muon mass $m_\mu$ or less.
In Section 2, we shall show that assumptions (i)-(iii) lead to very small deviations from HPS, of order $(m_\mu/m_\tau)^2$, in the absolute values of three of the mapping matrix elements. Thus, there are two relations, to be discussed in Section 3, between the three mapping angles $\theta_{12}$, $\theta_{23}$, $\theta_{31}$ and the $T$-violating phase $e^{i\delta}$ in the lepton mapping matrix.
These relations are valid to an accuracy of order $(m_\mu/m_\tau)$, but not to that of $(m_\mu/m_\tau)^2$. Another consequence of (1.3) is that, to the same accuracy, the entire lepton mapping matrix can be described by two real parameters, as will be summarized by the $(\alpha, \beta)$ theorem in Section 4. In Section 5, we shall discuss the experimental implications of these relationships. Here $l$, $l_0$ refer to $e$, $\mu$, $\tau$ and the corresponding $e_0$, $\mu_0$, $\tau_0$. The free neutrino eigenstates will be called $|\nu_1\rangle$, $|\nu_2\rangle$ and $|\nu_3\rangle$ in the usual way. In the present proposal, deviations from the HPS mapping matrix are due entirely to the perturbation of the charged lepton mass matrix. Thus, the masses of the free neutrinos do not affect these deviations. With $k$ being 1, 2, or 3, it then follows from (1.5) that, for the example of $k = 1$ and $l = e$, the element $U_{e1}$ is expressed in terms of the elements $\langle 1|e_0\rangle$, $\langle 1|\mu_0\rangle$ and $\langle 1|\tau_0\rangle$ of $U_0$, which are precisely the HPS matrix elements.
Effect of Large Tau Mass
Consider the mapping element between the state $|k\rangle$ and the $\tau$-state. We shall compute $|\langle k|\tau\rangle|^2$ to the accuracy of $(m_\mu/m_\tau)$, neglecting corrections of order $(m_\mu/m_\tau)^2$. By first-order perturbation theory, both of the relevant perturbative matrix elements are, by hypothesis (iii), of order $(m_\mu/m_\tau)$. By hypothesis (i), the elements of $L_1$ are real, and $\langle k|e_0\rangle$, $\langle k|\mu_0\rangle$ and $\langle k|\tau_0\rangle$ are also real, since these are HPS matrix elements. Thus, from (2.6), the $\tau$-elements of the mapping matrix have the same absolute values as in HPS. (See also Eq. (12) of Xing [3].) In the standard form of the mapping matrix, Eq. (2.8) can then be rewritten accordingly. (Here, $U_{3j}$ is the same $U_{\tau j} = \langle j|\tau\rangle$ of the previous sections, and likewise for the other elements.) It is convenient to express relations in terms of quantities that vanish in the HPS limit; from (3.5) we obtain relations (3.6)-(3.8), and both sides of (3.6)-(3.8) vanish at the HPS point.
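The "standard form" invoked above is presumably the usual PDG-style parametrization; for reference, writing $\theta_{31}$ for the angle conventionally denoted $\theta_{13}$, as this paper does, and with $s_{ij} = \sin\theta_{ij}$, $c_{ij} = \cos\theta_{ij}$:

$$U = \begin{pmatrix} c_{12}c_{31} & s_{12}c_{31} & s_{31}e^{-i\delta} \\ -s_{12}c_{23} - c_{12}s_{23}s_{31}e^{i\delta} & c_{12}c_{23} - s_{12}s_{23}s_{31}e^{i\delta} & s_{23}c_{31} \\ s_{12}s_{23} - c_{12}c_{23}s_{31}e^{i\delta} & -c_{12}s_{23} - s_{12}c_{23}s_{31}e^{i\delta} & c_{23}c_{31} \end{pmatrix}.$$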
Next, the difference of (3.3) and (3.4), combined with (3.5), yields (3.12), whose square root can be written in a compact form. In the absence of $T$ violation this simplifies correspondingly; in the presence of $T$ violation, we may write (3.19) in a form which, on account of (3.15), leads to a relation whose left-hand side is an increasing function of $\cos\delta$ and whose right-hand side, at fixed $\phi$, is an increasing function of $\theta_{12}$.
The alpha-beta Theorem
This section is devoted to establishing a theorem that shall be called the alpha-beta theorem.
Theorem: Suppose the mapping matrix $U$ has its $\tau$-elements given (apart from their phases) by the HPS values as in (3.3)-(3.5), and that the third $\tau$-element is real and positive. Then there exist real numbers $\alpha$ and $\beta$, and diagonal unitary matrices $S_1$ and $S_2$, such that $U$ takes the stated factorized form. To prove the theorem, we make use of the following lemma, proved in Appendix B.
Lemma: Let $W$ be a $3 \times 3$ unitary matrix of the form (4.5), where $t$ is a $2 \times 2$ matrix and $\xi$ and $\eta$ are both real $2 \times 1$ column matrices. Supposing the Lemma to be established, we prove the alpha-beta theorem as follows. The five matrix elements in the third row and the third column of $U$ can all be made real by introducing an extra phase factor into each of these elements.
This task can be achieved by introducing unitary diagonal matrices $S'_1$ and $S'_2$ such that the resulting matrix has the form (4.5) required by the lemma. Moreover, for our applications, the explicit form (4.10) holds.
The corresponding vector $\eta'$ is fixed, in accordance with (4.7), with the signs in $\eta$ and $\eta'$ being chosen for later convenience; the ambiguity will be subsumed in the arbitrariness of $\beta$ in (4.6).
Since $W$ is unitary, we have
$$\xi t + d\eta = 0 \qquad (4.13)$$
Hence, we may define the required quantities and, on account of (4.7), substitute these expressions into (4.5)-(4.6). Assembling $W$ according to (4.5), we find that the matrix is given by (4.4), and that establishes the alpha-beta theorem, with $\alpha$ and $\beta$ given explicitly through (4.20). By using the alpha-beta theorem, we can derive several interesting relations between the four parameters $\theta_{12}$, $\theta_{23}$, $\theta_{31}$ and $\delta$ of the mapping matrix $U$.
These will be discussed in Appendix C.
Remark: It will be seen that the above expression for $V$ is identical to the matrix given in [4]. It follows therefore, from the alpha-beta theorem just established, that the "$c_e$, $c_p$" correction terms in the upper two rows of $V_{l\text{-map}}$ in [4], which are admittedly of first order in $(m_\mu/m_\tau)$, can be taken into account (to that order) by adjusting the values of $\alpha$ and $\beta$, which in Ref. [4] were restricted to certain given expressions in terms of the detailed matrices $G$ and $F$.
The outcome is that any experimental predictions made by using Table 1 of [4], plus the knowledge that its "$c_e$, $c_p$"-corrections are of first order and its $\chi$-corrections of second order in $m_\mu/m_\tau$, can just as well be made on the basis of the weaker hypotheses (i)-(iii) stated in Section 1 of this paper. A striking feature of our model, expressed in (5.2), is that it predicts a much smaller deviation from HPS in $\theta_{23}$ than in $\theta_{31}$. Since $\theta_{31}$ is known to be small, from (5.2) we expect $\theta_{23}$ to be even closer to its HPS value of $45^\circ$, as a linear deviation in $\theta_{23}$ would be quadratic in $\theta_{31}$.
At present, current data [6][7][8][9][10] are compatible (within $1\sigma$) with the HPS values of $\theta_{23}$ and $\theta_{31}$, but there is a suggestion that $\sin^2\theta_{31}$ may be about 0.015. If we take this value, then (5.2) yields the estimate (5.4). These data seem not yet precise enough to say whether the deviation of $\sin^2 2\theta_{23}$ from 1 is as small as given by (5.4).
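As a rough numerical check, using the replacement $\cos 2\theta_{23} = \tan^2\theta_{31}$ quoted from (3.8) just below, together with the suggested $\sin^2\theta_{31} = 0.015$:

$$\tan^2\theta_{31} = \frac{0.015}{1 - 0.015} \approx 0.0152, \qquad \cos 2\theta_{23} \approx 0.0152 \;\Rightarrow\; \theta_{23} \approx 44.6^\circ, \qquad \sin^2 2\theta_{23} = 1 - \cos^2 2\theta_{23} \approx 0.9998.$$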
(ii) Next, we turn to our second relation, (3.16) and (3.18), relating $\theta_{12}$ to $\theta_{23}$ and $\delta$. We may replace $\cos 2\theta_{23}$ with $\tan^2\theta_{31}$ in accordance with (3.8). At any fixed $\delta$, these equations define a curve describing the variation of $x = \sin^2\theta_{12}$ vs $y = \sin^2\theta_{31}$. (5.5) The envelope of the family of such curves is shown in Figure 2, and corresponds to $\cos\delta = \pm 1$. (5.6) The region below the envelope corresponds to $\cos^2\delta > 1$ and is therefore forbidden.
An examination of current data [10][11][12][13] indicates that points on the outermost curve (no $T$ violation) are far from the best fit, and that the forbidden region below the curve is improbable. As the best fit (represented by the circle) shown in Figure 2 already prefers large $T$ violation, a measurement of $\delta$, combined with improved precision in $\theta_{12}$ and $\theta_{31}$, would give a sensitive test of our model.
(iii) It is of interest to compare the assumptions and results of Ge, He and Yin [GHY, ref. 14] with those of this paper. Both papers regard the HPS mapping matrix as correct to 0th order, and concentrate on the 1st-order deviations from it.
In GHY, these deviations are attributed to a perturbation in the neutrino sector, whereas in the present paper the perturbation arises in the charged lepton sector.
In the notation of this paper, a perturbation in the charged lepton sector leads to a mapping matrix $U$ given by (1.9), whereas a perturbation in the neutrino sector would yield an equivalent form. A difference appears only when different physical approximations are made in $K$ and $K'$. As a result, the constraints arrived at on the four parameters $\theta_{12}$, $\theta_{23}$, $\theta_{31}$ and $\delta$ differ between the two approaches.

Appendix B. Here we prove the lemma stated in Sec. 4. Let $W$ be the unitary matrix given by (4.5); it then follows, with $\lambda$ an unknown complex number, that $W$ can be written in a factorized form. Turning next to the upper left part of $W^\dagger W$, we obtain a relation in which $I$ is the $2 \times 2$ unit matrix. From (B.10), we find the desired result, where we have used (B.3) and (4.7) to eliminate the inner products in $\xi$ and $\xi'$.
Using (B.11) and (B.12) and after some rearrangement, we obtain the required identity; on the other hand, we can also verify the converse relation.

Appendix C. (i) Determinations of $\cos\delta$ and $\cos\beta$: by equating the corresponding matrix elements we find $\cos\delta$; likewise, from $|U_{12}|^2 = |V_{12}|^2$, we find $\cos\beta$.

Figure 2 - Points below the curve are forbidden, and the HPS limit is $(x, y) = (1/3, 0)$.
Optimizing the configuration of a superconducting photonic band gap accelerator cavity to increase the maximum achievable gradients
We present a design of a superconducting rf photonic band gap (SRF PBG) accelerator cell with specially shaped rods in order to reduce peak surface magnetic fields and improve the effectiveness of the PBG structure for suppression of higher order modes (HOMs). The ability of PBG structures to suppress long-range wakefields is especially beneficial for superconducting electron accelerators for high power free-electron lasers (FELs), which are designed to provide high current continuous duty electron beams. Using PBG structures to reduce the prominent beam-breakup phenomena due to HOMs will allow significantly increased beam-breakup thresholds. As a result, there will be possibilities for increasing the operation frequency of SRF accelerators and for the development of novel compact high-current accelerator modules for the FELs.
I. INTRODUCTION
Modern high-power free-electron lasers (FELs) continue to place serious demands on driver accelerators, which are required to provide high current, continuous duty, electron beams with minimal degradation to transverse and longitudinal emittances [1]. Superconducting radio-frequency (SRF) cavities are a natural choice for accelerator systems operating in a continuous wave (CW) mode [2]. The drawback of providing refrigeration for operating the machine is offset by the relative ease with which CW radio-frequency (rf) can be coupled into and sustained in SRF cavities. Going to higher frequencies in SRF accelerators will save on cooling power, as well as provide a more compact and lower cost accelerating structure. However, extremely low rf losses in SRF cavities become a handicap when we consider higher-order modes (HOMs), which, once excited, oscillate with very high Q-factors and interact with a bunched beam causing instabilities, energy spread, degradation of the beam quality, and additional cryogenic losses [3,4]. The beam breakup threshold due to HOM wakefields in the linac scales inversely proportional to frequency squared [5]. Therefore, much of the effort in the field of SRF for high current electron accelerators was directed to ensure strong damping of parasitic HOM oscillations [2,5-7].
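Schematically, the quoted frequency scaling of the beam-breakup threshold current reads

$$I_{\mathrm{th}} \propto f^{-2},$$

so, other things being equal, doubling the operating frequency lowers the threshold by a factor of four unless the HOM damping is correspondingly improved.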
To minimize the effects of HOMs, special HOM dampers (absorbers or couplers) are typically attached to the beam tube sections of SRF cavities [2]. The absorber is a section of the beam pipe with a layer of microwave absorbing material (lossy ferrites or ceramics) [8,9]. One of the major disadvantages of absorbers is that ferrite materials are brittle and, if cracked, contaminate SRF cavities with particulate matter. HOM couplers usually are made with either a set of waveguides or coaxial lines connected to the beam pipe. A typical configuration of waveguide couplers uses a Y-shaped arrangement of three waveguides on the beam tube of the accelerating section [10,11]. Waveguides act as natural bandpass filters with respect to the fundamental mode and are inherently broadband. However, they often perform worse than absorbers with respect to sufficiently damping HOMs. Coaxial dampers usually offer stronger damping than waveguide couplers and are more compact [12]. Their main disadvantage is their fundamental rf rejection filters, which must be carefully tuned. In addition, placing HOM couplers or absorbers in beam tubes occupies space on the beam line (in the case of absorbers, outside of the cryostat) which could otherwise be used for accelerating the beam. It therefore reduces the fill factor and the real-estate gradient. Finally, optimization of the dampers for sufficient absorption of HOMs always leads to compromises with the gradient performance of the accelerating structure. For example, enlarging the beam pipe to facilitate propagation of the lowest-frequency HOMs to the absorber lowers the shunt impedance of the fundamental mode.
It has been demonstrated that photonic band gap (PBG) [13] cavities have the intrinsic potential for absorption of HOM power and reduction of wakefields. A PBG accelerating cavity employs a PBG structure in the form of a triangular array of metal rods with one rod removed from the center [14,15]. This periodic electromagnetic structure possesses a rejection band which serves to confine and localize the fundamental mode around the defect (missing rod) [16]. The spacings of the array and the diameters of the rods are adjusted so that the frequencies of HOMs fall outside of the rejection band. In this manner, all parasitic wakefields are not confined at the center and may be extracted at the periphery of the structure [17]. The first room-temperature PBG accelerator cavities were built without an external enclosing wall so that the wakefields freely radiated into the surrounding space [17,18]. Since SRF cavities are immersed in liquid helium and cooled down to superconducting temperatures, SRF PBG resonators must be enclosed by a solid wall [19]. Coupling waveguides may be attached to the outer wall for efficient extraction of HOM power and also for coupling rf to the fundamental accelerating mode. One possible design for an SRF accelerator section with a PBG cell and couplers is shown in Fig. 1. This accelerator section resonates at the frequency of 2.1 GHz [20]. Two HOM couplers in the form of WR-229 waveguides were placed at the periphery of the PBG cell to reduce the Q-factors of the lowest HOMs. A bigger WR-430 waveguide serves as a high-power fundamental-mode coupler. This design incorporates HOM and fundamental mode couplers as a part of the accelerating structure without losing valuable space on the accelerating line and without causing additional azimuthal asymmetries in the accelerating mode. Together with the WR-229 waveguides, the PBG structure serves as an efficient natural rejection filter for the accelerating mode. The WR-229 couplers that are placed in the accelerating cavity provide strong HOM damping without decreasing the shunt impedance of the accelerating mode.
FIG. 1 (color online). A conceptual drawing of an SRF accelerator section consisting of four regular elliptical accelerating cells and an accelerating PBG cell with HOM couplers.
An initial proof-of-principle fabrication of the simplest version of an SRF PBG resonator was recently reported in [19]. Two cavities with simple PBG structures made of round niobium rods were fabricated. The cavities were tested at both 4 and 2 K and performed well. One demonstrated accelerating gradients of 15 MV/m, which corresponds to peak surface magnetic fields of approximately 130 mT. However, in order for SRF PBG cavities to be fully competitive with more traditional SRF coupler configurations for high current accelerator applications, they need to be capable of withstanding gradients higher than 15 MV/m without compromising their efficiency with respect to HOM damping.
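As a rough consistency check (my arithmetic, using the $B_{\rm peak}/E_{\rm acc}$ ratio from Table I): $15\ {\rm MV/m} \times 8.55\ {\rm mT/(MV/m)} \approx 128\ {\rm mT}$, in line with the approximately 130 mT quoted above; the roughly 40 percent field reduction targeted below would bring the same gradient down to about $0.6 \times 128 \approx 77\ {\rm mT}$ of peak surface field.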
The research reported here presents a novel design of an SRF PBG resonator with elliptically shaped rods, which should be able to maintain gradients 40 percent higher than previously tested resonators with round rods. In addition, this resonator performs better in confining the fundamental accelerating mode and is more efficient at damping HOMs.
II. MINIMIZATION OF PEAK SURFACE MAGNETIC FIELDS IN A PBG RESONATOR
We started our investigation of strategies to minimize peak surface magnetic fields by examining the geometry of the 2.1 GHz SRF PBG resonator with the regular array of round cylindrical rods reported in [19]. The dimensions and peak surface fields in this resonator are summarized in Table I. We investigated ways that the geometry of this structure could be modified so that the peak surface magnetic fields are reduced and the overall gradient limitations of the structure are increased. The initial idea was to bend the inner rods of the PBG resonator in a manner mimicking an elliptical SRF cavity, thereby pushing the high magnetic field away from the surface. However, bending the rods of the PBG structure did not produce the same effect (Fig. 2). The surface field reached its maximum value at the point of maximum curvature of the central rod, and this maximum value was higher than the peak magnetic field on the surface of an unbent round rod of equal diameter. Next, we followed the idea of Munroe et al. [21-23] and changed the shape of the six inner rods of the PBG resonator from a round cylindrical to an elliptical cross section (Fig. 3). Squeezing the rods in the radial direction towards the center of the resonator produced an immediate reduction in peak surface magnetic fields. It is obvious, however, that for very elongated elliptical rods the magnetic field will increase at the sharp corners of the ellipses. Therefore, the minor radius of the elliptical rods was optimized to minimize the peak surface magnetic field. The optimization was performed for each possible major radius. In the process, we noticed that any change in the minor radius of the elliptical rods resulted in shifting the frequency of the resonator. This had to be compensated for by other changes in the geometry (Fig. 4). One way to adjust the frequency was to put rectangular inserts into each elliptical rod to make the rods thicker [as shown in Fig. 4(a)]. Another way was to move the elliptical rods closer to the center of the resonator, and the third way was to decrease the period of the whole PBG structure. The size of the rectangular inserts or the required decrease in the period of the structure are functions of the chosen major radius of the elliptical rod. However, we discovered that the optimized minor radius stayed nearly constant, almost independent of the major radius, and approximately equal to 0.09 of the period of the PBG structure (Fig. 4). Using results from simulations with CST MICROWAVE STUDIO [24], we discovered that if the major half-axis of the new elliptical rod was equal to the radius of the round rod of the resonator of [19] (0.15 times the period of the structure), then the peak surface magnetic field immediately decreased by 17 percent. If the major radius of the ellipse is further increased, then the surface magnetic field decreases further, to below 60 percent of the value in the structure with round cylindrical rods (Fig. 5).
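The optimization loop described above can be sketched as follows; this is an illustration only, with peak_b_surrogate standing in for a full CST MICROWAVE STUDIO field solve (the surrogate function and its numerical constants are invented for this sketch and do not reproduce the paper's results):

import numpy as np

def peak_b_surrogate(a, b):
    # Toy stand-in for the simulated peak surface magnetic field (arbitrary
    # units) as a function of the major (a) and minor (b) half-axes in units
    # of the period: elongating the rod lowers the field, while an overly
    # small minor half-axis raises it again at the sharp ends of the ellipse.
    return b / a + 0.008 / b

for a in np.linspace(0.15, 0.30, 4):      # sweep the major half-axis, a/p
    bs = np.linspace(0.04, 0.14, 101)     # candidate minor half-axes, b/p
    fields = peak_b_surrogate(a, bs)
    i = int(np.argmin(fields))            # optimal minor half-axis for this a
    print(f"a/p = {a:.2f}: best b/p = {bs[i]:.3f}, B ~ {fields[i]:.3f}")

In the paper, each such evaluation is a frequency-compensated electromagnetic simulation rather than a closed-form function, and the reported outcome is that the optimal minor half-axis stays near 0.09 of the period.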
III. GEOMETRY OPTIMIZATION FOR MAXIMUM HOM SUPPRESSION
We determined that changing the inner row of PBG rods from a round shape to an elliptical shape effectively reduces the peak surface magnetic field. However, a question arises: would the PBG resonator with elliptical rods still be as effective with respect to the confinement of the fundamental mode and suppression of wakefields as the resonator with equally spaced round rods? To characterize the fundamental mode confinement and the filtering of HOMs, we first modeled the PBG resonator with open sidewalls as shown in Fig. 6. We ran simulations with the time-domain solver of the CST MICROWAVE STUDIO and excited the cavity with a virtual current source placed in the cavity off-axis (at a radius equal to half the radius of the beam pipe). The current source was driven by a pulse containing the frequency spectrum of interest. Open boundary conditions were defined in all three directions. We looked at decay rates of the microwave energy stored in the cavity and computed the diffraction Q-factors of the fundamental mode and the slowest decaying HOM. This method ran fast and converged quickly.
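The diffraction Q extraction from the simulated energy decay amounts to fitting an exponential; a minimal sketch, assuming the stored energy decays as $U(t) = U_0 e^{-\omega_0 t/Q}$ and with the data arrays standing in for solver output:

import numpy as np

def diffraction_q(t, energy, f0):
    # Fit ln U(t) with a line; for U(t) = U0 * exp(-omega0 * t / Q)
    # the slope is -omega0 / Q, so Q = -omega0 / slope.
    omega0 = 2.0 * np.pi * f0
    slope, _ = np.polyfit(t, np.log(energy), 1)
    return -omega0 / slope

# Synthetic check: a 2.1 GHz mode with Q = 5000 decaying over 2 microseconds.
f0 = 2.1e9
t = np.linspace(0.0, 2e-6, 200)
u = np.exp(-2.0 * np.pi * f0 * t / 5000.0)
print(diffraction_q(t, u, f0))  # recovers ~5000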
First, we tested the transient method on a well-known geometry and modeled the decay of the fundamental mode and the higher order modes in a cavity with a regular PBG structure of round rods with different diameters. The fundamental mode was excited by a current pulse with a narrow frequency spectrum centered at 2.1 GHz. The HOM spectrum was excited with a current pulse with a frequency content from 2.5 to 3.5 GHz. The results of this simulation are summarized in Fig. 7. The figure shows the plot of the Q-factors for the fundamental mode and the most slowly decaying HOM as a function of the ratio of the radius of the structure's rod to the period of the structure, r/p. It can be seen from the graphs that the decay of both the fundamental mode and the higher order modes slows down as the radii of the rods of the PBG structure increase, and that the fundamental mode is more strongly confined by the structure than the HOMs.
Next, we analyzed the confinement of the fundamental mode and the HOMs in all three types of PBG structure with elliptical rods: rods with rectangular inserts, a shifted first row of rods, and the structure with the reduced period. The results are summarized in Fig. 8. The plots show diffraction Q-factors computed from the decay of the fundamental mode and HOMs for all three cases. For comparison, the Q-factors of the fundamental mode and HOMs for the resonator with round rods and r/p = 0.15 are shown on each plot. For the particular case of a cavity with the first row of elliptical rods shifted towards the center, it can be seen that for ratios of the elliptical rod's major half-axis to the period of the structure a/p < 0.27, the HOMs decay faster than HOMs in the resonator with round rods and r/p = 0.15. However, for the same structure and a/p > 0.21, the fundamental mode decays more slowly than in the structure with round rods and r/p = 0.15. This effect can probably be explained by the perturbed periodicity of the structure. Together with the fact that the peak surface magnetic fields in the structure with elliptical rods with 0.21 < a/p < 0.27 are approximately 40 percent lower than in the structure with round rods, this makes the resonator with elliptical rods shifted towards the center a perfect candidate for a structure with improved wakefield suppression and the capability to achieve high accelerating gradients.
To complete the higher-order-mode analysis, we modeled a PBG resonator with shifted elliptical rods and a/p = 0.25 surrounded by a solid metal wall with three coupling WR-229 waveguides attached to the wall (Fig. 9). We varied the positions of the three waveguides to achieve the optimal coupling to HOMs. The final angular positions of the waveguides are shown in Fig. 10. The decay of the stored electromagnetic energy in this geometry for frequencies from 2.75 to 3.5 GHz is shown in Fig. 10. All HOMs decay out through the waveguides with Q-factors below 115.
IV. THERMAL ANALYSIS
Thermal analyses were conducted to rule out the possibility of thermal quench due to inadequate cooling. The niobium rods were intended to be manufactured as tubes so that they could be cooled with liquid helium along the length of the rod [19]. High gradient tests were planned to be conducted in a vertical cryostat 965 mm in diameter and 3048 mm in depth. The exact geometry was reproduced in the thermal simulations. Thermal analysis was performed using ANSYS software [25]. The electric and magnetic fields in a PBG resonator operating at a gradient of 10 MV/m were computed with the CST MICROWAVE STUDIO and then exported to ANSYS on a compatible grid. Analysis was performed first at 2 kelvin, with liquid helium in a superfluid state, and then at 4 kelvin, where both conduction and free convection mechanisms of heat transfer had to be taken into account.
At 2 kelvin and 10 MV/m accelerating gradient, the estimated thermal load on the whole cavity with round rods was 0.61 W. Due to the high thermal conductivity of superfluid helium at 2 kelvin, the bath did not exhibit any temperature gradient that would drive buoyancy forces (i.e., natural convection), so the heat transfer was purely due to conduction. The temperature distribution on the surface of a niobium cavity with round rods immersed in superfluid helium is shown in Fig. 11. It can be seen from the figure that for a 10 MV/m CW accelerating gradient the temperature change at the cavity's surface was less than 0.03 kelvin. Therefore, we concluded that thermal quench is not an issue at 2 kelvin for reasonable accelerating gradients.
At 4 kelvin and 10 MV/m accelerating gradient, the estimated thermal load on the cavity with round rods was 23.6 W, and for the cavity with elliptical rods it was 19.28 W. The thermal conductivity of liquid helium at 4 kelvin is approximately seven times lower than at 2 kelvin; therefore, both heat transfer mechanisms were taken into account to accurately simulate cooling of the cavity. A full computational fluid dynamics analysis was conducted with realistic coupling between the two heat transfer mechanisms. The results of this simulation are shown in Fig. 12. It can be seen from that figure that for the cavity with round rods running CW at 10 MV/m the peak surface temperature does not exceed 6.72 kelvin, and for the cavity with elliptical rods the temperature is somewhat lower, below 6.32 kelvin. Therefore, we concluded that in the tests of the cavity with round rods [19] thermal quench could not have been an issue. Maximum observed gradients at 4 kelvin did not exceed 10.6 MV/m, and therefore peak surface temperatures were well below the critical temperature of niobium, which is 9.2 kelvin. However, if the cavity with elliptical rods goes to significantly higher gradients and thermal loads, thermal quench may become an issue, since heating of the surface could become significant.
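Because ohmic wall dissipation grows with the square of the field amplitudes, the 4 K load can be roughly extrapolated to higher gradients; the sketch below assumes pure quadratic scaling and a field-independent surface resistance (an optimistic simplification, since the niobium surface resistance itself rises with field and temperature), so it gives only a lower-bound illustration:

# Quadratic extrapolation of the 4 K thermal load of the elliptical-rod
# cavity, anchored to the 19.28 W at 10 MV/m quoted above.
p_ref, e_ref = 19.28, 10.0  # W, MV/m
for e_acc in (10.0, 12.0, 15.0):
    print(f"{e_acc:.0f} MV/m -> at least {p_ref * (e_acc / e_ref) ** 2:.1f} W")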
V. CONCLUSION
We have designed an improved 2.1 GHz SRF PBG resonator with six elliptical inner rods. This new resonator is superior to the resonator with round rods due to more efficient higher order mode suppression and better high gradient performance. The improved resonator has 40 percent lower peak magnetic fields than the resonator with round rods. We simulated the damping of HOMs in this resonator with three WR-229 waveguides attached to the outside metal wall and found superior performance. We have conducted a thermal analysis and concluded that thermal quench should not be an issue in this geometry at 2 kelvin, but may become an issue at 4 kelvin if the resonator achieves gradients significantly higher than 10 MV/m without experiencing magnetic quench. The resonator with elliptical rods has recently been fabricated from niobium. It is currently undergoing high power tests at Los Alamos National Laboratory. Maximum gradients achieved experimentally will be reported elsewhere.
FIG. 3 (color online). Change in the shape of the inner rods of the PBG cavity to reduce peak surface magnetic fields.
FIG. 4 (color online). Optimized minor half-axis of the elliptical rod and change in dimensions of the PBG structure as a function of the major half-axis of the elliptical rod for: (a) elliptical rods with frequency-compensating rectangular inserts; (b) regular elliptical first row of rods with the rods shifted towards the center of the resonator; (c) regular elliptical first row of rods with the reduced period of the whole PBG structure.
FIG. 5. Dependence of the peak surface magnetic field on the major half-axis of the elliptical rod.
FIG. 8 (color online). Diffraction Q-factor for the fundamental mode (a) and the higher order modes (b) in a PBG resonator made of two rows of rods, as computed by the CST MICROWAVE STUDIO. The six central rods have elliptical shapes, and the three cases correspond to elliptical rods with rectangular inserts, elliptical rods shifted to the center, and the reduced period of the whole structure. For comparison, also shown are the Q-factors of the fundamental mode and HOMs in a structure with equally spaced round rods and r/p = 0.15.
FIG. 11 (color online). Temperature distribution on the surface of a 2.1 GHz SRF PBG resonator with round rods immersed in superfluid liquid helium at 2 kelvin when operating CW at 10 MV/m accelerating gradient.
TABLE I. Dimensions and accelerator characteristics of the 2.1 GHz SRF PBG accelerator cavity with round rods reported in [19].
ID of the rods (cooling channel), d_in: 8.8 mm
ID of the equator, D0: 300 mm
Thickness of Nb end walls, t_wall: 2.8 mm
Length of the cell, L: 60.73 mm = λ/2
ID of the beam pipe, R_b: 31.75 mm = 1.25 inches
Radius of the beam pipe blend, r_b: 8 mm
Spacing between the rods, p: 56.56 mm
OD of the rods, d: 17.04 mm = 0.3 × p
Ohmic Q-factor at 2 K, Q0(2 K): 5.8 × 10^9
Shunt impedance, R/Q0: 145.77 Ohm
E_peak/E_acc: 2.22
B_peak/E_acc: 8.55 mT/(MV/m)
Increasing psychological flexibility is associated with positive therapy outcomes following a transdiagnostic ACT treatment
Objectives Increasing psychological flexibility is considered an important mechanism of change in psychotherapy across diagnoses. In particular, Acceptance and Commitment Therapy (ACT) primarily aims at increasing psychological flexibility in order to live a more fulfilling and meaningful life. The purpose of this study is to examine 1) how psychological flexibility changes during an ACT-based treatment in a transdiagnostic day hospital and 2) how this change is related to changes in symptomatology, quality of life, and general level of functioning. Methods 90 patients of a psychiatric day hospital participated in the study. Psychological flexibility, symptomatology, and quality of life were assessed at three measurement time points (admission, discharge, and 3-month follow-up). The level of functioning was assessed at admission and discharge. Differences in psychological flexibility were tested via two-sided paired samples t-tests. Correlations of residualized change scores were calculated to detect associations between changes in psychological flexibility and other outcomes. Results Psychological flexibility increased significantly from pre-treatment to post-treatment (d = .43, p < .001) and from pre-treatment to follow-up (d = .54, p < .001). This change was significantly correlated with a decrease in symptomatology (r = .60 to .83, p < .001) and an increase in most dimensions of quality of life (r = -.43 to -.75, p < .001) and general level of functioning (r = -.34, p = .003). Discussion This study adds further evidence for psychological flexibility as a transdiagnostic process variable of successful psychotherapy. Limitations are discussed.
Introduction
Traditionally, most clinical research on behavioral therapy focusses on specific mental disorders rather than transdiagnostic treatments (1, 2). While this arguably makes the studies more comparable, it also creates problems when transferring the results into the real world. First of all, the focus on disorder-specific treatments stands in striking contradiction to high rates of comorbidity. A nationally representative study in Germany, for example, found comorbidity rates of over 40 percent among individuals with psychiatric diagnoses (3). A second issue that arises with diagnosis-specific treatments is restricted efficiency. Mono-diagnostic treatments require clinicians to use different manuals or techniques for every disorder, making their training and preparation more costly and time-consuming (4, 5). This could also be a reason why clinicians seldom use treatment manuals, despite their proven effectiveness (6). Also, providing disorder-specific inpatient units or group therapy may not be economically feasible everywhere. Clinics may not be able to fund manuals and training for all the different diagnoses. Additionally, smaller clinics in particular may not have enough patients with each diagnosis to plan diagnosis-specific groups or units (4). Third, it has been increasingly observed in recent years that the mechanisms behind different mental disorders and their treatments are often very similar (1, 2).
One hypothesized transdiagnostic mechanism is psychological flexibility. Already in the 1940s, researchers found that mental health was related to flexible and contextual behavior (7, 8). In recent decades, the concept of psychological flexibility has gained more attention with the rise of Acceptance and Commitment Therapy (ACT), a third-wave behavioral therapy approach. Psychological flexibility according to ACT can be defined as "the tendency to respond to situations in ways that facilitate valued goal pursuit" (9, p. 2), which includes being in touch with the present moment and the feelings it comes with, without fighting them unnecessarily (10). Psychological flexibility becomes especially important in challenging situations (9) and is closely linked to resilience (11).
ACT considers its counter pole, psychological inflexibility, or the "inability to persist or change in the service of long term valued ends" (12, p. 6), as a major source of psychopathology. This is true regardless of the diagnosis. For example, avoidance of fear or pain and non-value-orientated behavior leads to suffering, whether the diagnosis is anxiety disorder, post-traumatic stress disorder, depression, or somatoform disorder.
The hypothesis of psychological inflexibility as an important factor in psychopathology is supported by existing research: meta-analyses indicate moderate to large correlations between psychological inflexibility and different measures of psychopathological symptoms, stress, pain, and reduced quality of life (12-14). Therefore, ACT has the primary goal of promoting psychological flexibility in order to be able to live a full, vibrant, and meaningful life (10, 12). Unlike other therapeutic approaches, ACT considers symptom reduction only a by-product. The main goal remains the improvement of subjective quality of life (15).
ACT seeks to promote psychological flexibility through six core processes: being present, acceptance of (unpleasant) inner events, defusion from unhelpful thoughts, understanding the self as context (rather than concept), being aware of one's values, and following them through committed action (12). ACT assumes that just as psychological inflexibility leads to suffering, regardless of diagnosis, so the promotion of psychological flexibility leads to improvement, regardless of diagnosis. This makes ACT a genuinely transdiagnostic therapeutic approach (16). Additionally, ACT has a strong focus on therapy processes rather than only outcomes (17).
The efficacy of ACT has been demonstrated in more than one thousand RCT studies (18) and several meta-analyses (cf. 19). Corresponding to the underlying theory, mediation analyses report increasing psychological flexibility as a mediator or process variable of ACT treatment effects, such as increases in quality of life and decreases in symptoms (20-22). However, although ACT sees itself as a transdiagnostic approach to psychotherapy, most studies continue to examine primarily disorder-specific or non-clinical contexts. Only recently have the first studies on ACT in transdiagnostic clinical settings been published (23-25). All of them reported significant improvements in symptoms. Gloster et al. (25) found psychological flexibility to moderate the association between stress and symptoms as well as disability. Morgan et al. (23) reported a significant increase in psychological flexibility during the treatment. Ohse et al. (24) found a significant association between the increase of psychological flexibility and the decrease of symptoms during treatment. These studies contribute important new insights into the use of ACT in transdiagnostic clinical settings and the role of psychological flexibility. Yet none of these studies in transdiagnostic clinical settings has reported follow-up data on psychological flexibility so far. Nor did any of these studies examine the impact of altered flexibility on quality of life, although this is the stated primary goal of ACT. The present study aims to help fill these gaps by examining the following questions: Does psychological flexibility change during and after a treatment in a transdiagnostic psychiatric day hospital? And how is the change related to quality of life, general functioning, and symptoms?
Methods
Procedure
The above questions were investigated as part of a larger effectiveness study (26). The investigation was carried out in accordance with the Declaration of Helsinki (2013). The research project was approved by the Ethics Committee of the Medical Association Berlin (12th February 2020, case number Eth-03/20) and was retrospectively registered in the German Clinical Trials Register (http://www.drks.de/DRKS00029992, identifier: DRKS00029992) on August 19th, 2022. Participants were recruited in a psychiatric day hospital in Berlin, Germany. All participants in the study gave their written informed consent.
Participants
92 participants were included in the evaluation trial. For a detailed flow of participants, see Rutschmann et al. (26). Two participants did not respond to the Acceptance and Action Questionnaire (AAQ-II) and were therefore excluded from the statistical analyses for the present study. Of the 90 participants included in the present study, 47 participated at all three survey time points. The remaining participants participated in the pre- and post-treatment surveys or in the pre-treatment and follow-up surveys. They were also included in the statistical analyses.
Treatment
ACT had been implemented in the psychiatric day hospital for approximately one year before the start of the study. The entire professional team, including physicians, psychologists, nurses, social workers, movement therapists, and music therapists, had been trained in ACT and participated regularly in ACT supervision sessions.
The therapy focus was on promoting psychological flexibility in transdiagnostic group sessions, including ACT group psychotherapy twice a week for 50 minutes each, based on the Wengenroth (27) material, and occupational therapy, art therapy, movement therapy, mindfulness training, and an ACT-Matrix group about once a week for 50 minutes each (cf. 26). The ACT-Matrix is a tool to distinguish between internal vs external events on the one hand, and approach vs avoidance behaviors on the other hand, thus supporting value-oriented, flexible behavior (28). In addition, regular ACT-based one-on-one therapy was offered once a week for 25 minutes to address individual issues and problems. Each week, a different one of the six core processes of psychological flexibility was the focus across groups and professions. So, if, for example, the focus was on the core process of acceptance in one week, this was not only treated in an experience-oriented way in the ACT group twice this week, but also in art therapy, movement therapy, etc. The regular length of stay was at least six weeks to ensure that each core process was completed once.
Treatment conditions had to be adjusted due to the pandemic, but it was always ensured that ACT group therapy and ACT-based individual therapy took place as described above.
Measures
All patients received a set of questionnaires at admission, discharge, and 3 months after discharge. At admission and discharge, the therapist noted the diagnosis after an unstandardized interview based on clinical impression. Additionally, the therapist assessed the current level of functioning using the Global Assessment of Functioning Scale (GAF) (29).
Psychological flexibility was assessed with the AAQ-II (30). The AAQ-II consists of 7 items, each to be answered on a 7-point Likert scale. The sum score indicates the degree of psychological flexibility: the higher the sum score, the lower the flexibility. The AAQ-II is recognized as a unidimensional, reliable, and valid instrument for assessing psychological flexibility (20, 30, 31). It is unspecific for diagnoses and can be used universally (20). Translations into many languages exist, as well as specific AAQ questionnaires for different diagnoses (32).
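As a concrete illustration of the scoring (a minimal sketch based on the description above, not the instrument authors' official scoring syntax):

def aaq2_sum(responses):
    # Sum the 7 AAQ-II items, each rated on a 1-7 Likert scale.
    # Possible range 7-49; higher scores indicate lower flexibility
    # (i.e., greater psychological inflexibility).
    if len(responses) != 7 or any(not 1 <= r <= 7 for r in responses):
        raise ValueError("AAQ-II requires 7 item responses on a 1-7 scale")
    return sum(responses)

print(aaq2_sum([4, 5, 3, 6, 4, 5, 4]))  # 31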
Symptom severity was assessed using the Global Severity Index (GSI) of the Symptom Checklist-90-Standard (SCL-90-S) (33). The SCL-90-S was also used to assess level of education and gender.
Depressed mood was assessed using the sum score of the Beck Depression Inventory II (BDI-II) (34).
The World Health Organization Quality of Life-Short Version (WHOQOL-BREF) (35) was used to measure subjective quality of life in the dimensions of physical and psychological well-being, social relationships, environment, and global quality of life.
After discharge, it was also noted how medication was handled during the stay, based on clinical guidelines, with adaptations as necessary (applied/increased, switched, decreased/stopped, or left unchanged).
Data analyses
Differences between the final sample and dropouts regarding the AAQ-II were analyzed via two-sided independent t-tests (IBM SPSS Statistics Version 28.0.1.0). Missing data on all studied variables were analyzed by Little's MCAR test. Pre- to post-, pre- to follow-up, and post- to follow-up differences in psychological flexibility were tested using two-sided paired samples t-tests.
Correlations of residualized changes were calculated to identify associations between changes in psychological flexibility and changes in quality of life, symptom severity, depressed mood, and global functioning, respectively. The threshold of significance was adjusted according to the Bonferroni method to p < .006 for correlations of pre- to post-changes and to p < .007 for correlations of pre- to follow-up changes.
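A minimal sketch of how residualized change scores can be computed and correlated (the simulated data are purely hypothetical; the authors do not publish their analysis code, and this simply illustrates the standard residualized-change approach of regressing the post score on the pre score and keeping the residuals):

import numpy as np

def residualized_change(pre, post):
    # Residuals from an OLS regression of post-scores on pre-scores:
    # the part of the post score not predicted by the baseline score.
    X = np.column_stack([np.ones_like(pre), pre])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    return post - X @ beta

rng = np.random.default_rng(1)
pre_flex = rng.normal(30.0, 8.0, 90)
post_flex = pre_flex - rng.normal(4.0, 5.0, 90)   # simulated AAQ-II change
pre_gsi = rng.normal(1.2, 0.4, 90)
post_gsi = pre_gsi - rng.normal(0.3, 0.3, 90)     # simulated GSI change
r = np.corrcoef(residualized_change(pre_flex, post_flex),
                residualized_change(pre_gsi, post_gsi))[0, 1]
print(round(r, 2))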
Possible associations between individual characteristics and psychological flexibility at the time of admission were examined via Pearson correlation (age), point-biserial correlations (gender) and ANOVA (educational level, main diagnosis).
In addition, possible individual factors influencing changes in psychological flexibility were examined, using Pearson correlation (age), point-biserial correlations (gender), and ANOVA (educational level, main diagnosis, change of medication).
Results
The final sample and the dropouts did not differ regarding their AAQ-II sum score at any survey time (all p > .23). Missing data in the final sample were missing completely at random (Little's MCAR test, p = .47), justifying the use of pairwise deletion for the following analyses. This leads to varying sample sizes in the different analyses.
Psychological flexibility, as measured via the AAQ-II, increased significantly between admission and discharge (t(84) = 3.96, p < .001, d = .43) and between admission and follow-up (t(51) = 3.92, p < .001, d = .54). There was no significant difference in psychological flexibility between discharge and follow-up (t(46) = 0.47, p = .64, d = .07). A similar pattern was found for the other outcome measures (BDI-II, WHOQOL-BREF, GAF, GSI). All of them improved between admission and discharge, and the effects remained stable, with no significant changes between discharge and follow-up. Tables with detailed information on this can be found in the larger effectiveness study (26).
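The reported effect sizes are consistent with the common paired-design convention $d = t/\sqrt{n}$ (my check; the authors do not state their formula): $3.96/\sqrt{85} \approx 0.43$, $3.92/\sqrt{52} \approx 0.54$, and $0.47/\sqrt{47} \approx 0.07$.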
Residualized changes in psychological flexibility correlated significantly with residualized changes in symptom severity, depressed mood, global functioning, and most measures of quality of life (see Table 1). This was true both for changes between admission and discharge and for changes between admission and follow-up. The correlation between the residualized change scores of psychological flexibility and quality of life in the environment dimension failed to reach significance in the pre-treatment to post-treatment comparison (p = .009) but was significant in the pre-treatment to follow-up comparison. The effect sizes of the correlations were generally larger for the pre-treatment to follow-up comparisons.
Residualized pre-treatment to follow-up changes in psychological flexibility correlated significantly with age (r = .43, p = .002), with older participants showing smaller lasting changes. Apart from that, there were no other significant associations between participant variables (gender, educational level, main diagnosis, change of medication) and changes in psychological flexibility (all p > .13).
Key findings and interpretation
ACT considers psychological flexibility an important transdiagnostic factor in the pathogenesis of mental disorders. This view has been supported by the findings of several meta-analyses that found associations between psychological flexibility and various measures of symptomatology, stress, pain, and quality of life (12-14). Increasing psychological flexibility is therefore seen as an important mechanism of change in ACT-based therapies. Consequently, psychological flexibility has also been repeatedly examined as a process variable of therapeutic change in studies on ACT. However, although ACT is considered a transdiagnostic therapy method, it was not studied in transdiagnostic clinical settings for a long time. As far as we know, there have been only three other studies on ACT in such settings so far (23-25). Additionally, no study in a transdiagnostic clinical setting has yet reported longitudinal data on psychological flexibility as well as associations between changes in psychological flexibility and changes in quality of life.
This study examined changes in psychological flexibility during and three months after treatment in an ACT-based transdiagnostic day clinic and their association with changes in symptoms, quality of life, and general functioning. Participants showed higher levels of psychological flexibility at discharge and three months after treatment compared to admission. This change was associated with improved quality of life, reduced symptom burden, and improved general functioning. These findings are consistent with other studies that have found an association between increased psychological flexibility and reduced symptom burden following a transdiagnostic clinical treatment (23-25). In addition, results from other settings could be replicated, showing that an increase in psychological flexibility is associated with a higher quality of life and level of functioning (12, 13).
Interestingly, the correlations between psychological flexibility and symptoms as well as quality of life were stronger at follow-up than at post-treatment, while none of these variables themselves changed significantly between post-treatment and follow-up (see 26). One possible explanation could be that patients who have sustained increases in psychological flexibility may benefit more in the long term, whereas outcomes at discharge may be more influenced by additional factors of the treatment (such as the daily structure and social contacts in the day hospital).
Another interesting side finding is that, while age was not significantly correlated with psychological flexibility at admission, older patients showed smaller lasting changes in psychological flexibility three months after the treatment. This could be due to the age-related cognitive decline in learning ability (36). One conclusion might be that older people need longer or more intensive training in ACT to persistently increase their psychological flexibility and thus benefit persistently. On the other hand, ACT has been shown to be a promising therapy method for the elderly (37). Additionally, a review has shown older patients to have a higher average psychological flexibility on other measures than on the AAQ-II (37). Although psychological flexibility at admission differed significantly by diagnosis, there was no difference in the change in psychological flexibility during treatment. This supports the view of psychological flexibility as a transdiagnostic factor in therapy.
Limitations and future directions
The most important limitation arises from the naturalistic study design. Statements about causality are only possible in experimental designs. Therefore, we can only describe which phenomena occur together, not in which causal relationship they stand. The lack of a control group and of repeated testing during treatment also does not allow conclusions about mediation effects (cf. 38).
Another potential limitation arises from the use of the AAQ-II. In recent years, there has been increasing disagreement about what the AAQ-II actually measures (39-41). Some of the confusion can be attributed to the inconsistent use of terms in the literature. Some authors refer to the AAQ-II as a measure of psychological flexibility or inflexibility, others as a measure of experiential avoidance, and still others as a measure of acceptance (41). The distinction is important, though, because, strictly speaking, the antipoles acceptance and experiential avoidance represent only one of the six core processes of psychological flexibility (cf. above). Some authors have suggested that the AAQ-II may measure negative emotionality rather than psychological flexibility (39, 42). On the other hand, previous examinations have shown the AAQ-II to have incremental utility above neuroticism or depressive and anxiety symptomatology (20).
The use of the term psychological flexibility in connection with the AAQ-II has also been criticized. The AAQ-II was designed to measure psychological inflexibility, and it remains questionable whether the absence of inflexibility implies more flexibility, or whether non-inflexibility is equivalent to flexibility (40, 41). For reasons of better readability, and on the basis of the term used in the literature, we nevertheless refer to flexibility here, assuming that a reduction in inflexibility is at least accompanied by an increase in flexibility; this point has to be considered as a potential limitation.
Another point of criticism is that the AAQ-II measures psychological flexibility as a unidimensional construct, while the underlying theory is that psychological flexibility consists of six interrelated core processes (40, 41, 43-45). On the other hand, it has been argued that psychological flexibility is a higher-level construct, rather than simply the sum of the six core processes (20). In response to the various criticisms of the AAQ-II, several other questionnaires have been developed in recent years to assess psychological flexibility, such as the Open and Engagement State Questionnaire (OESQ) (46), the Multidimensional Psychological Flexibility Index (MPFI) (44), or the Psy-Flex (47). All of these measures require further validation to test whether they are preferable to the AAQ-II. Until now, the AAQ-II remains the most frequently used and best studied measure of psychological flexibility (41).
This study makes some important contributions to the research on psychological flexibility as a process variable in transdiagnostic treatments. It is one of the first to examine changes in psychological flexibility in a transdiagnostic clinical setting. It is the first in a transdiagnostic clinical setting that examines the associations between changes in psychological flexibility and quality of life. And it is the first in such a setting to report longitudinal data on psychological flexibility.
To investigate the hypothesis of psychological flexibility as a process variable in transdiagnostic ACT treatment, further studies are needed. The use of a control group, as well as repeated measurements in the process, would be necessary to make sound statements about mediation. In addition, longer follow-up intervals should be used to investigate the role of lasting changes in psychological flexibility and their association with symptoms and quality of life. The role of age and its possible effects on ACT and psychological flexibility should also be investigated in future studies.
A Mixed-Method Review of Cash Transfers and Intimate Partner Violence in Low-and Middle-Income Countries
There is increasing evidence that cash transfer (CT) programs decrease intimate partner violence (IPV). However, little is known about how CTs achieve this impact. We conducted a mixed-method review of studies in low- and middle-income countries (LMICs). Fourteen quantitative and eight qualitative studies met our inclusion criteria, of which eleven and five, respectively, demonstrated evidence that CTs decrease IPV. We found little support for increases in IPV, with only two studies showing overall mixed or adverse impacts. Drawing on these studies, as well as related bodies of evidence, we developed a program theory proposing three pathways through which CTs could impact IPV: (a) economic security and emotional well-being, (b) intra-household conflict, and (c) women's empowerment. The economic security and well-being pathway hypothesizes decreases in IPV, while the other two pathways have ambiguous effects depending on program design features and behavioral responses to program components. Future studies should improve IPV measurement, empirical analysis of program mechanisms, and fill regional gaps. Program framing and complementary activities, including those with the ability to shift intra-household power relations, are likely to be important design features for understanding how to maximize and leverage the impact of CTs for reducing IPV, and for mitigating potential adverse impacts. Keywords: intimate partner violence; domestic violence; cash transfers; women's empowerment. JEL codes: I10, I30, I38, J10, J12, J16.
There is increasing interest among social epidemiologists and development economists in exploring the role that cash transfers (CTs) have on intimate partner violence (IPV). Social epidemiologists have demonstrated the pervasiveness of IPV globally, with one in three women estimated to experience at least one act of physical and/or sexual violence by an intimate partner in her lifetime (Devries et al. 2013a). Development economists have invested heavily in rigorous large-scale evaluations of social protection schemes, including CTs in low- and middle-income countries (LMICs). As the body of research grows and the sophistication of methodology increases, there has been a push to demonstrate the impacts of CTs on a wider range of outcomes beyond immediate program objectives related to poverty and food security, including intra-household gender dynamics and, more recently, women's experience of IPV. Thus, the fields of epidemiology and economics have converged on the importance of understanding if CTs and IPV are linked, and which behavioral mechanisms may underpin this relationship.
Theoretically, the mechanisms through which CTs affect IPV depend on the design of the CT program. At their core, CTs are economic safety nets designed to reduce poverty. Absolute resource theory and stress theory hypothesize that CTs may lead to decreases in IPV by improving a household's economic situation, thereby reducing poverty-related stressors on individuals and households (Fox et al. 2002; Ellsberg et al. 2015; Vyas and Watts 2009). Additionally, many CT programs target women as the main beneficiary, thus potentially affecting power dynamics within the household. To model these power dynamics, economists use variants of nonunitary household bargaining models in which an increase in a woman's income (either earned or unearned, as with a CT) may decrease violence by improving her bargaining power within the household (Tauchen, Witte, and Long 1991; Farmer and Tiefenthaler 1997). However, variants of the bargaining model also predict that an increase in women's resources may put a woman at increased risk of IPV if men feel threatened and use violence to reassert authority in the relationship (Eswaran and Malhotra 2011). Additionally, cash and other transfers targeted at women may also put them at risk if men use violence to extract cash or resources from them (Bloch and Rao 2002).
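One stylized way to write down this ambiguity (my formalization, not the exact model of any cited paper): let the man choose violence $V$ to maximize $U_m(c_m, V)$ subject to the woman's participation constraint $U_w(c_w, V) \ge \bar{U}_w(y_w + \tau)$, where $\tau$ is the cash transfer and $\bar{U}_w$ her outside option. If the woman retains $\tau$ upon separation, the constraint tightens and the maximal violence $V^*$ compatible with the match falls; if instead $\tau$ is lost on exit but raises the resources a violent partner can extract within the match, the comparative static can reverse, in the spirit of the extraction mechanism of Bloch and Rao (2002).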
Theories in other disciplines, such as marital dependency theory and feminism, likewise offer mixed predictions of the effect of cash on a woman's risk of experiencing IPV. Women who are economically dependent on their partner and are surrounded by institutions that promote gender inequality and male authority over female behavior may be more susceptible to violence (Vyas and Watts 2009). Thus, CTs that target women may empower them both in the home and in the community, thereby reducing their risk of IPV. At the same time, if a woman's partner feels emasculated in his role as provider, or threatened by her increased independence, he may redouble his efforts to assert authority, using violence if necessary (Heise and Garcia Moreno 2002; Hautzinger 2003). As Jewkes (2002) observes: "An inability to meet social expectations of successful manhood can trigger a crisis of male identity. Violence against women is a means of resolving this crisis because it allows expression of power that is otherwise denied." Finally, many CT programs include complementary activities such as trainings and/or linkages to health or educational services, either as a part of the program or as a "conditionality" intended to influence beneficiary behavior, components which themselves could affect IPV. For example, group-based trainings attended by women could reduce IPV by improving their knowledge, self-efficacy, and self-esteem, thus enhancing their bargaining power. Frequent interactions with other beneficiaries in the community could build women's social capital and social ties (Brody et al. 2015), or increase the social cost of men's violent behavior (Stets 1991; Van Wyk et al. 2003). Since variation in program design is large, including the size, duration, and targeting of transfers and the overlay of complementary activities, implementers routinely make critical decisions that influence a program's potential impact on diverse beneficiary populations.
While testing and validating theoretical models is needed to better understand and predict the impact that cash may have on IPV, there are also pressing programmatic and policy reasons to better understand these relationships and how they function across contexts and populations. First, the scale and reach of CT programming globally is both large and increasing. According to the World Bank's State of Social Safety Nets (2015), 1.9 billion people worldwide are enrolled in some form of social safety net, with approximately 20 programs operating in the average developing country and CTs present in nearly every country. In addition, CTs are expanding rapidly. For example, in sub-Saharan Africa (SSA), about half of the countries in the region (21) had some form of unconditional cash transfer (UCT) programming in 2010, a number that reached 40 by 2014. In addition, CTs tend to be cost-effective, both in comparison with alternative in-kind transfers and in comparison with alternative forms of poverty alleviation (Margolies and Hoddinott 2014; Gentilini 2016). Because of their scale (reaching 718 million individuals globally) and relative cost-effectiveness, small changes in how transfers are designed and delivered have the potential to influence their impact on IPV at the margin (World Bank 2015). Similarly, given the possibility of backlash and increases in IPV, it is essential that donors and implementing agencies understand these risks and work to minimize unintended harm from such programs.
Recent reviews have sought to summarize evidence on this topic; however, none have been sufficient to understand the complex relationship between CTs and women's risk of IPV (Bardasi and Garcia 2014; Bastagli et al. 2016). Some focus largely on quantitative evidence and group IPV outcomes alongside other gendered outcomes such as women's decision-making, agency, fertility, or early marriage, thus providing little understanding of the mechanisms underlying the cash/violence relationship in different contexts. Those that focus more narrowly on IPV as an outcome combine cash transfers with a range of other economic strengthening interventions, from microfinance and savings schemes to the impact of women's employment on IPV, making it impossible to isolate the impact of cash alone (Krishnan et al. 2010; Gibbs, Jacobson, and Kerr Wilson 2017).
In order to fill this gap, we have conducted a mixed-method review to help inform the understanding of the causal link between CTs and IPV in LMICs. First, we review the existing body of rigorous quantitative and qualitative research linking CTs and IPV, with a focus both on the mechanisms underlying the results and on the implications of CT design features for the IPV outcome. Second, we build a program theory and evaluate the level of evidence existing in support of the various pathways, drawing on both the reviewed CT literature and evidence from other fields that support or refute steps along the hypothesized causal pathway. Finally, we propose program design components and factors that may be key in delivering beneficial impacts, identify research gaps, and discuss how upcoming evaluations could be tailored or modified to fill these gaps.
Methods
We conducted a scoping exercise, which comprised a rapid assessment of the known literature, hand-searched articles, as well as articles obtained from general search engines (Google Scholar). Based on this initial rapid assessment, we conducted interviews via Skype with six experts (researchers and implementers) with prior experience on the intersection between CTs and IPV. These interviews helped identify key literature, working papers, and ongoing studies, and pointed to mechanisms and hypotheses that leading experts considered viable as potential pathways linking CTs and IPV.
For the formal review process, searches were conducted using the following broad criteria: "cash transfers" and "violence", "intimate partner violence" or "domestic violence". Searches were conducted using the following electronic databases: PubMed, Medline, Web of Knowledge, Web of Science, Global Health, and Social Sciences Abstracts. No search period restriction was imposed; however, we did limit our search to documents written in English and Spanish. Articles published in peer-reviewed journals and relevant grey literature were included. We ran forward and backward citation checks among all identified articles that met the inclusion criteria.
Table 1 describes the broad inclusion and exclusion criteria for our review. We focused exclusively on LMICs and included all types of cash transfers, whether conditional cash transfers (CCTs), UCTs, or transfers bundled as part of multi-sectoral or multi-component programming, regardless of their objective (e.g., food security, entrepreneurship, or old-age pensions). We excluded two cases of lump-sum CTs that were included primarily as part of entrepreneurship and micro-credit programs (in Uganda and Burkina Faso), as they were likely to vary substantially in their mechanisms and impact pathways; however, we include these two cases as part of the discussion. We focused on the outcomes of IPV (or domestic violence), which encompasses the following: physical, sexual, emotional, and/or psychological violence, including controlling behaviors, typically experienced inside the household, regardless of the specific methodology used to collect or measure each indicator.
Table 1. Broad Inclusion and Exclusion Criteria
Indicators
Included: Emotional, physical, sexual IPV (including homicide and assault), controlling behaviors, psychological and economic violence between co-habiting, dating, or marital partners.
Excluded: Proxy measures for IPV such as "conflict", "disagreements", or "disputes", autonomy/empowerment measures, as well as perpetration by non-partners.
Methodology (quantitative)
Included: Use of rigorous methodology to link CTs to IPV, including a credible counterfactual.
Excluded: Studies that do not provide a sufficiently rigorous research design, or description of analysis, to credibly claim that patterns or results can be attributed to the program.
Methodology (qualitative)
Included: Studies that explicitly discuss and provide evidence on the link between CTs and IPV. To assess the quality of studies, we used the COREQ checklist; studies were scored on a high, medium, and low scale, assessed on the basis of methodological limitations of individual studies, relevance to the review question, coherence across studies, and adequacy of data.
Source: The authors.
Note: CTs = cash transfers; IPV = intimate partner violence; COREQ = consolidated criteria for reporting qualitative research.
IPV is further defined as violence between intimate partners (e.g., marital, co-habiting, or dating partners), primarily experienced by women and perpetrated by men. However, we did not exclude evidence in the opposite direction. We included evidence showing impacts on one or more combinations of IPV outcomes, including those that show different impacts by violence type. We excluded studies that only used proxy measures for IPV, including general terms such as "conflict" or "disputes", or measures of autonomy or empowerment. For empirical studies, we focused on methodologies that allowed a credible identification of the counterfactual, typically either randomized controlled trials (RCTs) or quasi-experimental designs with data collection at two or more points in time. For qualitative studies, we used the consolidated criteria for reporting qualitative research (COREQ) checklist (Tong, Sainsbury, and Craig 2007). Two independent researchers scored the articles using three domains: (a) research team and reflexivity, (b) study design and methods, and (c) data analysis and reporting, to assign a score of high, medium, or low quality. We did not exclude any studies according to this assessment but report on the scores achieved by each study. For both quantitative and qualitative studies, we first read the content to identify themes and mechanisms. Thereafter, we developed a matrix summarizing key information regarding program design, implementation, and features of interest. For quantitative studies, we compiled information on methodological design, sample sizes, indicators, and impacts. For qualitative studies, we summarized methods, sample sizes, and the implied impact of CTs on IPV (increase, decrease, mixed, or null impacts). Where available, information was also extracted for both types of study on the underlying mechanisms that authors advanced or tested as being possibly responsible for the impacts observed. Descriptions or mechanisms that relied on the interpretation and opinion of the authors were treated as theoretical insights (hypotheses), rather than evidence.
To further refine the program theory and assess different steps in the hypothesized causal chain, we conducted comprehensive but non-exhaustive reviews of other bodies of literature. We employed snowball sampling to identify additional studies for further explanation-building, such as tracking citations in footnotes, endnotes, and references of potentially relevant articles. The protocol was registered in the Prospero database (CRD42015024511).
Review of Programs and Quantitative Evidence
Table 2 summarizes the program components from the identified core quantitative papers, organized alphabetically by country and by year of publication. For the quantitative evidence, we report impacts for all qualifying IPV indicators analyzed as part of the study. However, we do not present results for each sub-sample or heterogeneity analysis. Instead, we summarize results of additional analysis in column 13 to help unpack potential mechanisms.2

In total, we identified 14 studies meeting our inclusion criteria: six are peer-reviewed journal articles, while eight are technical reports or working papers.3 In total, nine countries are represented, with multiple studies in Mexico, Ecuador, and Peru. Only three studies were conducted in settings outside Latin America (Bangladesh, Kenya, and South Africa), and in only one case (a World Food Programme pilot in Ecuador targeted at Colombian refugees) could settings qualify as humanitarian or post-conflict. Ten out of 14 studies evaluate government programs (table 2, column 3), which have been designed as CCTs typically conditional on health and education co-responsibilities; three evaluate UCTs, with several providing additional services (e.g., behavior change communication, BCC) together with in-kind or other transfers (e.g., food or food vouchers). Programs provide a mix of flat and variable transfers (according to household size and demographic composition), ranging from 6 percent to 50 percent of baseline household expenditures (table 2, column 5). The majority of programs implement some type of means-based targeting to identify extremely poor households as beneficiaries, alongside demographic criteria such as the number of children of specific ages residing in the household. Additionally, nearly all programs target women as the main recipient, with the exception of one program in Kenya that randomizes targeting to women or men. Finally, the majority of programs deliver benefits on a monthly basis (table 2, column 6).
Study designs are nearly all experimental (seven are either longitudinal or cross-sectional RCTs) or quasi-experimental (five), with the remaining two using non-experimental designs (table 2, column 7). Sample sizes at the individual level range from 1,010 women (Kenya, Give Directly) to 8,065 women (Peru, Juntos). Additionally, several evaluations used administrative data aggregated typically at the municipal level (in Brazil, Colombia, and Uruguay). Data collection for studies ranges from 1998 to 2015, with most taking place from 2004 to 2012 (table 2, column 9). In only one case did authors collect data post-intervention (e.g., 6 to 10 months after program completion) to assess if impacts were sustained after the program had ended (Roy et al. 2017).
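To make these designs concrete, the sketch below shows, on synthetic data and with invented variable names (ipv, treat, post, district, year), the kind of difference-in-differences specification with district and year fixed effects that several of the quasi-experimental studies estimate. It illustrates the general technique only; it does not reproduce any reviewed study's data or analysis.

```python
# Minimal difference-in-differences sketch on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "district": rng.integers(0, 40, n),
    "year": rng.choice([2010, 2014], n),
})
df["treat"] = (df["district"] < 20).astype(int)  # program districts
df["post"] = (df["year"] == 2014).astype(int)    # post-rollout wave
# Simulate a binary IPV indicator with a true effect of -5 percentage points.
p = 0.30 - 0.05 * df["treat"] * df["post"]
df["ipv"] = rng.binomial(1, p)

# Linear probability model with district and year fixed effects; the
# coefficient on treat:post is the difference-in-differences estimate.
model = smf.ols("ipv ~ treat:post + C(district) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["district"]})
print(result.params["treat:post"], result.pvalues["treat:post"])
```

The main effects of treat and post are absorbed by the district and year fixed effects, which is why only the interaction term enters the formula.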
The 14 studies examine a range of IPV outcome indicators (table 2, column 10). Overall, 56 outcomes are analyzed, including 34 measures of physical or sexual violence (13 physical violence, 10 sexual, four combined physical and/or sexual, two combined physical and/or emotional violence, two combined physical/sexual/psychological and economic violence, two IPV reported to the health and justice systems, and one administrative measure of homicide). Additionally, 13 studies use measures of emotional violence, and 13 use other typologies (two controlling behaviors, three psychological violence, two economic violence, three threats of physical IPV, two combined measures of physical/sexual/psychological and economic violence, and one aggressive behavior).4 It should be noted that some experts conceptualize controlling behaviors as a risk factor for IPV, rather than a type of violence itself. The studies operationalize IPV in a variety of ways. The majority use some form of the conflict tactics scale (CTS), with recall periods typically six to 12 months, while a minority include lifetime measures (the latter may be less sensitive to a short-term intervention). The exceptions are the three papers that used administrative data, as well as one that asked about aggressive behavior following a partner's consumption of alcohol (Rivera, Hernández, and Castro 2005). For the mean and effect size, we have maintained the same number of significant digits as reported in the original reviewed papers.
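As an illustration of how CTS-style items are typically collapsed into the binary outcomes reported in table 2, the snippet below builds a 12-month "any physical IPV" indicator from item-level responses. The item names and data are invented for the example and do not come from any specific survey instrument.

```python
# Collapse hypothetical CTS-style items (1 = act experienced in the last
# 12 months, 0 = not experienced) into a single binary indicator.
import pandas as pd

physical_items = ["pushed_or_shook", "slapped", "hit_with_fist", "choked_or_burned"]
df = pd.DataFrame({
    "pushed_or_shook":  [0, 1, 0],
    "slapped":          [0, 0, 0],
    "hit_with_fist":    [0, 1, 0],
    "choked_or_burned": [0, 0, 0],
})
# A respondent is coded 1 if she reports any of the physical acts.
df["any_physical_ipv_12m"] = df[physical_items].max(axis=1)
print(df["any_physical_ipv_12m"].mean())  # 12-month prevalence in the sample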
Across all 56 outcomes, 20 (36 percent) are statistically significant and negative at the p < 0.10 level or higher (suggesting that the CT reduced IPV), while only one (2 percent) is statistically significant and positive at the p < 0.10 level or higher (suggesting that the CT increased IPV). The remaining 63 percent show no significant change in IPV due to the CT. For significant reductions in IPV, the percentage varies by category of violence examined: 44 percent of studies assessing physical and/or sexual IPV and 38 percent assessing other outcomes (e.g., controlling behaviors) demonstrate a significant reduction in violence, whereas only 8 percent of those assessing emotional violence do so. The one case where an increase is found in emotional IPV is in the Give Directly pilot in Western Kenya, when comparing treatment to non-treatment households in the same villages (Haushofer and Shapiro 2016). However, in the Kenya evaluation, reductions are also found for both physical and sexual violence when comparing alternate study arms (e.g., what the authors term the "across village", rather than "within village", estimates). Although we do not formally compute average effect sizes, when considering the 13 coefficients on indicators of individual (rather than administrative) impacts, decreases range from 11 percent to 66 percent reductions over baseline means (or endline comparison means). Further, nine of these impacts represent reductions of 30 percent or more, which is quite notable given that most evaluations took place over the short or medium term.
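Stated as a formula, the relative reductions cited above scale each impact coefficient by the counterfactual mean; the numbers plugged in below are illustrative, not taken from any single reviewed study:

```latex
\[
\text{relative reduction} \;=\; \frac{|\hat{\beta}|}{\bar{y}_{\text{comparison}}} \times 100,
\qquad \text{e.g.}\quad \frac{0.05}{0.15} \times 100 \approx 33\%,
\]
```

so a 5 percentage point decrease against a comparison-group prevalence of 15 percent corresponds to roughly a one-third reduction.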
When considering study-level impacts, overall, 11 out of the 14 studies find decreases in IPV attributable to the program, one finds mixed impacts (both decreases and increases), and two find no impacts. The two studies finding no impact are both from Mexico. One looks at long-term impacts of Oportunidades approximately nine to 13 years after program initiation, empirically through the creation of comparable beneficiary and non-beneficiary groups using national surveys (Bobonis, Castro, and Morales 2015). The authors hypothesize that this lack of impact, which contrasts with the decreases they find in the short term, could be due to marital dissolution and decreases in overall rates of IPV over time; however, they are unable to test these theories comprehensively. The second study examines aggressive behavior following alcohol consumption using data from the 1998 round of the experimental Oportunidades evaluation (Angelucci 2008). Although no average effect is found, there are treatment effects (both positive and negative) by certain household characteristics and by transfer size; however, these are likely to be endogenous, and it is therefore unclear how these differential effects should be interpreted.
Authors have put forward various ideas about how CT programs could affect a woman's risk of violence, but few have tested their hypothesized mechanisms empirically. Cash could decrease violence by: (a) increasing women's empowerment or bargaining power, or changing intra-household gender dynamics (mentioned by all 12 studies documenting decreases, except Rivera, Hernández, and Castro 2005; with evidence from all but Rodriguez 2015 suggesting that the pathway could be valid); (b) decreasing household poverty and therefore poverty-related stress, or improving emotional well-being (mentioned by four studies: Rodriguez (2015); Hidrobo, Peterman, and Heise (2016); Haushofer and Shapiro (2016); and Roy et al. (2017); with evidence from all but Rodriguez (2015) suggesting that the pathway could be valid); (c) increasing interaction with the health sector, thereby improving women's overall health and making them more resilient to abuse (mentioned by one study, Ritter Burga 2014, including evidence suggesting the pathway could be valid); and (d) encouraging greater interaction with other women and village leaders, which increases a woman's social capital and social ties and could increase the social cost of men perpetrating violence (mentioned by one study, however not tested directly: Roy et al. 2017).
In only two cases do authors hypothesize reasons for potential increases in IPV: (a) a partner seeking to extract resources/the CT from his wife (mentioned by one study, Bobonis, Gonzalez-Brenes, and Castro 2013, however not tested directly); and (b) male backlash, specifically due to partners feeling threatened by women usurping their traditional "identity" as providers (Angelucci 2008; with evidence suggesting the pathway could be valid).
Of note, only in one study (Hidrobo and Fernald 2013) do authors acknowledge that there may be multiple mechanisms at play that could cancel each other out (e.g., female bargaining and male backlash).
Review of Programs and Qualitative Evidence
Table 3 summarizes the program components from the identified core qualitative papers, organized alphabetically by country and author and year of publication.5 In total, we identified eight qualitative studies meeting our inclusion criteria: two are published in peer-reviewed journals and six are working papers or technical reports. In terms of quality assessment using the COREQ checklist (table 3, column 13), four of the included studies are given a high score and four are given medium scores. Overall, the studies represent six countries, including two assessing Oportunidades/Progresa in Mexico, one each from Ecuador and Nicaragua, two from SSA (Uganda and Lesotho), and one from Turkey (table 3, column 2). Three of the qualitative studies are NGO-led programs, of which two are external evaluations of the same CT implemented by Action Against Hunger in northern Uganda in 2012 and 2014; three are government-run programs (two UCTs and one CCT); and two are run by international organizations (table 3, column 3). Of the eight qualitative studies, three interventions are UCTs; one provides cash, food, or vouchers conditional on attending nutrition training; and four are CCTs (table 3, column 4). Women are targeted as the main recipient in most programs, despite cases where the household or a small proportion of males receive the transfer (Lesotho and Ecuador; table 3, column 6). In almost all the studies, either focus group discussions (two studies), in-depth interviews (two studies), or a combination of the two methods (four studies) were used as the method of data collection; one study in Nicaragua used an ethnographic approach, with semi-structured interviews and participant observation to explore perceptions of the program (table 3, column 7). Data collection for the studies ranges from 1999 to 2014, with the majority taking place between 2011 and 2014 (table 3, column 8).

[Table 3, not reproduced here, reports for each study the direction of effect on IPV (decreased, mixed, or no clear effect), illustrative qualitative findings, and COREQ quality scores (high or medium).]

The eight qualitative studies explore a range of dynamics relevant to CTs and IPV, including the following: addressing how the receipt of cash has influenced household gender relations; whether conflict over resources within the household has increased or decreased; whether there has been a change in couple and/or family relationships; and whether receipt of the transfer has affected women's decision-making authority. Some studies focus specifically on these themes, while others are more general, exploring the impact of CTs on poverty alleviation with sub-objectives that focus on gender relations and household decision-making (table 3, column 10). Five of the studies show a reduction in IPV after receipt of the CT (Adato et al. 2004; Slater and Mphale 2008; Angeles 2012; Yildirim, Ozdemir, and Sezgin 2014; Buller et al. 2016), while one study shows mixed results, with an overall reduction in all forms of IPV but also some isolated households where IPV increased (Nuwakora 2014). Two studies show no clear effect of the CT on IPV (Adato et al. 2000; Maldonado, Nájera, and Segovia 2005). In one of these studies the authors note that IPV was not reported freely given the sensitive nature of the topic, which might have influenced results (Adato et al. 2000). In the other study with no clear effects, Maldonado, Nájera, and Segovia (2005), who explore the impact of Oportunidades on intra-household dynamics in Mexico, hypothesize that the dedicated use of the transfer for children's education, as opposed to money for a woman herself, has meant that men have not felt threatened by the transfer, resulting in null effects on IPV.
Authors of these qualitative studies suggest that the following mechanisms could explain decreases in IPV: (a) reductions in poverty-related stress (mentioned by five studies: Adato et al. (2004); Angeles (2012); Yildirim, Ozdemir, and Sezgin (2014); Nuwakora (2014); and Buller et al. (2016); with evidence from all suggesting that the pathway could be valid); (b) reductions in household tensions leading to fewer conflicts (mentioned by four studies: Slater and Mphale (2008); Angeles (2012); Yildirim, Ozdemir, and Sezgin (2014); and Buller et al. (2016); with evidence from all suggesting that the pathway could be valid); and (c) increased women's decision-making power in the household and feelings of empowerment (mentioned by four studies: Slater and Mphale (2008); Angeles (2012); Nuwakora (2014); and Buller et al. (2016); with evidence from all suggesting that the pathway could be valid).
In the few studies that mention increases in IPV, authors suggest the following mechanisms: (a) the forced extraction of money/cash by a woman's male partner (mentioned by two studies, Nuwakora (2014) and Adato et al. (2000), with evidence from Nuwakora (2014) only suggesting that the pathway could be valid); and (b) violence as a compensatory mechanism to re-assert authority when a man feels his masculinity is being threatened (mentioned by one study, Nuwakora (2014), with evidence suggesting that the pathway could be valid).
Program Theory for Understanding the Relationship between CTs and IPV
Our review suggests that there are three primary pathways through which CTs may affect IPV. For ease of reference, we have named these: (a) the economic security and emotional well-being pathway; (b) the intra-household conflict pathway; and (c) the women's empowerment pathway. The first pathway operates primarily through household-level mechanisms, evolving from a pure "income effect" of cash into the household (regardless of who is the primary recipient), which reduces poverty-related stress and improves emotional well-being. The second pathway works through the effect of cash on marital dynamics and conflict: increased access to cash, particularly in very poor households, can lessen conflict by reducing arguments over limited budgets and the daily money needed to run the household. Alternatively, if CT funds are used for expenditures not intended to benefit all household members, for example to purchase alcohol or tobacco, cash could create new sources of marital conflict. Finally, through the third pathway, cash or complementary interventions could, if appropriately targeted, increase a woman's bargaining power, strengthen her self-worth, and potentially increase her perceived value to the household. Similar to the conflict pathway, this may have mixed effects depending on how men respond to potential shifts in resources or power dynamics. On the one hand, some men may feel threatened in situations where their wives are empowered, which can lead to a backlash and increased IPV as men attempt to reassert control and their identity as the household provider or dominant decision-maker. On the other hand, some men may accept this elevated position of women in the household and decrease IPV in order to keep her satisfied within the marriage.
Figure 1 summarizes the three pathways and articulates the various steps in the hypothesized causal chain. Design elements listed on the far left, such as the size, frequency, and duration of transfers, and targeting criteria, including the particular vulnerability and poverty profiles of the beneficiary population and whether or not women are explicit recipients, can influence the impact of a program. We hypothesize that the specific pathways or causal mechanisms that operate in any instance may be a function of: (a) the design features of the CT itself; (b) how a woman's partner reacts to the transfer; and (c) the context of the CT program, including underlying factors such as gender regimes, social norms, and local laws and policies. In the following sections, we explain stylized versions of each pathway, relying on a broader evidence base than the CT and IPV literature where necessary, and analyze the degree to which data from the review either supports or refutes the hypothesized pathway.
Economic Security and Emotional Well-Being Pathway
As CTs are primarily designed as an economic social safety net, the most generalizable pathway that results in decreases in IPV is through improved household economic security and associated decreases in household poverty (e.g., increased financial and food security; increased savings, assets, and investments; and improved financial coping strategies). These improvements, in turn, have the potential to improve the emotional well-being of household members by decreasing poverty-related stress and improving mental health. This positive effect could directly lead to decreases in IPV, or work indirectly through decreased use of alcohol as a negative coping mechanism in response to poverty and financial stress.
CTs and Increased Economic Security (Decreased Poverty)
There is a large and robust body of literature across different geographical regions and program typologies showing that, in general, CTs have significant positive impacts on a range of household-level economic-security outcomes, including poverty rates, food security, household expenditure and consumption, household durable and productive assets, income generation and labor-force participation, and savings and investments (Hidrobo et al. 2014; Bastagli et al. 2016; Natali et al. 2016; Banerjee et al. 2017; Handa et al. 2017; Handa et al. 2018; Hidrobo et al. 2018). Further, there is a growing body of literature documenting the positive local-economy impacts of CTs, implying positive spillovers on non-beneficiary households in terms of economic outcomes (Taylor, Thome, and Filipski 2016). For this pathway to be effective, program design and implementation components, such as the relative size of the transfer and the regularity and duration of benefits, are important factors in determining the magnitude of the impact of CTs.
Economic Security and Improved Emotional Well-Being
There is increasing evidence that poverty and poor mental health are linked in a two-way, reinforcing relationship. On the one hand, poverty is a risk factor for poor mental health and mental disorders through malnutrition, stress, substance abuse, social exclusion, and exposure to trauma and violence (the social causation hypothesis); on the other hand, poor mental health increases the risk of poverty due to increased health expenditures, reduced productivity, stigma, and loss of employment and earnings (the social drift hypothesis; Lund et al. 2010).
Whereas these linkages have been well explored in developed countries, only in the last few decades has the relationship been confirmed in LMICs. Lund et al. (2010) conducted a systematic review of the epidemiologic literature in LMICs to assess the relationship between poverty and common mental disorders, and found that among 115 studies reviewed, most reported positive associations between a variety of poverty measures and negative mental health outcomes (73 percent to 79 percent of studies). However, the strength of this relationship depended on the specific poverty dimensions examined. Corroborating these findings, a meta-analysis using 60 studies finds that individuals of low socio-economic status had higher odds of being depressed (Lorant et al. 2003), and a global analysis of over 139,000 individuals from 131 countries shows a positive relationship between income and emotional well-being, both within and across countries (Sacks, Stevenson, and Wolfers 2010). These findings are supported by qualitative evidence on the effects of transfers on beneficiary families across regions. For example, a woman in Ecuador reported the following: "In my household it was like happiness, we all got along, with my children, with my husband […] in my house we were happy […] because before we did not have enough money for those things [food]" (Buller et al. 2016). Further, a Ugandan beneficiary reports that "Apart from the cash, we have been united as a group. The project has brought happiness in the family, as husbands and wives. It has also united parents to their bigger children" (Nuwakora 2014). Haushofer and Fehr (2014) provide insights into the psychology of poverty by summarizing evidence that suggests poverty-related stress causes negative affective states (including sadness and anger), which increase short-sighted and risk-averse decision-making and other economic behaviors that reinforce poverty. Overall, evidence confirms a strong relationship between poverty and mental health in developing settings.
Although the relationship between poverty and poor mental health is well established, we know less about the typologies of interventions that are successful at breaking the two-way cycle. A recent review of programming concludes that, although there is good evidence that a variety of mental health programs have positive impacts on economic outcomes, overall the mental health effects of poverty-alleviation programs are inconclusive (Lund et al. 2011). However, CCTs are identified as a caveat to the latter statement. In recent years there has also been increasing evidence that UCTs have the potential to improve the mental health and well-being of children, youth, and adults in recipient households. In particular, there is evidence that CTs have positive impacts on measures of happiness and life satisfaction, stress, and depression (Ozer et al. 2011; Daidone et al. 2015; Kilburn et al. 2016a; Haushofer and Shapiro 2016), as well as child cognitive and behavioral assessments, cortisol concentration biomarkers, and adolescent psychological distress (Fernald and Gunnar 2009; Baird, De Hoop, and Özler 2013; Kilburn et al. 2016b).
Emotional Well-Being and IPV
As with poverty and mental health, evidence suggests that the relationship between poor mental health and IPV victimization is bidirectional (Machisa, Christofides, and Jewkes 2017). In a recent systematic review and meta-analysis of longitudinal studies, Devries et al. (2013b) find that, for women and men, depressive symptoms were associated with recent experience of IPV and, conversely, that recent experience of IPV is associated with recent depressive symptoms (the latter for women only).
A recent study including 10,178 men in six countries in Asia and the Pacific finds that depressive symptoms increase the risk of physical, sexual, and emotional IPV perpetration after adjusting for childhood exposure to violence (Fulu et al. 2013). Alongside depression, the literature also identifies anxiety, post-traumatic stress disorder, and other mental health disorders as associated with IPV victimization. For example, a review of cross-sectional psychiatric morbidity and population surveys finds associations between all mental disorders and IPV victimization in both men and women (Oram et al. 2014), and a systematic review and meta-analysis of 41 studies finds a higher risk of experiencing IPV among women with depressive disorders, anxiety disorders, and post-traumatic stress disorder (PTSD), in comparison with women without mental health disorders (Trevillion et al. 2012).
The link between poor emotional well-being (in particular, situational stress) and IPV has also been documented. Several studies, including one among couples in Thailand, have demonstrated an association between current life stressors and the risk of experiencing and/or perpetrating IPV (Hoffman, Demo, and Edwards 1994; Cano and Vivian 2001). Additionally, a study among U.S. Air Force Active Duty members documents a strong effect of financial stress on the risk of perpetrating IPV among both men and women (Slep et al. 2010). There is also emerging evidence that childhood abuse or other adversities may potentiate the impact of recent stressors on the risk of IPV perpetration, a hypothesis known as the "stress sensitization theory". Among 34,653 adults in the United States, for example, the risk of perpetrating IPV among men with high current life stress was 10.1 percentage points greater among those with histories of high versus low childhood adversity scores (Roberts et al. 2011).
Economic Security, Alcohol Abuse and IPV
A final mechanism through which improved economic security may affect the risk of IPV is through reduced alcohol consumption via improved emotional well-being. Although the relationship between economic security and alcohol use is complex, many studies show that the largest burden of alcohol-related mortality and morbidity falls on populations with low socio-economic status (Jones et al. 2015). Likewise, a robust body of evidence from LMICs shows a strong and consistent association between men's use of alcohol and women's risk of IPV (Gage 2005; Foran and O'Leary 2008; Graham et al. 2008; Hindin, Kishor, and Ansara 2008; Dalal, Rahman, and Jansson 2009; Abramsky et al. 2011); one systematic review pools the results of 11 studies and finds that harmful use of alcohol is associated with a 4.6-fold increased risk of exposure to IPV compared with mild or no alcohol use (Gil-Gonzalez et al. 2006). Studies suggest that alcohol affects the risk of IPV in multiple ways: as a trigger for arguments (Heise 2012); by affecting problem-solving and other cognitive abilities (Hoaken, Assaad, and Pihl 1998); by lowering inhibitions and making it easier to misinterpret verbal and non-verbal cues (Klostermann and Fals-Stewart 2006); and by playing into culturally defined scripts about how alcohol affects behavior (Quigley and Leonard 2006). While alcohol alone is neither necessary nor sufficient to cause violence, a recent review concludes that it meets all the epidemiological criteria for being considered a contributing cause of IPV (Leonard and Quigley 2017).
Intra-household Conflict Pathway
While greater financial stability may reduce IPV by improving emotional well-being, access to cash can also affect violence directly by either reducing or increasing fodder for arguments. More cash can reduce marital conflict over money, or it can increase conflict if the money is diverted to temptation goods or partners disagree on how the money is spent. In a systematic review, Vives-Cases, Gil-González, and Carrasco-Portiño (2009) find that marital conflict is significantly associated with IPV in 10 out of 11 studies identified.
Decreased Conflicts over Money
Conflicts over money have been identified by different studies in poverty contexts as a trigger for violent episodes within couples (Rabbani, Qureshi, and Rizvi 2008; Fehringer and Hindin 2014). Our review shows that CTs seem to have an impact in reducing arguments of this type. Among the papers included in our review, Buller et al.'s (2016) mixed-methods analysis finds that the provision of cash to households reduces IPV, partially by eliminating the need for women to negotiate the daily cash they need to buy food for the family. During qualitative interviews post-trial, women reported that transfers meant they did not have to ask their husbands for money, which eliminated a source of conflict in the relationship. Furthermore, Angeles (2012) finds that women in Uganda reported a decrease in fights occurring due to competition over scarce resources. The CT helped pay for a number of items such as school fees, medical bills, or immediate needs, effectively reducing arguments over money. Likewise, Yildirim, Ozdemir, and Sezgin (2014) find that, according to respondents, a majority of fights and continued IPV appeared to be due to financial difficulties, with the majority of victims reporting that IPV decreased or ceased after they had started receiving the transfer. According to a respondent in Turkey: "There had been many fights. Because children needed many things that we could not have afforded. I asked my husband and he used to say there is no money. Then I used to get upset and started to yell. We had many fights because of poverty. Not only for us, for all poor, fights come from suffering" (Yildirim, Ozdemir, and Sezgin 2014).
Increased Conflict over Temptation Goods
It is also possible that an unintended effect of CTs could be an increase in spending on temptation goods by either men or women. This relationship has generally not been supported by the literature, although there is limited global evidence on certain types of temptation goods (e.g., gambling and prostitution, as compared with consumable goods). Evans and Popova (2017) conducted a systematic review on the link between CTs (both conditional and unconditional) in LMICs and temptation goods, primarily alcohol and tobacco. The authors included 50 estimates from 19 studies and conclude that there is no systematic evidence that beneficiaries increase spending on alcohol and tobacco, a conclusion also reached by a recent analysis of seven government UCT programs in Africa (Handa et al. 2017). It is important to note that this does not mean that cash is not partially used to purchase these goods, but rather that there is no systematic difference compared to spending in non-beneficiary households.
Women's Empowerment Pathway
CTs are often hypothesized to empower women through increasing their direct access to cash, information (through trainings), or social networks (via group activities), all of which can enhance women's sense of empowerment. If resources are placed in the hands of a woman, her relative control of resources within the household improves, thus increasing her bargaining power and ability to negotiate her preferences. Direct receipt of cash also increases her financial autonomy and contributes to enhanced self-efficacy and confidence, potentially shifting the balance of power between the woman and her male partner.
Depending on how her partner reacts, this shift in power can either increase or decrease a woman's risk of IPV. Greater female empowerment can strengthen a woman's ability to exit an abusive relationship, or at least credibly threaten to leave, which might deter her husband from using violence. Likewise, if the man's reaction is positive and accepting, the risk of violence may decrease as the man comes to appreciate both his wife's competency and the added resources she brings to the household. Greater female empowerment, however, could result in more violence if a man reacts negatively to his wife's willingness to assert her preferences more forcefully. Some men may feel threatened by this shift in power and may use violence to reassert their dominance and male authority in the family.
CTs and Empowerment
Case studies support the notion that CTs can have transformational impacts on women's empowerment through improved decision-making and feelings of independence from partners (Patel, Hochfeld, and Jacqueline 2012; Nuwakora 2014; Yildirim, Ozdemir, and Sezgin 2014). As a woman from Northern Uganda reported: "Earlier, we used to farm as a family. However, my husband would sometimes sell household items without consulting me. But now that I have my own money, I can have a say on how to spend income. Moreover, I cultivate the gardens together with my husband […]" (Nuwakora 2014). Moreover, a woman in Mexico states that "I have seen that all mothers, like indigenous women that we are, things changed a lot. I notice it because now women participate a lot, when there is an assembly, or meeting, or 'plática'. They participate a lot because they have this responsibility, in order for the support [transfer] to come" (Adato et al. 2000).
Numerous studies also show that CTs increase women's savings and income-earning opportunities, suggesting that CTs may affect women's bargaining power (Perova, Reynolds, and Muller 2012; Green et al. 2015; Natali et al. 2016). However, the broader evidence is mixed. In a recent synthesis of qualitative and quantitative reviews and key evidence, van den Bold, Quisumbing, and Gillespie (2013) find that, although qualitative evidence on CCTs, largely from Latin America and the Caribbean, generally points to positive impacts on empowerment indicators, quantitative results are mixed. More recent studies focusing on the Africa region come to the same broad conclusions (Bonilla et al. 2017), and others raise competing arguments that CTs can reinforce traditional gender norms, or place an additional burden on women's time use, further reinforcing gender inequities (Molyneux 2006; Chant 2008).
At least part of the ambiguity around this linkage can be attributed to the diverse set of indicators used to measure empowerment and the inherent difficulty in drawing conclusions based on few quantitative indicators of intra-household bargaining (Peterman et al. 2015; Seymour and Peterman 2017). Adding to the complexities, intra-household empowerment is highly contextual, and there has been no clear consensus within and across disciplines on how best to measure it (Malhotra and Schuler 2005). Thus, although there are promising case studies, there are also mixed impacts and a lack of consolidated evidence across program typologies and diverse contexts with differing gendered norms.
Shifts in Relationship Power and IPV
Another strand of literature reviews how empowerment and shifts in relationship power may decrease or increase IPV (Perova, Reynolds, and Muller 2012; Hughes et al. 2015). A woman's risk of IPV based on the extent of her financial independence and self-confidence is complex, context-specific, and contingent on factors such as the socio-cultural contexts of households, characteristics of households and individuals, and particularities of the empowerment processes themselves (Hughes et al. 2015). In terms of socio-cultural factors, in patriarchal contexts women's empowerment is more likely to lead to increased conflict and IPV, at least in the short term. Hence, the relative status of women and men in terms of decision-making, and how their power and resources compare to each other, is an important contributing factor for increased IPV (Hughes et al. 2015). This seems especially common in situations where a man is unable to fulfill his gender-ascribed role as "bread-winner" and a woman is beginning to contribute relatively more to family maintenance, or where a woman takes a job that defies prevailing social convention (Hughes et al. 2015). This aligns with research by Maldonado, Nájera, and Segovia (2005) from Mexico showing that significant income increases to women may threaten men's status, causing husbands with more traditional gender views to reassert control through violence. Overall, however, the risk of increased IPV could also decline over time as both men's individual attitudes and broader social attitudes become more accepting of women's increased economic activity and financial autonomy (Ahmed 2005). For example, some participants in the South African IMAGE intervention reported that the increased self-confidence, social support, and communication skills gained from being part of a combined micro-finance and gender-training initiative resulted in improved partner communication, preventing conflict from escalating into violence (Kim et al. 2007).
Conclusion and Policy, Programmatic and Research Implications
We conducted a mixed-method review of the impact of CTs on IPV in LMICs and have built a program theory to help understand the mechanisms behind this impact. In total, we identified 14 quantitative and eight qualitative studies that met our inclusion criteria, of which 11 and six, respectively, support the hypothesis that CTs decrease IPV. We find little support for increases in IPV, and only two of our reviewed studies had overall mixed or adverse impacts.
These findings, paired with the scale and relative cost-effectiveness of CTs, suggest that they have the potential to decrease IPV at the margin across large populations of vulnerable groups. However, across the 56 quantitative outcomes measured, approximately 63 percent are insignificant, suggesting that CTs may have different impacts on different types of violence within the same study. Transfers appear to reduce physical and/or sexual IPV more consistently than emotional abuse or controlling behaviors. This finding is an apparent contradiction, since several of the pathways focus on emotional states, which would suggest initial impacts on emotional and psychological IPV before effects on physical and sexual IPV. However, we conjecture that this could be due in part to measurement issues, as emotional IPV is measured less often in studies, and with greater variability. Further, definitions of emotionally abusive acts vary across cultures, making them more difficult to define (Garcia Moreno et al. 2004).
As CTs are primarily a policy tool to respond to poverty and vulnerability, it is unlikely that large-scale institutional programming will be designed with the specific objective of decreasing IPV. However, if small design changes have the potential to decrease IPV, a key indicator of well-being and gender equity, transfer programs have the scope to realize significant gains across sectors, at a lower cost than violence-specific programming. Research to better understand how CTs affect IPV, and under what conditions, can help policy-makers maximize these gains while minimizing any unintended negative impacts of CT programs. As the collection of IPV measures in multi-topic surveys is likely to imply significant survey logistical costs, expanding the feasibility of experimental "light touch" methods is likely to aid understanding of the dynamics in generalized programming (Peterman et al. 2017a).
We found evidence to support all three hypothesized pathways: economic security and emotional well-being; intra-household conflict; and women's empowerment. We also found substantial evidence from related literature to support each step in the proposed causal chains, with the exception of increasing violence by exacerbating conflict over the consumption of temptation goods. According to our program theory, the economic security and emotional well-being pathway is the only one that exclusively reduces IPV; the other two pathways may increase or decrease IPV, depending on whether additional cash aggravates or soothes relationship conflict and/or how men respond to women's increased empowerment. How these pathways play out depends on intra-household gender dynamics, which are, in turn, affected by local gender regimes and socio-economic inequalities within a setting or beneficiary population. Thus far, quantitative evaluations have not been well designed to measure these mechanisms, particularly those relating to relationship dynamics and behavioral intra-household measures.
The qualitative studies suggest that in highly patriarchal settings, shifts in household dynamics that are less challenging to traditional gender norms are less likely to prompt violence. Likewise, programs that generate smaller shifts in relationship power appear more easily accepted by men than those catalyzing larger disruptions (Maldonado, Nájera, and Segovia 2005; Slater and Mphale 2008). For example, Buller et al. (2016) note that increased cash and in-kind transfers to women have been accepted by Colombian and Ecuadorian men in part because they are intended for children's nutrition, a domain already within the domestic responsibilities of women. Indeed, how a program is "framed", and the meaning imbued to cash by a program's stated intent (e.g., for women's entrepreneurship versus child health), may influence the transfer's impact on gender dynamics and IPV as much as any other program feature. More "acceptable" shifts might also be achieved by making smaller, more regular transfers (conducive to small household purchases managed by women), rather than larger or lump-sum transfers. It should be noted, however, that the Kenya Give Directly study tested lump-sum versus periodic transfers and finds that the difference did not significantly affect the magnitude of impacts on IPV (Haushofer and Shapiro 2016). Understanding the importance of transfer size and other design features on intra-household dynamics is important, as economic security and poverty impacts are likely to be larger with increasing size of the transfer relative to pre-program household consumption, thus suggesting a potential program design trade-off.
The recipient of the CT is also likely to be a key factor in understanding potential impacts on IPV. While empirical evidence is scarce and mixed in terms of the impact of recipient sex on the economic and human capital outcomes of transfers, there is even less evidence on how different targeting schemes affect IPV outcomes (Yoong, Rabinovich, and Diepeveen 2012). Across the studies reviewed here, the majority of CTs transfer cash to women; therefore, a large gap in knowledge remains with respect to impacts on IPV when men are the main recipient, as is the case in many programs in Africa. Haushofer and Shapiro (2016) published the only study that randomly compares male and female beneficiaries, and the authors find no differential impact on IPV. These differences will be particularly important in settings where men are the de facto recipient due to gendered mobility constraints and lower perceived cultural acceptability of transferring benefits to women (e.g., the Middle East and parts of South Asia).
Lastly, the associated benefits from complementary activities such as trainings and group meetings are also likely to be a key factor shaping how a CT program impacts IPV. Complementary activities could independently decrease IPV by empowering women through increased knowledge, which leads to increased self-esteem, social interaction, and social capital. Most CT programs reviewed are linked to some complementary activities. While the literature acknowledges that complementary activities might play a role in generating impacts, this mechanism is seldom explored explicitly. The Bangladesh study by Roy et al. (2017) is the only one that attempts to separately evaluate the impact of the transfer versus the transfer plus auxiliary activities. These authors find that decreases in IPV six months post-program exist only in the CT-plus-BCC group, and not in the CT-only group.
It is worth mentioning that although the average impacts of the studies reviewed overwhelmingly show decreases in IPV, several studies find increases for select IPV outcomes within particular sub-groups of beneficiaries (e.g., Bobonis et al. 2013; Hidrobo and Fernald 2013). In addition, we excluded two studies where the cash transfers are one-time lump-sum grants as part of larger micro-enterprise programs with couples therapy or bundled livelihood, savings, and coaching programs. In the first study, Green et al. (2015) find that women in Northern Uganda receiving the micro-enterprise training alone experienced increased marital control, while those with added couples therapy did not (with no impacts in either group on physical or emotional abuse). In the second study, Ismayilova et al. (2017) find that women in Burkina Faso benefiting from both arms of bundled savings and livelihoods programming experienced reduced emotional violence; however, this effect is larger amongst those women receiving family coaching. Therefore, while our assessment is optimistic about the direction and level of impacts on IPV, we recognize that diverse programming variations are yet to be widely tested and understood.
Our review has a number of limitations. We exclude studies that explore the impact of transfers on other types of violence that may have implications for IPV, including community-level violence or intra-household violence perpetrated by or directed at other household members. For example, there is increasing interest in, and some potential for, social safety nets, including CTs, to decrease violence against children, although the evidence is weak for most types of childhood violence apart from sexual violence and abuse among adolescent girls (Peterman et al. 2017b). Conclusions around promising mechanisms for the reduction of violence against children relate to several of those that we identify, including increases in economic security and decreases in poverty-related stress. This suggests the potential for CTs to affect multiple types of intra-household violence simultaneously, but no study to date has explored this potential. Likewise, transfers could decrease community violence through positive economic spillovers into non-beneficiary households, or could increase violence due to social tensions and jealousy triggered by the CT (Adato 2000; Slater and Mphale 2008; Wasilkowska 2012; Beasley, Morris, and Vitali 2016). Finally, we cannot generalize our findings on household dynamics to high-income countries, or from CTs to broader social protection or economic strengthening programs.
Our findings, however, have important implications for future research. First, evaluations should carefully consider the IPV metrics to be included to ensure that they capture internationally validated measures of IPV that are sensitive to program impact (Heise and Hossain 2017). To date, we know little about how CTs may affect the frequency and severity of IPV, which would aid our understanding of dynamics at the margin. Second, studies need to go beyond impact to include validated and credible measurement of pathways to better understand the behavioral underpinnings of the CT and IPV relationship. In doing so, studies will deepen both our understanding of how transfers affect IPV and our understanding of the behavioral relationships beneath each causal link, many of which are understudied in LMICs. It is likely that mixed-method studies will advance our understanding of these links better than either quantitative or qualitative studies alone; however, to date, few mixed-methods evaluations have been conducted.
There is also a need for a better understanding of how program design features affect ultimate outcomes and pathways, particularly with respect to targeting, complementary programming, program linkages, and conditionalities. Of the quantitative studies included in the review, only four use a research design that is able to test program variations (Green et al. 2015; Haushofer and Shapiro 2016; Hidrobo et al. 2016; Roy et al. 2017), and none were able to test potential synergistic effects between program components.
There are large regional and contextual gaps in our understanding of dynamics, with evidence skewed toward Latin America and little understanding of Asia and the Middle East, or of how dynamics may differ in humanitarian settings. Evidence from SSA is scarce (particularly empirical evidence) and is concentrated in Eastern and Southern Africa, with little evidence arising from Western and Central Africa, where gender norms and institutions may vary. Finally, we know little about long-term impacts, including how impacts may vary over time horizons and whether impacts are non-linear, as well as the sustainability of impacts after CTs end or households graduate (the latter was studied only by Roy et al. 2017).
Although our review indicates that CTs are promising tools to reduce IPV, this relationship is complex, and there are large gaps in our understanding of which program design components are necessary or beneficial in diverse settings. For example, it is likely that within any one program there are multiple or competing causal pathways in operation, with differential distributional impacts or impacts that vary by type of IPV. It is also possible that impacts in the short run may differ from longer-term impacts as relationships begin and end and programs are phased out. Although we have not conducted a meta-analysis due to variation in the outcomes captured, as the evidence base grows, future work may be able to capture variation in the magnitude of impact and how it relates to key program design features, including transfer size, and to important contextual factors such as the baseline prevalence of IPV. As cash and other transfers are increasingly scaled up in development settings, we welcome further research to better understand and leverage gains across sectors on non-traditional outcomes, including IPV.
Notes to table 2 (continued). Abbreviations: BL = baseline; BCC = behavior change communication; C = control or comparison group; CT = cash transfer; CCT = conditional cash transfer; Govt = Government.
4 The aggregate psychological index is a z-score constructed by averaging z-scores for seven psychological outcomes: (1) does not allow you to see friends or family; (2) does not allow you to study or work; (3) ignores you; (4) yells at you; (5) tells you that you are worthless; (6) threatens to leave; and (7) threatens to take the children. Of these individual indicators, only (1) and (2) are negative and significant when disaggregated.
5 The impact of the pooled treatment was also analyzed on 19 disaggregated indicators, with six being significant: controlling behaviors (1, accused you of being unfaithful; 2, tried to limit contact with your family), emotional IPV (3, humiliated or insulted you), and physical or sexual IPV (4, pushed you, shook you, or threw something at you; 5, slapped you or twisted your arm; 6, tried to choke or burn you).
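As an illustrative aside, one common way to construct such an index is to z-score each item against the comparison group's mean and standard deviation and then average across items. The note does not spell out the exact procedure, so the column names, sample data, and standardization choice below are assumptions for the sketch only.

```python
# Build an aggregate index by z-scoring each item against the comparison
# group, then averaging across items. Data and names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
items = [f"item{i}" for i in range(1, 8)]          # 7 psychological outcomes
df = pd.DataFrame(rng.binomial(1, 0.2, size=(500, 7)), columns=items)
df["treat"] = rng.integers(0, 2, 500)

control = df.loc[df["treat"] == 0, items]
z = (df[items] - control.mean()) / control.std()   # standardize to controls
df["psych_index"] = z.mean(axis=1)                 # average across the items
```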
Notes to table 3. Source: The authors. Notes: Asterisk * refers to peer-reviewed journal article; † refers to working paper or technical report; (a) the direction of effect indicates whether the CT increased, decreased, had mixed effects, or had no effect on IPV; (b) the COREQ assessment is a 32-item checklist to help researchers report important aspects of qualitative research, such as the research team and reflexivity, study design and methods, and data analysis and reporting, with scoring based on assessment by two independent researchers; (c) outcomes included broader forms of GBV (e.g., sexual violence, theft). Abbreviations: BL = baseline; CT = cash transfer; CCT = conditional cash transfer; FGD = focus group discussion; GBV = gender-based violence; Govt = Government; HH = household; IDI = in-depth interview; IPV = intimate partner violence; M = Lesotho Maloti; NGO = non-governmental organisation; PMT = proxy means test; SSI = semi-structured interview; UCT = unconditional cash transfer; US$ = United States dollar; VLSA = village loans savings association.
Figure 1. Program Theory Linking Cash Transfer and Intimate Partner Violence.
Table 2. Review of Core Quantitative Papers with Impact Evaluation Evidence on Cash Transfers and Intimate Partner Violence (table not reproduced here; source and notes follow).
Source: The authors. Notes: Asterisk * refers to peer-reviewed journal article; † refers to working paper or technical report; # refers to endline or baseline control mean; OLS is the coefficient from an ordinary least squares regression; ME is the marginal effect from a probit regression; OR and RR are odds and risk ratios from logistic regression. Significance levels are bolded if significant at conventional levels: * = p < 0.1; ** = p < 0.05; *** = p < 0.01 (reported as in the original article).
The baseline (2012), midline (2013), and endline (2014) did not collect information on IPV. However, the post-endline survey in 2014/15 did collect this information; estimates therefore reflect 6-10 months after a 24-month transfer program had ended.
2 Homicide data are collected from Brazil's National Mortality Data Base (SIM), complemented by the Sistema de Informações Hospitalares (SIH), and coded as homicide if classified as such or if the cause of death is aggression.
3 The data are municipality-level reports recorded by the National Institute of Legal Medicine via the National Reference Center on Violence; as these are events reported to health and justice systems, they are unlikely to comprise less severe IPV (including emotional violence). The associated treatment coefficient is the month of CCT receipt (short-term effects), leveraging variation in payments across municipalities.
7 The authors use nationally representative household survey data (National Survey on Relationships within the Household, ENDIREH, 2003) and take a number of steps to account for potential endogeneity and the construction of a credible control group using within-village comparisons. In addition, the authors (a) restrict the sample to households with children under 12 years at baseline, (b) women aged 25 and older, and (c) women in unions or partnerships since 1997 or earlier (who would have made their marital choices pre-program). Models are estimated using an extensive set of control variables. Reported estimates come from table 3 (page 193), column 1, with village fixed effects and individual and household controls.
8 As in Bobonis, Gonzalez-Brenes, and Castro (2013), the authors use secondary data from ENDIREH meant to capture program impacts after 9 to 13 years of exposure and create similarly restricted cohorts per cross-sectional survey. We report just the estimates for 2006 and 2011, as the 2003 estimates are already included.
9 The exact questions were: "Who is (are) the individual(s) who drinks the most in this household, irrespective of the frequency?" and "While drinking, does this person (referring to the heaviest drinker) have an aggressive behavior?"
10 The authors use an embedded module, "La Encuesta sobre Violencia y Toma de Decisiones (ENVIT)", integrated into a random sample of the 2004 follow-up of the experimental evaluation for the urban sample of Oportunidades, "La Encuesta de Evaluación de Oportunidades en áreas urbanas 2004". The survey covers both "internal controls (eligible households in communities with Oportunidades who are not beneficiaries)" and "external controls (eligible households in non-Oportunidades communities)". The analysis gives coefficients for controls, instead of for program participation.
11 The author uses repeated cross-sections from the "Encuesta Demográfica y de Salud Familiar (ENDES)" survey, relying on changes in the Juntos rollout and restricting the analysis to the 880 highest-priority districts. The analysis controls for individual and household characteristics, including the Juntos eligibility score, as well as district and year fixed effects (coefficients reported are from fully controlled models). In addition, matching estimates are presented as robustness checks but are not reported, as they are similar to those reported here.
12 The author uses repeated cross-sections from the "Encuesta Demográfica y de Salud Familiar (ENDES)" survey, relying on changes in the Juntos rollout and the construction of a comparison group using a poverty score, limiting the analysis to poor, rural eligibles. Results reported here come from difference-in-difference models with individual controls and district and year fixed effects.
Table 3. Review of Core Qualitative Papers with Evidence on Cash Transfers and Intimate Partner Violence.
Table 3. (Continued) Of the qualitative studies reviewed by Buller et al., three are NGO-led programs, of which two are external evaluations of the same CT implemented by Action Against Hunger in northern Uganda in 2012 and 2014; three are government-run programs (two UCTs and one CCT); and two are run by international organizations (table 3, column 3). Of the eight qualitative studies, three interventions are UCTs; one provides cash, food, or vouchers conditional on attending nutrition training; and four are CCTs (table 3, column 4). Women are targeted as the main recipient in most programs, despite cases where the household or a small proportion of males receive the transfer (Lesotho and Ecuador; table 3, column 6). In almost all the studies, either focus group discussions (two studies), in-depth interviews (two studies) or a combination of the two methods (four studies) were used as the method of data collection. One study in Nicaragua used an ethnographic approach, with semi-structured interviews and participant observation to explore perceptions of the program (table 3, column 7). Data collection for the studies ranged from 1999 to 2014, with the majority taking place between 2011 and 2014 (table 3, column 8).
|
2018-04-29T02:25:08.845Z
|
2018-03-23T00:00:00.000
|
{
"year": 2018,
"sha1": "0f3a8992680234bb330985d2c000efd7f05c0524",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/wbro/article-pdf/33/2/218/26000886/lky002.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d8d0fa523f8d9c2fee78c3655c3ea64ceefb6dca",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
219283866
|
pes2o/s2orc
|
v3-fos-license
|
Dual Electrochemical Treatments to Improve Properties of Ti6Al4V Alloy
Surface treatments are considered a good alternative to increase the biocompatibility and lifetime of Ti-based alloys used for implants in the human body. The present research reports the comparison of bare and modified Ti6Al4V substrates regarding hydrophilicity and corrosion resistance properties in a body fluid environment at 37 °C. Several surface treatments were conducted separately to obtain either a porous oxide layer using nanostructuration (N) in an ethylene glycol solution containing fluoride, or bulk oxide thin films through heat treatment at 450 °C for 3 h (HT) or electrochemical oxidation at 1 V for 3 h (EO), as well as combined treatments (N-HT and N-EO). In-situ X-ray diffraction and ex-situ transmission electron microscopy have shown that heat treatment first gave rise to the formation of a 30 nm thick amorphous layer which crystallized into rutile around 620 °C. Electrochemical oxidation gave rise to a 10 nm thick amorphous film on top of the surface (EO) or below the amorphous nanotube layer (N-EO). Dual treated samples presented similar results, with a more stable behavior for N-EO. Finally, from both the corrosion and hydrophilicity points of view, the new combined treatment giving a totally amorphous N-EO sample seems to be the best, and even better than the partially crystallized N-HT sample.
Introduction
Pure titanium is a polymorphic material and can crystallize into two types of crystalline structures: α and β [1].
The α-titanium shows a stable hexagonal close-packed structure (hcp) up to about 880 °C, while the β-titanium has a body-centered cubic structure (bcc) and is stable in the temperature range between 880 °C and the melting temperature of 1670 °C. Aluminum is the main stabilizing element of the alpha phase, while vanadium is one of the most used isomorphic β-stabilizing elements thanks to its high solubility in titanium [1].
Ti6Al4V commercial alloy is biphasic, with the following chemical composition: 6 wt% of aluminum (α-stabilizer) and 4 wt% of vanadium (β-stabilizer). In recent decades this alloy has been widely used in various fields and multiple applications thanks to its excellent mechanical properties, osseointegration properties, and low density associated with high resistance to corrosion, wear, fatigue, creep, and the propagation of cracks [2]. Considering medical applications, dental implants, as well as knee and hip prostheses, are often made with this alloy. In particular, the biocompatibility is linked to the fast formation of an oxide surface layer of 2-5 nm [3] which is formed when the material is exposed to air or to any oxidizing medium such as human body fluid. This layer is protective because it slows down the corrosion process, and it is also able to reform within a few milliseconds if damaged [4]. It is composed of different oxide layers: a TiO layer directly in contact with the titanium alloy substrate, an intermediate Ti2O3 oxide, and finally an outer TiO2 layer [5]. Unfortunately, the layer that naturally forms in contact with air is not completely stable and, at a microscopic level, in particular media and conditions, it can be broken, triggering localized corrosion. Since patient lifetime increases, the corrosion resistance of the prosthesis has to be increased to avoid a new surgery on older patients [6]. Moreover, as the material shows the tendency to form a passive layer which greatly reduces its reactivity, bonds between bone cells and prosthesis are not easily created, leading to adhesion loss with time [7]. In order to overcome these problems linked to bio-corrosion resistance and osseoinduction, surface modifications are necessary.
To increase the corrosion resistance properties, a dense oxide layer is generally grown on the metallic surface. It can be achieved by thermal oxidation [8,9], anodic oxidation [10,11], pulsed laser deposition [12], and reactive sputtering [13]. Among them, thermal and anodic oxidations can be considered the simplest and most cost-effective techniques to generate a dense oxide barrier. Nanostructurated oxide layers could enhance the corrosion resistance of implants too. This effect is still not clear: both a benefit [2] and the opposite result [14,15] have been reported. Nevertheless, such a surface treatment could enhance mechanical interlocking between prosthesis and bone. Barranco et al. [16] observed that osteoblasts showed a higher adhesion to surfaces with increased roughness. Several treatments can be realized, such as plasma-spraying, grit-blasting with ceramic particles, sol-gel deposition, acid etching, or anodization in a fluoride-containing electrolyte [17][18][19][20][21][22][23][24]. This last method allows the formation of an amorphous and homogeneous TiO2 nanotube (TiO2 NT) array [25,26] with controllable dimensions depending on the experimental conditions. The well-known formation mechanism is a competition between the TiO2 growth due to anodization and its chemically oriented dissolution due to fluoride ions attracted by the positive anodic surface [27]. Different organic or inorganic electrolytes can be used: organic baths containing glycerol or ethylene glycol, NH4F, and H2O are often preferred to the aqueous ones that contain HF. This is mainly linked to the fact that they are less dangerous and lead to more homogeneous tubes, although these are partially covered with a layer consisting of some conglomerates of partially dissolved tubes [28]. A systematic increase of the corrosion resistance was reported for a nanostructurated layer after heat treatment. However, an open question is related to the action mechanism of this thermal treatment. Some authors found the cause in the formation of a barrier layer that grows below the nanotubes [29][30][31][32]. On the contrary, Munirathinam and co-authors concluded that the crystallization of the initially amorphous nanotubes was responsible for the improved corrosion behavior [33].
To verify the effect of a barrier layer while avoiding the crystallization of the nanotubes, this paper proposes for the first time to electrochemically grow this barrier layer coupled to the nanostructured film.
In addition, in most papers, only one kind of process is realized. When two processes are combined, few different experimental conditions are used to modify the layer. Moreover, the effect of each treatment is not studied separately. For example, in [2], where Grotberg et al. compare bare, nanostructured, heat-treated, and nanostructured-plus-heat-treated samples, only one experimental condition is used. In [31,34] the heat treatment of anodic NTs was studied at different temperatures (but no variation of the NT layer was performed).
In this research, a comparative study was made between the bare Ti6Al4V alloy (B) and the following surface treatments, done to increase both osseointegration and corrosion resistance: (1) nanostructurated oxide layer (N), electrochemically obtained in an ethylene glycol medium containing fluoride ions; (2) bulk oxide, grown either by heat treatment (HT) in air at 450 °C or by electrochemical anodization (EO) in a sodium sulphate bath; (3) dual treatments combining nanostructuration followed by heat treatment (N-HT) or electrochemical oxidation (N-EO). To verify the effect of a barrier layer while avoiding the crystallization of the nanotubes, which should arise in the case of N-HT, we propose for the first time nanostructuration followed by bulk electrochemical oxidation.
Therefore, our goal was to study the effect of each layer separately (HT, EO, N) or combined (N-HT, N-EO) and compare the morphological, structural, wettability, and corrosion results to those obtained from the bare surface (B). To determine the thermal stability and to characterize the phase transitions under heat treatment, non-isothermal methods (in-situ XRD) are also used in oxidizing media (air).
Materials Synthesis and Post-Treatments
A biphasic α + β Ti6Al4V bar with a diameter of 1.4 mm (French company Aubert & Duval, Paris, France) was used as the starting material. It was cut into disks with a thickness of about 2 mm, polished on SiC paper with grades ranging from 180 to 4000, and then polished with 6 µm down to 1 µm diamond pastes. An ultrasonic cleaning procedure was carried out in pure ethanol for 5 min. Finally, the bare samples were dried with compressed air. Bare titanium alloy samples were labeled B.
TiO2 nanotubes were grown by electrochemical anodization using a two-electrode configuration, with a Ti alloy disk as the working electrode placed in a PTFE holder (exposed area of 1.3 cm2) and a Pt grid as the counter electrode. The optimum conditions in ethylene glycol solutions were selected according to the literature [35]. An ethylene glycol solution (VWR-chemicals, Fontenay sous Bois, France: GPR Rectapur) containing 0.3 wt% of NH4F (Sigma Aldrich 98%, Saint-Quentin-Fallavier, France) and 20 wt% of pure water served as the electrolyte at room temperature. A potential of 60 V was applied for 3 h using a power generator (Iso-Tech IPS 603, Paris, France). After anodization, the samples were rinsed with deionized water and dried. These nanostructured samples will be referred to as N.
Compact layers were obtained either by heat treatment or by bulk electrochemical oxidation. Ex-situ heat treatment under air was done on B and N samples at 450 °C for 3 h using a standard furnace (Nabertherm 30/3000 °C, Lilienthal, Germany). The heating rate was set to 5 °C/min. The heat-treated samples were referred to as HT and N-HT, respectively. Electrochemical oxidation was performed in a 1 M sodium sulphate bath (Sigma-Aldrich Rectapur, Saint-Quentin-Fallavier, France). A classical three-electrode cell was used, with a Pt foil as counter electrode, an Ag/AgCl electrode as reference (0.2 V/NHE), and B or N samples as working electrodes. The samples were oxidized for 3 h at 1 V/(Ag/AgCl) and referred to as EO and N-EO, respectively.
Characterization
Surface morphology and average composition were investigated using a scanning electron microscope (SEM) (Zeiss Gemini SEM 500 70-04, Oberkochen, Germany) equipped with an X-ray energy dispersive spectrometer (EDS). Measurements were carried out using an acceleration voltage between 5 and 20 kV. To better determine the thickness of the layers, a thin cross section of the samples was prepared using a dual-beam Focused Ion Beam (FIB) (FEI-COMPANY/Helios 600 nanolab, Thermofisher Scientific, Hillsboro, OR, USA). Protective C and Pt layers were first deposited on the surface to protect it. A Ga+ ion beam was used to produce thin lamellae intended for microscopic examination by SEM or transmission electron microscopy (TEM) (Tecnaï G2, Thermofisher Scientific).
The structure of the various samples was analyzed using X-ray diffraction measurements. Two kinds of experiments were performed, both on a Philips X'Pert diffractometer (Malvern Panalytical, Malvern, UK) working in Bragg-Brentano geometry and using filtered CuKα (λ = 0.15418 nm) as a radiation source. θ-2θ scans were registered over a 2θ angular range from 20° to 80° with 2θ steps of 0.04° using a rapid detector, or from 24° to 59° for the in-situ XRD experiments to save time. This 2θ range corresponds to a region where most of the diffraction peaks coming from titanium and titania (anatase, rutile) phases can be observed. Ex-situ acquisitions were performed at room temperature to investigate the changes in crystallinity before and after heat treatment. In-situ measurements were done to follow the structural changes occurring during the oxidization process. In order to dissociate the phase transitions coming from the substrate from those coming from the nanotubes, experiments were performed on both B and N samples. In-situ heating was done under air using a thermo-regulated furnace (HTK 1200 Anton Paar, les Ulis, France). A thermocouple was placed close to the sample surface and the error on the temperature determination was estimated to be around ±20 °C. A heating rate of 10 °C/min was applied from 25 °C to 280 °C, because no structural changes were expected in that range. Starting from 280 °C, the data were collected every 20 °C up to 760 °C. An equilibration time of 1 min was allowed for temperature homogenization at each step. As the data collection time was equal to 5 min per pattern, this corresponded to an average heating rate of 2.5 °C/min.
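As a quick check of the in-situ acquisition schedule described above, the arithmetic below reproduces the quoted average heating rate, assuming the 10 °C/min ramp rate is also used between consecutive steps (a minimal sketch based only on values given in the text):

```python
# Average heating rate during the in-situ XRD experiment, reconstructed
# from the schedule: ramp 20 °C at 10 °C/min (2 min), hold 1 min for
# temperature homogenization, then collect one pattern in 5 min.
step_c = 20.0      # temperature increment between patterns (°C)
ramp_rate = 10.0   # assumed furnace ramp rate between steps (°C/min)
t_equil = 1.0      # equilibration time at each step (min)
t_scan = 5.0       # data collection time per pattern (min)

t_cycle = step_c / ramp_rate + t_equil + t_scan    # 2 + 1 + 5 = 8 min
print(f"average heating rate = {step_c / t_cycle:.1f} °C/min")  # 2.5 °C/min
```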
Anodic potentiodynamic polarization tests were carried out to determine the corrosion resistance properties of the initial B, N, HT, and EO samples, as well as of the combined treated samples N-HT and N-EO. A three-electrode cell was used, in which the sample represented the working electrode, a Pt foil acted as the counter electrode, and an Ag/AgCl (KCl saturated) electrode was used as the reference. All potentials are indicated versus this reference. A Dulbecco's Phosphate Buffered Saline (DPBS, phosphate buffered saline solution with KCl, KH2PO4, NaCl, Na2HPO4, MgCl2, CaCl2) solution (Sigma Aldrich) at 37 °C was used in order to simulate the human body environment. Polarizations were done after 30 min of sample immersion in the test solution, at 2 mV/s from −0.1 V vs. the open circuit potential to 0.8 V/ref, as recommended in the ASTM F2129-19a standard dedicated to determining the corrosion susceptibility of small implant devices. The tests were reproduced three times or more to assess the reproducibility. The mean values of the corrosion parameters obtained on three reproducible tests were evaluated and compared as a function of the surface treatments.
Wettability tests were performed to determine the hydrophobic or hydrophilic superficial behavior of the surface as a function of the conducted treatment [2]. The study of the superficial energy deduced from the contact angle was connected to osseointegration, as a higher surface energy, and therefore a lower contact angle, induced a better hydroxyapatite formation and an easier attachment of cells [32]. Wettability was determined by the measurement of the contact angle between a 3-µL droplet of ultra-pure water and the surface of the samples. A contact angle video system consisting of special optics (Adimec MX 12P, Eindhoven, The Netherlands) and a camera was used. Data were analyzed using the Video Savant software 3.0 (IO Industries, London, ON, Canada). Contact angle measurements were recorded 5 s after the drop-off and repeated 10 times to assess the reproducibility and determine the error.

Figure 1 is a SEM picture acquired on sample B's surface after etching in 4 mL HF + 2 mL HNO3 + 100 mL water. Both α and β phases appeared, the whiter areas corresponding to the beta phase. EDS analysis (Table 1) shows that the white (β) zones are richer in vanadium than the dark (α) zones.

Table 1. EDS analysis made on white and dark areas from Figure 1. Values accuracy ±0.5.

Kind of Surface    V (wt%)    Al (wt%)    Ti (wt%)
White zone         5          6           89
Dark zone          2          6           92

The sample N surface appeared with a dull gray colored aspect. The morphology of the as-formed nanotubes is shown in the SEM images presented in Figure 2a,b. The surface is composed of a non-homogeneous array with a cylindrical geometry. The total diameter of the tubes is 275 ± 55 nm and their wall thickness is around 55 ± 5 nm. Some differences in height appeared, with recessed areas. Only a few references [36,37] dealing with this kind of inhomogeneity can be found. They suggest that the β phase is etched preferentially by fluoride ions due to a higher dissolution rate of the V-rich phase. The mean length estimated through cross-section images (Figure 2b) is around 1 µm, varying from 0.8 to 1.2 µm, probably as a function of the phase from which they have grown. In order to have an idea of their stoichiometry, eliminating the substrate composition, the nanotubes were scratched onto carbon tape. The EDS results given in Table 2 are in good agreement with TiO2 stoichiometry. Note that, as in the substrate, V and Al elements are still present in the nanotubes.
Bulk films made by heat treatment (sample HT) are composed of a very dense layer with a grain size between 5 and 10 nm (Figure 3a). A golden-purple color, depending on the incident light, is clearly visible by eye on HT. According to the literature, their thickness should be between 10 and 40 nm [4]. A mean value of 30 nm was actually estimated through the TEM image made on the cross section (Figure 3b), in agreement with [4].

In the case of the bulk layer obtained by electrochemical oxidation, no change in color appeared for EO. SEM observations showed a smooth surface (Figure 4a). The grain size cannot be easily determined at this magnification, which corresponds to that used for the HT sample (Figure 3a).

Dual treatments consisting of a porous layer followed by a bulk oxide layer were realized to obtain the N-HT and N-EO samples (Figure 5a,b, respectively). On the N-HT sample, no significant morphological changes were observed after heat treatment, with the nanotube array preserved. Nevertheless, on the SEM cross-sectional view in Figure 5a, a start of crystallization seems to appear at the bottom of the tubes, visible as small crystallites (pointed at by the arrow). The tubes are now attached to the substrate by an almost 39 nm thick thermal oxide layer underneath the tubes. In comparison with the literature, this thickness is lower because the temperature and time used for the heat treatment are lower too [38]. Velten et al. [4] showed that on titanium and its alloys the thickness of the thermal oxide layer increases following the same trend: a logarithmic function up to 500 °C and a parabolic one for higher temperatures. For the same heat treatment conditions as those used in this study (450 °C, 3 h), but on pure titanium, the TiO2 thickness should be around 40 nm [4], which is in good accordance with our result.

On the N-EO sample (Figure 5b) the bulk oxide layer below the nanotubes is thinner (around 16 nm). The thicknesses determined for the bulk HT and EO layers on N samples are in good accordance with the values obtained from sample B.
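To make the two growth regimes reported by Velten et al. [4] concrete, the sketch below encodes them numerically; the rate constants are illustrative placeholders chosen only so the logarithmic branch returns roughly the 40 nm quoted for 450 °C and 3 h, not fitted literature values:

```python
import math

def oxide_thickness_nm(T_c: float, t_h: float) -> float:
    """Illustrative thermal-oxide growth on Ti: logarithmic law up to
    ~500 °C, parabolic above, after the trend described by Velten et al.
    The prefactors k_log and k_par are placeholder values, not fits."""
    if T_c <= 500.0:
        k_log = 21.3                      # nm, illustrative prefactor
        return k_log * math.log10(1.0 + 25.0 * t_h)
    k_par = 60.0                          # nm/h**0.5, illustrative
    return k_par * math.sqrt(t_h)

print(f"{oxide_thickness_nm(450, 3):.0f} nm at 450 °C, 3 h")  # ≈ 40 nm
```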
Structural Analysis
The XRD pattern performed on sample B is presented in Figure 6, and the patterns recorded on the surface-treated samples are gathered in Figure 7. On sample N, the absence of a diffraction line coming from the TiO2 nanotube structure is consistent with their reported amorphous nature [29]. From those results, it can be concluded that the initially amorphous TiO2 nanotubes have crystallized into anatase after a heat treatment at 450 °C. On the contrary, an anatase phase is not evidenced for HT. This is in accordance with the fact that anatase can only be stabilized in the nanosized range, while rutile is the stable phase for bulk material [39,40].

Finally, among the samples used in this comparative study, only N-HT was partially crystallized: at 450 °C the initially amorphous nanostructured layer moved to anatase while the thermal layer was still amorphous.
In order to get insight into the structural changes undergone during heat treatment in air, in-situ XRD experiments were performed on B and N samples (Figures 8 and 9, respectively; only selected patterns are shown for readability).
For sample B (Figure 8), at room temperature, only the diffraction lines coming from the Ti substrate are observed, as already mentioned. Note that the diffraction peak labeled "c", observed around 26.6°, is related to the sheath of the thermocouple placed near the sample. The temperature increase leads to a shift of the diffraction lines coming from the Ti substrate toward lower diffraction angles. This can be explained by the thermal expansion of the Ti crystal lattice. At 620 °C, the onset of several diffraction peaks (noted as R) can be attributed to the rutile phase according to the JCPDS file (PDF2 021-1276; ICDD, 2002). The peak at 36.0°, attributed to the (101) plane, is the most intense peak due to the preferential orientation of the layer. A further temperature rise leads to an increase of the rutile signal. This suggests that the thickness of the Ti oxide barrier layer that has transformed into rutile continues to increase with temperature.

For sample N (Figure 9), as already seen with the ex-situ XRD experiments, no peaks other than those from the Ti substrate and the ceramic sheath of the thermocouple are observed up to 380 °C. At this temperature, a new diffraction peak appears at 25.2°, attributed to the (101) plane of anatase. At 620 °C, the diffraction peaks from the rutile phase are evidenced and their intensity increases with the temperature. At higher temperatures, the intensity of the peaks coming from R continues to increase while that of anatase decreases.
In order to quantify the relative proportions of the anatase and rutile phases, the areas of the most intense diffraction peak of each phase were calculated as a function of the temperature for both B and N samples. The results are summarized in Figure 10. The signal coming from rutile increases rapidly from 620 °C on both samples. The evolution of the anatase amount can be divided into three zones: a sharp increase from 300 °C to 440 °C, a slower evolution up to 640 °C, and a decrease at higher temperatures. Anatase is totally transformed into rutile above 740 °C, while the signal coming from rutile continues to increase. This result proves that the rutile phase comes mostly from the crystallization and growth of the thermal layer rather than from the anatase-to-rutile transformation of the nanotubes. Note that the heat treatment temperature used in this study is 450 °C.
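As an illustration of this quantification step, the sketch below turns integrated peak areas into a simple relative rutile fraction; the listed areas are made-up example values standing in for the intensities extracted from the in-situ patterns, and a calibrated formula (e.g., Spurr-Myers) could be substituted if reference intensity ratios were known:

```python
# Relative anatase/rutile proportion from integrated peak areas.
# The areas below are made-up example values (arbitrary units) standing
# in for the integrated intensities of the anatase (101) and rutile
# peaks as a function of temperature.
areas = {  # temperature (°C): (anatase area, rutile area)
    440: (120.0, 0.0),
    640: (110.0, 35.0),
    740: (5.0, 160.0),
}
for temp, (a_anatase, a_rutile) in areas.items():
    f_rutile = a_rutile / (a_anatase + a_rutile)  # uncalibrated fraction
    print(f"{temp} °C: rutile fraction ≈ {f_rutile:.2f}")
```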
Corrosion Behavior

The corrosion study was done in a simulated human environment to evaluate the improvement brought by each surface treatment to the titanium alloy lifetime.

The polarization curves are gathered in Figure 11 for sample B and the surface-treated samples. The results extracted from these curves are given in Table 3. The corrosion potential (E_corr) and the corrosion current density (i_corr) were obtained using the Tafel slope method, from the intersection of the cathodic slope with a line crossing E_corr. Indeed, with the anodic part under passivation, the Tafel method is not valid there. The passivation current density (i_p) was determined at 0.8 V; the corrosion rates v_corr were deduced using Faraday's law:

v_corr = (M × i_corr)/(n × F × ρ),    (1)

with M the Ti molar mass, i_corr the corrosion current density, F = 96,500 C mol−1, n the number of exchanged electrons, and ρ the density of Ti. For any sample, the instantaneous corrosion currents were very low, leading to very weak corrosion rates between 0.06 and 9 µm·y−1, with all Ti alloy surfaces in a passive state. The N-HT sample shows a higher corrosion rate in comparison, indicating a greater tendency to be corroded. However, it should be considered that the current remains at very low values and that, considering the shape of the polarization curve, the tendency to form a passive film with a decrease in current density is present. B and N samples were similar from a corrosion point of view, even if N is slightly worse (higher i_p, higher i_corr, and v_corr), showing that the porous nanotube array could not improve the sample corrosion performance by itself. These results are in good accordance with those obtained in [14], pointing out that the nanotubes provide more channels for the electrolyte to reach the thin barrier layer at the bottom of the tubes. On the contrary, bulk layers made either by heat treatment or by electrochemical treatment really improve the alloy corrosion resistance properties, increasing the corrosion potential and decreasing i_corr, v_corr, and i_p by a factor higher than 10. In the case of dual treatments, the bulk layer underneath the tubes shifts the corrosion potential in the noble direction. The oxidation treatment favors the formation of a passivation layer below the nanotubes that decreases i_p, even if a non-stable behavior was obtained on N-HT. Possible defects could then be formed during the polarization, with an increase of the current density. Nevertheless, at 0.8 V a strong decrease in the passivation current density is still obtained, finally indicating a more stable passive layer than those obtained on B and N samples. The improvement of the corrosion resistance is thus linked to the dense TiO2 layer due to heat treatment or electrochemical oxidation of the titanium substrate, and not to the crystallization of the nanotubes during heat treatment, as mentioned in [33].
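Equation (1) can be checked numerically. The short sketch below converts a corrosion current density into a corrosion rate for titanium; the current density used is an illustrative value within the range reported here, and n = 4 (Ti → Ti4+) is an assumption:

```python
# Corrosion rate from Faraday's law: v_corr = M * i_corr / (n * F * rho).
M = 47.87         # molar mass of Ti (g/mol)
rho = 4.51        # density of Ti (g/cm^3)
n = 4             # number of exchanged electrons (Ti -> Ti4+, assumed)
F = 96_500.0      # Faraday constant (C/mol)

i_corr = 1.0e-8   # corrosion current density (A/cm^2), i.e. 10 nA/cm^2

v_cm_per_s = M * i_corr / (n * F * rho)
v_um_per_year = v_cm_per_s * 3.156e7 * 1.0e4   # s/year, then cm -> µm
print(f"v_corr ≈ {v_um_per_year:.2f} µm/year")  # ≈ 0.09 µm/year
```

At this current density the computed rate sits near the lower end of the 0.06-9 µm·y−1 range quoted above, consistent with a passive surface.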
Wettability
The wettability is strongly connected with the surface roughness, as indicated in the literature [41][42][43]. Surface wettability represents a key parameter: a high hydrophilicity, in fact, increases the surface energy and therefore the osseointegration. The hydrophilic properties quantified by the water contact angle (WCA) are reported in Figure 12. The surface energy Es can be calculated according to Equation (2) [32,44]:

Es = γ cos θ,    (2)

with γ = 72.8 mJ/m2 representing the surface energy between water and air at 20 °C, and θ representing the static contact angle accessible from those measurements.

The non-treated sample presents a hydrophilic behavior, with a contact angle value of 46 ± 10° and a surface energy of 50 ± 9 mJ/m2. The nanostructurated surface, allowing water penetration into the tubes, shows the lowest contact angle and therefore the highest surface energy. This increase in the hydrophilic character of the samples can be attributed to the increase of the surface area available for adsorption due to the nanostructuration, but also to the nature of the interaction (amorphous surface vs. crystallized surface). Such an increase in hydrophilicity is very important for cell adhesion properties and is in good accordance with [32,45]. HT and EO, with a compact layer, show the lowest surface energy, while samples with dual treatments still conserve good hydrophilicity properties because of the porous layer at the surface.
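The form of Equation (2) can be cross-checked against the reported numbers, since the text gives both the contact angle and the resulting energy for the bare sample (a minimal check using only values quoted above):

```python
import math

gamma = 72.8        # water/air surface energy at 20 °C (mJ/m^2)
theta_deg = 46.0    # static contact angle measured on sample B (°)

Es = gamma * math.cos(math.radians(theta_deg))   # Equation (2)
print(f"Es ≈ {Es:.0f} mJ/m^2")   # ≈ 51 mJ/m^2, matching 50 ± 9 mJ/m^2
```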
Conclusions
Systematic morphological, chemical, structural, hydrophilicity, and electrochemical studies were conducted to determine the effects of nanostructuration (N), heat treatment (HT), bulk electrochemical oxidation (EO), and combined treatments (N-HT and N-EO) on the composition and functional properties of the Ti-based alloy.
A nanostructured layer, grown by electrochemical oxidation in an ethylene glycol bath containing fluoride, presented an amorphous, homogeneous titania nanotube array that crystallized into an anatase phase at 380 °C. At temperatures higher than 620 °C, a rutile phase was obtained too, mostly due to the crystallization of the thermal oxide underneath the nanotubes rather than to the anatase-to-rutile transformation of the nanotubes. HT made at 450 °C for 3 h and EO in neutral media at 1 V for 3 h both led to compact amorphous oxide layers with respective thicknesses of around 30 and 10 nm. In good agreement, on the combined treated samples, the compact oxide layer below the nanotubes was thicker on N-HT than on N-EO.
In terms of osseointegration properties, contact angle measurements showed that all nanostructurated surfaces led to an increase in hydrophilicity.
With regard to the corrosion resistance performance, nanostructuration alone did not bring any improvement, because of the porosity induced by the array. We highlighted the beneficial effect coming from the compact amorphous TiO2 layers grown on HT, EO, and the dual treated samples, with the N-EO sample being slightly better than N-HT. It was then definitively demonstrated that the increase in corrosion resistance observed on N-HT after heat treatment was not due to the crystallization of the initially amorphous nanostructured layer but to the growth of an amorphous thermal layer below the tubes.
Finally, for the first time, the combined N-EO sample obtained by nanostructuration followed by bulk electrochemical oxidation appeared to be the best choice to improve both functional properties (hydrophilicity and corrosion), using only electrochemical techniques.
|
2020-06-04T09:08:59.501Z
|
2020-05-29T00:00:00.000
|
{
"year": 2020,
"sha1": "698e5ab296b540f90d3725c8719812fbdf427a29",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/13/11/2479/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5f8496984cc2291c6a2fdad68497c94297303ecc",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
}
|
259037315
|
pes2o/s2orc
|
v3-fos-license
|
Experiences of LGBTQ student-athletes in college sports: A meta-ethnography
This study aimed to explore and describe the experiences of LGBTQ student-athletes in order to identify ways in which athletic staff, coaches, and others can support LGBTQ youth's safe participation in sports. Guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and the eMERGe reporting guidance, we conducted a meta-ethnography to synthesize qualitative research focused on student-athletes' experiences. Fourteen studies published between 1973 and 2022 were included in the meta-ethnography. Four themes were identified: (1) experiences of discrimination and violence; (2) perceived stigma; (3) internalized prejudice; and (4) coping and team support. These themes were used to generate a line of argument model, which explains the stress process of LGBTQ student-athletes in sports. LGBTQ student-athletes experience persistent discrimination in college sports, which poses a significant risk to their mental health. Meanwhile, this study identified that qualitative research on LGBTQ youth sports participation is lacking in many regions of the world, and that knowledge of the sports participation experiences of bisexual, gay, and transgender students is limited. These findings point a way forward for research on LGBTQ-related issues and for future policy and practice on LGBTQ youth-related issues in sports.
Introduction
Sport is perceived as an arena of hegemonic masculinity, perpetuating men's social dominance and the social subordination of women and gay men [1][2][3][4]. Numerous studies have consistently demonstrated that lesbian, gay, bisexual, transgender, and queer (LGBTQ) individuals have been excluded, rejected, stereotyped, discriminated against, and treated violently in sports contexts [5][6][7][8][9][10][11][12]. Given the issues faced by LGBTQ individuals in sports contexts, the International Olympic Committee identified LGBTQ athletes as the group at the highest risk for harassment and abuse in sports contexts [13]. The situation is the same for LGBTQ youth's sports participation. Many LGBTQ youth reported being the target of homophobia in sports; they feared rejection by their teammates and discrimination by coaches and officials [7]. LGBTQ youth perceived the sports context as an exclusive environment allowing blatant bullying and more subtle discriminatory behaviors [14]. As a result, LGBTQ youth were less likely to participate in sports than their heterosexual peers; in some sexual orientation groups, the gap in participation in formal sports has widened over time [15].
While a large body of quantitative research has demonstrated the existence of discrimination against LGBTQ individuals in sports [16], scholars pointed out that the inclusion criteria for LGBTQ youth in existing quantitative studies were often inaccurate, and that such studies mostly relied on secondary data and could not examine unique personal and social factors [17]. Therefore, evidence on LGBTQ-related discrimination in the sports context also needs to be gathered through the voices of individual LGBTQ youth [17][18][19][20]. There have been a considerable number of qualitative studies with LGBTQ student-athletes over the past two decades, which gathered evidence through the feelings and experiences of LGBTQ student-athletes themselves. These findings should be grouped and synthesized for ease of use. Synthesizing qualitative research evidence can lead to new policy and education practice insights, especially in sports and exercise [21]. In addition, studies are needed to apply existing evidence on LGBTQ-related sports issues and to propose, apply, and test established theories and frameworks to help people understand the experiences of LGBTQ young athletes, and to provide the research and practitioner community with focused, evidence-based recommendations on ways coaches, PE teachers, and sport leaders can create sport environments which are welcoming and safe for LGBTQ people. Therefore, this systematic review aimed to undertake a comprehensive meta-ethnographic synthesis of qualitative studies on LGBTQ student-athletes in college sports to generate theoretical insights that could enhance policy and practice development in this field.
Noblit and Hare [22] developed meta-ethnography, which is now widely used in education and other disciplines. Unlike other reviews that summarize results, meta-ethnography translates findings between studies to produce new insights or interpretations that were not apparent in the original research [23]. At the same time, researchers' redevelopment and interpretation of meta-ethnography have also enhanced the quality of its results [23][24][25][26] and provided a way for its application in sports and exercise psychology [21]. Therefore, this approach could generate new evidence about the experiences of LGBTQ student-athletes in college sports. Moreover, to enhance the quality of reporting, the eMERGe reporting guidance was adopted in this meta-ethnography [27]. The eMERGe guidance has 19 components for reporting the findings of a meta-ethnography. It requires a clear description of the method, a discussion of analytical options, and greater transparency and completeness of reporting.
In addition, the review was theoretically anchored in Meyer's [28] minority stress theory, which has been widely applied in LGBTQ-related research in sports [29][30][31]. In minority stress theory, environmental circumstances, minority status and identity, different stressors, and social support are all relevant to minority stress. Meyer [28] indicated that stress could affect an LGB individual's health through a distal-to-proximal stress process. The distal stress process consists of prejudicial events, mainly external discrimination and violence against LGB individuals; the proximal stress process comprises expectations of rejection, concealment, and internalized homophobia. Minority stress theory can identify the causes of distal and proximal distress and guide interventions for LGBTQ-related issues at the individual and structural levels. Therefore, this study adopted minority stress theory as the theoretical framework and employed a meta-ethnographic approach to synthesize the findings on LGBTQ student-athletes' experiences in college sports.
Study design
According to Noblit and Hare [22], meta-ethnography can be approached through seven phases (Appendix A). Phase 1 (selecting meta-ethnography and getting started) was explained in the introduction section. Then, based on the eMERGe reporting guidance (Appendix B), we followed the specific guidance under each phase. It is worth noting that although this review is written based on the 19 components of the eMERGe reporting guidance, this meta-ethnography did not use a linear approach, as meta-ethnography is an additive process [27]. The protocol for this qualitative research review was registered at INPLASY (ID: INPLASY202240041).
Research team and reflexivity
The first author (MX) is currently a Ph.D. student from China, and her main project is the intersection of diverse gender/sexual identities and sports. As a former student-athlete and now a university coach, the first author tries to understand the experiences of LGBTQ student-athletes. The first author invited another Ph.D. student (YX) to help with the study, who collaborated with the author to conduct the searching and analysis and to interpret the findings. Both authors tried to bracket existing biases or assumptions to avoid bias, using memos to record the entire synthesis process and each meeting discussion.
Search for relevant literature
This study systematically reviewed the relevant literature since 1973. We chose 1973 because this is the earliest date we could find for a peer-reviewed scientific study of homophobia [32]. English and Chinese databases were searched in this study, such as EBSCOhost, Scopus, PubMed, ProQuest, SAGE, and CNKI, up to December 2022. Specifically, the main keywords used in the retrieval process were (a) LG, LGB, LGBT*, sexual minority, gender minority, lesbian, gay, bisexuality, trans, transgender, queer, homosexuality, sexual orientation, sexual identity, gender identity, gender diversity; (b) sports, athletics, athletes; and (c) college, university. The review also supplemented the database search with a citation search of the retrieved articles and focused only on qualitative research.
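For concreteness, a Boolean search string of the kind described could look like the sketch below; this is a hypothetical illustration of combining the three keyword groups, and the exact field tags and truncation syntax vary by database:

```python
# Hypothetical Boolean search string built from the three keyword
# groups (a), (b), and (c) described above; syntax differs per database.
group_a = ('("LGBT*" OR "sexual minority" OR "gender minority" OR lesbian'
           ' OR gay OR bisexual* OR trans OR transgender OR queer'
           ' OR "sexual orientation" OR "gender identity")')
group_b = '(sport* OR athletic* OR athlete*)'
group_c = '(college OR university)'

query = f"{group_a} AND {group_b} AND {group_c}"
print(query)
```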
Inclusion/exclusion criteria and study selection
The review also followed the PRISMA guidelines. After developing the inclusion and exclusion criteria (Table 1), we independently searched the included databases using the search terms. After limiting by year and language, the initial search returned 2189 articles. After removing duplicate articles, 874 were obtained for title and abstract screening. As many synonyms exist for LGBTQ, we initially selected studies for inclusion based on a title and abstract scan rather than just reading the titles. Articles not meeting our criteria were filtered out by reading the abstracts. If the abstract also failed to provide enough information, the article was retained for full-text reading. Duplicate articles were also sorted out at this stage, and many quantitative and theoretical studies were excluded. As a result, 822 articles were excluded at this stage, and 52 were considered potentially eligible. After independently reading and assessing the eligible studies, we made a joint decision that 14 studies (Fig. 1) be included in the review. It is important to explain that although there were bisexual and gay participants in two articles [33,34], the LGBTQ student-athlete representation in the sample was too low and did not meet the inclusion criteria (LGBTQ student-athlete participants must reach 80% of the total sample). Hence, we decided to exclude them after a discussion. The strict inclusion criteria were consistent with our aim for this review, as we wanted to highlight and synthesize qualitative evidence from LGBTQ student-athletes' perspectives.
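The screening flow just described can be summarized numerically (a sketch using only the counts given in the text; note that 874 − 822 = 52 full texts assessed):

```python
# PRISMA-style screening flow, using the counts reported in the text.
records_after_limits = 2189   # after limiting years and language
after_deduplication = 874     # titles/abstracts screened
excluded_on_abstract = 822
full_text_assessed = 52
included = 14                 # final set after full-text assessment

assert after_deduplication - excluded_on_abstract == full_text_assessed
print(f"{records_after_limits} -> {after_deduplication} -> "
      f"{full_text_assessed} -> {included} studies included")
```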
Quality assessment
A quality assessment of all included studies was also conducted at this stage. This review used Kmet's qualitative research quality assessment checklist [35] to assess the articles' quality. This checklist covers ten key aspects of qualitative research: research question/objectives, design, context, theoretical framework, sampling, data collection methods, data analysis process, validation procedures, conclusions and findings, and reflexivity. Each aspect was scored from 0 to 2.
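A minimal sketch of how a summary score could be computed from this checklist is shown below (each of the ten aspects scored 0-2, summary score = total divided by the maximum of 20; the item scores are hypothetical):

```python
# Kmet-style quality scoring: ten qualitative criteria, each rated 0-2.
# The ratings below are hypothetical, for illustration only.
criteria = ["question", "design", "context", "framework", "sampling",
            "data_collection", "analysis", "validation",
            "conclusions", "reflexivity"]
scores = [2, 2, 1, 2, 2, 2, 1, 1, 2, 0]   # hypothetical ratings

summary_score = sum(scores) / (2 * len(criteria))
print(f"summary score = {summary_score:.2f}")   # 0.75 for this example
```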
Data extraction and analysis
After we had read through the included studies, data were extracted using forms developed for the review. These included the researcher, time of publication, focus of interest, location, participant characteristics, data collection methods, and sports. These data were extracted by the first author and checked by the second author. Like previous research [26], this study used NVivo [36] to extract the raw data from the included studies for synthesis. Included studies were uploaded to NVivo and read repeatedly, and each study's findings were then coded in NVivo. A hierarchical structure was used for coding in NVivo, with researchers' names and time coded as top-level codes and each concept or theme from each study coded as a sub-code [25]. This made it easier for us to track the provenance of each concept or theme. Using NVivo's team function, each author's coding structure was compared, and in the process, all conflicts of opinion were resolved through discussions. We independently identified and coded quotations outlining student-athletes' experiences in college sports (first-order constructs) and the corresponding authorial interpretations and discussions (second-order constructs) (see Fig. 2). Afterwards, these second-order constructs were further abstracted into the authors' interpretations of the original researchers' interpretations (third-order constructs) [22]. Each concept or theme was then interpreted independently by both authors and recorded in an NVivo memo, and these interpretations were then compared and combined into a joint interpretation.
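The hierarchical coding structure described above can be pictured as nested data (a schematic sketch; the study label, quotation, and theme below are invented placeholders):

```python
# Schematic of the meta-ethnographic coding hierarchy used in NVivo:
# study -> first-order constructs (participant quotations) ->
# second-order constructs (original authors' interpretations) ->
# third-order constructs (reviewers' interpretation of both).
# All entries are invented placeholders.
coding = {
    "ExampleStudy (2015)": {
        "first_order": ["quotation from a student-athlete ..."],
        "second_order": ["original authors' interpretation ..."],
    },
}
third_order = {  # themes abstracted across studies
    "perceived stigma": ["ExampleStudy (2015)"],
}
print(sorted(third_order))
```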
Following the procedure recommended by Sattar et al. [23], we first created a list of themes that contained the concepts or themes for each study and, in this way, looked for common and recurring concepts across studies. We then reduced the themes or concepts from the different studies into relevant categories. Next, the two authors independently categorized the themes. Whenever the categorization of a theme became difficult, we returned to the full text for a thorough reading and discussion until an agreement was reached. Finally, the formed categories were labeled using terms encompassing all relevant concepts. We then examined the relationships between these themes' key concepts through juxtaposition and by identifying common and recurring concepts. In addition, at this stage, we examined the background data for each study, including the context, objectives, and focus.
When the included studies span a considerable period, it is recommended that they be arranged chronologically [26]. As the studies included in this review span more than 25 years, the world has experienced a significant shift in attitudes toward LGBTQ athletes during these decades. So, both authors agreed that a chronological comparison would be appropriate. We also found that the included studies were sufficiently similar in their focus to be inter-translatable. Thus, starting with the previously created categories, we arranged each article chronologically and then compared the themes and concepts of the first article with the second article, then the combination of the former two with the third one, and so on [22]. We kept an open mind to the emerging categories in this process.
Table 1
Inclusion and exclusion criteria.

Inclusion criteria:
- Published in a peer-reviewed academic journal.
- Written in English.
- A qualitative approach was used to collect and analyze data.
- Published from 1973 onwards.
- Participants were student-athletes who played college sports.
- Participants self-identified as lesbian, gay, bisexual, transgender, or queer.
- LGBTQ student-athletes must reach 80% of the total sample.
- Focused on the experiences of LGBTQ student-athletes.

Exclusion criteria:
- The methodology did not use a qualitative method.
- Was not published between 1973 and 2022.
- Fewer than 80% of the sample of student-athletes self-identified as LGBTQ.
- Did not focus on the experiences of LGBTQ student-athletes.
Description of studies
Fourteen studies met the criteria and were included in the review (Fig. 1). Table 2 shows the characteristics of each included study. The results of the quality assessment are shown in Table 3. Most of the included articles were from the USA, accounting for 12 articles. The remaining articles were from South Africa [37] and Turkey [38]. The total sample size was 156 LGBTQ participants. Of the participants, 107 were female, 40 were male, and nine were transgender. Within the LGBTQ subgroups, 78 participants self-identified as lesbian; 51 self-identified as gay; 14 self-identified as bisexual women; nine self-identified as transgender; one self-identified as dyke; one self-identified as "I don't know"; and two chose not to label their sexuality.
Data collection involved interviews in all but one study, which used a video diary [48]. While all the included articles focused on the college sports experiences of LGBTQ student-athletes, some of the studies also included a portion of the high school sports experience [45]; three studies especially focused on "coming out" experiences [39,40,46]; and almost half of the studies recruited former student-athletes [31,39,40,42,44,47].
Participants were involved in many intercollegiate sports: basketball, softball, golf, soccer, track, crew, cheerleading, cross-country, diving, fencing, football, hockey, rodeo, rugby, speed skating, swimming, tennis, volleyball, water polo, wrestling, and lacrosse. Still, it was mentioned more than once [5,42,44] that basketball and softball had the largest numbers of lesbian athletes.
Description of themes
After conducting a line of argument synthesis, four key third-order constructs were interpretively synthesized from the extracted data: (1) experiences of discrimination and violence; (2) perceived stigma; (3) internalized prejudice; and (4) coping and team support. These four themes were abstracted from 15 categories (Fig. 3). The contribution of included studies towards themes is shown in Table 4. Finally, a line of argument model was developed to express the interpretation of the results from included studies (Fig. 4).
Abuse. Abuse against LGBTQ individuals made LGBTQ student-athletes feel unsafe in college sports. For example, one lesbian student-athlete described being sexually assaulted, "One of the guys picked me up and gave me the nastiest kiss on the back of my neck. I used all of my strength to fight him off, but I couldn't. He asked, 'still Lesbian now?'" [41]. In another case, one gay participant described direct violence by his teammate: "… I was walking in the course for a race that I was expected to win, and a teammate of mine took a rock, about the size of a softball and threw it at the back of my head. … And he said, 'what's the big deal? It's just that gay pussy fag kid'" [46]. This kind of abuse not only hurt the LGBTQ individual involved but was also a warning to all LGBTQ athletes. One gay student-athlete described great worry after hearing about violence against a gay man, "One of the things that was holding me back from coming out was … One of my friend's friend was beaten to a bloody pulp because they thought he was gay" [45]. Thus, both direct abuse and the fear of being abused can leave LGBTQ student-athletes fearful and worried about their situation in the college athletic context. LGBTQ student-athletes reported frequently hearing "fag", "gay," and "dyke", as well as insulting language about gender and sexual orientation in the sports context. Homophobic language has been mentioned in many included studies. Some lesbian student-athletes thought it was verbal abuse [5], while some gay student-athletes believed it was just joking [45]. Although homophobic language could be interpreted as reflecting different motives in different contexts, most LGBTQ student-athletes indicated that it hurt their feelings and distanced them from their teammates. One lesbian student-athlete reported more severe harassment: her car was vandalized with an insulting note reading "die dyke" [41]. Both homophobic language and outright harassment seem to be common and tacitly accepted in the college sports context, and these behaviors prevent a proper understanding of the dangers of discrimination against LGBTQ student-athletes.
Threats. Threats usually came from coaches, teammates, and athletic department staff. One coach threatened lesbian student-athletes with expulsion if they did not comply with traditional gender norms [5]; another coach threatened to disclose one lesbian student-athlete's sexual orientation as a punishment, "In the meeting he told me he wasn't afraid to pull the gay card" [31]. Threats also came from teammates, causing great distress to LGBTQ student-athletes: "… She made my life a living hell for a semester …. Somehow she found out how I didn't want my parents to know I was gay …. She started acting like she was going to tell them … like just to hurt me or something. That would have been the worst thing ever …." [31].
In addition, from the content of the threat we can learn that disclosure was the thing that LGBTQ student-athletes feared the most and that family was the most challenging environment for LGBTQ people to come out.
Rejection. When LGBTQ status was revealed, student-athletes faced an even worse situation. LGBTQ student-athletes described plenty of experiences of being excluded from competitions and being alienated by teams [37]. One transgender athlete was criticized by her coach for not wearing a skirt, "you're not respecting yourself, you're not respecting your team" [47]. One lesbian student-athlete indicated that some team coaches refused to recruit lesbian athletes: "I know, especially women's basketball that it is a huge no-no. [The coach] makes it very clear to the players that will not be … that they will not be open. Moreover, that's obviously something that's not talked about or well known, but they'll tell you that it's not accepted" [40].
Likewise, some athletic department staff and coaches threatened student-athletes with having their scholarships revoked if they went to a gay bar [42]; some lesbian student-athletes had even been expelled from the team, "He cut five people, four of them were gay" [5]. Another form of rejection by coaches was the non-recognition of LGBTQ student-athletes. For example, in Anderson and Bullingham's study [41], one coach stated, "nice girl from a 'nice family' couldn't dare be like that." To make it even worse, one coach even asked a lesbian student-athlete to reject her sexual identity, "she (strength coach) hates it if you're gay, and always pushes you to go to her Bible studies … she says things like 'God will get me' and 'I need to turn from my sinful ways'" [31]. For those who do not accept LGBTQ individuals, "a homosexual is not accepted as an individual. They see homosexuality as a choice" [38]. Meanwhile, teammates' rejection added to LGBTQ student-athletes' frustration. One gay student-athlete returned to his team after a suicide attempt and was excluded by his former teammates, "Like, I was told that if I played any sports, they'd make my life living hell" [45]. In addition to being rejected by the entire team, one gay student-athlete reported losing friends after the disclosure, "Someone had found out that we were gay and had a fit over it. … I'd say he was one of our good friends … he no longer spoke to me" [45].
Silence. The silence described by LGBTQ student-athletes refers to the fact that neither teammates nor coaches normally talk about topics related to gender and sexual orientation in college sports, "nobody talks about it …. Everyone knows about everyone else, but no one talks about it …. It's not a big gay thing; you go, you dive, and you leave" [45]. One lesbian student-athlete argued that the silence had made LGBTQ-related topics taboo in college sports: "… I feel like they talk about, you know, race, and you know, like international acceptance, the international athletes, everything but kind of gender and sexuality. You know, I feel like it's taboo like it's kind of not talked about" [40].
The silence was sometimes forced, and pressure came from the coaches and the athletic department, "… but it was like your coaches and the administrators who would be like, 'just don't talk about it'" [40]. One lesbian student-athlete was even asked by their coaches to remain silent, "My coaches have these rules for what I can do and what I can't do" [41]. Silence limited LGBTQ student-athletes' communication with teammates, coaches, and the athletic department and restricted the inclusive development of college sports.
Neglect. Studies showed that student-athletes' LGBTQ identities were neglected in favor of students' athletic identities in the athletic department [5,31,44].
"With (coach), he (pauses) just doesn't want to know (if a player is a lesbian). He just wants you to be a (sport) player, he doesn't want to know about anything else, unless it makes him or the team look good" [31].
In Anderson and Bullingham's study [41], one coach neglected the needs of one lesbian student-athlete and did not offer help, which led the student to leave the team. The descriptions in these studies showed that gender and sexual identity were still topics that could not be openly discussed in sports teams. Most teammates and coaches chose rejection or silence, which made it impossible for LGBTQ student-athletes to face up to their gender or sexual identity. They were forced to hide it or even try to escape it, which made them face more psychological pressure than heterosexual athletes.
Restriction. Restrictions refer primarily to the barriers to sports participation encountered by transgender student-athletes. Although transgender participation is allowed in college sports in some countries, gender expectations and rules restrict transgender athletes from participating. One transgender student-athlete described how they often experienced the embarrassment of being in a locker room that did not conform to their gender identity because they were restricted from participating in the male game: "… we would like all change on one side of the locker room (laughing), and all the girly-girls changed on the other side of the locker room. And like we can't touch and don't look, but the girls on the other side of the locker room could do whatever they want" [47].
In another case, one transgender student-athlete left the sport due to a difficult process of hormone therapy [48]. Consequently, transgender student-athletes face additional challenges in college sports participation, both in the face of the restrictions of sports competition rules and the difficulties of physical transition, and the evidence of these experiences is insufficient in the literature.
Therefore, the first theme exposed various forms of discrimination and violence against LGBTQ student-athletes in college sports. The discrimination stemmed from the specific nature of the sports context as well as the social culture; however, according to LGBTQ student-athletes, educational institutions and athletic departments did not pay attention and intervene, which greatly affected the benefits that sports could bring to LGBTQ youth.
Traditional Gender Role Ideology. Gender role ideology refers to individuals' attitudes towards the roles of women and men and to how these attitudes are shaped by sex. In addition, gender role ideology determines the distribution of women and men across social roles in society [49]. Traditional gender role ideology required men to do the bulk of the labor while women were expected to take care of the family. In sports, corresponding to traditional gender role ideology, people expect different performances from female and male athletes. Male athletes were supposed to be aggressive, while female athletes were expected to remain feminine. In this case, when one transgender student-athlete moved from the men's team to the women's team, they found it difficult to fit into the norms of traditional gender roles: "I feel the big difference was that people seemed to use their bodies. Like, [for] women a lot of it is about technique. A lot of it in guys' sports, for better or worse, [is] they use their bodies almost sacrificially …. So that aggression and physicalness did not help my cause when I switched to a girl's team" [47].
Under traditional gender role ideology, female athletes were expected to dress femininely and wear make-up and skirts [5,31,47]. One lesbian student-athlete said, "because that's just how society is. You're supposed to be straight; you're supposed to be girly" [40]. Another lesbian student-athlete expressed frustration with the preservation of femininity in sports: "We had to look nice because boosters were going to be there …. that meant you had to wear a dress or skirt, no exceptions. Even the straight girls didn't like it. It's ridiculous to have to wear a dress when it's freezing outside (the event was in February)" [31].
Women in sports had the job of not only being athletes but also maintaining the image expected of females, which sometimes carried more weight than the role of an athlete [5]. On the other hand, male student-athletes needed to show masculinity in sports. In Saraç and McCullick's study [38], one gay student received a reprimand for acting feminine, "I was criticized because of my actions. I was told that I looked like a girl". Influenced by traditional gender role norms, transgender student-athletes also felt the need to preserve their gender roles. Strong muscles seemed to be representative of the male role, "My body is really important to me, I need to build it in a way that's going to make me feel good about myself" [48].
Stereotypes. Stereotypes were defined as general images or characteristics people perceive as fixed for LGBTQ athletes. First, female athletes who were masculine and successful would be given the "lesbian" label. One lesbian student-athlete detailed how people view lesbian athletes' "butch" appearance: "They're super muscular. Maybe shorter haircuts … and appearance, not only their clothes but the way they walk, the way they carry their body, their posture … don't wear makeup or as much makeup as their straight teammates" [42]. In addition, lesbians were usually connected to unhealthy habits and characteristics such as drinking [42].
Labeling female athletes as lesbians divided the female team. Some female athletes distanced themselves from the lesbian label by denigrating lesbian athletes, and some lesbian athletes labeled others to avoid lesbian stigma [5]. Some sports have also been labeled as "lesbian games," such as basketball, baseball, soccer, and rugby [5,42,44]. One lesbian student-athlete joked "… that it would actually be more shocking if a (women's) basketball player came out that they were 'straight'" [42]. The stigma attached to women's sports was also confirmed in the South African study, "If you are a female and you play soccer or rugby, you are automatically labeled as lesbian. Whether you are straight or not, they don't care, they don't even ask" [37]. The lesbian label tried to deprive women of success through belittling and stigmatizing. At the same time, the label was discriminatory and demeaning to women, and all female athletes suffered from the unfair treatment caused by the label.
In contrast, gay men were viewed as feminine, weak, inferior, and not good at sports. One gay student-athlete mentioned, "I think with sports …. you are supposed to be manly and then how can you be manly if you like guys. I guess because it's kind of the connotation that gay guys are feminine" [46]. "They think that homosexuals are only about being in bed; according to them, homosexuals only have sex and, then, get up" [38]. It should be noted that less evidence of stereotypes of gay athletes was found in included studies, possibly because this review only included three articles about the experiences of gay athletes.
Culture and Religion. Evidence showed that culture and religion were essential in the LGBTQ student-athlete experience [38,42,44]. The study in Turkey found that a PE student's most significant barrier to self-acceptance was religion. Because homosexuality is a sin in Islam, the student described his experience of rejection by his religion: "Of course, I rejected it at first. The reason that I didn't accept [my homosexuality] was religion, because of religion.… I was thinking that religion always neglects homosexuality, God would not like me, and he would put me in a bad place, in the other world" [38].
In line with the influence of Islam on LGBTQ individuals, studies in the US have found that Christianity was also the cause of LGBTQ athletes' struggles with their identity. "Religiously, I don't believe that it's right, but not everyone has the same religious values … I wish that I believed it was okay, but I don't believe that it's okay as far as religion" [44]. Similarly, another lesbian student-athlete claimed the need to hide her identity and play another role in the Christian Athletic Association in college [42].
Tolerance. When faced with a hostile environment, some LGBTQ student-athletes chose to tolerate it [40,45]. They believed that negative comments about LGBTQ individuals were not serious. The homophobic language was considered a "joke," "not some evil thing," and "they didn't mean it." Some would even join in with the "joke," "If everybody laughed at a gay joke or something, just laughed" [5].
LGBTQ student-athletes had a high tolerance for hostile climates because they had low expectations. They anticipated that they would experience "awkward" or "weird" situations. Some even felt "surprised" if the situation exceeded expectations: "I went out there and was kinda scared, but everyone kept being the same. You know, they kept being my friends, and there were like only two or three that stopped talking to me … and one of them, I used to be best friends with him … and as soon as he found out, he stopped talking to me" [45].
Hiding. LGBTQ student-athletes often hid their gender or sexual identity when they felt unsafe in their environment [40,43]. One lesbian student-athlete declared, "I mean I never deny my sexual orientation, but I don't outwardly offer the information to people" [44]; one gay student-athlete also admitted, "When people guess it, I don't want them to make me feel that they know it" [38]. In addition, some led segmented lives in different contexts, "I think I was negotiating the representation I was putting out there of myself in each community" [42]. Some hid their sexual orientation by engaging in heterosexual topics [45] or acting heterosexual: "I had to pretend to like girls, like, make out with girls in public and then take them back to my room and pretend to have sex with them and in reality they would want to have sex and … just trying to show people you're not gay and you're straight can be very exhausting. And then, make you even unhappier when you can't be yourself and you have to fake being someone else" [46].
The hiding mostly came from the fear of disclosure, "I think the biggest scare was just like the unknown. I didn't know how people would react …; so that made me fearful" [43]. Furthermore, hiding can put much psychological pressure on LGBTQ student-athletes, "It was stressful in the sense that I couldn't be honest with them …" [46]. Lying and pretending to be heterosexual led to exhaustion and frustration for LGBTQ student-athletes [39].
Self-hating. LGBTQ student-athletes developed self-loathing and self-hatred for their minority identity in a hostile external climate. Some blamed the problems they encountered on their minority identity: "I immediately think that the problem comes out of my homosexuality, and that's why they behave this way" [31]. Some refused to accept the gay identity: "I was still depressed and I was still self-hating. I still didn't want to be gay even though I knew and accepted it at that point before I came out. I didn't. I still didn't want to be gay at all. I was like nope, nope I know I'm gay but I don't want to be. I was still at that point where I was very unhappy with being gay" [46].
After being rejected by peers, one gay student-athlete said, "so that kind of made me really hate myself more" [45]. One lesbian student-athlete also expressed regret for her identity: "I'd give anything to take it all back … I don't know, maybe if I was straight life would be better. I know my life would be better" [31].
Team Support. While LGBTQ student-athletes reported many negative experiences in college sports, some described being supported by their teams. For example, heterosexual student-athletes wore "gay pride socks" with their gay teammates in competition [45]; transgender athletes felt "belonged," "awesome" [48], and "supportive" [47] in their college teams. Similarly, in women's teams, lesbian student-athletes also found support from teammates, "So I told everyone on my team … they were in the background like cheering me on, screaming, waving signs …" [40]; "they have been so supportive of me, that they even went to some gay pride events with me" [41]. Mann and Krane [43] found two types of team climates, inclusive climates and transitioning climates; in these teams, which ostensibly accepted diverse sexual orientations and introduced inclusive norms, LGBTQ students felt accepted and appreciated. One lesbian student-athlete described what it was like to be supported by the team: "I have been in an athletic environment, and I have been on teams that have been really supportive, and that has allowed me to come out and maintain the rest of my identity, without feeling like there was something wrong with me [uh huh]. So, I mean I am just incredibly grateful for it. And you know, I have definitely been lucky in the teammates that I have had and the environments that I have been in" [39].
Notably, what could explain this kind of team support in women's teams may be that lesbians were common in those teams [43]. For example, one lesbian student-athlete stated, "It's a community where a lot of women who play hockey are lesbians" [39]; another noted, "I think we had at one point out of 15 or 16 girls, we had like seven gay girls … so it wasn't a big deal …" [42]; "… that it would actually be more shocking if a [women's] basketball player came out that they were 'straight'" [42].
Similarly, one lesbian student-athlete felt the situation would be better "when all your coaches are gay I guess it really doesn't matter … they were probably more understanding when they found out and probably could connect to you almost a little more than when they didn't know" [44]. A similar phenomenon was found in Carr and Krane's study [47], "And shoot, you go to a rugby party and there is a lot of androgyny happening (laughs). If you have to come out in a sport, it's a pretty safe sport to come out in".
Therefore, when interpreting team support, the population of LGBTQ individuals needs to be considered, especially in women's teams.
Resilience. Resilience was defined as LGBTQ student-athletes' ability to adapt to and cope with negative experiences. First, LGBTQ student-athletes accepted their minority identity and found it a "huge part of my life." One lesbian student-athlete considered it a special status: "I never really felt special. But, when I figured out I was gay it made me feel like I was finally a part of something (pauses) that I belonged. It made me feel like I was different. In a good way" [31].
Another lesbian student-athlete felt the same way, "But, you know it was just accepting myself really, and once that happened I haven't really had a problem [disclosing] since then" [39]. Once LGBTQ student-athletes accepted their identity, they would become confident, "when it is called into play you have a self-confidence about yourself that is not gendered to hold and maintain through that situation" [47] and strong, "Some things change in you … after accepting myself, I started to believe that I was more powerful. I thought that I could handle the reactions [of other people]" [38].
Therefore, the team support and personal resilience seemed to offset some of the negative effects of the external environment and positively impact the mental health of LGBTQ student-athletes.
Model of the stress process of LGBTQ student-athletes in college sports
The meta-ethnographic synthesis used a line-of-argument approach to generate a model of the stress process of LGBTQ student-athletes in college sports (Fig. 4). The model illustrates four main dimensions of the experiences of LGBTQ student-athletes. First, LGBTQ student-athletes experience constant discrimination and violence in the college sports environment, often from teammates and coaches; abuse, harassment, threats, rejection, silence, neglect, and restriction are the primary manifestations of discrimination. Meanwhile, LGBTQ student-athletes perceive stigma in the sports context, including traditional gender roles, stereotypes of LGBTQ individuals, and cultural and religious pressures, which make LGBTQ student-athletes vigilant about their environment. As a result, high levels of stigma can lead to chronic pressure, which leads LGBTQ student-athletes to endure hostility from the external environment, hide their identity, and even develop self-hatred. Under certain conditions, such as receiving team support, some LGBTQ student-athletes would develop resilience and become confident and strong. Each dimension is interrelated, and ultimately, these experiences can have different health effects on LGBTQ student-athletes. It is worth noting that this is a dynamic process: people's attitudes towards LGBTQ athletes evolve with the culture, and the climate of the college sports context changes with the renewal of teammates or coaches. More importantly, LGBTQ student-athletes' acceptance of their minority status also influences their perceptions of their identity.
Discussion
This study used a meta-ethnographic approach to synthesize qualitative studies that focused on the experiences of LGBTQ student-athletes in college sports. We conducted this review to identify ways in which athletic staff, coaches, and others can support this population to participate safely in sports.
The results show that most of the studies are from the United States and therefore lacked evidence of the experiences of LGBTQ student-athletes from other regions. The results also show a dominance of research on lesbians and a lack of research on gay, bisexual, and trans youth. Therefore, there are considerable gaps in the literature, and scholars need more effort to increase knowledge in this area across different regions and LGBTQ subgroups.
The results of this meta-ethnography are generally consistent with the results of quantitative studies [14,50-55]; there is solid qualitative evidence that LGBTQ youth are discriminated against in sports. Gay student-athletes in particular have been found to experience even more egregious acts of discrimination, accompanied by physical abuse. Roper and Halloran [56] found that male student-athletes have a more negative attitude towards lesbians and gay men than female student-athletes. Compared to women's teams, male teams had a less tolerant climate, and gay male athletes were more worried about being alienated from their teams [57]. As a result, sexual minority men were less likely to engage in physical activity or participate in team sports than heterosexual men [58]. In addition, the qualitative evidence for transgender youth in this study is consistent with the review of sport and transgender people [59], which found that sports policies impose many restrictions that affect transgender youth's sports participation opportunities and benefits.
Evidence of homophobic language continues to require attention. Scholars have found that homophobic language was frequently used in youth team sports [60], and homophobic language was an important tool for stigmatizing gays and lesbians in sports [55]. Homophobic language can maintain heteronormativity in sports, and its frequent use contributes to the perception of gay male identity as an inferior form of masculinity, marginalizing all non-heterosexual individuals in sports [45]. While scholars have called for the use of interventions to reduce the occurrence of homophobic language in sports [60], a recent study of a social cognitive education intervention in youth rugby teams found that the intervention did not significantly reduce the use of homophobic language or change related norms and attitudes [61]. Therefore, research on homophobic language needs to continue, and scholars need to explore more effective ways to intervene in the appearance of homophobic language in sports contexts.
The results show some positive experiences for LGBTQ student-athletes, such as team support. However, we must interpret this result cautiously because the results also revealed a significant number of LGBTQ teammates in team-supported settings [39,42,43,47]. Therefore, these positive experiences may not indicate that the sports environment is inclusive of LGBTQ individuals. This support and acceptance may come from LGBTQ teammates. On the other hand, due to the presence of numerous LGBTQ athletes on the team, heterosexual students will have more opportunities to engage with LGBTQ students, therefore enhancing the communication generated between the two groups. Roper and Halloran [56] found that student-athletes who reported having contact with gay men or lesbians had significantly more positive attitudes toward gay men or lesbians; Pariera et al. also inferred that greater exposure to LGBTQ athletes might help reduce negative assumptions held by heterosexuals [62]. In conclusion, future research needs to pay attention to the number of LGBTQ individuals on the team when examining the climate of inclusive teams, as this is a key factor in interpreting the results.
Furthermore, the results of this review are primarily consistent with the minority stress theory [28].
LGBTQ student-athletes are continually exposed to discrimination and violence in the sport context; after perceiving identity stigma, LGBTQ student-athletes develop internalized prejudices, which affect their mental health. Although team support can reduce mental health risks for LGBTQ student-athletes, there is insufficient evidence in this study that the sport context is inclusive of LGBTQ individuals. In addition, consistent with previous studies [63,64], this meta-ethnography found that resilience was a key factor affecting the health of LGBTQ student-athletes. However, the mechanism by which this resilience is generated in a sports context is unclear, and further research is needed to explore this area. Moreover, given the important positive role of resilience factors for LGBTQ individuals, future research could explore the experiences and mental health effects of LGBTQ individuals by combining minority stress theory with other identity theories, such as the homosexual lifespan development model [65].
This study highlights the importance of interventions for LGBTQ-related issues in sports participation. The development of prevention and intervention efforts needs to address the interrelationship of four components: advocacy, policy, education, and research [66]. Therefore, educational institutions should incorporate inclusion into relevant curricula and work to increase campus-wide dialogue on LGBTQ-related topics [67]. Educational institutions and athletic departments should adopt policies and procedures, such as "prejudice response team" orientation [68], to ensure a safe and affirming environment for LGBTQ student-athletes. Professional development should be provided for athletic teachers and coaches to increase their awareness and knowledge of the importance of sensitive language in sports [69]. Meanwhile, given that LGBTQ-related educational resources cannot currently be proven effective in sports contexts, such resources need to be assessed to ensure effectiveness [70]. Lastly, to advance the field, researchers and funding agencies need to conduct research, for example, using minority stress theory and other related theories to study LGBTQ youth-related issues in different sports contexts and regions.
Implications and limitations
Based on existing knowledge about LGBTQ individuals in sports participation, this study synthesized qualitative research which explores the experiences of LGBTQ youth in college sports. In addition, building on the literature on minority stress theory, this study establishes the stress process model for LGBTQ student-athletes, emphasizing the importance of educational institutions and athletic departments understanding and intervening in LGBTQ-related issues. We must continue our theoretical and conceptual exploration of the experiences and mental health-related issues of LGBTQ individuals. A more comprehensive understanding of the LGBTQ youth's experience allows us to fully develop policies and practices that protect the safe sport participation of LGBTQ youth.
This study also has some limitations. The variation in the literature and resources on LGBTQ student-athletes was widespread.
"LGBTQ" and "sexual minorities" were used as search criteria, which could have excluded a few articles that used terminologies such as non-cisgender and non-binary. In addition, this review included only qualitative studies and may have missed a small portion of the evidence on LGBTQ student-athletes reported in mixed-methods studies.
Conclusion
This study used meta-ethnography to synthesize the experiences of LGBTQ student-athletes in college sports. We hope this study provides a valuable overview of qualitative evidence that can serve as a foundation to support inclusive and diverse policies and practices in educational institutions and athletic departments. Although more scholars are focusing on issues related to LGBTQ youth in sports contexts, these studies are unevenly developed, and there are many regions in the world where LGBTQ youth are discriminated against in sports contexts due to cultural, religious, and other factors, and we do not have a clear understanding of their actual situation. Therefore, we call on more regional scholars to engage in this field and work together to build a safe and inclusive sports context for LGBTQ youth.
Author contributions
Conceptualization, MX.; methodology, MX; data collection, MX and YX.; data analysis, MX and YX; data curation, MX; writing-original draft preparation, MX; writing-review and editing, MX, YX, and SA; supervision, KS, SA, and NZ. All authors have read and agreed to the published version of the manuscript.
Data availability statement
Data included in article/supplementary material/referenced in article.
Additional information
Supplementary content related to this article has been published online at [URL].
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fractionation, Phytochemical Screening and Free Radical Scavenging Capacity of Different Sub-Fractions from Pituranthos scoparius Roots
The purpose of this study was to prepare three sub-fractions from Pituranthos scoparius roots (PSR), characterize their phytochemical contents, and investigate their free radical scavenging activity by 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) and hydroxyl radical scavenging assays. Tannins, flavonoids, steroids, and other bioactive compounds were found in the different sub-fractions. The ethyl acetate extract (EAE) and chloroform extract (ChE) exhibited the highest antioxidant activity in the ABTS assay (IC50 of 17.8 ± 0.87 μg/mL and 18.15 ± 0.68 μg/mL, respectively), whereas the crude extract (CrE) presented strong hydroxyl radical scavenging activity (14.9 ± 0.8 μg/mL). This study indicates that PSR extracts have potent free radical scavenging activity and may prove to be of potential health benefit as well as an additional resource for natural antioxidants.
INTRODUCTION
Antioxidants are substances that significantly delay or inhibit oxidation of an oxidizable substrate when present at low concentrations in comparison with those of the substrate 1. Nowadays, scientists have cast some toxicological doubts on synthetic antioxidants due to their adverse side effects, and people are more concerned about food safety and quality 2.
Evidence shows that natural antioxidants deliver better effectiveness than synthetic antioxidants. Natural products have always remained a prolific source for the discovery of new drugs 3.
In this context, the present research aims to investigate one of the medicinal plants used for various therapeutic purposes in Algerian folk medicine, Pituranthos scoparius, commonly known as "Guezzah" 4. This plant is used for the treatment of asthma, measles, digestive disorders, jaundice, and rheumatism. Preliminary phytochemical screening has indicated the presence of tannins, flavonoids, sterols, and other phytochemical components in the roots 5. However, few studies have investigated its free radical scavenging activities. Previous studies showed that the phytochemical analysis of the ethyl acetate extract from P. scoparius roots led to the isolation of two isocoumarins: 3-n-propyl-5-methoxy-6-hydroxy-isocoumarin and 3-n-propyl-5,7-dimethoxy-6-hydroxy-isocoumarin 6. In addition, Benalia et al. 7 have shown a high in vitro anti-urolithiatic effect of P. scoparius root extracts.
The current study was undertaken to evaluate the in vitro antioxidant potential of Algerian Pituranthos scoparius root extracts (PSRE) by ABTS radical scavenging and hydroxyl radical scavenging assays. In addition, the phytochemical constituents of the different extracts were also screened, to establish any relationship between the antioxidant activities and these compounds.
Plant collection and identification
The roots of Pituranthos scoparius were collected from the mountain Djebel Zdimm, located about twenty kilometers south of Setif (Algeria), at an altitude of 1212 m above sea level. The plant was identified and a voucher specimen was deposited (Setif, Algeria) under number (013/DBEV/UFA/18); the roots were then air-dried under shadow at room temperature to preserve their properties, powdered, and stored in darkness until use.
Bioactivity-guided fractionation
The three sub-fractions of Pituranthos scoparius roots (PSR) were prepared according to the method of 8, using solvents of different polarities. Dried plant material was macerated in methanol/water 85/15 (v/v), at a plant material/solvent ratio of 1:10 (w/v), and the mixture was left under agitation overnight at 4°C with occasional shaking (Figure 1). All the solvents were then eliminated by evaporation under reduced pressure.
Qualitative detection of phytochemical constituents
Qualitative tests for the presence of different phytochemical compounds (tannins, flavonoids, quinones, anthraquinones, saponins, steroids, glycosides, terpenoids, and carbohydrates) were carried out on the root extracts using the procedures of 9.
Antioxidant capacity by ABTS radical assay
The colorimetric ABTS+ radical scavenging assay was performed according to the method of 10 with slight modifications. The ABTS+ solution was formed by the reaction of a 7 mM ABTS solution with 2.45 mM potassium persulfate. The mixture was kept in the dark at room temperature for 16 h before use. The solution was diluted with absolute ethanol and equilibrated at room temperature to give an absorbance of 0.7 at 734 nm. Then, 20 μL of the extract dilutions was mixed with 2 mL of ABTS+ solution and kept for six min at room temperature. The absorbance was measured at 734 nm. The scavenging capability of the ABTS+ radical was calculated according to the following formula:

Scavenging activity (%) = [(Ablank − Atest) / Ablank] × 100

where Ablank is the absorbance of the solution without the test compound, and Atest is the absorbance in the presence of the tested compound.
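For illustration, this calculation can be sketched in a few lines of Python. The absorbance values, doses, and the linear-interpolation IC50 estimate below are hypothetical examples, not measurements or procedures reported in this study.

```python
import numpy as np

def abts_scavenging_percent(a_blank, a_test):
    """ABTS+ scavenging activity (%) = (Ablank - Atest) / Ablank * 100."""
    return (a_blank - a_test) / a_blank * 100.0

def ic50_by_interpolation(doses, inhibitions):
    """Rough IC50: dose at which inhibition crosses 50%, by linear interpolation.

    Assumes inhibition increases monotonically with dose and spans 50%.
    """
    return float(np.interp(50.0, np.asarray(inhibitions), np.asarray(doses)))

# Illustrative values only
a_blank = 0.700                       # ABTS+ working solution at 734 nm
a_test = [0.62, 0.48, 0.33, 0.19]     # absorbance with increasing extract dose
doses = [5, 10, 20, 40]               # extract concentration, ug/mL

inhibition = [abts_scavenging_percent(a_blank, a) for a in a_test]
print([round(i, 1) for i in inhibition])                    # e.g. [11.4, 31.4, 52.9, 72.9]
print(round(ic50_by_interpolation(doses, inhibition), 1))   # e.g. 18.7 ug/mL
```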
Hydroxyl radical scavenging test
The hydroxyl radical scavenging ability was estimated using the spectrometric method of 11. Briefly, a mixture containing one mL of FeSO4 (1.5 mM) and 0.7 mL of H2O2 (6 mM) was mixed with varying concentrations of samples or ascorbic acid as a positive control. Then, 0.3 mL of sodium salicylate (20 mM) was added, and the resulting mixture was incubated at 37°C for 20 min. After that, the absorbance of the hydroxylated salicylate complex was measured at 562 nm. The percentage scavenging effect (hydroxyl radical scavenging activity % or I%) was calculated by the following equation:

I (%) = [A0 − (As − Ac)] / A0 × 100

where A0 is the absorbance of the control (without sample), As is the absorbance in the presence of the sample, and Ac is the absorbance without sodium salicylate.
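A corresponding sketch of the salicylate-blank correction used in this assay is shown below; again, the absorbances are invented for illustration and are not data from this study.

```python
def hydroxyl_scavenging_percent(a0, a_s, a_c):
    """Hydroxyl radical scavenging rate I (%) = [A0 - (As - Ac)] / A0 * 100.

    A0: control absorbance (no sample); As: absorbance with sample;
    Ac: absorbance without sodium salicylate (sample colour blank).
    """
    return (a0 - (a_s - a_c)) / a0 * 100.0

# Illustrative absorbances at 562 nm
print(round(hydroxyl_scavenging_percent(a0=0.52, a_s=0.31, a_c=0.05), 1))  # 50.0
```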
Statistical analysis
Statistical analysis was performed using GraphPad Prism (version 5.03 for Windows). Data were analyzed by one-way analysis of variance (ANOVA). All determinations were carried out in triplicate, and all results were expressed as the mean ± standard deviation (SD). Significant differences were determined by multiple range tests at p < 0.05.
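As a rough sketch of this pipeline outside GraphPad Prism, the triplicate summaries and a one-way ANOVA can be reproduced with SciPy; the triplicate IC50 readings below are invented for illustration and are not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate IC50 readings (ug/mL) for the three sub-fractions
readings = {
    "EAE": [17.0, 18.1, 18.3],
    "ChE": [17.6, 18.2, 18.7],
    "CrE": [50.5, 51.4, 52.5],
}

# Mean +/- SD for each extract (n = 3)
for name, values in readings.items():
    print(f"{name}: {np.mean(values):.2f} +/- {np.std(values, ddof=1):.2f}")

# One-way ANOVA across the three extracts; p < 0.05 indicates a difference
f_stat, p_value = stats.f_oneway(*readings.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```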
Phytochemical screening
The phytochemical screening results of the PSR extracts are reported in Table 1. Tannins, flavonoids, free quinones, steroids, and keto compounds were detected in all extracts. However, anthraquinones, glycosides, saponins, terpenoids, and reducing sugars were not found in any of the extracts. Phytochemical analyses carried out by 5, 12 on the root part of Pituranthos scoparius revealed the presence of reducing sugars, flavonoids, tannins, and steroids; moreover, terpenoids were also present in almost all of the studied extracts. These differences may be related to different conditions of extraction, time of collection, etc.
ABTS radical scavenging assay
The free radical scavenging activity of PSRE was also determined using the ABTS radical. The results revealed that all extracts scavenged the ABTS radical cation, with IC50 values varying from 17.8 to 51.47 µg/mL (Figure 2). These values were compared with that of vitamin C, which had an IC50 of 0.075 ± 0.001 µg/mL. As seen in Figure 2, the highest ABTS radical scavenging activity was exhibited by EAE and ChE, followed by CrE, with IC50 values of 17.8 ± 0.87 µg/mL, 18.15 ± 0.68 µg/mL, and 51.47 ± 1.01 µg/mL, respectively.
The high ABTS radical scavenging ability of EAE and ChE can be attributed to the presence of phenolic compounds. The earlier studies reported that ABTS radical scavenging capacity of bioactive compounds depends on their molecular weight, structure and presence of aromatic groups 13 .
Figure 2: Free radical scavenging activity of different PSRE for ABTS assay. Data were presented as means ± SD (n=3). ***: P < 0.001 compared to Vitamin C as standard.
Hydroxyl radical scavenging activity
Hydroxyl radicals generated by the Fenton reaction could oxidize Fe2+ into Fe3+, which is reflected by the degree of decolorization of the reaction solution. In this assay, OH• radicals were generated using a system containing FeSO4 and H2O2 and detected by their ability to hydroxylate salicylate. Vitamin C was used as a standard antioxidant for comparison (IC50 = 83.6 ± 1.4 µg/mL). The radical scavenging activity of PSRE decreased in the following order: CrE (14.9 ± 0.8 µg/mL), ChE (290.3 ± 0.02 µg/mL), and then EAE (458.4 ± 0.61 µg/mL). The results are summarized in Figure 3. The hydroxyl radical is a potent cytotoxic factor able to attack almost every molecule in the body, resulting in peroxidation of cell membrane lipids and in the formation of malondialdehyde, which is mutagenic and carcinogenic 14. Therefore, the scavenging of hydroxyl radicals by extracts may provide significant protection to biomolecules by their ability to remove hydroxyl and superoxide free radicals due to inhibition of the respective mechanisms involved in the formation of radicals 15.
The higher potency in scavenging hydroxyl radicals may be attributable to the presence of phenolic compounds with hydrogen-donating ability in the extracts, which is highly related to the presence of hydroxyl groups 16. Furthermore, the potent radical scavenging effects of the extracts may be related to the expanded steric hindrance 17 of compounds contained in these extracts.
CONCLUSION
The findings of the present study indicate that Pituranthos scoparius could be a new source of natural antioxidant drugs. The data highlight the good free radical scavenging properties of different extracts from Pituranthos scoparius roots. This antioxidant potential is probably associated with the presence of various secondary metabolites, which may have many benefits in treating oxidative stress-related diseases. These results lay the groundwork for further studies on the molecular mechanisms underlying the biological profile of these extracts, the isolation and purification of the more active principles in each extract, as well as clarification of their mode of action. These in vitro results should be validated in vivo to develop a potent antioxidant agent from this plant.
Early postpartum maternal morbidity among rural women of Rajasthan, India: a community-based study.
The first postpartum week is a high-risk period for mothers and newborns. Very few community-based studies have been conducted on patterns of maternal morbidity in resource-poor countries in that first week. An intervention on postpartum care for women within the first week after delivery was initiated in a rural area of Rajasthan, India. The intervention included a rigorous system of receiving reports of all deliveries in a defined population and providing home-level postpartum care to all women, irrespective of the place of delivery. Trained nurse-midwives used a structured checklist for detecting and managing maternal and neonatal conditions during postpartum-care visits. A total of 4,975 women, representing 87.1% of all expected deliveries in a population of 58,000, were examined in their first postpartum week during January 2007-December 2010. Haemoglobin was tested for 77.1% of women (n=3,836) who had a postnatal visit. The most common morbidity was postpartum anaemia--7.4% of women suffered from severe anaemia and 46% from moderate anaemia. Other common morbidities were fever (4%), breast conditions (4.9%), and perineal conditions (4.5%). Life-threatening postpartum morbidities were detected in 7.6% of women--9.7% among those who had deliveries at home and 6.6% among those who had institutional deliveries. None had a fistula. Severe anaemia had a strong correlation with perinatal death [p<0.000, adjusted odds ratio (AOR)=1.99, 95% confidence interval (CI) 1.32-2.99], delivery at home [p<0.000, AOR=1.64 (95% CI 1.27-2.15)], socioeconomically-underprivileged scheduled caste or tribe [p<0.000, AOR=2.47 (95% CI 1.83-3.33)], and parity of three or more [p<0.000, AOR=1.52 (95% CI 1.18-1.97)]. The correlation with antenatal care was not significant. Perineal conditions were more frequent among women who had institutional deliveries while breast conditions were more common among those who had a perinatal death. This study adds valuable knowledge on postpartum morbidity affecting women in the first few days after delivery in a low-resource setting. Health programmes should invest to ensure that all women receive early postpartum visits after delivery at home and after discharge from institution to detect and manage maternal morbidity. Further, health programmes should also ensure that women are properly screened for complications before their discharge from hospitals after delivery.
INTRODUCTION
Little rigorous research has been conducted on maternal morbidity in low-resource settings. Most studies are hospital-based, and if community-based, these tend to be retrospective and are based on selfreported symptoms, which are unreliable for estimating the prevalence of specific morbidities in the population. One study in the Philippines that validated self-reporting of obstetric complications found that the sensitivity of recall of haemorrhage, dystocia, sepsis, and eclampsia was 70%, 69%, 89%, and 44% respectively (1).
The early postpartum period is a time of heightened risk for both mothers and newborns. While significant progress has been made in developing community-based approaches for promoting neonatal health, similar attention has not been paid to improving maternal health during the postpartum period.
India witnesses the largest number of maternal and neonatal deaths in any single country, with over 63,000 maternal deaths and over one million neonatal deaths per year (2,3). Within India, Rajasthan has among the highest rates of maternal and neonatal death. Rajasthan is a large state in India, with a population of 68.6 million, 75% of which lives in rural areas (4). It has a high maternal mortality ratio of 318 per 100,000 livebirths (5) and a high neonatal mortality rate of 44 per 1,000 livebirths (6). Most maternal and neonatal deaths occur in the first seven days after delivery.
To develop standard integrated interventions for mothers and newborns, a sound understanding of conditions affecting both is essential. Action Research and Training for Health (ARTH), a nongovernmental organization, aimed to develop an intervention to reduce maternal and neonatal mortality and morbidity in its rural field area with a population of 58,000 by providing integrated care to mothers and newborns within the first week after delivery. Services under this intervention are provided by trained nurse-midwives to all women and newborns, irrespective of the place of delivery. This paper presents the findings on the prevalence of various early postpartum maternal and neonatal conditions examined at home by trained nurse-midwives in a rural interior area in southern Rajasthan, India.
MATERIALS AND METHODS
Since 1997, ARTH has implemented a field-level health service programme in a rural population of 58,000 in southern Rajasthan. Its field area comprises 49 villages surrounding two health centres that provide 24-hour delivery and newborn-care services through nurse-midwives. This intervention has been described earlier in detail (7). Southern Rajasthan is hilly, and villages are scattered across several hamlets. While most villages are linked to roads, several hamlets are situated up to 3-4 km from the main village.
ARTH's village health workers (VHWs), the Government's accredited social health activists (ASHAs), and key-informants provide reports of all deliveries in the field area to the health centres, irrespective of the place of delivery. At each health centre, two or three nurse-midwives (trained auxiliary nurse-midwives [ANMs] and general nurse-midwives [GNMs]) provided reproductive and child-health services, including 24-hour delivery service and maternal and neonatal care, while one nurse-midwife visits homes of all women as soon as possible after receiving the reports of deliveries.
Collection of data
The programme started in April 2006. During April-December 2006, we developed the intervention, including pretesting and finalizing the structured examination checklist, training of nurse-midwives, developing formats for reporting of births, identifying village-level personnel to report births and orienting them, and developing data-management systems.
Forty-five ASHAs and six VHWs residing in the field area of ARTH were trained over two days to register all pregnant women, motivate them to seek at least four antenatal check-ups, and report all births in the field area, irrespective of the place of delivery. The ASHAs and VHWs visited each village and sought information about all recent births. In places where the ASHAs or VHWs were not active, various other persons, such as traditional birth attendants, jeep-drivers, and prominent women in the village, were identified as key-informants. They reported all deliveries in these 49 villages to the nearest ARTH health centres by telephoning at the centre or travelling there. They received an incentive for reporting deliveries early: Rs 50 (US$ 1) if reported within 24 hours, Rs 40 (US$ 0.8) if reported between 24 and 72 hours, and Rs 10 (US$ 0.2) if reported after seven days. Family members were also motivated through personal contacts, posters, and wall-paintings to inform about delivery. If the family reported a delivery, they were provided a free set of clothes for the newborn baby (costing ~Rs 40).
Six nurse-midwives employed in the ARTH's health centres were given a six-day additional training on postpartum maternal and neonatal care. The pre-service training of these nurse-midwives included either an 18-month course for ANMs or a three-year diploma in general nursing and midwifery. A structured checklist was developed and translated into Hindi for use by the nurse-midwives to detect complications in mothers and newborns as early as possible and manage these problems.
After the information on delivery was recorded, a nurse-midwife visited the home of the woman who recently delivered; the first visit was made as early as possible after delivery, preferably within 2-3 days after delivery, and the second visit at 6-9 days after delivery. Arrangements were made to provide a transport to the nurse-midwives through a motorcycle for reaching the home of women since homesteads in the area are very scattered. The nurse-midwife used the detailed structured checklist to inquire about each condition of the mother and the baby and carried out a detailed physical examination. Examination of the mother included general examination, examination of breasts and abdomen, and a haemoglobin test at home using Sahli's method. Perineal and pelvic examinations were carried out only if the woman reported a symptom relating to these areas since these examinations were carried out in the home setting. Table 1 shows the activities carried out by the nurse-midwives to detect and manage postpartum maternal and neonatal complications, and the diagnostic criteria for various conditions are shown in Table 2.
If a woman or a newborn was detected to have a complication, the nurse-midwife treated it or, if severe, advised referral and informed about arrangements for free transport and treatment. Referrals were made either to the ARTH's RCH centres or to a referral hospital in the city, depending on the severity of the condition. To assess the quality of care at postnatal visits, a senior nurse-midwife or a physician accompanied the nurse-midwife during 5% of postnatal visits and assessed the quality of care provided. A research manager also visited 70% of women at 4-8 weeks after delivery and made enquiries, using a standard checklist, regarding some procedures, such as measuring blood pressure, haemoglobin, weighing of the newborn, and counselling. If any gap in care was detected, they immediately gave feedback to the nurse-midwives. Additionally, the intervention was discussed once a month among the researchers, supervisors, and some of the nurse-midwives. The nurse-midwives could not be called for all meetings since they were in the health centres or in the field, and their attendance at a meeting would mean disruption of the 24x7 delivery service or postnatal visits for that day. The feedback of the nurse-midwives was sought frequently during field-visits by the project personnel.
On visiting a woman's home, the nurse-midwife informed her of the purpose of the visit and sought verbal consent for interview and examination. A minority (1.1%) of women visited by the nurse-midwives refused the postnatal checkup. During examination, 12.8% of the women refused haemoglobin test. For another 10.2% of the women, the nurse-midwives did not offer haemoglobin test. This happened particularly in the first year when nurse-midwives did not conduct a haemoglobin test if the woman had undergone a haemoglobin test during the antenatal period. Subsequently, during review meetings, the need to conduct the haemoglobin test during postpartum visits was emphasized, even if the woman had a recent haemoglobin test in the antenatal period.
Hence, haemoglobin was eventually tested for 77.1% of women during postnatal visits (Fig. 1). Data were analyzed using the Epi Info and Stata software (version 11).
The implementation of the programme started in January 2007 and is continuing till date. In this paper, we are presenting data for the January 2007-December 2010 period.
Number of reported births and postnatal visits
Over a four-year period from January 2007 to December 2010, we expected 5,712 births in the field area based on the birth rate of 26 per 1,000 people. Of these, 5,266 deliveries (92.2% of the expected births in the community) were reported to the nurse-midwives within 28 days after delivery, and 5,042 births (88.3% of the expected deliveries) were reported within 14 days. The median interval between delivery and reporting was one day. Sixteen postpartum maternal deaths also occurred during the four-year period in this field area, nine of which occurred within seven days after birth. However, we did not present data on these maternal deaths in this paper as it focuses on morbidities detected during the postpartum visits.
After the initiation in 2006 of a national scheme called Janani Suraksha Yojana to provide cash incentives to women delivering in government institutions, there has been a major shift in the place of delivery from home to institutions, starting with 53% in 2007 and increasing to 82% in 2010. Figure 2 shows average data for the respective years. Overall, 68.2% of the reported deliveries occurred in institutions and 31.8% at home.

The nurse-midwives made home-level postnatal visits to 4,975 women (94.5% of the women whose births were reported and 87.1% of the expected number of births in the area). In 5.5% of the women, the postnatal care (PNC) visits could not occur despite receiving a report of delivery. This was primarily due to the leave schedule of nurse-midwives: when more than one nurse-midwife was on leave at a health centre, 24x7 delivery service of the health centres received priority over postnatal visits. An attempt was made to ensure that the visit occurred as early as possible. The median interval between delivery and the first PNC visit was five days in the first year and three days in the fourth year, with an average of four days. In some cases, the PNC visits occurred late either because the delivery report came late, or the nurse-midwife or the motorcycle-driver was on leave, or because the two-wheeler broke down on a given day. In some cases, when the nurse visited the home on day 2 or 3 after an institutional delivery, the woman had not yet returned home. In such cases, the visit was made again after 2-3 days.
The large majority (78%) of the women were in their twenties, and 60.2% belonged to the socioeconomically-underprivileged scheduled tribes or scheduled castes (Table 3). For one-fourth of the women, this was their first delivery, and for nearly 30%, it was their fourth or subsequent delivery.
Types of maternal morbidities detected
Nearly three-fourths of the women were detected to have a morbidity after delivery. The most common problems were postpartum anaemia, sepsis, and breast and perineal infections ( Table 4).
The most common serious morbidity detected was severe anaemia present in 7.4% of women whose haemoglobin was tested (5.7% of all women). Fever was present in 4.0% of the women, although signs of uterine infection were present in only 1.3% of the women. The remaining women with fever had an upper urinary tract infection or respiratory infections. The incidence of puerperal sepsis was 1.4% following home-delivery and 1.2% following institutional delivery. The incidence of any kind of infective illness after delivery was 6.0% following home-delivery and 5.7% following institutional delivery.
Conditions relating to breasts (breast engorgement, mastitis, or flat nipple) were detected in 4.9% of the women-none, however, had a breast abscess on the day of the postnatal visit. Breast infections were also more frequent among women who had institutional deliveries. Additionally, breast conditions were more common among women with perinatal death than among those with a surviving neonate (13.1% and 4.3% respectively). Conditions relating to the perineum (perineal pain, tear, or infection) were detected in 4.5% of the women. The prevalence of perineal conditions was significantly more frequent among women who had institutional deliveries (6%) than among those who had home-delivery (1.1%). On further analysis for the one-year period for which there were data on episiotomy, we found that the incidence of any perineal condition was 28% among those with an episiotomy compared to 3.0% of those without episiotomy.
Urinary incontinence was reported by 0.1% of the women. None of the women had genito-urinary fistula. Life-threatening complications, such as severe anaemia, uterine infection, secondary postpartum haemorrhage (PPH), and severe hypertension or eclampsia, were experienced by at least 7.6% of the women, of which 5.7% had severe anaemia, and 1.8% had one of the other conditions. Since haemoglobin was not tested in 22% of the women, it is possible that the actual prevalence of life-threatening conditions was higher. Life-threatening conditions were present in 9.7% of those who had home-deliveries and 6.6% of those who had institutional deliveries. A large proportion (28%) of the women also reported lower abdominal pain, backache, or pain in arms and legs.
Since anaemia was the most common postpartum morbidity among the women, we looked at correlations for severe anaemia (Table 5). We found that severe anaemia was more prevalent among women who had home-deliveries and among women from the scheduled castes and tribes. Furthermore, multiparous women (having 3 or more children) were more likely to have severe anaemia than those with 1-2 child(ren). It was, however, not significantly different between women who received antenatal care and those who did not. We do not have information on whether women consumed iron during pregnancy or not or whether they had anaemia before delivery.
The severity of anaemia had a linear correlation with perinatal mortality (Fig. 3). Compared to women with no anaemia, women with severe anaemia were 3.7 times more likely to have a perinatal death while compared to mild anaemia, they were 2.
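To make the relative-risk comparison concrete, the short sketch below computes risk ratios across anaemia categories from per-category births and perinatal deaths. The counts are hypothetical placeholders, not the study's data; only the reported 3.7-fold ratio for severe versus no anaemia is taken from the text, and the other figures are chosen merely to illustrate a roughly linear gradient.

```python
# Illustrative only: per-category births and perinatal deaths are hypothetical,
# chosen so the severe/no-anaemia risk ratio is close to the reported 3.7.
counts = {
    # category: (births, perinatal deaths)
    "no anaemia":       (1000, 20),
    "mild anaemia":     (1500, 40),
    "moderate anaemia": (1200, 55),
    "severe anaemia":   (280, 21),
}

def risk(births, deaths):
    """Perinatal mortality risk (deaths per birth) in one anaemia category."""
    return deaths / births

baseline = risk(*counts["no anaemia"])

for category, (births, deaths) in counts.items():
    r = risk(births, deaths)
    # Risk ratio relative to women with no anaemia
    rr = r / baseline
    print(f"{category:17s} risk={r:.3f}  risk ratio={rr:.1f}")
```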
DISCUSSION
This study adds valuable knowledge on postpartum morbidity affecting women in the first few days after delivery in a low-resource rural setting. The results of the study showed that, during the first week after delivery at home or institutional delivery, women suffered a high burden of morbidities.
In our study, the most common postpartum maternal morbidities were moderate and severe anaemia. While many studies have assessed the prevalence of anaemia in the antenatal period, very few have assessed it in the postpartum period. In one study in rural Bangladesh, more than 10% of women were severely anaemic in the postpartum period, at both 48 hours after delivery and two weeks later (8). A cross-sectional survey in Viet Nam found that the prevalence of anaemia in the postpartum period was higher than among pregnant women (9). A community-based study from north India has shown that 70% of women were anaemic at six weeks postpartum (10). Studies from high-income countries have shown a lower prevalence of postpartum anaemia (11). The high prevalence of anaemia in our study is not surprising as it has long been reported that South-East Asia has the highest prevalence of anaemia among pregnant women in the six regions of the World Health Organization (WHO) (12).
Given these findings, it is important to address anaemia in the postpartum period and prenatally because it contributes to maternal deaths both directly and indirectly: acute onset of anaemia can lead to rapid cardiac decompensation and heart failure (13). It also aggravates the effects of sepsis and haemorrhage; in the latter, anaemia puts women at risk of hypotension and death with even moderate bleeding. Studies in India have found a high proportion of maternal deaths attributable to anaemia, which was reported to be a cause in almost one-fifth (19.0%) of maternal deaths in rural India (14), while the WHO's analysis of causes of maternal deaths stated that anaemia contributed to 12.8% of all maternal deaths in Asia (15). One hospital study in Rajasthan reported that anaemia contributed to 24% of all maternal deaths (16). Results of a verbal autopsy study in rural Rajasthan showed that anaemia was the second biggest cause of postpartum maternal deaths, responsible for 26.3% of postpartum maternal deaths (17), and all deaths due to anaemia occurred in the postpartum period.
Although controlling antenatal anaemia is likely to reduce the prevalence of postpartum anaemia, there is still a need to detect and manage postpartum anaemia because anaemia can result or worsen as a result of blood loss during delivery. According to the Global Burden of Disease 2003, postpartum anaemia is the most important consequence of postpartum haemorrhage (18). The report estimated that the incidence of PPH (defined as >1,000 mL of blood loss within one hour postpartum) was 2.9% in women whose third stage was actively managed using oxytocin, 5.7% among those who were managed expectantly by a skilled birth attendant, and 11.4% among births without skilled attendance (18).
Women who are moderately anaemic in pregnancy are likely to become severely anaemic if they have blood loss of even moderate amounts during delivery. Since very few of these women in low-resource settings are likely to receive treatment or blood transfusion after delivery, they are likely to remain severely anaemic and suffer from mortality or morbidity due to this. For example, only 1.3% of the women in our study received blood transfusion in their recent pregnancy, labour, or first week after delivery.
A recent meta-analysis showed that correcting anaemia of any severity was associated with reduced risk of death; for each gram percentage of increase in haemoglobin, the risk of death is reduced by 20% (19). However, very little attention is currently given to postpartum anaemia, and there is no programme to detect and manage postpartum anaemia in developing-country settings.
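As a rough illustration of what that estimate implies, the sketch below treats the 20% figure as a multiplicative relative-risk reduction per 1 g/dL rise in haemoglobin and applies it to an assumed baseline risk. The baseline value and the multiplicative reading of the estimate are assumptions for illustration, not figures from the cited meta-analysis.

```python
# Hypothetical baseline risk of death; only the 20%-per-gram reduction
# comes from the meta-analysis cited in the text (19).
baseline_risk = 0.010          # assumed 1% baseline risk, for illustration
reduction_per_gram = 0.20      # 20% relative reduction per 1 g/dL increase

for rise_g_dl in range(0, 5):
    # Each additional gram multiplies the remaining risk by (1 - 0.20)
    risk = baseline_risk * (1 - reduction_per_gram) ** rise_g_dl
    print(f"+{rise_g_dl} g/dL haemoglobin -> relative risk "
          f"{(risk / baseline_risk):.2f}, absolute risk {risk:.4f}")
```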
Our study showed a linear correlation between the level of postpartum anaemia and perinatal mortality. Results of a meta-analysis of health risks revealed that iron-deficiency anaemia was associated with 24% of perinatal deaths (19). Anaemia in the antenatal period leads to low birthweight and preterm births, which, in turn, contribute to perinatal death (20,21). However, it is not clearly established whether supplementation of iron in pregnancy reduces perinatal mortality (22).
In our study, severe anaemia positively correlated with home-delivery, multiparity, and scheduled tribe or caste. However, no significant difference was observed in rates of postpartum anaemia between women who received antenatal care and those who did not. This is not surprising since not all women receive iron supplementation, and even when they do, compliance is poor. Only 53.6% of all pregnant women received iron supplementation for 100 days or more in Rajasthan (23).
In our study, fever was detected in 4.0% and puerperal sepsis in 1.3% of women. The incidence of puerperal sepsis in our study is comparable with findings of other studies, although most studies of puerperal sepsis are hospital-based. In a hospital-based study covering 75,497 women, endometritis was detected in 0.17% of women (24). Since most cases of sepsis develop after discharge, Yokoe and others followed a comprehensive post-discharge surveillance procedure and noted that the incidence of sepsis following vaginal birth in facilities was 2.5% (25). In Nigeria and Malawi, three hospital-based studies showed an incidence of puerperal sepsis of 1.34-1.49% (26-28). Some studies have compared the rates of genital sepsis following births in the home and facility and reported higher rates of sepsis following home-deliveries (29,30).
In a community-based study in a rural midwestern Indian state, fever was detected in 8.9% of women by village health workers based on self-reports (31). Dolea and Stein estimated that the burden of maternal sepsis in six countries of the South-East Asian Region is 4.5 per 100 livebirths (32). In our study, the incidence of puerperal fever among women who delivered at home was 3.7% compared to 4.2% in those who delivered in a facility. We feel that the higher incidence of fever after institutional deliveries could be related to sub-optimal aseptic conditions, such as multiple pelvic examinations (33). Since postpartum fever usually develops a few days after delivery, postpartum home-visits are necessary to detect and manage this condition.
Although pain in the perineum and vulva is an important problem during the first few weeks postpartum, research on this issue is scanty. In Egypt, 2.1% of women reported dyspareunia after childbirth (34). Some hospital-based studies have shown a much higher incidence of perineal pain after childbirth. One study in Canada found that 38% of women with intact perineum and 71% of women with episiotomies suffered from perineal pain seven days after childbirth (35). In addition, results of a study in Nigeria showed that 28% and 69% of women with intact perineum and those with episiotomy respectively had perineal pain three days after delivery (36). Our study found that 4.5% of the women had conditions relating to the perineum (pain, tear, or infection) during the first week postpartum, and it was five times higher among women who had institutional deliveries. A relatively low prevalence of perineal pain observed in our study compared to the aforementioned studies could be due to the low incidence of episiotomy (7%). Episiotomy or perineal tears during childbirth are associated with significant pain, infection, and loss of mobility during the immediate postpartum period (37). The avoidance of unnecessary episiotomies can also reduce perineal pain (38) and infections. Currently, perineal pain is inadequately managed and needs greater attention.
Breast conditions (engorgements, infections, abscess, or retracted nipples) were detected in 4.9% of women in our study, with 1.3% having mastitis. The prevalence of mastitis varies depending on the definition and the number of weeks postpartum (39); the highest incidence has been reported at four and 12 weeks. Mastitis is reported to occur in 2-24% of breastfeeding women from several weeks up to one year after delivery in women who continue to breastfeed (40). Results of studies from developed countries showed that the reported cumulative incidence of mastitis varies from a few percent to 33% of lactating women but it is usually below 10% (41). Estimates of incidence of breast abscess from developed countries showed that the incidence varies from 0.04% to 0.4% (42,43). Very few studies have been conducted on the incidence of mastitis in developing countries. The lower incidence of mastitis in our study could be due to two reasons: first, we collected data in the first week after delivery, and second, breastfeeding is nearly universal in our study area. It is crucial to manage breast conditions because women suffer from pain due to these conditions. Breast infections form a considerable burden of disease, involve substantial costs (44), and can occasionally be fatal if untreated (41). Furthermore, breast conditions are often a reason for stopping breastfeeding (41) and hence pose a higher risk to the neonate.
We did not find any case of fistula in the 4,975 rural women. The incidence of fistula is not clearly known. It appears that there are important geographic variations in the prevalence of obstetric fistulae; specifically, it appears to be more common in sub-Saharan Africa than in other parts of the developing world (45). Many studies on fistulae are based on hospital-records, or on reports by gynaecologists or surgeons who provided information on the proportion of fistula cases from among total admissions. While they provide good indication of the existence of fistulae in particular areas of the world, they do not furnish adequate data on its true incidence. Reports from Nigeria have shown that about 1 in 1,000 deliveries is complicated by obstetric fistula (46). No comprehensive data on the epidemiological trends are available for the South Asian region. A survey conducted in 2003 to investigate the fistula situation in Bangladesh found that the number of fistula cases per 1,000 ever-married women was 1.69 (47). In a community-based study in rural India, no cases of fistula were reported when women were contacted in their homes during the first few days after delivery (31). The absence of fistula in our study is perhaps related to reduction in cases of prolonged obstructed labour because of improvement in the road network and increase in skilled attendance at delivery.
Limitations
One limitation of our data was that haemoglobin was tested for only 77.1% of the women. Comparison of women whose haemoglobin was tested and not tested revealed that the proportion of women with home-deliveries was higher among those whose haemoglobin was not tested. Since the prevalence of postpartum anaemia was higher among women with home-deliveries in this study, the actual proportion of women with postpartum anaemia could be higher. Another limitation of the study was that we did not have information about women's problems prior to delivery.
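A small bounding calculation makes this limitation concrete: the overall prevalence of severe anaemia depends on what is assumed about the roughly 22.9% of women who were not tested. Only the tested fraction (77.1%) and the 7.4% prevalence among tested women are taken from the text; the scenario values for the untested group below are assumptions for illustration.

```python
# Bounding sketch: how the overall severe-anaemia estimate shifts with the
# (unknown) prevalence in the untested group. Scenario values are assumptions;
# only 77.1% tested and 7.4% among tested come from the text.
tested_fraction = 0.771
prevalence_tested = 0.074

for assumed_untested in (0.0, 0.074, 0.10, 0.15):
    overall = (tested_fraction * prevalence_tested
               + (1 - tested_fraction) * assumed_untested)
    print(f"untested prevalence {assumed_untested:.1%} -> overall {overall:.1%}")
```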
Conclusions
Life-threatening conditions, such as severe anaemia, puerperal sepsis, severe hypertension, and secondary PPH, were experienced by 7.6% of the study women. Since most women do not receive any postpartum care at the home level, there is a risk of maternal and late maternal death and long-term consequences for such women in the next few months. A large proportion of women also suffer from other less-serious morbidities, which take a toll on women's day-to-day performance. There is evidence that women with severe and less-severe maternal complications in the early postpartum period suffer from many physical, mental, social and economic consequences (48) and a higher risk of death and infant mortality. Hence, it is essential that health programmes make investments to provide postpartum care to all women starting from the first week so that these conditions can be detected and managed in time. This is especially important for those delivering at home. However, even women who delivered in institutions suffered from many health problems, including life-threatening complications. This suggests that there is a need to screen all women properly before discharge from the facility. The results of our intervention also suggest that it is feasible for skilled birth attendants to visit women's homes and provide postpartum maternal and neonatal care to them in an integrated manner.
|
2017-04-20T06:59:47.247Z
|
2012-06-01T00:00:00.000
|
{
"year": 2012,
"sha1": "0e47a6bea4fdc196bfdc4f93293d449f8f69728c",
"oa_license": "CCBY",
"oa_url": "https://www.banglajol.info/index.php/JHPN/article/download/11316/8265",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "955e7da7d6beab6505746663624efb66b407af82",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
216495387
|
pes2o/s2orc
|
v3-fos-license
|
On the Path to Innovation: Analysis of Accounting Companies' Innovation Capabilities in Digital Technologies
Purpose – This article examines the innovation capability of accounting firms in the Brazilian market, in their use of digital technologies, based on technology development, operations, managerial, and transaction capabilities. Design/methodology/approach – We carried out interviews with the main managers of the companies and collected institutional documents and external documents on the national and international context of accounting business innovation. Findings – Accounting has begun on the traditional path towards digital innovation, demonstrating the quality and value that technology-related solutions can generate when exploited in business and especially in processes. This increase in technology causes changes in accounting business models. Originality/value – The paper contributes to the theoretical body of work on innovation and accounting, identifying that this area is on the way towards innovation by using new technologies in the creation of operations and transaction management. It is clear that the process of innovation and digital transformation already presents a real challenge to be managed.
Introduction
The central role of digital technologies in changing societies and the business environment has aroused managers' interest in dealing with innovation and the creation of digital products, services, and processes (Nylén & Holmström, 2015). According to the Accenture Digital Density Index survey, this growing trend towards the use of digital technologies in different products and services will add more than $ 1 trillion to global economic activity by 2020 (Accenture, 2015). With the rapid change of the business environment as digital technologies have been introduced, their integration into business processes has proven to be essential for contemporary organizations looking to address challenges and create opportunities for competitive advantage in a digital economy (Liu, Chen, & Chou, 2011;Tongur & Engwall, 2014).
In this regard, the MIT Sloan Management Review survey in conjunction with Deloitte found that nearly 90% of managers report experiencing the impact of digital technologies on their industries; however, less than half report doing enough to prepare for this digital revolution (Kane, Palmer, Phillips, Kiron, & Buckley, 2015). In this context, different business areas, especially the traditional ones, face one of the most challenging periods in the market (Guthrie & Parker, 2016), and the accounting sector is an example of a segment that has been disturbed due to the new digital technologies that are changing accounting activities (Pan & Seow, 2016). Fawcett (2015) points out that different digital technologies will significantly impact accounting services in the coming years, revealing their leading role in the process of changing the accounting market. This is because many routine accounting processes can be replaced by different digital technologies, which allow greater flexibility, agility, and security in daily actions (Bygren, 2016).
According to the Cokins and Angel (2017) report on the disruptive impact on accounting, routine tasks performed by accounting professionals such as data entry and bookkeeping are work processes that are increasingly vulnerable to digitization and automation. Digital technologies create opportunities to provide quality, high value-added solutions, rather than just reporting information after the fact has occurred and complying with legal obligations (Baron, 2016;Basova, 2017). Understanding the changes in the accounting market and their influences on business is fundamental to ensure the survival of companies operating in this area (Frey & Osborne, 2013). This forces accounting businesses to position themselves not only in relation to the introduction of new technologies, but also in relation to changes in services and products offered as well as to strategic business elements (Bygren, 2016).
Due to the arrival of digital technologies in accounting functions, the demand for skills related to the best use of these technologies has been growing among business and accounting professionals (Pan & Seow, 2016). Guthrie and Parker (2016) point out that accounting professionals will be challenged to go beyond the traditional skills used to perform mandatory and routine tasks, looking for new ways to create value for customers that ensure business sustainability through the exploration of new digital technologies. This is because these technologies allow the orchestration of new products, processes, services, platforms, and even new business models, enabling the digital innovation process (Nambisan, Lyytinen, Majchrzak, & Song, 2017). Given this current context, the development of innovation capability in accounting firms is critical, as it presses businesses in the area to pursue continuous innovation in response to the changing environment (Slater, Hult, & Olson, 2010), with digital technologies being the key major drivers of these environmental transformations.
Along these lines, the ability to innovate enables businesses to unite technological efforts with improved firm performance and can be seen as a global ability to absorb, adapt, and transform On the Path to Innovation: Analysis of Accounting Companies' Innovation Capabilities in Digital Technologies particular technology into specific managerial, operations, and transaction routines, which drives innovation and competitive advantage (Zawislak, Alves, Tello-Gamarra, Barbieux, & Reichert, 2012). In light of this, this article presents the following research question: How is capability for innovation exploited by companies from different accounting sectors, using digital technologies? Thus, the objective is to examine the innovation capability of accounting firms in the Brazilian market that use digital technologies focusing on technology development, operations, managerial, and transaction capabilities. These capabilities are verified in the framework of Zawislak et al. (2012), which groups them into business and technology-driven capabilities. To do so, a multiple case study was conducted with six prominent companies operating in different areas of accounting. Interviews were conducted with the main managers of these companies and institutional documents were collected. Moreover, other external documents about the national and international context of accounting business innovation were collected to support the proposed analyses. Data were analyzed using the Nvivo 11 software, with content analysis using codes from the innovation capabilities framework. Guthrie and Parker (2016) point out that it is necessary to pay attention to this emerging accounting scenario due to changes in the accounting business caused by the introduction of new technologies and digital innovations in the market. Thus, this research aims to present a better understanding of the innovation capabilities of accounting firms, which exploit digital technologies in their business, by analyzing leading companies in today's market. These analyses provide the field and managers with a more comprehensive overview of the accounting context, since they portray aspects of management and business innovation that are references in the current accounting market. Moreover, this research aims to contribute to the theory by presenting an empirical study on innovation capabilities in accounting firms, where the conservatism of the profession and the aversion to changes in business conduct are aspects that distance innovation from the accounting context (Chang, Hilary, Kang, & Zhang, 2013), an aspect also found in the literature on the subject.
Innovation and Digital Transformation
Digital technologies are being incorporated into a wide range of products and services, and are present in individuals' social, personal, and work relationships (Nambisan, 2013). In this context, the way digital technologies are being employed in different products and services ends up influencing and changing business (Demirkan, Spohrer, & Welser, 2016). This is occurring as digital technology is increasingly being introduced and exploited in business to meet the different goals of organizations, leading to profound changes across entire industries (Nylén & Holmström, 2015).
Therefore, companies today face the challenge of innovation and digital transformation. While digital innovation is characterized by the creation of new products, services, and processes, among others, digital transformation combines the effects of various digital innovations, bringing new agents, structures, practices, values, and beliefs that change, threaten, replace, or complement existing rules within organizations and industries (Hinings, Gegenhuber, & Greenwood, 2018). As a result, digital technologies open up new business opportunities, but they also create competitive pressure (Abrell, Pihlajamaa, Kanto, Brocke, & Uebernickel, 2016), stimulating the digital innovation of products and services. Nylén and Holmström (2015) reveal that the potential of digital technologies to generate innovative products and services that enable managers to achieve a competitive advantage in the marketplace arouses their interest in addressing the challenges behind innovation and digital transformation.
To overcome these challenges, it is essential to develop strategies that seek new ways of integrating and using digital technologies in business (Hess, Matt, Wiesböck, & Benlian, 2016). Therefore, companies need to create strategies and management forms for the changes that come with innovation and digital transformation (Nylén & Holmström, 2015). However, this is not a simple task for companies operating in the "pre-digital" economy that today need to adapt to the digital economy (Sebastian et al., 2017). It is observed that companies should not assume reactive behaviors only when new technologies are introduced, but also act on their strategic business elements, from operational to managerial, in order to contribute to the business innovation process. One way to organize the analysis of business models and innovations can be found in the Business Model Canvas tool. This model organizes strategy analysis by taking the following into account: the 'customer segment' served, the 'distribution channels' of products/services, the form of 'relationship' and communication of a company with its customers, the 'revenue generation' strategy, the description of the main 'resources, processes, and partners' in carrying out the company's activities, and a further description of the 'cost' structure required; so that in the end it is possible to indicate which 'value proposition' the company wants to deliver to its customers (Osterwalder & Pigneur, 2011).
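To show how the Canvas building blocks can structure such an analysis, here is a minimal sketch of a record type holding one firm's answers. The field names follow Osterwalder and Pigneur's blocks; the example values describe a hypothetical online accounting firm and are not taken from any of the six cases studied.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BusinessModelCanvas:
    """One firm's strategy described through the Canvas building blocks."""
    customer_segments: List[str]
    value_proposition: str
    channels: List[str]
    customer_relationships: List[str]
    revenue_streams: List[str]
    key_resources: List[str]
    key_activities: List[str]
    key_partners: List[str]
    cost_structure: List[str]

# Hypothetical example, not one of the case firms:
example = BusinessModelCanvas(
    customer_segments=["micro and small companies"],
    value_proposition="low-cost, fully online accounting service",
    channels=["web platform", "mobile app"],
    customer_relationships=["self-service portal", "online support"],
    revenue_streams=["monthly subscription"],
    key_resources=["cloud platform", "accounting team"],
    key_activities=["bookkeeping automation", "platform development"],
    key_partners=["cloud providers", "banks"],
    cost_structure=["software development", "staff", "cloud hosting"],
)
print(example.value_proposition)
```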
In order to investigate how companies organize themselves to enter the digital economy, Sebastian et al. (2017) analyzed 25 firms that were introducing the process of innovation and digital transformation in their business. The authors identified that strategies aimed at providing digitized solutions and customer engagement enabled these companies to enter the process of innovation and digital business transformation. In addition, the authors pointed out that the digital technologies needed to execute these strategies would be digital service platforms (which support business agility and rapid innovation) and digital technologies for business operations (which support efficiency and operational excellence). This case reveals how companies established in the "pre-digital" economy can compete in digitized environments by pointing out that digital innovation is an organizational capability that can be developed by any company today.
Innovation Capability
Innovation has always been related to the achievement of competitive advantage, which is usually attained when organizations develop their technological capabilities (Kim, 1999; Afuah, 2002; Reichert et al., 2011; Zawislak et al., 2012). However, as highlighted by Zawislak et al. (2012), not all companies that invest in technological capability are innovative, just as organizations that invest little in technological resources may present innovative performance. In this sense, it is emphasized that innovation capability is the meta-capability that can best explain innovation and the achievement of competitive advantage, becoming crucial for the achievement of the latter when highly unstable market conditions exist (Rajapathirana & Hui, 2018).
Thus, it is understood that capability for innovation pressures organizations into continuously developing innovations in response to the changing environment (Slater, Hult, & Olson, 2010). This is because capability for innovation is embedded in all strategies, systems, and structures that support innovation in an organization (Gloet & Samson, 2016). Laforet (2011) points out that innovation only happens when the company has the ability to innovate, making it a valuable asset for organizations to provide and sustain competitive advantage (Rajapathirana & Hui, 2018). Innovation capability makes it easy for companies to introduce new products and services, and innovation performance can be explained as a combination of assets and resources (Guan & Ma, 2003;Lawson & Samson, 2001).
Given these concepts, we have in the literature the framework developed by Zawislak et al. (2012) regarding innovation capability (Figure 1). In this model, the organizational overview is based on two complementary theoretical approaches: Transaction Cost Theory, which conceptualizes the organization as a relationship of contracts (treaties) that have certain limits and are in accordance with a certain governance structure (Coase, 1937;Williamson, 1985); and Capability Theory, which conceptualizes the organization as a union of resources, knowledge, experience, skills, and routines (Richardson, 1972;Chandler, 1992). For the construction of the framework the authors gather the concepts related to innovation capability and emphasize that it must be understood as a meta-capability incorporated into four different complementary capabilities: technology development capability; operations capability; managerial capability; and transaction capability (Table 1). Thus, capability is a technological learning process translated into technology development and operations capabilities, supported by managerial and transactional routines (Zawislak et al., 2012).
Table 1. Innovation capabilities and the related types of innovation (adapted from Zawislak et al., 2012)

Technology Development capability: any firm's ability to interpret the current state of the art and absorb and eventually transform a technology to create or alter its capability to operate, as well as any other capability, to achieve higher levels of technical and economic efficiency. Technological innovation encompasses the development of new designs, new materials, and new products; in addition, it includes the development of machinery, equipment, and new components.

Operations capability: the ability to execute productive capacity, which is shown through the assortment of daily routines that are embodied in knowledge, skills, and technical systems at any given time. Operational innovation encompasses new processes, improvements to existing processes, the introduction of modern techniques, new layouts, etc.; it allows the company to produce with quality, efficiency, and flexibility at the lowest possible cost.

Managerial capability: the ability to transform the result of technological development into coherent operations and transaction arrangements. Managerial innovation encompasses the development of management skills that reduce "internal friction" between different areas of the company; it aims to create new management methods and new business strategies and to improve decision making and cross-functional coordination.

Transaction capability: the ability to reduce marketing, outsourcing, trading, logistics, and delivery costs; in other words, transaction costs. Transaction innovation involves developing ways to minimize costs in transactions with suppliers and customers by seeking to create new business strategies, improve relationships with suppliers, streamline market knowledge, etc.

The innovation capability framework presents these four capabilities as complementary capabilities relating to innovation capability, indicating that in order to achieve innovation, it is necessary to build such a set of complementary capabilities (Guan & Ma, 2003; Zawislak et al., 2013). Thus, the structure of the innovation capability framework, based on technology, operations, managerial, and transaction capabilities, synthesizes the main organizational aspects that support innovation, with innovation capability being present in each one of them (Lawson & Samson, 2001; Guan & Ma, 2003; Gloet & Samson, 2016). The framework by Zawislak et al. (2012) also reveals that these four complementary capabilities are divided into two groups, corresponding to their focus: i) technology-driven capabilities, which represent the firm's accumulated experience in technical changes and production processes; and ii) business-driven capabilities, which denote the assembly of organizational and transactional routines.
Based on a case study of four organizations to assess their innovation capabilities, Zawislak et al. (2013) identified that all the organizations analyzed had the four proposed capabilities, and that one of them predominated over the others, thus characterizing the innovativeness of each organization. Moreover, it was possible to see the need of the organization, over time, to change its technological, managerial, operational, or transactional knowledge, in order to perpetuate in a particular market. Therefore, to innovate, an organization's capabilities need to be specific and integrated to generate income during the period between the introduction of an innovation and its successful diffusion (Zawislak et al., 2013).
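A minimal sketch of the framework's logic in this kind of assessment is given below: from rough scores for the four complementary capabilities, it reports the predominant capability, which characterizes the firm's innovativeness, and whether the profile leans technology-driven or business-driven. The numeric scoring is a hypothetical stand-in; Zawislak et al. assess the capabilities qualitatively rather than numerically.

```python
# The grouping of capabilities follows Zawislak et al. (2012); the numeric
# scores below are hypothetical stand-ins for a qualitative assessment.
TECHNOLOGY_DRIVEN = {"technology_development", "operations"}
BUSINESS_DRIVEN = {"managerial", "transaction"}

def profile(scores: dict) -> tuple:
    """Return (predominant capability, dominant group) for one firm."""
    predominant = max(scores, key=scores.get)
    tech = sum(scores[c] for c in TECHNOLOGY_DRIVEN)
    biz = sum(scores[c] for c in BUSINESS_DRIVEN)
    group = "technology-driven" if tech >= biz else "business-driven"
    return predominant, group

# Hypothetical firm, resembling none of the six cases in particular:
scores = {"technology_development": 4, "operations": 5,
          "managerial": 3, "transaction": 2}
print(profile(scores))   # ('operations', 'technology-driven')
```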
Method
This qualitative and descriptive research performs a multiple case study examining the innovation capability of accounting firms in the Brazilian market stemming from the use of digital technologies, based on technology development, operations, managerial, and transaction capabilities. The multiple case study involves the individual study of each company operating in one accounting area, allowing for the verification of similarities and differences between the businesses analyzed within the same context (Yin, 2015). The units of analysis were prominent companies operating in the Brazilian market in different areas of accounting, which have innovative characteristics in their business, namely: financial, managerial, tax, systems, forensic, and auditing. Note that the choice of one company in each area was due to the diversity of accounting businesses, which are characterized according to the areas of expertise in accounting. It also warrants mentioning that the public and academic areas were not analyzed because they do not fit the scope of the research, which describes business models.
The units of analysis were selected for their prominence on a national level, and obtained through open Google searches and expert referrals, not being limited to any region of the country. Google searches were performed using the terms 'inovação nos modelos de negócios contábeis' ('innovation in accounting business models') and the possible derivations 'modelo de negócio disruptivo' and 'novo modelo de negócio' ('disruptive business model' and 'new business model') and accounting, or one of its areas (financial, managerial, tax, systems, forensic, and auditing), without quotation marks. From these searches, we identified 3 companies that showed potential characteristics of innovation in their business (financial, tax, and systems), based on the descriptions provided by the companies about their business. The second technique involved the contribution of six specialists (academic professionals with market experience in each area) to choose accounting firms with the potential for innovation in their business, considering their experience in the accounting market. In this stage, 3 companies (managerial, forensic, and auditing) were selected, which, according to the experts interviewed, have characteristics of innovation, thus being relevant for analysis in this study.
The units of analysis were six accounting companies, with headquarters located in the southern region of Brazil, but operating nationwide. After defining the units of analysis, we started to collect data. There are several ways to collect data in a case study, and combining more than one type of collection technique in the same study is indicated (triangulation of data collection techniques) as it contributes to the breadth and validity of the research construct (Flick, 2009;Yin, 2015). In-depth semi-structured interviews were conducted with the main managers of the six companies. Since the objective was to examine innovation capability in the business models, it was important to know the company decision makers' views and the structuring of the business model strategies. Institutional/internal documents were also collected, composed of formalized communications on the institutional websites, other websites, blogs, magazines, and reports, where the company itself highlights the elements of its strategy (which may differ from those verified in the interviews, thus justifying the triangulation strategy). In addition, 47 external documents were collected to complement the data with formalized third party communications about companies and the national and international context of innovation in the accounting business. Those external documents were selected through Google searches using the terms 'innovation in accounting business models' (7 search pages on Google) and 'innovation in accounting business models' (31 search pages on Google; after page 15, there was data saturation), without quotation marks.
All these different data collection techniques enabled data triangulation for analysis, which is essential for case studies and for the rigorous strategy of qualitative research (Flick, 2009; Yin, 2015). In addition, note that the data collection strategy of this article focused on the research question and the diversity of data collection sources, as indicated by Eisenhardt (1989) and Mintzberg (1979), who reveal that it is not the sample size that defines the quality of a case study and its contribution to theory. Therefore, data collection strategies should be defined in order to answer the research question, preventing the researcher from being overwhelmed by the volume of data (Eisenhardt, 1989; Mintzberg, 1979).
The interviews were conducted in the second half of 2017, and the interview script was prepared based on the literature on business model innovation, which uses the elements of innovation capabilities as its basis and addresses the relevance of digital technologies in the innovation process, as well as on the work of Osterwalder and Pigneur (2011) on strategic elements of business models (Business Model Canvas). The choice of these elements took into consideration the results of Bonazzi and Zilber (2014), who characterized a company's innovation process, linking it to the concepts of organizational and business model development strategies. This roadmap allowed for the identification of the practices of each organization analyzed, serving as a basis for the analysis of innovation capabilities.
After collecting this material, the data were processed and analyzed. The interviews were recorded and transcribed to allow for better treatment and manipulation of the material, using the data processing software Nvivo 11. Documentary data were also processed and manipulated in Nvivo 11, in order to enable a contextual analysis of the innovation trends in accounting business models. For the analysis of the interview data, we used content analysis, seeking to describe the meaning of the qualitative data by assigning codes to the material collected in a coding framework (or code book) that presents all aspects of description and interpretation (Schreier, 2013), as shown in Appendix 2. The coding framework was built from codes derived from the literature on innovation capabilities by Zawislak et al. (2012) and the literature on the strategic elements of business models (Osterwalder & Pigneur, 2011). During the content analysis, the initial codes based on the literature were refined, elaborated on, and related or interconnected, so axial categorization was used in this research (Gibbs, 2009).
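As an illustration of how a coding framework of this kind can be organized (the actual Nvivo codebook is presented in Appendix 2 and is not reproduced here), the sketch below pairs the capability codes with example indicator terms and tags interview excerpts by simple keyword matching. The indicator terms and the excerpt are hypothetical and serve only to show the structure of the coding step.

```python
# Hypothetical, simplified stand-in for the study's codebook: capability codes
# from Zawislak et al. (2012) with invented indicator terms.
CODEBOOK = {
    "technology_development": ["technology", "cloud", "platform", "software"],
    "operations": ["process", "routine", "service delivery", "production"],
    "managerial": ["management", "strategy", "coordination", "decision"],
    "transaction": ["customer", "price", "market", "supplier", "cost"],
}

def code_excerpt(excerpt: str) -> list:
    """Return the capability codes whose indicator terms appear in an excerpt."""
    text = excerpt.lower()
    return [code for code, terms in CODEBOOK.items()
            if any(term in text for term in terms)]

# Hypothetical excerpt, not a quote from the interviews:
excerpt = "We invested in a cloud platform to speed up service delivery to customers."
print(code_excerpt(excerpt))
# ['technology_development', 'operations', 'transaction']
```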
Results
The results are organized in order to present, according to the analysis of the strategic elements of the innovative business models of the companies studied, the innovation capabilities based on the framework of Zawislak et al. (2012), which are: technology development, operations, managerial, and transaction. To represent the reports of the six managers of the companies studied, the following code was used: G_fin to refer to the financial area manager, G_man for the managerial area one, G_tax for the tax area one, G_sys for the systems area one, G_for to the forensic area one, and G_aud for the audit area manager.
Financial Accounting Company
As noted in the institutional documents, the company in the area of financial accounting has been in the market for 5 years and has excelled in providing accounting services to clients. As with other accounting firms operating in this area, G_fin points out that the company provides all services necessary to maintain customer accounting, meeting the required legal obligations: "we have to deliver all that is required by law, by the government, to the customer, so we perform all the necessary obligations, bookkeeping, tax calculations, everything." However, this company differentiates itself by focusing on a specific customer segment (micro and small companies), providing its services in a fully online manner and at low cost. Table 2 illustrates the strategic elements identified in the company, which allow us to evaluate the way the company organizes and structures its business. The use of different digital technologies (notably cloud technologies and digital platforms for communication and document transfer) has enabled the company to change its strategies and business model to fully operate online and bring more accessibility and agility in providing accounting services to the client segment with which the company operates (Demirkan et al., 2016;Pan & Seow, 2016;Sebastian et al., 2017). G_fin points out that "it was through new technologies that we were able to offer the service we offer today, in the cloud, to thousands of companies and in various cities across the country, which was not possible before without these technologies." Thus, the company's technology development capability is noteworthy: "we work a lot with technology, we invest heavily in technology, and we believe technology is what differentiates us from our competitors" (G_fin). This capability enables the company to use digital technologies for strategic purposes, primarily modifying its processes to provide a new service, which is fully online and different from that of the competition. As highlighted by Zawislak et al. (2012), technology development capability is a result of the learning process in which companies internalize new knowledge to produce technological changes that lead to new products and services.
The ability to work these technologies properly in order to produce marketable goods and services is also identifiable within the enterprise. This capability requires companies to implement production systems that are appropriate for the products or services offered, for the company's capacity, and for customer needs (Zawislak et al., 2012). In this sense, G_fin highlights that the key operational processes are the "development of the platform, which is used to deliver the service to the customer, and the client assistance, which is very important," and is also performed by digital platforms. Note that the production system is structured around the developed digital platforms, which facilitate the work of the accounting team responsible for the supervision and execution of accounting services. This structure allows the company to offer an accounting service to more than 10,000 customers at an average price of R$ 190 ($ 45), which is well below the market price. This enables the business to be scalable, ensuring cost savings, flexible actions and, consequently, speedy responses to customers and greater total revenue.
Regarding managerial capability, G_fin emphasizes that management actions are focused not only on administrative issues, but mainly on improving the service provided, seeking better ways to use platforms that reflect the service delivered to the customer and the innovation continuity: "we are always working to ensure we do a good job in delivering accounting to micro and small companies." Regarding transaction capability, G_fin points out that the structuring of the business around digital platforms, which allows the company to offer scalable and low cost online services, enabled the firm to provide its clients with "more simplicity, practicality, and especially savings."

Thus, it is noted that the company has a combination of the four capabilities analyzed, with the technology development, operations, and transaction capabilities being more impactful in the company, ensuring that the company is in fact innovative, since at least one of the four capabilities is predominant. These results are in agreement with the findings of Zawislak et al. (2012), which show that innovative companies that have a predominance of technology development or operations capabilities at the beginning of their activities may need to develop other capabilities (managerial and transaction) as the market matures. Note that this shift to other capabilities can come from the company itself, regardless of market dynamics (Zawislak et al., 2012).
In short, the company analyzed has technology-driven capabilities. It is possible to observe the 'entrepreneurial feature' in the company, which is verified by the relationship between technology development and operations capabilities (Zawislak et al., 2012). This reflects the fact that the exploration of new technologies allows for the creation of new operations, contributing to the company's innovativeness.
Managerial Accounting Company
The company analyzed stands out in the market by offering management consulting services for micro and small companies for over 10 years, according to institutional documents presented. Typically, managerial accounting services are not accessible to this market segment because of the price charged for this type of service. However, by using digital communication and management platforms and restructuring business configurations, the company was able to innovate in the managerial area, making the service faster, agile, and cheaper: "in our company, we have process and management model innovation" (G_man), corroborating the findings of Sebastian et al. (2017). Table 3 illustrates the strategic elements of the company analyzed. When assessing these strategic elements, we once again note the presence of digital technologies supporting the business. Such technologies employed in accounting routines allow work processes to be more dynamic and secure (Bygren, 2016). The technology development capability of this company was most evident at the beginning of the company's operations, when technologies were installed and exploited within the business: "since I started the company I have spent a lot on technology, […] because I always wanted the BEST technology […], so I wanted the cloud, I wanted all document management in the cloud, and nobody used it" (G_man).
The company's operations capability is seen more prominently in the business today as it seeks to efficiently employ the technology used to improve the quality of service provided (Zawislak et al., 2012). According to G_man, "technology is fundamental to the work we do and it is fundamental as a means. For example, the relationship channel we use is an online platform where the customer logs in and places their entire request, and we have a maximum of 24 hours to answer." In this case, the company organized its operations and staff so that they are always prepared to receive any kind of customer request. To do so, the company makes each of its employees responsible for a small client portfolio (account manager), allowing that professional to have a complete and in-depth view of all the companies in their portfolio. This is made possible by the managerial capability of the organization, which sought new management methods and new business strategies, improving cross-functional coordination (Zawislak et al., 2013; Alves et al., 2017): "I have worked in other offices, so I saw that people wanted to deliver what they wanted, and the customer didn't want what was being delivered, so we reversed that logic, we listen to the customers and deliver what they want, […] it was a business change" (G_man). Regarding transaction capability, it is noted that the company has been looking for new business strategies, especially when dealing with public universities and incubators: "I have extremely traditional clients, who like my service delivery model, but I also have a portion of customers who could no longer see value in that accounting model, which is delivering things that don't make sense to them" (G_man).
It is observed that the company has a combination of the four capabilities analyzed. However, operations and managerial capabilities are more impactful across the business, ensuring that the business is innovative. These results are in agreement with the findings of Zawislak et al. (2012), showing that at the beginning of the activities of the company analyzed, the technology development capability predominated, then that capability migrated to operations and managerial capabilities according to changes that the company deemed necessary for the business. In this case, it is noted that the capabilities of the company analyzed are more focused on business, although there are still technology-driven traits. It is also possible to perceive the "operations guarantee," observed in the relationship between operations and managerial capabilities (Zawislak et al., 2012).
Tax Accounting Company
As noted in the institutional documents, the company analyzed has been in the market for over 5 years and stands out for offering services related to tax strategy, such as mapping tax opportunities and automatic monitoring of tax rules, which are supported by the intense use of digital technologies (anticipatory activities). This allows the company to differentiate itself from other businesses in the area, which focus on tax credit recovery and tax review activities (reactive activities). Table 4 illustrates the strategic elements identified in the company.

As noted by G_tax, the company performs "technology-based tax accounting, […] using database, artificial intelligence in programming, systematic barcode taxation, QR Code taxation of some products, which governments might end up using in the future." Thus, the technology development capability stands out, which allows the company to use several digital technologies for strategic purposes, mainly modifying its processes to offer a new service unlike that offered by the competition.
In this case, the operating capability to work with these technologies is also identifiable within the company, requiring the implementation of production systems suitable for the execution of services (Zawislak et al., 2012). G_tax points out that working with "a platform that runs a web service, a computing engine that works around the clock […]" is essential because the company can capture data and information from customers at any time. This allows accounting advisors to always have access to up-to-date information for executing tax strategies quickly and proactively.
Along these lines, the managerial capability can also be observed within the company, which seeks to adapt the business by valuing new management methods and business strategies that ensure business innovativeness: "to work with a digital model, you have to be prepared to be open-minded to change everything you have seen in traditional terms, you have to think about technology and about people, […] if you don't have these, it doesn't work" (G_tax).
On transaction capability, Zawislak et al. (2012) point out that once a company has developed a technology solution, it needs to be able to do whatever it takes to favor its transactions and sales. Along these lines, G_tax points out that the company "created markets that did not exist, created needs that perhaps the companies did not even know they had; they […] did not even imagine that they could have so much tax loss due to not controlling the register correctly." Although the company acts with the advantage that its customers recognize the value of the service provided, and this alone favors the company's transactions, other actions related to transactions are not noteworthy within the company analyzed.
It is noted that the company has a combination of the four capabilities analyzed, and the technological development, operations, and managerial capabilities are impactful, ensuring that it is indeed innovative. In this case, it is possible to assess that the analyzed company has more technology-driven capabilities rather than business-driven ones. It is also possible to perceive the company's performance in the "entrepreneurial function" (relationship between technology development and operations capabilities), "technology management" (relationship between technology development and managerial capabilities), and "operations guarantee" (relationship between operations and managerial capabilities), as according to Zawislak et al. (2012).
Systems Accounting Company
The company analyzed has been in the market for 6 years and has stood out by offering a low-cost online financial control platform to an underexplored market segment (micro and small entrepreneurs), according to institutional documents. The exploration of digital technologies, such as digital platforms and cloud computing, allowed the company to create a new product to be offered in the market, financial control software, corroborating the research by Demirkan et al. (2016). In this context, the company's technology development capability is noteworthy, enabling it to use digital technologies for strategic purposes, allowing for the creation of a new and unmarketed product: "we have the first cloudborn management system for small companies; we were the first company to offer automatic and banking integration […] with a focus on data entry automation. We want the system […] to be as simple and intelligent as possible" (G_sys). The technology development capability comes from the learning process, in which internal knowledge is exploited to produce technological changes that lead to new products and services (Zawislak et al., 2012).
The ability to operate these technologies is also identifiable within the enterprise. In this sense, G_sys highlights the importance of maintaining and constantly updating the platform to maintain product quality, as well as bringing more features to the product so that it keeps its prominent position in the market: "we have people improving the software every day, so we are offering more and more integrations, the […] format of sending financial information to accounting is also evolving, we are always looking for new ways to be even more automated." Regarding the managerial capability, G_sys highlights that management actions are focused not only on administrative issues, but also on improving the product offered: "our focus is on the user, so we need to streamline the traditional way of doing business, […] to hire and for ERP in the past we needed a server, a license, and a technician to do the deployment and training, our software doesn't need any of that." The transaction capability, in turn, is evidenced by G_sys, who points out that the exploitation of digital technologies to create a new product and the organization of business has allowed "the practicing of lower prices than the traditional model, because it is less bureaucratic, less resources are needed and you can practice lower prices, scale […] the business model and gain leadership." Still, G_sys underscores the importance of investing in marketing to conquer the market: "we work hard activating the needs of the public so that they see the advantages of using the management system." It is observed that the company has a combination of the four capabilities analyzed, all of which are observed in the company. This confirms the innovative characteristics of the business, which allowed the company to gain over 800,000 customers in less than 6 years. In short, it is noted that the company analyzed has technology and business-driven capabilities, which makes its innovative capability more sustainable. Above all, it is possible to perceive the "technology selling" feature in the company, observed in the relationship between technology development and transaction capabilities (Zawislak et al., 2012). The exploration of new technologies enables the creation of new products, contributing to the continuity of the company.
Forensic Accounting Company
The company analyzed has been engaged in forensic accounting for a long time (over 50 years) and innovates by using in-house software that supports the expert's work process and facilitates communication between clients and partners, as stated in institutional documents. One of the great features provided by the software is the real-time updating of proceedings between clients, experts, and lawyers. In addition, the system interface allows clients and experts to communicate quickly and easily. The company also develops innovative applications for this area (which will not be mentioned in the analysis so as not to hinder the development of this innovation). The strategic elements identified in the company are shown in Table 6. From assessing these strategic elements, we once again note the presence of digital technologies supporting the business. Such technologies employed in accounting routines allow work processes to be more dynamic and secure (Bygren, 2016). The technology development capability of this company was most evident at the beginning of the business's operation, when technologies were installed and exploited to create software that would meet internal demands, as well as integrate and facilitate communication with clients and other partners: "our internal software is the tool that orients our work, it has been designed for the internal benefit of the organization" (G_for).
The operations capability can currently be observed in the business, as the company seeks to efficiently operate the technology used to improve the quality of service provided (Zawislak et al., 2012). According to G_for, the software helps to bring more agility in conducting activities, since it is possible to follow all the progress of the processes in which the analyzed office operates: "here in the software we have all processes that we downloaded from the clients we have, so there is a massive amount of processes and data on expert calculations." This allows the company to provide quick results to customers and maintain different deadline controls. This is made possible by the managerial capability, which sought new management methods and business strategies, such as the implementation of software that aided internal control and facilitated communication with customers and partners: "the software gives a better dimension and real-time working insights of how we work, so it allows us to better direct ourselves, and that makes us more efficient, […] it is an advantage for us to have an online view of things" (G_for). Regarding transaction capability, it is noted that the company does not currently invest in this capability, which was more present when the software was implemented. Regarding the creation of online communication channels for customers and other partners: "whenever I have to send something or talk to my client, I will send or talk via the online communication channel provided by the software" (G_for).
It is verified that the company has a combination of the four capabilities analyzed, with the operations and managerial capabilities being the most present in the business today. These results are in agreement with the findings of Zawislak et al. (2012), showing that at the beginning of the activities of the company analyzed, the technology development capability predominated, then that capability migrated to operations and managerial capabilities according to changes that the company deemed necessary for the business. In this case, the capabilities of the company analyzed are focused on business rather than technology, evidencing the "operations guarantee" relationship, which is observed through the correlation between the operations and managerial capabilities (Zawislak et al., 2012).
Audit Accounting Company
The company analyzed innovates in how it provides its services to customers. It is a small company, founded in 2003, that uses automation technologies to assist in work processes and enable the execution of more efficient audit tests and analysis, according to the institutional documents. Table 7 illustrates the strategic elements identified in the company, which allow us to evaluate the way the company organizes and structures its business. The use of digital technologies (especially automation technologies) has enabled the company to change its processes in order to automate much of the daily auditing work, which has brought more agility and security to the activities performed by the auditors (Bygren, 2016; Cokins & Angel, 2017). Regarding this, G_aud highlights the following: "we use a lot of technological tools today; we use R, which is a programming language, to create 'scripted' tests whenever possible." Along these lines, it is noted that the company has technology development capabilities. This enables the company to use digital technologies for strategic purposes to modify its processes and offer a high-quality service, with greater security and agility (Bygren, 2016).
This technology development capability reflects positively in operations capability. G_aud points out that new digital technologies have made it possible to reduce manual processes and automatize processes: "I have an action plan, a scripted work program, and I perform the tasks in there, […] I do some tasks inside R, and some tasks outside, which are more manual, such as interviewing and process mapping jobs, but today most of our demand is going to the process automation line." This agility brought about by automation technologies allows the auditors to better focus on thinking and evaluating the client's business: "the accountant's own role is changing in a way, and we hope we have more time to think about how to predict problems, how to anticipate problems, how to control things" (G_aud).
Regarding managerial capability, G_aud points out that management actions are focused on administrative issues and the customization of the service provided: "most technology companies work with standardization, with a product that can be delivered to more customers at a lower cost, […] we want to work with customization, and we differentiate ourselves precisely in the ability to plan tests as efficiently as possible for the specific customer problem." Regarding transaction capability, G_aud emphasizes seeking commercial strategies to sensitize the customer, pointing out the added value in the service provided: "with automation technologies we reduced a lot of work that was done in terms of handling databases […] we now use this time to provide customer coverage, to help identify […] what are the critical points of their business." It is noted that the company has a combination of the four capabilities analyzed, with the technological development and operations capabilities being more impactful, ensuring that the company is indeed innovative. It is possible to observe that the company has technology-driven capabilities. Moreover, it is noted that the "entrepreneurial feature" stands out, which is observed through the relationship between the technology development and operations capabilities (Zawislak et al., 2012), where the exploitation of new technologies allows the creation of new operations, contributing to the company's innovativeness.
Discussion
Considering the strategic elements highlighted in the cases studied and their most prominent innovation capabilities, Figure 2 presents the results in relation to the composition of the companies' innovation capability in the various accounting areas. It illustrates the relationship between the innovation capabilities already present in the literature, the main practices that denote innovation capability, and which areas of accounting are evidencing this capability.

Accounting market innovation, mainly due to the different technologies that emerge to support and optimize accounting activities, is leading academics and market professionals to pay attention to the new business possibilities that emerge from this conjuncture (Guthrie & Parker, 2016). In the case of the companies analyzed, it is noted that to some extent (since not all innovation capabilities are always verified), businesses have been able to digitally innovate by presenting new organizational structures in their business models, which allow the optimization of professionals' time and flexibility to perform higher value-added tasks (such as data analysis, for example), as well as offering new products and services to the market. However, the limitations for innovation, except in the systems area, are also presented: i) in the scalability of accounting activities, which provides cost reductions (given the efficiency resulting from the specialization of professionals in certain activities, and also the possibility of better input trading, considering bulk deals); and ii) the increase in revenues, due to the diversity and large quantity of products and services offered.
From examining the capabilities inherent in their innovation capability, the companies analyzed present, at first, only technology-based innovation capabilities (left side of Figure 2). From the cases analyzed, it is clear that these companies are following the trend of the digital innovation process (Nambisan et al., 2017), as presented by the different external documents. That is, there is the exploration of new digital technologies, working on the 'technology development capability' to take advantage of the innovation opportunities (products, services, and processes, among others) provided by these technologies (Hinings et al., 2018). It is also noted that the use of different technologies enables the analyzed companies to offer new services and products with higher added value for the client, as it was possible to verify in the recurrence of the terms 'data use,' 'consulting,' and 'customize/personalize' in the external documents analyzed (see Appendix 1).
It is noteworthy that the 'operations capability' was the only one observed in all the companies analyzed. This is because, although the companies analyzed deliver services similar to others in the market, considering the great normative influence of accounting activities, these businesses stand out in the way they provide their services, which is made possible, for example, by the support and use of different digital technologies in communication processes and other processes that characterize service delivery.
As a second step in the development of innovation capability, it was then possible to verify the movement from technology-driven capabilities to business-driven capabilities (to the right side of Figure 2). This move is consistent with the digital transformation process, which comprises a set of digital product, service, and process innovations, giving rise to new structures, practices, and values that change the business (Hinings et al., 2018).
Thus, it is noted that the companies analyzed at first worry about making the best use of information and digital technologies, and only then worry about a more radical change in their way of doing business, as can be observed by the presence of 'managerial capability.' This is because accounting is a very traditional branch of knowledge, so innovation tends to start with exploiting technology-driven capabilities and then involves building business-related modifications (Zawislak et al., 2012).
It warrants mentioning that the external documents analyzed concern mostly technologydriven capabilities, and business-driven capabilities are still barely discussed. Therefore, it can be noted that the 'transaction capability' of the accounting firms analyzed is still hardly verified. In this regard, only companies related to the financial and systems areas demonstrate this capability in a relevant way, since they are businesses that achieve scalability, cost reductions, and increased revenue; based on the characteristics of the products and services offered.
Final Considerations
This research achieved its objective by examining the innovation capability of accounting firms in the Brazilian market that use digital technologies, based on technology development, operations, managerial, and transaction capabilities. In all companies analyzed, the presence of the four capabilities was identified, each of which is explored in different ways, considering the differences between each business model. These findings match the results of Zawislak et al. (2012, 2013), which reveal that companies have all four capabilities, that is, none of them is absent; however, for the company to be innovative, at least one of these four capabilities must be predominant. It was observed that the most prevalent capabilities in the different cases studied are more geared towards technology than towards business, confirming the observation by Zawislak et al. (2012), who state that in innovative companies there is initially a predominance of the technology development or operations capability. In this sense, the results presented suggest that accounting has started on the path that traditionally moves towards digital innovation, confirmed by the changing trend of the accounting market, demonstrating the quality and value that solutions related to digital technologies can generate, due to these technologies being exploited in business, and especially in processes. It is also worth considering that this increase in technological artifacts has consequently caused changes in accounting business models, corroborating the studies by Baron (2016) and Basova (2017).
As limitations of this research, it is noted that, although it was not intentional, only companies based in the southern region of Brazil were consulted. The study of companies from other locations may provide new findings on the topic discussed. Another limitation of the research lies in the fact that only the main manager of each company was interviewed. Interviews with other members of the companies, as well as interviews with clients, might have contributed new discussion points to the subject. In addition, since these are specific cases, it is not possible to generalize the results highlighted. However, the results presented can be used to increase the body of theoretical knowledge on innovation and accounting. This is because they identify that the accounting industry is already on the way to innovation by adhering to the use of new technologies that allow the creation of new operations and the management of transactions.
Another contribution is the realization that the process of innovation and digital transformation already presents a real challenge to be managed by accounting firms, in an area that is considered traditional. The companies analyzed have already started using new digital technologies, focusing on accounting services innovation, and now they are moving towards digital transformation, creating new businesses from the innovation of higher value-added accounting products and services, which generate market gains and competitive advantage (Zawislak et al., 2012;Nylén & Holmström, 2015). Thus, this research contributes to the field by presenting accounting business innovation trends involving new digital technologies, revealing how innovation capabilities are employed in contemporary and prominent cases in the market. Finally, the analyses performed in this research provide managers with a greater context regarding the changes in the current accounting environment, through the discussion related to new digital technologies and business innovation capabilities.
We suggest future studies to conduct research that seeks to identify factors that may be inhibiting the development of innovation capabilities in each accounting area. In addition, we suggest conducting research that seeks to identify the characteristics of accounting areas that facilitate this process of digital transformation.
Progress in the Use of Biobutanol Blends in Diesel Engines
Nowadays, the transport sector is trying to face climate change and to contribute to a sustainable world by introducing modern after-treatment systems or by using biofuels. In sectors such as road freight transportation, agriculture or cogeneration, in which electrification is not considered feasible with the current infrastructure, renewable options for diesel engines such as alcohols produced from waste or lignocellulosic materials with advanced production techniques show a significant potential to reduce the life-cycle greenhouse gas emissions with respect to diesel fuel. This study concludes that lignocellulosic biobutanol can achieve 60% lower greenhouse gas emissions than diesel fuel. Butanol-diesel blends, with up to 40% butanol content, could be successfully used in a diesel engine calibrated for 100% diesel fuel without any additional engine modification or electronic control unit recalibration at a warm ambient temperature. When n-butanol is introduced, particulate matter emissions are sharply reduced for butanol contents up to 16% (by volume), whereas NOX emissions are not negatively affected. Butanol-diesel blends could be introduced without startability problems up to 13% (by volume) butanol content at a cold ambient temperature. Therefore, biobutanol can be considered as an interesting option to be blended with diesel fuel, contributing to the decarbonization of these sectors.
Introduction
In recent decades, new restrictions on emissions have been legislated in many developed countries, as fast economic growth and urbanization have led to a substantial increase in the size of the vehicle fleet, causing harmful effects on the environment and human health [1]. Specifically, in 2009, European Directive 2009/28/EC [2] proposed a scenario where transport fuels will include up to 10% of biofuels in 2020. In 2015, European Directive (EU) 2015/1513 [3] proposed that at least 0.5% of this renewable fraction should be advanced biofuels (indicative target). A few years ago, European Directive (EU) 2018/2001 [4] promoted the use of biofuels, increasing the mandatory renewable energy in the transport sector up to 14%, including electrification, and the minimum content of advanced biofuels and biogas up to 3.5%, for 2030. The contribution of heavy-duty vehicles in the transport sector, tractors in the agricultural sector and cogeneration sector to greenhouse gas emissions are nowadays increasing but electrification is not yet a viable option for the usual distances and the current infrastructure. Consequently, researchers are focused on the replacement of fossil fuels with renewable fuels, which have lower emissions than greenhouse gases and other pollutants, and are compatible with modern diesel engine technologies [5,6].
Among the renewable options for substituting partially or totally diesel fuels in the mentioned sectors, biodiesel fuel, generally obtained from conventional feedstocks such as vegetable oils or animal fats through a transesterification process with methanol obtaining a mixture of fatty acid methyl esters (FAME), has been widely used for a couple of decades. Several studies concluded that the power output from biodiesel was similar to that of diesel fuel [7,8]. However, the conventional production of biodiesel fuel together with filter plugging problems, caused by some biodiesel components (sterol glycosides and saturated monoacylglycerols), and storage difficulties derived from fast oxidation, has encouraged researchers to focus on the development and the implementation of advanced biofuels to be blended with diesel fuels or even with biodiesel-diesel blends [9][10][11].
Alcohols produced from waste or lignocellulosic materials through advanced production techniques constitute a sustainable alternative. Among alcohols, ethanol and butanol have been proven to reduce the life-cycle greenhouse gas emissions when produced from biomass and waste feedstocks [12]. Advanced processes to produce lignocellulosic biobutanol achieve lower greenhouse gas (GHG) emissions than those from conventional feedstocks (around 40%) [13]. Although biobutanol can be produced from both biological and chemical routes [14], the ABE (acetone-butanol-ethanol) fermentation route in which sugar, glycerol or lignocellulose feedstocks are fermented by microorganisms to produce n-butanol, ethanol and acetone, is the most widely used [15]. The low final butanol concentration, the limitations in the butanol recovery during the fermentation process, the presence of unwanted products such as butyrate and acetate (apart from acetone and ethanol) and high feedstock costs hinder the economic competition of biobutanol with respect to petrochemical synthesis [16]. Therefore, research efforts are focused on these limitations to improve the economic competitiveness of ABE fermentation [17].
Bioalcohols are not only used to replace gasoline in spark-ignition engines but also to replace diesel fuels in diesel engines [18,19]. Although ethanol was traditionally used as a blending component in the transport sector, ethanol shows some problems related to its cold start (high vapor pressure) and its distribution since it cannot be transferred through the existing pipeline infrastructures without corrosion and damage to the rubber seals [20]. Nowadays, the scientific community shows an emerging interest in studying n-butanol as a blending component. The safer character of biobutanol with respect to ethanol for transportation, fuel handling and storage [21], together with its higher cetane number [22], higher heating value [23], lower volatility [21], higher flash point [24], better lubricity [25] and better miscibility with diesel fuels (especially at a low temperature) [26] of n-butanol have contributed to such interest. Figure 1 shows scientific papers published in the last years regarding alternative fuels in diesel engines, ethanol in diesel engines and butanol in diesel engines. This schematic diagram shows that the scientific community is aware of the interesting opportunities and scenarios derived from the use of biofuels in diesel engines in road freight transportation, in agricultural applications such as tractors, harvesters and self-propelled sprinklers and in cogeneration applications, among others. Furthermore, this figure confirms the increasing interest in n-butanol as a blending component for diesel engines independently of the growing number of publications in the last years. In fact, the black line scaled in the secondary axis represents the increasing number of butanol publications over the total ethanol and butanol publications. Regarding the use of butanol as a blending component for diesel engines and vehicles, most of the tests found in the literature were carried out under steady conditions [19,28], and only a few of them were conducted following driving cycles [29,30]. The authors of these papers generally concluded that the introduction of n-butanol in diesel fuel sharply decreases particulate matter (PM) emissions due to the oxygen content of the butanol molecule [31,32], and there is an increase in total hydrocarbons (THC) when n-butanol is used [29,33]. However, there was no consensus on carbon monoxide (CO) and nitrogen oxides (NOX) emissions [34,35]. The fuel consumption has been reported to increase for increasing butanol contents due to its lower heating value [36] but without significant penalty in terms of energy consumption for butanol blends with respect to diesel fuel [37]. In terms of startability, cold startability problems have been reported for butanol blends, especially at cold ambient conditions [38].
Since butanol has a significant potential to reduce the life-cycle greenhouse gas emissions with respect to diesel fuel and to introduce a renewable blending component for diesel engines, the aim of this study is to review the properties of butanol, to study their effect on combustion and to compare them with those of ethanol and reference diesel fuel. Additionally, in this review, the different butanol-diesel mixing techniques and their limitations to introduce n-butanol without engine modifications are mentioned. N-butanol benefits in terms of combustion and emissions in diesel engines and vehicles under stationary or transient conditions in the engine test bench or in the chassis dynamometer have been discussed.
The novelty of this review with respect to those previously published regarding biobutanol is mainly focused on: (i) a review of the GHG emissions of biobutanol and bioethanol from different feedstocks compared to those of fossil diesel fuel; (ii) unlike previous studies, this one is focused just on biobutanol as a blending component for diesel engines [39] but has explored this topic in much more detail than was previously carried out, considering a range of different butanol concentrations and the implications on fuel distribution, storage and combustion compared with ethanol blends in diesel engines; (iii) differently to previous reviews about regulated and unregulated emissions from diesel and gasoline engines [40,41], this study focuses on regulated emissions using biobutanol as a blending component, with emphasis on the effect of recent after-treatment technologies.
Sustainability of N-Butanol
N-butanol can be produced from biomass via the acetone, butanol, ethanol (ABE) fermentation process. Prior to the development of petrochemical production routes to n-butanol in the 1950s, the majority of n-butanol worldwide was produced through the ABE fermentation of sugars [42]. Today it is understood that the cultivation of food and feed crops for fuel production can cause environmental impacts due to crop cultivation and land-use change [43]. Therefore, the ABE fermentation process is being developed to use waste or lignocellulosic feedstocks. A review of the greenhouse gas (GHG) emissions of bio-butanol illustrates that it can have a substantial GHG reduction compared to fossil diesel, in particular when derived from waste or lignocellulosic materials.
The life-cycle GHG emissions of a fuel are calculated by taking into account emissions from the extraction or cultivation of raw materials, annualized emissions from carbon stock changes caused by land-use change and emissions from processing, transport, distribution and the use of the fuel. Under the methodology laid out in EU Directive (EU) 2018/2001 (RED II) [4], the emissions from fuel use are taken to be zero for biofuels.
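To make the accounting described above explicit, the sketch below adds up the life-cycle stages in the spirit of the RED II methodology and compares the result against the fossil fuel comparator of 94 gCO2eq./MJ used for savings calculations in Directive (EU) 2018/2001. The function names and the stage-by-stage numbers are illustrative placeholders chosen only so that the total lands near the lignocellulosic-butanol figure discussed below; they are not values taken from the studies reviewed here.

```python
# Minimal sketch of a RED II-style life-cycle GHG balance for a biofuel.
# Stage values are illustrative placeholders (gCO2eq per MJ of fuel),
# not data from the cited studies.

def total_emissions(e_ec, e_l, e_p, e_td, e_u=0.0):
    """Sum the life-cycle stages: extraction/cultivation (e_ec), annualized
    land-use change (e_l), processing (e_p), transport and distribution (e_td),
    and fuel in use (e_u, taken as zero for biofuels under RED II)."""
    return e_ec + e_l + e_p + e_td + e_u

def ghg_saving(e_biofuel, fossil_comparator=94.0):
    """GHG saving relative to the RED II fossil fuel comparator (94 gCO2eq/MJ)."""
    return (fossil_comparator - e_biofuel) / fossil_comparator

if __name__ == "__main__":
    # Hypothetical lignocellulosic butanol pathway adding up to ~38 gCO2eq/MJ
    e_butanol = total_emissions(e_ec=10.0, e_l=0.0, e_p=24.0, e_td=4.0)
    print(f"Life-cycle intensity: {e_butanol:.0f} gCO2eq/MJ")
    print(f"Saving vs fossil diesel comparator: {ghg_saving(e_butanol):.0%}")
```

With these placeholder stage values the total is 38 gCO2eq./MJ, which corresponds to a saving of roughly 60% against the 94 gCO2eq./MJ comparator, matching the figures quoted in this section.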
Several greenhouse gas assessments of n-butanol produced from sugars can be found in the literature, including [44][45][46], of which the results of Wu et al. [46] provide the most useful comparison with calculations made using the RED II method due to their use of energy allocation. The production of n-butanol from waste and lignocellulosic sugars has only been demonstrated at a pilot scale, but German et al. [13] provide an assessment of the GHG intensity of lignocellulosic butanol if this process were scaled up to a commercial scale. Whilst there is a wide range of results due to the uncertainty in the scale-up and development of the process, they estimate that the GHG emissions of lignocellulosic biobutanol could be as low as 38 gCO2eq./MJ.
The GHG emissions from sugar-based butanol [46] and from lignocellulosic butanol [13] are compared in Figure 2. The carbon intensity of butanol is compared with diesel, corn ethanol and lignocellulosic ethanol based on typical values for these fuels provided in Directive (EU) 2018/2001 [4].
Figure 2 supports the conclusions made across many studies (including [44,47,48]) and reflected in the default values of the RED II [4] that advanced processes to produce lignocellulosic bioethanol and biobutanol can achieve lower GHG emissions than the production of the same fuel from crops. Concretely, lignocellulosic butanol can reduce GHG emissions by 60% with respect to diesel fuel, whereas the reduction for butanol from corn reaches 35%. The higher GHG emissions of lignocellulosic butanol compared to lignocellulosic ethanol are likely due to the earlier stage of development of this technology and the lower yield of butanol compared to ethanol [14].
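As a quick consistency check on the percentages above, the short sketch below back-calculates the carbon intensities they imply against the RED II fossil fuel comparator of 94 gCO2eq./MJ; the comparator value comes from Directive (EU) 2018/2001, while the reduction figures are those quoted in the previous paragraph.

```python
# Back-calculate the carbon intensities implied by the quoted GHG reductions,
# using the RED II fossil fuel comparator of 94 gCO2eq/MJ.
FOSSIL_COMPARATOR = 94.0  # gCO2eq/MJ

reductions = {"lignocellulosic butanol": 0.60, "corn butanol": 0.35}
for pathway, cut in reductions.items():
    intensity = FOSSIL_COMPARATOR * (1.0 - cut)
    print(f"{pathway}: ~{intensity:.0f} gCO2eq/MJ")
# ~38 gCO2eq/MJ for the lignocellulosic route, consistent with the estimate of German et al. [13]
```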
In addition, oxygenated fuels such as alcohols are an effective way to reduce particle emissions [49]. Nowadays, particle emissions are recognized as one of the most important contributors to climate change. However, these emissions are not taken into account in European Directives, and therefore, the environmental benefit of using alcohols comparatively with fossil fuels could be even greater than suggested by the current method for greenhouse gas assessment.
Studies on N-Butanol Properties
Ethanol or n-butanol can be used together with diesel fuel through different mixing techniques. The most common methods are blending and fumigation [39]. In the blending method, alcohol and diesel fuels are premixed before being injected through the diesel fuel injector into the cylinder. In the fumigation method, the alcohol is introduced into the intake air upstream of the manifold either by carbureting, vaporizing or injecting [50].
With the fumigation method, higher alcohol content (up to 50% in energy [51]) can be introduced in the mid-load range without being limited by alcohol miscibility problems or affecting the base diesel fuel properties since it is not directly blended. However, at low loads and high loads, the alcohol content introduced should be reduced. High alcohol content at low loads could lead to misfiring. At high loads, introducing high alcohol content could result in preignition and engine knock.
In terms of combustion benefits, the alcohol evaporation in the intake air reduces the intake temperature increasing its density and, consequently, the air available. Therefore, higher power could be reached. The turbocharger boost pressure can be useful for the atomization of the fumigated alcohol. Nevertheless, potential mechanical problems have been reported in turbocharged diesel engines using the fumigation technique due to the impact of the liquid spray on the turbocharger. The alcohol evaporation is not complete when the alcohol is introduced downstream of the compressor. Furthermore, the fumigation technique requires the addition of a vaporizer or injector and an additional fuel injection system and fuel tank adaption, which increases the engine weight [52,53]. On the contrary, blending alcohols with diesel fuels allows introducing a renewable component in the diesel engine without any engine modification. For the reasons aforementioned, this review is only focused on blending.
Although the alcohol most commonly used as a blending component in the transport sector is ethanol, the higher cetane number of n-butanol, together with its higher heating value, better viscosity, better lubricity, higher flash point and better miscibility with diesel, particularly at a low temperature, suggest that n-butanol is a better renewable component than ethanol in diesel blends [25,54,55].
The main properties of ethanol and n-butanol are listed in Table 1. Although the physicochemical properties of n-butanol are more similar to those of diesel than ethanol, it still cannot replace diesel fuel at 100% [24]. The literature reports that butanol-diesel blends can be tested up to 40% butanol content (volume basis) without engine modifications [37,60].
The following points summarize the properties of n-butanol that make it more attractive from a technical point of view than ethanol as a blend component in diesel engines.
•
Higher density. Density affects the spray formation, the injection timing, the atomization and the combustion characteristics, among other effects [21]. The density of n-butanol is lower than that of diesel fuel. Therefore, a smaller amount of alcohol is pressurized and injected by the fuel pump since the dosage is volumetric [35,61]. However, n-butanol density is higher than that of ethanol [55]. Since the excess volume of liquid blends, which is an indication of the presence of molecular interactions, has strong implications on the fuel consumption and on the sizing of fuel tanks, detailed knowledge of the density of blends is required. There is not much literature about butanol-diesel systems. The excess volume of butanol-diesel blends has been observed to be higher than that of ethanol-diesel blends [62]. Since the carbon chain is larger for butanol with respect to ethanol, the non-polar part of the molecule (aliphatic chain) dominates, reducing the polar character of butanol [63]. Consequently, in butanoldiesel blends, the interaction between the hydroxyl group of the alcohol molecule and the aromatic hydrocarbons is weaker (dispersive forces). Particularly, Aissa et al. [64] and Dubey et al. [65] reported positive excess volume for blends of n-butanol and one of the most usual diesel surrogates (n-hexadecane) at 298. 15 In the case of alcohol-biodiesel blends since strong interactions are formed between the hydroxyl group of alcohols and the ester group of biodiesels (hydrogen bonds), the excess volume is lower than for alcohol-diesel blends. The positive excess volume is even lower for ethanol-biodiesel blends [67,68]. • Higher viscosity. The viscosity affects the atomization of fuel when it is injected into the combustion chamber, the size of the fuel droplets, the formation of engine deposits and the lubricity of the fuel [69,70]. High-viscosity fuels require more energy in the fuel pump and increase wear in the injection system [71]. On the contrary, fuels with excessively low viscosity may not provide sufficient lubrication for the injection system leading to higher pump and injector leakage, increasing the fuel return and, thus, the fuel consumption associated with the higher pumping power. Viscosity values decrease for increasing alcohol contents in alcohol-diesel blends [55]. Results also show that viscosity is not proportional to the volumetric, mass or molar alcohol content [55]. According to the EN 590 standard of diesel fuels, which establishes that viscosity values should be higher than 2 cSt [72], only ethanol-diesel blends with ethanol content up to 36% (v/v) fulfill this requirement [73]. Since the viscosity of alcohol increases with a longer carbon chain, n-butanol blends from 0% to 100% in diesel fuel would have no restriction [73]. However, in the study carried out by Kuszewski [74], where the viscosity of diesel fuel at 40 • C is closer to the lower limit of the EN 590, only butanol-diesel blends up to 7% (v/v) fulfill this requirement. The reduction in viscosity from blending alcohols (ethanol or n-butanol) with diesel fuel can be compensated by adding biodiesel [75,76]. Although alcohols have been widely used in chemical and petroleum industries, accurate and reliable knowledge of their viscosity is required for the design of transport equipment or pipelines [77]. Therefore, generalized correlations for the prediction of the viscosity of liquid mixtures are needed. Among the different methods, Cano-Gómez et al. 
[78] studied different butanol-biodiesel blends and reported that Grunberg-Nissan fit better to experimental data than other modeling methods such as Kendall-Monroe or Bingham equations. The Grunberg-Nissan equation has also been used in different studies [55,73] to model the viscosity of different alcohols (methanol, ethanol, propanol, n-butanol and n-pentanol) with diesel and biodiesel fuels, respectively. • Better lubricity. Controlling the fuel lubricity is essential to protect some engine components with direct contact with fuel, such as injectors, fuel pumps and fuel rails, against wear problems. Pure n-butanol shows better lubricity than pure ethanol [55]. Vinod Babu et al. [60] reported that, in general, the lubricity of pure alcohols improves (leading to a lower wear scar) for increasing molecular weight. For intermediate alcohol concentrations, diesel blends with long carbon chain alcohols (n-butanol and n-pentanol) showed worse lubricity (larger wear scar) than those with a short carbon chain (ethanol and propanol). The lubricity of ethanol-diesel blends at an intermediate ethanol content was shown to be better than expected as a consequence of the alcohol evaporation from the lubricating layer [79,80]. A detailed study about the lubricity of blends of different alcohols (ethanol, propanol, n-butanol and n-pentanol) with diesel fuel [55] reported that, following EN 590 standard [72], which requires a wear scar lower than 460 µm at 60 • C, only those ethanol-diesel blends with an ethanol content higher than 92% or butanol-diesel blends with butanol content above 35%, both volume basis, would not fulfill this standard. • Higher heating value. Alcohols show lower heating value than diesel fuels. Therefore, a higher amount of alcohol is required to produce the same power output in the engine. However, the heating value increases for increasing carbon atom number. Comparing ethanol and n-butanol, the latter has 25% more energy density in volume than ethanol, reducing the fuel consumption needed to keep a specific load in diesel engines [40,81]. The study carried out by Kuszewski [74], where butanol-diesel blends with 5%, 10%, 15%, 20% and 25% (v/v) butanol content were tested, concluded that introducing 25% butanol content reduced the lower heating value by 6% with respect to that of diesel fuel. Since the lower heating value of diesel fuel often ranges from 41 to 44 MJ/kg, butanol-diesel blends up to 17% (v/v) and ethanol-diesel blends up to 10% can be considered within this range [73]. • Better blend stability. Alcohol-diesel blends can be separated into different phases under specific conditions. This stability strongly depends on the temperature, humidity and fuel composition. In fact, when the temperature decreases, the unstable region becomes wider. Additionally, the presence of moisture negatively affects the miscibility. Alcohols with a long carbon chain show better blending stability than those with a low carbon chain [55]. The polarity of alcohols is induced by the hydroxyl group (R-OH), which is among the most polar chemical groups. Since the carbon chain of butanol is higher than that of ethanol, its global polarity is lower. Therefore, better blending stability is observed between butanol and the mainly non-polar structures of diesel fuels [63]. In particular, low blend stability was reported for ethanol-diesel blends, specifically at intermediate ethanol contents (from 15% to 75% ethanol content) [82,83]. In fact, Kwanchareon et al. 
[84] reported the appearance of two liquid phases for ethanol-diesel blends with ethanol content from 20% to 80% by volume for temperatures below 10 • C. However, butanol blends showed better blend behavior. In fact, butanol-diesel blends did not show blend stability problems along the whole butanol range for temperatures above 0 • C [26]. Butanol-diesel blends do not need emulsifying agents since the blend does not separate even after several days [85]. Ethanol-diesel blend stability problems can be compensated by additivation or adding biodiesel to the blend [54,84]. Strong interactions are formed between the hydroxyl group of ethanol and the ester group (R-COO-R') of biodiesel. The intensity of these interactions is even enhanced by the formation of hydrogen bonds [86]. Apart from adding biodiesel to ethanol-diesel blends, miscibility problems in these blends could be conducted by adding an emulsifier or a co-solvent. Emulsifiers allow suspending small droplets of ethanol within the diesel fuel. In order to generate the final blends, emulsification usually requires previous steps such as heating and blending. Cosolvents act as a bridging agent, influencing the molecular bonding and thus leading to a more homogeneous blend [79]. • Better cold-flow properties. Bioalcohols, with a low freezing temperature, have proven to be a sustainable alternative to improve the cold flow properties of diesel fuels (especially biodiesel) [87]. Recent biodiesel filter plugging problems were reported in mild and cold weather countries, causing operating problems mainly attributed to the crystallization of monoacylglycerols of saturated fatty acids, sterol glycosides and other impurities [88][89][90]. As a consequence, additional requirements have been proposed in both European [72] and non-European [91] countries to limit operability problems in (FBT). Among alcohols, the benefits of blending light alcohols such as methanol and ethanol with diesel fuels are limited by their aforementioned weak miscibility. The intersolubility of ethanol-diesel blends decreases for decreasing temperatures, with the cold flow properties (CFPP, CP and PP) being consequently affected by the formation of a gelatinous phase or by phase separation [26]. As a consequence of its better blend stability over a wide range of temperatures for the whole concentration range, n-butanol improves the cold flow properties of diesel fuels (especially for high alcohol content). Regarding alcohol-biodiesel blends, Makareviciene et al. [87] reported that the addition of n-butanol to biodiesel resulted in a gradual decrease in the cloud point and the cold filter plugging point. Bouaid et al. [92] justified that n-butanol improves more significantly the cold-flow properties of diesel and biodiesel fuels than ethanol as a consequence of its less polar character. • Higher cetane number. Among the properties affecting the combustion process, the cetane number is a limiting one. In general, alcohols exhibit low cetane numbers, and therefore, only limited concentrations of these alcohols in the blends are recommended for use in unmodified diesel engines because the cetane number significantly affects the engine efficiency [56]. The higher cetane number of n-butanol with respect to ethanol suggests that its maximum concentration in diesel blends could be increased with respect to that recommended for ethanol [57,61]. 
Based on the cetane number, the literature reports that butanol-diesel blends with n-butanol content up to 40% (v/v) can be used in diesel engines without any engine alteration [60]. Higher butanol content in the blend leads to excessively high ignition delay [22]. The large, premixed phase derived from the high ignition delay results in excessive heat release rates and incylinder pressure peaks [93]. However, taking into account limits proposed by the EN 590 standard, only butanol blends with diesel fuel up to 3% fulfill this limit [22]. This limitation can be compensated with the use of cetane improvers [79]. The mentioned increase in ignition delay for butanol blends is similar when it is blended with diesel or biodiesel fuels. However, some differences appear when ethanol is blended with diesel or biodiesel fuels, with larger delay times in the former case [22]. • Lower enthalpy of vaporization. Since ethanol and butanol have a higher enthalpy of vaporization than diesel fuel, more heat is needed to evaporate the liquid alcohol, resulting in a smaller increase in the gas temperature, which may derive into starting difficulties [61]. Among alcohols, the lower enthalpy of vaporization of n-butanol (620 kJ/kg) with respect to ethanol (944 kJ/kg), suggests that a diesel engine can start more easily operating with butanol than with ethanol at cold ambient conditions [81]. • Better distribution and storage. As n-butanol has a higher flash point and lower volatility than ethanol, butanol blends are safer for transportation, fuel handling and storage than those of ethanol [21,40]. Corrosion in pipelines is mainly attributed to the polarity and the hygroscopic character of the alcohol molecule. Some metals such as magnesium, lead and aluminum are susceptible to chemical attack by alcohol. Furthermore, wet corrosion (mainly caused by the moisture absorption capacity of alcohol) oxidizes most metals. Non-metallic components, especially elastomeric components, are also affected by alcohols [79]. Corrosion acts over the materials used in the fuel delivery and injection systems, among others. Alcohols with high polarity and high water content enhance the corrosive action in the materials. Since ethanol is more polar [62] and more soluble in water than butanol, butanol shows better tolerance to water contamination and, therefore, is more suitable to be distributed through existing pipelines. Furthermore, the less corrosive character of n-butanol with respect to ethanol also contributes to improved storage over longer time periods [20]. Yanai et al. [94] reported that butanol could corrode plastic parts and cause the swelling of rubber components. Nevertheless, the latter could be solved by substituting the rubber sealing material for a material more alcohol tolerant.
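The viscosity point in the list above notes that the Grunberg-Nissan mixing rule has been used to model the viscosity of alcohol-diesel and alcohol-biodiesel blends [55,73,78]. A minimal sketch of that one-parameter rule for a binary blend is given below; the component viscosities and the interaction parameter G12 are illustrative placeholders, not fitted values from those studies, and the rule is written here, as is usual, in terms of mole fractions.

```python
import math

def grunberg_nissan(x1, eta1, eta2, g12):
    """Viscosity of a binary blend from the Grunberg-Nissan rule:
    ln(eta_mix) = x1*ln(eta1) + x2*ln(eta2) + x1*x2*G12,
    where x1, x2 are mole fractions and G12 is an empirical interaction parameter."""
    x2 = 1.0 - x1
    return math.exp(x1 * math.log(eta1) + x2 * math.log(eta2) + x1 * x2 * g12)

# Placeholder values only: ~1.8 mPa*s for n-butanol and ~2.5 mPa*s for diesel
# at 40 C, with a small positive interaction parameter.
for x_butanol in (0.0, 0.2, 0.5, 0.8, 1.0):
    eta = grunberg_nissan(x_butanol, eta1=1.8, eta2=2.5, g12=0.2)
    print(f"x_butanol = {x_butanol:.1f} -> eta = {eta:.2f} mPa*s")
```

The single parameter G12 is normally regressed from a few measured blend points, which is why studies such as [78] report it fitting experimental butanol-biodiesel data better than predictive (parameter-free) rules such as Kendall-Monroe.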
The n-butanol properties previously mentioned have a strong influence on combustion parameters. The presence of n-butanol affects the fuel-air mixing process and the injection spray development. The lower density and the lower kinematic viscosity of n-butanol with respect to diesel fuel lead to a better atomization quality for butanol-diesel blends. In addition, the higher volatility of n-butanol leads to a faster evaporation process. Both better atomization and faster evaporation contribute to form more homogeneous fuel-air mixtures, thus decreasing soot formation [35,60].
Cetane number is another important parameter for combustion quality since it can help to optimize the combustion timing. For n-butanol blends, the maximum pressure reached in the combustion chamber during combustion decreases for increasing butanol contents as a consequence of three effects: the energy effect, represented by the reduction in the heating value; the chemical effect, related to the reduction in the equivalence ratio; and the dilution effect, represented by the over-dilution caused by the longer ignition delay. Both the chemical and dilution effects contribute to reducing the flame velocity and therefore to enhancing the heat transfer to the chamber walls during combustion, making the combustion quality poorer [22].
Studies on N-Butanol Use in Diesel Engines and Vehicles
This section reviews the use of n-butanol as a blending component in diesel engines and vehicles and reports its effects on combustion, performance and gaseous and particle emissions.
Fossil fuels have often been partially replaced by renewable fuels to reduce both the environmental impact and the dependence on conventional fuels in internal combustion engines. Most of the butanol-diesel emission results found in the literature were obtained under steady conditions in a Euro 5 (or earlier) engine test bench under warm ambient conditions [19,85,95].
In general, under steady conditions, the authors observed a sharp decrease in PM emissions for butanol blends (due to the role played by the oxygen content to inhibit soot formation and to enhance soot oxidation) [36,96] with respect to 100% diesel fuel. In terms of gaseous emissions, the literature reports an increase in total hydrocarbons (THC) emissions for butanol blends [34,97]. However, there is no consensus regarding CO and NO X emissions. In the study by Choi et al. [32] CO emissions increased with respect to diesel fuel, whereas in the study presented by Chen et al. [98], the opposite was reported. NO X emissions remained constant in the tests carried out by Siwale et al. [31], whereas in other studies, slight increases [6] and decreases [99] were observed. Rakopoulos et al. [19,100] reported that these blends tend to reduce both particle and NOx emissions simultaneously. Most of the studies observed an increase in fuel consumption for butanol blends associated with its lower heating value [96,98,101,102]. In addition, the lower cetane number of butanol-diesel blends leads to an increase in the ignition delay [103]. The delayed start of combustion will prolong the combustion process, reducing the energy that can be efficiently converted into effective power in the cylinder [104]. This increase reached around 10% for Bu5D (5% butanol 95% diesel, volume basis) and around 14% for Bu25D (25% butanol 75% diesel, volume basis) in tests carried out by Atmanli et al. [99]. The differences in fuel consumption, described above when butanol is introduced, almost disappear in terms of energy consumption [37]. Since the energy consumption (inversely proportional to the engine efficiency) is determined as the product of the fuel consumption by the lower heating value, the lower heating value of butanol blends practically compensates for the increase in fuel consumption [95].
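Because the consumption comparisons above hinge on the lower heating value of the blend, the sketch below works through the volumetric blending arithmetic. The densities and heating values used (around 835 kg/m3 and 43 MJ/kg for diesel, 810 kg/m3 and 33 MJ/kg for n-butanol) are typical literature figures rather than the measured properties of the fuels tested in the studies cited, so the outputs should be read as orders of magnitude only.

```python
# Estimate the blend lower heating value (LHV) and the extra fuel mass needed
# to deliver the same energy as neat diesel. Property values are typical
# literature figures used only for illustration.

RHO_DIESEL, LHV_DIESEL = 835.0, 43.0    # kg/m3, MJ/kg
RHO_BUTANOL, LHV_BUTANOL = 810.0, 33.0  # kg/m3, MJ/kg

def blend_properties(v_butanol):
    """Return (density, mass-based LHV) of a butanol-diesel blend for a given
    butanol volume fraction, assuming ideal (additive) mixing of volumes."""
    rho = v_butanol * RHO_BUTANOL + (1 - v_butanol) * RHO_DIESEL
    mass_butanol = v_butanol * RHO_BUTANOL
    lhv = (mass_butanol * LHV_BUTANOL + (1 - v_butanol) * RHO_DIESEL * LHV_DIESEL) / rho
    return rho, lhv

for label, vb in (("Bu5D", 0.05), ("Bu16D", 0.16), ("Bu25D", 0.25)):
    rho, lhv = blend_properties(vb)
    extra_mass = LHV_DIESEL / lhv - 1.0  # extra fuel mass for the same energy input
    print(f"{label}: LHV = {lhv:.1f} MJ/kg, about {extra_mass:.1%} more fuel mass for the same energy")
```

With these assumed properties, a 25% (v/v) butanol blend comes out roughly 6% lower in heating value than neat diesel, in line with the reduction reported by Kuszewski [74], which is why the fuel-consumption penalty largely disappears when results are expressed on an energy basis.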
In studies following transient conditions, the trends previously described for steady conditions were confirmed. The effect of n-butanol addition on the performance and emissions following the New European Driving Cycle (NEDC) was studied by Armas et al. [33] and Kozak [30] in different Euro 4 diesel engines and by Lapuerta et al. [29,38] in a Euro 6 diesel engine. Some of these studies were carried out in an engine test bench simulating the NEDC driving cycle [29,33], whereas others have studied the use of n-butanol-diesel blends in a chassis dynamometer [30,38]. Only one study was found in the literature testing a Euro 6 vehicle in the chassis dynamometer under the NEDC cycle at a cold ambient temperature [38]. Similar to stationary tests, these studies concluded that THC emissions increase and particulate matter sharply decreases for increasing butanol content. Concretely, the literature has reported that in both engine and vehicle tests, the particle number and particle mass emissions were reduced as the blend of butanol increased to 16% (v/v), leading to fewer and finer particles. However, for butanol blends higher than 16% (v/v), particle number and particle mass increased [29,38]. Therefore, particle emissions were found to be minimized for this blend (16% butanol 84% diesel, volume basis). Regarding gaseous emissions, there is no consensus about CO and NO X gaseous emissions for butanol blends. When the engine is fueled with butanol blends, Kozak [30] reported that CO emissions increase, whereas Armas et al. [33] concluded a reduction in CO emissions. NO X emissions remained constant in tests carried out by Lapuerta et al. [38], where a Euro 6 light-duty diesel vehicle was tested following the NEDC at warm and cold ambient conditions, and in tests carried out by Kozak [30] testing a Euro 4 passenger car following the NEDC cycle under warm ambient conditions. However, NO X emissions increased in the study carried out by Armas et al. [33], where a Euro 4 engine was tested under the simulated NEDC in the engine test bench at a warm ambient temperature.
For those regulated emissions with no clear trend (CO and NO X emissions), a schematic diagram is shown in Figure 3 summarizing trends described by authors about the use of butanol-diesel blends in diesel engines. In terms of CO emissions, 52% of studies reviewed concluded an increase, whereas 43% of the authors reported a reduction in CO emissions when butanol blends are introduced. Regarding NO X emissions, 46% of the studies concluded that introducing butanol-diesel blends is beneficial, and only 36% of them observed an increasing trend. A total of 18% of the publications reviewed reported that NO X emissions remained constant when butanol blends are introduced.
Regarding the cold startability of butanol-diesel blends, Miers et al. [34] studied butanol-diesel blends with 20% and 40% butanol content (v/v) in a light-duty vehicle under transient conditions at a warm ambient temperature, concluding that butanol blends with butanol contents lower than 40% (v/v) could be successfully used in a diesel engine calibrated for 100% diesel fuel without startability problems. However, vehicle driveability decreases noticeably when Bu40D is introduced, and an ECU recalibration would be needed for satisfactory engine operation; increased roughness was reported during both steady conditions and acceleration events. This is in agreement with the study carried out by Lapuerta et al. [38], where a Euro 6 light-duty diesel vehicle was tested in a chassis dynamometer following the NEDC under two different ambient conditions (24 and −7 °C). That study concluded that butanol-diesel blends up to 20% butanol content (v/v) could be introduced without startability and driveability problems at 24 °C, whereas only butanol-diesel blends up to 13% (v/v) could be introduced without startability problems at −7 °C.
At −7 °C, some driveability difficulties were also reported for this n-butanol blend (13% n-butanol 87% diesel fuel, volume basis). Table 2 shows a detailed summary of the different studies found in the literature focused on the performance and regulated emissions of butanol-diesel blends in diesel engines and vehicles under stationary or transient conditions. Since most of the reviewed engines were water-cooled, the cooling system is specified only when it is not water-cooled. Since the number of studies found in the literature blending butanol with other fuels or studying unregulated emissions is lower, these are not included in Table 2 but are discussed below.
Since the butanol content in the blend is mainly limited by the cetane number, the flashpoint and the heating value, only low alcohol contents are of interest, because for higher alcohol contents both the heating value and the cetane number decrease, the latter falling even below the lower limit established in the EN 590 standard [72]. Following the targets established in the latest directives promoting biofuels, the optimal range could reach up to 20% (v/v) butanol content. Table 2 shows that most authors tested butanol-diesel blends up to 16-20% (v/v), reporting no negative effect on energy consumption for these blends with respect to reference diesel fuels. Higher butanol concentrations are generally discarded for the reasons mentioned above. Although particle emissions reach their minimum at 16% (v/v) butanol content, the workable range is reduced to 13% (v/v) diesel substitution by butanol when startability is considered [38].
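As a rough illustration of why the cetane number caps the practical blend ratio, the sketch below estimates the cetane number of butanol-diesel blends with a simple volume-weighted mixing rule and compares it against an EN 590-style floor of 51. The base-fuel and butanol cetane numbers and the linear mixing assumption are simplifications chosen for illustration only, not values taken from the studies in Table 2.

```python
# Illustrative screening of butanol-diesel blends against a cetane-number floor.
# Linear volume-weighted mixing and the property values below are assumptions.

DIESEL_CN, BUTANOL_CN = 54.0, 17.0   # assumed cetane numbers of the base fuels
CN_FLOOR = 51.0                      # EN 590-style minimum cetane number

def blend_cetane(butanol_vol_frac: float) -> float:
    """Cetane number of the blend assuming linear mixing by volume."""
    x = butanol_vol_frac
    return (1 - x) * DIESEL_CN + x * BUTANOL_CN

for pct in (5, 10, 13, 16, 20):
    cn = blend_cetane(pct / 100)
    status = "meets" if cn >= CN_FLOOR else "falls below"
    print(f"Bu{pct}D: estimated CN {cn:.1f} ({status} the {CN_FLOOR:.0f} floor)")
```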
Although few studies were found blending butanol with other biofuels, the literature also reports engine tests with butanol-biodiesel blends. Jeevahan et al. [81] studied butanol-biodiesel blends with 10%, 20%, 30%, 40% and 50% butanol content (v/v) under four different engine loads, concluding that the addition of butanol reduces specific fuel consumption (defined as the fuel consumed by the vehicle per distance traveled), CO, THC and NO X gaseous emissions. Yilmaz et al. [105] also studied butanol-biodiesel blends at 5%, 10% and 20% (volume basis) under different load conditions, reporting that n-butanol increases CO and THC and reduces NO X emissions. Cedik et al. [106] studied ternary blends (butanol-biodiesel-diesel) with 10% n-butanol, 20% biodiesel and 70% diesel, and with 20% n-butanol, 20% biodiesel and 60% diesel, at different stationary conditions. Tests showed an increase in CO and THC emissions and a decrease in NO X and particle emissions.
Apart from regulated gaseous emissions, there is a smaller number of studies measuring unregulated pollutants, which are generally emitted from the engine exhaust at much lower concentrations. These emissions are also important because they have potential health effects on humans and animals [41].
Among unregulated emissions, carbonyl compounds have received the highest attention. They are mainly formed by aldehydes and ketones, and they have a carbonyl group (a carbon atom linked to an oxygen atom by a double bond). Formaldehyde and acetaldehyde, which are the predominant carbonyls in the exhaust for vehicles, are toxic contaminants, mutagens and carcinogens [107]. Although few studies were found in the literature regarding unregulated emissions from butanol blends, it was concluded that, in general, alcohol blends with diesel fuels lead to higher carbonyl compound emissions than diesel fuel [41,108]. The high volatility of alcohols makes them partially escape from the combustion chamber mixed with the exhaust gas without being completely oxidized. Ballesteros et al. [109] reported that carbonyl emissions are slightly higher for butanol-diesel blends than for ethanol ones.
Exhaust emissions from diesel vehicles include aromatics such as benzene, toluene and xylene, often called BTX (benzene-toluene-xylene). According to the California Air Resources Board, benzene is a human carcinogen and may cause leukemia [110]. In general, BTX emissions decrease when alcohols are blended with diesel fuels, especially at a high engine load, and consequently, high exhaust temperature [111]. BTX emissions are influenced by the reduction in the combustion temperature when the alcohol is introduced (making BTX oxidation difficult) and by the oxygen in the alcohol molecule, which promotes BTX oxidation and thus contributes to the reduction in benzene emissions [112]. No information regarding BTX was found specifically for butanol-diesel blends.
In terms of the soot reactivity, the literature [113,114] concludes that butanol-diesel blends reduce the soot primary particle diameter and the soot mass density with respect to that of diesel fuel. The regeneration process of the diesel particle filter (DPF) is mainly affected by the exhaust gas composition, the flow rate, the temperature, the flow profiles through the filter channels and the physicochemical characteristics of the soot [115]. Therefore, the particle reduction, together with the better soot reactivity when butanol blends are used [116,117], contributes to a decrease in the DPF regeneration frequency, and therefore, lower oil dilution [118], lower fuel consumption and a longer after-treatment lifetime [119] can be achieved.
After describing and discussing the trends derived from the introduction of butanol as a blending component for diesel fuel in diesel engines, the reasons used by the authors to explain these trends in terms of regulated gaseous (CO, THC and NO X ) and particle emissions are discussed in the following points.
• CO and THC emissions are highly influenced by the ambient temperature, load, turbocharging and fueling system. Chen et al. [98] reported that CO emissions increase at a low load and are reduced at a high load.
Although there is no consensus regarding CO emissions, most of the authors attributed the increases in CO and THC emissions for butanol blends with respect to diesel fuel to the high enthalpy of vaporization of n-butanol [120]. The fuel evaporation contributes to reducing the in-cylinder temperature, in particular during a cold start. Miers et al. [34] reported that n-butanol enhances the diesel oxidation catalyst (DOC) activity. This study concluded that, although the exhaust gas temperature upstream of the DOC was lower for butanol-diesel blends than for reference diesel fuel, the trend downstream of the DOC was reversed due to the oxidation activity inside the catalyst. This trend was confirmed in a Euro 6 diesel engine following an NEDC cycle [29]. These results are very promising for DPF and lean NO X trap (LNT) systems, which need a high temperature for their regeneration.
• Nitrogen oxide emissions are strongly dependent on temperature, local oxygen concentration and combustion duration [61]. Among the different strategies to reduce NO formation, delaying the fuel injection timing (thus affecting engine efficiency) and recirculating the exhaust gas are the most commonly used. In the latter, the introduction of cooled exhaust gas into the combustion chamber results in the dilution of the air charge by replacing O2 with non-reacting CO2 and H2O. Therefore, the in-cylinder local combustion temperatures are reduced, thus inhibiting NO formation.
Regarding NO X emissions from butanol-diesel blends, the literature reports no clear trend because several factors compensate for one another when the engine is tested. Among the factors that contribute to reducing NO X emissions [61] are the higher enthalpy of vaporization of butanol with respect to diesel fuel, which means that less heat is available to increase the gas temperature [33], and the lower adiabatic flame temperature of n-butanol, derived from its lower C/H ratio [121].
On the other hand, an engine calibrated for diesel fuel operating with butanol-diesel blends requires higher fueling to achieve the demanded power. Since accelerator position is one of the inputs of the engine calibration maps, a decrease in the exhaust gas recirculation (EGR) rate is established in order to increase the air mass flow. Consequently, NO formation increases.
• N-butanol contributes to significant benefits in particulate matter emissions: the butanol molecule increases the oxygen concentration in the butanol-diesel blend, enhancing the soot oxidation process [49], and the higher reactivity of n-butanol blends with respect to diesel fuel further improves soot oxidation [116].
Since soot formation mainly takes place in the fuel-rich zone at high temperature and pressure conditions, the oxygenated character of n-butanol leads to a local reduction in fuel-rich regions and thus limits soot formation [61].
In fact, the hydroxyl group of the butanol molecule contributes to reducing soot formation, and consequently particulate emissions, even more than other functional groups with similar oxygen content [49]. Molecules with oxygen atoms single-bonded to a carbon atom (such as alcohols and ethers) are more effective at reducing PM emissions than those having double bonds (such as alkyl esters), because the oxygen in the alcohol or ether is more effective at suppressing soot than the oxygen in the ester for equivalent oxygen content [122]. This was confirmed by the group contribution method of Barrientos et al. [123], in agreement with other authors [50]. Blending diesel fuel with n-butanol also reduces the aromatic and sulfur content of the blend (the latter does not have a significant influence because of the low sulfur content in current diesel fuels [72]), leading to a reduction in particulate matter emissions since these compounds are generally considered as soot precursors.
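To give a feel for the oxygen enrichment invoked above, the sketch below estimates the oxygen mass fraction of butanol-diesel blends, taking n-butanol (C4H9OH) as roughly 21.6 wt% oxygen and diesel fuel as oxygen-free; the densities are assumed typical values rather than measurements from the cited studies.

```python
# Sketch of the oxygen mass fraction of butanol-diesel blends as a function of
# butanol content. n-Butanol (M = 74.12 g/mol) carries one oxygen atom per
# molecule; diesel fuel is taken as oxygen-free. Densities are assumed values.

O_MASS_FRAC_BUTANOL = 16.00 / 74.12      # ~0.216 oxygen mass fraction of n-butanol
RHO_DIESEL, RHO_BUTANOL = 0.835, 0.810   # kg/L, assumed typical values

def blend_oxygen_mass_fraction(butanol_vol_frac: float) -> float:
    """Oxygen mass fraction of the blend (diesel assumed oxygen-free)."""
    x = butanol_vol_frac
    m_butanol = x * RHO_BUTANOL          # mass of butanol per litre of blend
    m_diesel = (1 - x) * RHO_DIESEL      # mass of diesel per litre of blend
    return m_butanol * O_MASS_FRAC_BUTANOL / (m_butanol + m_diesel)

for pct in (5, 10, 16, 20):
    print(f"Bu{pct}D: ~{blend_oxygen_mass_fraction(pct / 100):.1%} oxygen by mass")
```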
Although n-butanol blends reduce PM emissions, the soluble organic material from the PM emissions of butanol-diesel blends shows higher genotoxic activity than that of diesel fuel. This could be a barrier to butanol penetration in the market. However, further research is necessary to validate this conclusion because limited information was found in the literature. In any case, genotoxicity levels are higher for ethanol blends than for butanol blends [124]. Diesel fuels can be blended with bioalcohols, particularly with n-butanol, as a means to introduce a renewable fraction and to provide certain oxygen content. Oxygenated fuels such as alcohols are an effective way to reduce particle emissions. In fact, the butanol molecule contributes to increasing the oxygen concentration in the butanol-diesel blend, enhancing the soot oxidation process, and also contributes to reducing the fuel-rich regions, limiting soot formation [123]. Introducing butanol leads to fewer and smaller particles, and thus to smaller mean diameters [38]. The reduction in soot is beneficial for users, as the frequency of active particulate regeneration is decreased, and with it the extra fuel consumption and the eventual annoyance caused by after-treatment maintenance.
Conclusions
This section summarizes the main conclusions derived from the use of butanol as a biofuel for diesel engines used in road freight transportation, tractors, harvesters and cogeneration. Specifically, the sustainability of lignocellulosic butanol, the properties of butanol-diesel blends and their influence on combustion parameters, transportation, fuel handling and storage, as well as butanol applications as a blending component for diesel fuels in commercial engines are summarized in this section.
N-butanol, produced from biological processes such as ABE fermentation, was reported to have a significant potential to reduce life-cycle greenhouse gas emissions with respect to fossil diesel fuel. Advanced processes to produce lignocellulosic biobutanol can achieve 60% lower GHG emissions than diesel fuel, whereas the reduction for the same fuel from conventional feedstock reaches 35%.
Although the alcohol most commonly used as a fuel component in the transport sector is ethanol, the higher cetane number of n-butanol, together with its higher heating value, better viscosity, better lubricity, better cold-flow properties and better miscibility with diesel, particularly at a low temperature, suggest that n-butanol is a better renewable component than ethanol in diesel blends. Furthermore, since n-butanol has a higher flash point, lower volatility and less corrosive character than ethanol, butanol blends are safer for transportation, fuel handling and storage than those of ethanol.
When butanol is introduced directly by blending with diesel fuel in the fuel tank, additional engine modifications or ECU recalibrations are not needed in a diesel engine calibrated for 100% diesel fuel up to 40% (v/v) butanol content. In most of the studies found in the literature testing butanol-diesel blends in diesel engines and vehicles, tests were made in a Euro 5 (or older) diesel engine, under steady conditions and at a warm ambient temperature. In general, the authors concluded that the presence of n-butanol contributes to a sharp decrease in PM emissions up to 16% butanol content (v/v) and to an increase in THC emissions for increasing butanol content. However, there was no consensus regarding CO and NO X emissions. Most of the studies observed an increase in fuel consumption for butanol blends, in line with the lower energy content of butanol compared to diesel fuel. The literature also concludes that a high butanol content in diesel can cause startability problems due to the very low cetane number of butanol. Concretely, startability problems are reported for butanol-diesel blends above 13% butanol content at a cold ambient temperature, whereas no startability problems were reported up to 40% butanol content in tests at a warm ambient temperature.
Implementation of a Triage Protocol Outside the Hospital Setting for Timely Referral During the COVID-19 Second Wave in Chennai, India
India experienced a surge in COVID-19 cases during the second wave in the period of April-June 2021. A rapid rise in cases posed challenges to triaging patients in hospital settings. Chennai, the fourth largest metropolitan city in India with an 8 million population, reported 7564 COVID-19 cases on May 12, 2021, nearly 3 times higher than the number of cases in the peak of COVID-19 in 2020. A sudden surge of cases overwhelmed the health system. We had established standalone triage centers outside the hospitals in the first wave, which catered to up to 2500 patients per day. In addition, we implemented a home-based triage protocol from May 26, 2021, to evaluate patients with COVID-19 who were aged ≤45 years without comorbidities. Among the 27,816 reported cases between May 26 and June 24, 2021, a total of 16,022 (57.6%) were aged ≤45 years without comorbidities. The field teams triaged 15,334 (55.1%), and 10,917 (39.2%) patients were evaluated at triage centers. Among 27,816 cases, 19,219 (69.1%) were advised to self-isolate at home, 3290 (11.8%) were admitted to COVID-19 care centers, and 1714 (6.2%) were admitted to hospitals. Only 3513 (12.7%) patients opted for the facility of their choice. We implemented a scalable triage strategy covering nearly 90% of the patients in a large metropolitan city during the COVID-19 surge. The process enabled early referral of high-risk patients and ensured evidence-informed treatment. We believe that the out-of-hospital triage strategy can be rapidly implemented in low-resource settings.
Introduction
India experienced a surge of cases and deaths in the second wave of COVID-19 in 2021. The highest number of daily cases was 0.4 million on May 6, 2021, which was 4 times the highest number of reported cases in the first wave [1]. The rapid rise in cases overwhelmed the health system. There was a surge in hospital admissions, and many patients required either an oxygen bed or intensive care units during the peak [2]. The sudden surge in the number of cases was attributed to the highly infectious Delta mutant variant of COVID-19 (B.1.617 lineage) and lack of compliance with COVID-19-appropriate behaviors [3]. Tamil Nadu, one of the Southern States in India, had nearly 1.2 million cases and 17,855 deaths between May 01, 2021, and June 24, 2021 [4].
Local Setting
With a population of 8 million, Chennai city is the capital of Tamil Nadu state in India. Greater Chennai Corporation (GCC) is the administrative authority of Chennai, with 15 administrative zones and 200 divisions. Chennai reported a maximum of 2358 cases on June 30, 2020, during the first wave [1]. During the second wave, the highest reported cases were 7564 on May 12, 2021, nearly thrice as high as the number of cases in the peak of the first wave [1].
Having no preplanned pandemic response, GCC adopted a community-centered, patient-friendly strategy devised by a multidisciplinary team of public health experts to combat the first wave of the COVID-19 pandemic [5]. The key strategies included surveillance, testing, contact tracing, triage centers, facility-based isolation, supervised home isolation, and quarantine [5]. These strategies were implemented at once in the early phases of the first wave of the pandemic [5]. Establishing triage centers on an ad hoc basis outside the hospital settings for early identification of severe COVID-19 was one of the strategies. GCC established 12 triage centers across all parts of Chennai in 2 weeks in the first wave (April-June 2020), which remained functional until the end of the second wave of the pandemic [5]. The triage centers were conveniently located in public buildings, such as educational institutions, community halls, and stadiums for easy access. Information about these triage centers was communicated through social media. All the services were free of charge in public sector facilities and under the Government Comprehensive Health Insurance scheme in private health facilities for eligible beneficiaries. There were challenges in implementing these strategies, especially due to the scarce workforce. Tamil Nadu state has a well-structured, well-staffed state-level public health department with a highly trained workforce catering predominantly to the rural population. This workforce was mobilized to support various managerial and field-level activities, such as surveillance, contact tracing, sample collection, clinical care, and strategic planning in Chennai. These mobilized, trained workforces joined the health care team of GCC and implemented the strategies. Health care workers were used for technical activities, such as testing and treatment. Additionally, a non-health care workforce and trained community volunteers were used for strategic planning and logistics. GCC used these health care and non-health care workforces for the functions of triage centers. The combination of public health strategies, test-trace-isolate, and appropriately timed restrictions helped control the first wave without burdening the health system [5].
The sudden surge of cases overwhelmed the health system. More than 90% of oxygen beds were occupied on May 24, 2021 [4]. The 12 triage centers could evaluate up to 2500 cases per day, which was inadequate during the second wave. Inability to assess patients within 24 hours after diagnosis led to panic among patients. They either rushed to hospitals or called helplines for evaluation, treatment, and hospital beds. The Government of Tamil Nadu introduced a new triage protocol to evaluate patients at home or in field-based settings (Figure 1) [6]. GCC developed a patient-centric, field-based strategy to assess confirmed cases of COVID-19 at home or in settings close to their homes according to the protocol. This paper describes the feasibility, challenges, and lessons learned from a patient-centric, outside-the-hospital triage strategy.
Description of the Triaging Intervention
On May 26, 2021, we implemented an evidence-informed triage protocol for COVID-19 cases at a field level. Besides the 12 triage centers established in the first wave of the pandemic, we formed field triage teams with a repurposed workforce of 800 paramedics and 200 doctors. Each field triage team could screen 80-100 cases per day; altogether, the 200 field triage teams had the capacity to screen an additional 20,000 patients per day. Each field triage team had 4 paramedics and 1 doctor and catered to one of the 200 divisions.
Patients were tested first in one of the facilities with real-time reverse transcription polymerase chain reaction testing. These facilities included mobile testing teams, public sector walk-in testing centers, private sector labs, and hospitals. Irrespective of the testing center, all results were uploaded to an integrated web-based data management portal. The GCC emergency operation center collected the line list, stratified it by division, and sent it to the respective triage teams. Each team was supplied with a line list of COVID-19-positive cases. A member of the field triage team contacted the allotted cases, informed them of the time of the home visit, and screened the patient at the doorstep. Each team member assessed a maximum of 30 COVID-19 cases at the doorstep. Every team member had a thermal scanner, pulse oximeter, sphygmomanometer, glucometer, personal protective equipment, hand sanitizer, and vehicle for logistics. The equipment was procured through pandemic relief funds and donations. The protocol enabled the paramedic to do the initial evaluation and make decisions regarding home isolation or hospitalization after consultation with a doctor.
The triage team referred patients ≥45 years of age and patients <45 years of age with comorbidities to triage centers directly. The team visited patients ≤45 years without any comorbidities. A paramedic asked for the symptoms and comorbidities and measured oxygen saturation with the pulse oximeter. If the patient had oxygen saturation (SpO2) <94% or high-grade fever >38.8 °C, they were referred to triage centers for detailed evaluation.
The standalone triage centers outside the hospitals had 4 doctors and 8 paramedics. The centers were equipped with pulse oximeters, thermal scanners, sphygmomanometers, a chest x-ray unit, and a cell counter. The doctors referred patients with mild to moderate illness or those with a lack of adequate facilities for home isolation to facility-based isolation units known as "COVID Care Centers" (CCC). Patients with COVID-19 who had severe infections were referred to the hospital.
The criteria for home isolation for patients triaged at home or standalone triage centers were mild symptoms and SpO2 >94%. Other criteria for home isolation were the availability of a separate room with an attached bathroom or toilet and a caregiver. Patients were given a home isolation kit with medications, namely paracetamol, vitamin C, and zinc tablets [6]. Doctors in the telemedicine center followed up with patients in home isolation for 10 days. If they reported red flag signs, they were transferred to the hospital by special ambulances.
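The referral logic described in the preceding paragraphs can be summarized as a simple decision sequence. The sketch below is an illustrative reading of that protocol; the function, its field names, and the return labels are hypothetical and do not correspond to any software actually used by GCC.

```python
# Minimal sketch of the field-level triage decision described above.
# Thresholds follow the protocol in the text; names are illustrative only.

def field_triage(age: int, has_comorbidity: bool, spo2: float, temp_c: float,
                 can_isolate_at_home: bool) -> str:
    """Return the first referral decision for a confirmed COVID-19 case."""
    # Older patients and those with comorbidities go straight to a triage centre.
    if age >= 45 or has_comorbidity:
        return "triage centre (direct)"
    # Doorstep screening for younger patients without comorbidities.
    if spo2 < 94 or temp_c > 38.8:
        return "triage centre (referred from doorstep)"
    # Mild cases: home isolation only if a separate room and caregiver are available.
    return "home isolation" if can_isolate_at_home else "COVID care centre"

print(field_triage(age=30, has_comorbidity=False, spo2=97, temp_c=37.2,
                   can_isolate_at_home=True))   # -> home isolation
print(field_triage(age=30, has_comorbidity=False, spo2=92, temp_c=37.2,
                   can_isolate_at_home=True))   # -> triage centre (referred from doorstep)
```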
Ethical Considerations
This study has been approved by the institutional human ethics committee of Indian Council of Medical Research-National Institute of Epidemiology, Chennai, India (NIE/IHEC/202004-07). Informed consent was not obtained because the survey team did not interview any human participants. We did not handle any information containing personal identifiers. No monetary or other benefits were given to the participants of the study.
Results
We analyzed the triaging data from May 26, 2021, to June 24, 2021. Overall, Chennai reported 27,816 cases during this period. Among the reported cases, 16,022 (57.6%) were aged <45 years with no comorbidities, hence eligible for triaging at home. The field teams triaged 15,334 (55%), and the rest opted for evaluation in the facility of their choice. Among those who underwent triage at home, 13,386 (48%) were recommended home isolation, and 1948 (7%) were referred to triage centers (Figure 2).
Of the 27,816 cases, 11,794 (42.4%) were ≥45 years or <45 years with comorbidities. Nearly one-third of the patients (n=8969) were directly evaluated at standalone triage centers, in addition to 1948 referred after triaging at home. Only 2825 (10%) sought treatment at facilities of their choice (Figure 2).
Overall, the doctors advised home isolation for 19,219 (69.1%) of the patients in this period. All the patients in home isolation were followed for 10 days through telemedicine and home visits, if required. Among those in home isolation, we identified 271 (1.4%) with SpO2 <94% and referred them to secondary or tertiary centers for treatment.
Nearly one in 10 (11.8%) patients were admitted to CCCs, which catered to patients who did not have facilities for home isolation or had symptoms requiring monitoring of SpO2 in the range of 90%-94%. According to the treatment protocol, the CCC was equipped with oxygen concentrators and medications. Only 1714 (6%) patients required hospitalization (Figure 2).
Lessons Learned
We executed a field-based triage strategy that enabled access to timely evaluation for nearly half of the cases at home and another 40% in the nearest standalone triage center during the surge of the COVID-19 cases. The strategy was feasible and rapidly scalable in a large metropolitan city, such as Chennai (Textbox 1). The process enabled early identification and referral of patients with moderate to severe illness, including silent hypoxia. The protocol was pragmatic and based on the oxygen saturation levels using pulse oximeters, an objective tool for clinical evaluation [7]. Globally, the experts recommend pulse oximetry as one of the best triaging tools [7]. Timely procurement and distribution of pulse oximeters to all triage teams played a significant role in the successful implementation of the protocol. This approach ensured that ambulances and scarce oxygen beds were allocated to patients who needed it the most, irrespective of their affluence or socioeconomic status.
Ensuring access to health care facilities is essential and critical in preventing deaths and panic among patients [8]. The process reduced crowding in the hospital clinics. Low- and middle-income countries, like India, often experience a shortage of workforce during the routine functioning of health care, and such shortages may worsen during times of surge [9]. This approach prevented the sudden influx of all patients to the hospitals, and the precious human resources were well used for COVID-19 management. On August 6, 2021, the protocol was modified, and the admission criteria were relaxed after the number of cases decreased. The patients were referred to secondary or tertiary care institutions if they required hospitalization [10]. Tamil Nadu state adopted Government of India COVID-19 treatment guidelines. The Government of India has modified the treatment guidelines several times based on the best available evidence, and the latest guidelines were published on January 5, 2023 [11].
The community acceptance of the intervention was high. The good handling of the first wave of the pandemic by GCC gained the trust of the community [5,12]. Besides, the high cost of the private health care system was concerning [13]. There were reports of overuse of computed tomography scans and irrational medications in the second wave in India [14]. On the other hand, the services in the public health system were of good quality and easily accessible at no cost. These reasons could have motivated the community to use the services in the public health system. The protocol included only evidence-informed evaluation and treatment, hence minimized the use of irrational diagnostics and drugs. The early isolation of cases reduced the disease spread in the community. Globally, the telemedicine approach for the assessment, evaluation, and follow-up of patients with COVID-19 is well accepted [15]. This protocol incorporated telemedicine-based follow-up of patients until recovery for identifying red flags and timely referrals.
One limitation was lack of data regarding 3513/27,816 (12.7%) of the patients who opted for facilities of their choice. We might have underestimated the overall hospitalizations due to a lack of information about this group. Similarly, we could not analyze the data of 80 (0.3%) patients evaluated in triage centers due to reporting errors.
The strength of our approach was that we generated real-time evidence of a triaging protocol for patients in or closer to their homes and outside the hospital settings.
The strategy is replicable and can be used in low-resource settings during the COVID-19 surge or similar outbreaks. The field-based approach reduces spread, ensures timely referral, saves lives, and ensures appropriate use of scarce resources.
Textbox 1. The feasibility of triaging at a field level for early referral of high-risk patients with COVID-19 in a large metropolitan city.
• Measurement of oxygen saturation through pulse oximetry is a simple objective tool to triage COVID-19 cases in low-resource settings and for early identification of hypoxia.
• Triaging patients with COVID-19 outside hospitals was feasible and rapidly scalable in a large metropolitan city.
• Field triaging of COVID-19 was patient-friendly and well accepted by the community; it reduced panic among the public and crowding in the hospitals.
Key implications
• Field triaging is a feasible strategy to identify and refer high-risk patients in low-resource settings
• Pulse oximeter is a simple tool to quantitatively triage patients with COVID-19 in a field-based setting
• Field-triaging strategy could reduce additional burden on health facilities
Figure 2. Outcomes of the out-of-hospital triage of the COVID-19 cases in Chennai, Tamil Nadu, India during May-June 2021. RT-PCR: real-time reverse transcription polymerase chain reaction.
EVALUATION OF HYPOGLYCAEMIC, HYPOLIPIDAEMIC AND NON-TOXIC EFFECT OF HYDRO-METHANOLIC EXTRACTS OF ZIZIPHUS MAURITIANA, ZIZIPHUS SPINA CHRISTI FRUIT AND GLIBENCLAMIDE ON ALLOXAN INDUCED DIABETIC RATS
The use of plant products in the management of diabetes has gained ground in pharmacotherapy. It therefore becomes imperative to evaluate the antidiabetic effects of fruit extracts of Ziziphus mauritiana (HMZM), Ziziphus spina christi (HMZS) and glibenclamide on blood glucose, total protein, albumin and lipid profile in alloxan-induced diabetic rats. 68 albino rats weighing 70-130 g were used in this study. 26 rats were used for acute toxicity testing of Ziziphus mauritiana and Ziziphus spina christi. 42 rats in 7 groups of 6 rats each were used to test the antidiabetic effects of the Ziziphus mauritiana and Ziziphus spina christi plant extracts. Group 1 served as the negative control; groups 2-7 were intraperitoneally administered 360 mg/kg of alloxan in normal saline. Group 2 served as the positive control; groups 3 and 4, and 5 and 6, were respectively administered daily doses of 200 and 400 mg/kg of HMZM and HMZS, and group 7 was administered 0.21 mg/kg of glibenclamide. Results showed Ziziphus mauritiana and Ziziphus spina christi to be non-toxic at a dose of 5000 mg/kg. 48 hours after alloxan administration, blood glucose levels were found to be significantly higher (P < 0.05) in groups 2-7 compared to group 1, thus confirming induction of diabetes. In groups 3-7, on days 7 and 14 of administration of the extracts and glibenclamide, there were significantly lower (P < 0.05) serum glucose, triacylglycerol, high-density lipoprotein and low-density lipoprotein, and significantly higher (P < 0.05) serum levels of total protein and albumin, compared to group 2. Overall, the results showed a significantly lower (P < 0.05) serum level of glucose. The effects of the HMZM and HMZS fruit extracts on blood glucose, total protein, albumin and lipid profile were dose dependent. Conclusively, this study has demonstrated the antidiabetic effects of HMZM and HMZS, with HMZS having a more pronounced effect on blood glucose and lipid profile.
INTRODUCTION
Diabetes may be defined as a group of diseases resulting from insufficient or no insulin production, or a combination of both 1. Diabetes mellitus is ranked the 7th killer disease in the world 2. The International Diabetes Federation stated that about 366 million people are living with diabetes and this figure is projected to increase to 552 million by the year 2030 3. It has also been stated that, according to WHO, the number of diabetic patients is estimated to increase to at least 300 million by 2025 4. The important regulatory role of insulin in glucose and lipid metabolism cannot be overemphasized. Thus, a defect in insulin production is not unconnected to the onset of diabetes mellitus, which is associated with pronounced abnormality of lipoprotein metabolism, leading to hyperlipidaemia. Altered lipoprotein metabolism may be a result of increased production, decreased absorption, or changes in the composition of lipoproteins, which in turn result in hyperlipidaemia in diabetes 5. According to the Clinical Practice Guidelines Expert Committee 2013, individuals who have a history of, or are suffering from, certain diseases, such as genetic syndromes (e.g. Down syndrome, Turner syndrome), pancreatic diseases (e.g. cancer, cystic fibrosis, pancreatitis), viral infections (e.g. cytomegalovirus) and endocrine diseases (e.g. acromegaly, hyperthyroidism), are likely to have diabetes. Diabetes may also result from the use of certain drugs, such as thiazides, used to treat high blood pressure; glucocorticoid drugs, such as cortisone; statins, used to treat high cholesterol levels; and drugs used to treat certain mental problems and epilepsy 6. Experts have classified diabetes into three major types: type 1 (insulin deficiency), type 2 (insulin resistance) and gestational diabetes, which results from insulin-blocking hormones during pregnancy. Other forms of diabetes include Maturity Onset Diabetes of the Young (MODY), a rare form of diabetes that generally occurs before the age of 25 in individuals of normal weight, and Latent Autoimmune Diabetes of Adults (LADA), which occurs in adults between 30 to 50 years, where there is a rapid dependence on insulin due to the slow and progressive destruction of the beta cells of the pancreas by the presence of antibodies 7. General symptoms of diabetes include slow-healing wounds, fatigue, blurry vision, excessive thirst and hunger, frequent urination, and itchy and scaly skin 1.
Hyperlipidaemia has been found to be associated with the alteration of lipid and lipoprotein metabolism at the onset of diabetes mellitus 5. Several studies have reported a positive correlation between metabolic disorders of lipid and lipoprotein metabolism in individuals suffering from diabetes and arteriosclerosis 8. Several factors are likely responsible for diabetic dyslipidemia, including insulin effects on liver apoprotein production, regulation of lipoprotein lipase (LpL), actions of cholesteryl ester transfer protein (CETP) and peripheral actions of insulin on adipose and muscle 9. Another report reveals that diabetic individuals with cardiovascular heart disease were shown to have high total cholesterol, high low-density lipoprotein (LDL) and reduced high-density lipoprotein (HDL) compared to those without cardiovascular disease 10. Hypertriacylglycerolemia, reduced HDL and ketoacidosis have been reported to be found in poorly controlled type 1 diabetes 11. Hyperglycemia, a metabolic challenge, is connected to insulin deficiency in type 1 diabetes and insulin resistance in type 2 diabetes. It has been reported 12 that hyperglycemia and hyperlipidemia block the insulin-inositol polyphosphate-5-phosphatase (Inpp5f) negative feedback loop, and that the increase of Inpp5f in diabetes due to hyperglycemia and hyperlipidemia plays an important role in diabetic cardiomyopathy, suggesting that increased Inpp5f might be one of the key mediators by which metabolic stress (hyperglycemia and hyperlipidemia) induces insulin signaling deficiency.
Several researchers have reported volumes of work showing how plants and fruits have provided useful remedies as hypoglycemic and hypolipidemic agents, owing to the presence of phytochemicals and bioactive compounds in plants and fruits. For example, the hypoglycemic and lipid-lowering effects of an aqueous fresh leaf extract of Chromolaena odorata (Linn) were reported in albino Wistar rats fed different concentrations of a cholesterol-enriched diet 13.
Several medicinal plants have been reported to be useful in diabetes worldwide and have been used empirically as antidiabetic and antihyperlipidemic remedies 14. A survey revealed that a number of cyclopeptide and isoquinoline alkaloids, flavonoids, terpenoids and their glycosides occur in various amounts in most Ziziphus species 15. Various studies have examined the antidiabetic effects of Ziziphus species, such as the antidiabetic activity of Ziziphus mauritiana in streptozotocin-induced diabetic rats and its comparison with some standard flavonoids 14, and the comparative antihyperglycemic, antihyperlipidemic and antioxidant effects of Ziziphus spina christi and Ziziphus jujuba in alloxan-induced diabetic rats 16. While there is a need to validate the claims that Ziziphus mauritiana and Ziziphus spina christi are potent antidiabetic agents, there is also a need to know which of the Ziziphus species has a more potent antidiabetic effect. As a certified antidiabetic drug, glibenclamide was used in this study because it has health benefits such as a fast onset of action, little to no effect on blood pressure and a lower risk of gastrointestinal problems compared to chlorpropamide (the original drug intended for this study, which was not used because it has been banned) and other antidiabetic drugs such as metformin and miglitol 17. The aim of this research is to compare the antidiabetic efficacy of hydro-methanolic (50:50) Ziziphus mauritiana (HMZM) and Ziziphus spina christi (HMZS) fruit extracts with the antidiabetic drug glibenclamide. The fruit extracts were prepared by crushing the mesocarps of Ziziphus mauritiana. The dried mesocarps were ground to powder using a mortar and pestle. The powdered fruit mesocarp was soaked in a mixture of water and methanol (50:50) and allowed to stand for 48 hours at room temperature, after which it was filtered. After filtration, 100 ml of the extract was taken and evaporated to dryness using a rotary evaporator at 80 °C. The evaporated extracts were then reconstituted with distilled water relative to the weight of the evaporated extracts.
Experimental Animals
Policy of Bayero University Kano on research involving laboratory animals was followed in this study. Fifty-six male albino rats were used in this study. The rats were kept in cages under standard conditions and were fed with pelletized growers feed. They were allowed to acclimatize for a week before commencement of the experiment.
Experimental Design
A total of twenty-six rats were used for testing the acute toxicity of the extract of Ziziphus mauritiana fruit, while thirty rats were used for testing the antidiabetic effects of the extract in comparison with the antidiabetic drug, glibenclamide. The rats were distributed into five (5) groups of six (6) rats each as follows:
Group 1 - Six (6) non-diabetic rats fed with normal feed (negative control)
Group 2 - Six (6) diabetic rats fed with normal feed (positive control)
Group 3 - Six (6) diabetic rats administered with 200 mg/kg of the hydro-methanolic extract of the fruit of Ziziphus mauritiana
Group 4 - Six (6) diabetic rats administered with 400 mg/kg of the hydro-methanolic extract of the fruit of Ziziphus mauritiana
Group 5 - Six (6) diabetic rats administered with 0.21 mg/kg of glibenclamide
The blood glucose levels of the rats in all groups were determined before commencement of the experiment by use of a glucometer. Also, after treatment with alloxan, the blood glucose levels were determined to confirm that the rats (from groups 2-5) were diabetic, before oral administration of extracts and glibenclamide. Blood glucose levels were also determined on the 7th day and on the 14th day post extract and glibenclamide treatments.
On the 7th and 14th day, the animals were sacrificed (three rats per group on each day) by decapitation, and blood was obtained to determine the blood glucose levels, serum protein, serum albumin and lipid profile of the rats.
Induction of Diabetes
A stock solution was prepared by dissolving 0.4 g of alloxan monohydrate in 4 ml of normal saline. Normal saline was prepared by dissolving 0.95 g of salt in 100 ml of distilled water 18. Diabetes was induced by a single intraperitoneal injection of alloxan monohydrate. The rats in groups 2-7 were intraperitoneally administered 360 mg/kg of alloxan in normal saline.
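For illustration, the dose volume per rat implied by the stock concentration and the 360 mg/kg dose can be worked out as below; the rat weights shown are example values within the 70-130 g range reported, not recorded animal weights.

```python
# Illustrative dose-volume calculation for the alloxan induction described above.
# Stock: 0.4 g alloxan in 4 mL normal saline = 100 mg/mL; dose: 360 mg/kg i.p.

STOCK_MG_PER_ML = 400 / 4      # 100 mg of alloxan per mL of stock solution
DOSE_MG_PER_KG = 360           # intraperitoneal dose per kg body weight

def injection_volume_ml(rat_weight_g: float) -> float:
    """Volume of stock solution (mL) delivering 360 mg/kg to one rat."""
    dose_mg = DOSE_MG_PER_KG * rat_weight_g / 1000
    return dose_mg / STOCK_MG_PER_ML

for weight in (70, 100, 130):  # example weights within the reported range
    print(f"{weight} g rat: {injection_volume_ml(weight):.2f} mL of stock")
```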
Acute Toxicity Study
The acute toxicities of the hydro-methanolic fruit extracts of Ziziphus mauritiana were evaluated in two phases as described by 19. In the 1st phase, doses of 10 mg/kg, 100 mg/kg and 1000 mg/kg were administered to 3 rats each. In the absence of any mortality in the 1st phase, higher doses of 1500 mg/kg, 2500 mg/kg, 3500 mg/kg and 5000 mg/kg were then administered to 1 rat each.
Determination of Biochemical Markers of Diabetes Mellitus
Blood Glucose was determined by the method of 20 ; Serum Total Protein was determined by biuret method 21 ; serum total cholesterol was determined using the method of 22 ; Serum Level of HDL and LDL-Cholesterol was carried out using the method of 23 ; Serum Triacylglycerol was determined by the method of 22 .
Reagents: All reagents used for the analysis of glucose, lipid profile and albumin in this study are of analytical grade purchased as kits produced by Randox Laboratories Limited, United Kingdom.
Statistical analysis
The results obtained in all the experiments were expressed as mean ± standard deviation. Statistical analysis was carried out using one-way ANOVA in the standard Statistical Package for the Social Sciences, and a component of GraphPad InStat3 software version 3.05 by GraphPad Inc. was used, with significant difference measured at P < 0.05 24.
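As an illustration of the one-way ANOVA comparison described above, the snippet below applies scipy.stats.f_oneway to three hypothetical groups of three values each; the numbers are placeholders, not the study data, which were analyzed in SPSS and GraphPad InStat 3.

```python
# Minimal sketch of a one-way ANOVA across experimental groups.
# Values are made-up placeholders for illustration only (requires SciPy).

from scipy.stats import f_oneway

group1 = [92, 88, 95]      # e.g. blood glucose (mg/dL), negative control (hypothetical)
group2 = [310, 295, 328]   # diabetic, untreated (hypothetical)
group6 = [118, 125, 110]   # diabetic, 400 mg/kg HMZS (hypothetical)

f_stat, p_value = f_oneway(group1, group2, group6)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one group mean differs significantly (P < 0.05)")
```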
Acute Toxicity
The results of the acute toxicity of the hydro-methanolic Ziziphus mauritiana (HMZM) extract are shown in Table 1.
In the 1st phase, doses of 10 mg/kg, 100 mg/kg and 1000 mg/kg were administered to 3 rats each, and no mortality was observed. In the absence of mortality in the 1st phase, higher doses of 1500 mg/kg, 2500 mg/kg, 3500 mg/kg and 5000 mg/kg were then administered to 1 rat each, and no mortality was observed in the 2nd phase of the experiment. The results of the acute toxicity of the hydro-methanolic Ziziphus spina christi (HMZS) extract are shown in Table 2.
In the 1st phase, doses of 10 mg/kg, 100 mg/kg and 1000 mg/kg were administered to 3 rats each, and no mortality was observed. In the absence of mortality in the 1st phase, higher doses of 1500 mg/kg, 2500 mg/kg, 3500 mg/kg and 5000 mg/kg were then administered to 1 rat each, and no mortality was observed in the 2nd phase of the experiment. The results of the effects of HMZM and HMZS on blood glucose in diabetic rats are shown in Figure 1 below. All the experimental groups (groups 3, 4, 5 and 6) showed a similar pattern of time dependency in blood glucose reduction during the 14 days of treatment. On the first day of the experiment (immediately after diabetes induction), group 1 rats (negative control) had a significantly lower (P < 0.05) blood glucose level compared to rats in group 2 (positive control), group 3 (200 mg/kg HMZM), group 4 (400 mg/kg HMZM), group 5 (200 mg/kg HMZS), group 6 (400 mg/kg HMZS) and group 7 (0.2143 mg/kg of glibenclamide). However, no significant differences (P > 0.05) were observed when the blood glucose levels of rats in groups 2-7 were compared at day 1. The significantly higher (P < 0.05) level of blood glucose observed in rats in groups 2-7 compared to group 1 confirms that the rats were diabetic.
The blood glucose levels on the 7th day of administration of HMZM and HMZS show a significantly lower (P < 0.05) level of blood glucose in all groups compared to rats in group 2 (Table 3). Significant differences (P < 0.05) were also observed when the blood glucose of group 3 and group 4, and group 5 and group 6, were compared. The groups having the lowest blood glucose level were group 1, group 7 and group 6 (Table 3). Significant difference (P < 0.05) was observed in the blood glucose levels of all groups (1-7) when compared to one another. However, there were no significant differences (P > 0.05) observed between rats in group 7 and group 6, or group 3 and group 5.
The blood glucose levels on the 14th day of administration of HMZM and HMZS show a significantly lower (P < 0.05) level of blood glucose in rats in groups 1, 3, 4, 5, 6 and 7 compared to rats in group 2, with group 1 and group 7 rats having the lowest blood glucose levels (Table 3). Significant differences (P < 0.05) were observed when the blood glucose levels of rats in groups 3 and 4, and groups 5 and 6, were compared. With the exception of group 4 and group 5, between which no significant difference (P > 0.05) was observed, significant differences (P < 0.05) were observed in the blood glucose levels of all groups when compared to one another. The decrease in blood glucose level was found to be dose dependent, such that the blood glucose level of rats decreased with a corresponding increase in the doses of the HMZM and HMZS extracts. As shown in Figure 2, the results obtained on the 7th day of administration of HMZM and HMZS show a significantly higher (P < 0.05) level of serum triglycerides in group 2 rats (positive control) compared to rats in group 1 (negative control), group 3 (200 mg/kg of HMZM), group 4 (400 mg/kg of HMZM), group 5 (200 mg/kg of HMZS), group 6 (400 mg/kg of HMZS) and group 7 (0.2143 mg/kg of glibenclamide). Significant differences (P < 0.05) were also observed when the serum triglyceride levels of rats in group 3 and group 4, and group 5 and group 6, were compared. Significant difference (P < 0.05) was observed when the serum triacylglycerol levels of all groups (1-7) were compared with one another. However, there were no significant differences (P > 0.05) in the serum triacylglycerol level of group 6 rats compared to group 7.
Results are expressed as Mean ± Standard Deviation (n=3).
The results obtained on the 14th day of administration show a significantly lower (P < 0.05) level of serum triacylglycerol in rats in groups 1, 3, 4, 5, 6 and 7 compared to rats in group 2. Significant differences (P < 0.05) were also observed when the serum triacylglycerol levels of rats in groups 3 and 4, and 5 and 6, were compared. Significant difference (P < 0.05) was observed when the serum triacylglycerol levels of all groups (1-7) were compared with one another. However, no significant differences (P > 0.05) were observed in the serum triglyceride levels of group 1, group 6 and group 7 rats, which had the lowest serum triacylglycerol levels.
The results obtained on the 14th day of administration show further regression in the serum HDL of rats in groups 1, 3, 4, 5, 6 and 7 compared to rats in group 2. Significant differences (P < 0.05) were also observed when the serum HDL levels of rats in groups 3 and 4, and 5 and 6, were compared. Significant difference (P < 0.05) was observed when the serum HDL-cholesterol levels of all groups (1-7) were compared with one another. However, no significant difference (P > 0.05) was observed when the serum HDL-cholesterol levels of groups 1, 4 and 7 rats were compared.
Figure 4 shows that on the 7th day of administration of HMZM and HMZS there was a significantly higher (P < 0.05) level of serum cholesterol in group 2 rats (positive control) compared to rats in group 1 (negative control), group 3 (200 mg/kg of HMZM), group 4 (400 mg/kg of HMZM), group 5 (200 mg/kg of HMZS), group 6 (400 mg/kg of HMZS) and group 7 (0.2143 mg/kg of glibenclamide). Significant differences (P < 0.05) were also observed when the serum cholesterol levels of rats in group 3 and group 4, and group 5 and group 6, were compared. Significant difference (P < 0.05) was observed when the serum cholesterol levels of all groups (1-7) were compared with one another. However, there was no significant difference (P > 0.05) in the serum cholesterol level of group 6 compared to group 7.
The results obtained on the 14th day of administration showed further regression in the serum cholesterol of rats in groups 1, 3, 4, 5, 6 and 7 compared to rats in group 2. Significant differences (P < 0.05) were also observed when the serum cholesterol levels of rats in groups 3 and 4, and 5 and 6, were compared. Significant difference (P < 0.05) was observed when the serum cholesterol levels of all groups (1-7) were compared with one another. However, no significant differences (P > 0.05) were observed when the serum cholesterol level of group 1 was compared with groups 6 and 7, and group 4 with group 5. The results obtained on the 14th day of administration also show further regression in the serum LDL-cholesterol of rats in groups 1, 3, 4, 5, 6 and 7 compared to rats in group 2. Significant differences (P < 0.05) were also observed when the serum LDL levels of rats in groups 3 and 4, and 5 and 6, were compared. Significant difference (P < 0.05) was observed when the serum LDL-cholesterol levels of all groups (1-7) were compared with one another. However, no significant difference (P > 0.05) was observed when the serum LDL levels of groups 1 and 7 rats were compared. The results of the effects of the hydro-methanolic fruit extracts of Ziziphus mauritiana and Ziziphus spina christi on total protein and albumin on the 7th and 14th day are presented in Figures 6 and 7, respectively. The increase in the mean serum levels of total protein and albumin was found to be dose dependent, occurring with an increase in the dose of the hydro-methanolic Ziziphus mauritiana and Ziziphus spina christi extracts, respectively.
As shown in Figure 6, the results obtained on the 7th day of administration of HMZM and HMZS show a significantly lower (P < 0.05) level of serum total protein in group 2 rats (positive control) compared to rats in group 1 (negative control), group 3 (200 mg/kg of HMZM), group 4 (400 mg/kg of HMZM), group 5 (200 mg/kg of HMZS), group 6 (400 mg/kg of HMZS), and group 7 (0.2143 mg/kg of glibenclamide). Significant differences (P < 0.05) were also observed when the serum total protein levels of rats in groups 3 and 4, and groups 5 and 6, were compared, and when the serum total protein levels of all groups (1-7) were compared with one another. However, there was no significant difference (P > 0.05) in the serum total protein level of group 4 compared to groups 5, 6 and 7.
The results obtained on the 14th day of administration show an increase in the serum total protein of rats in groups 1, 3, 4, 5, 6 and 7 compared to rats in group 2. Significant differences (P < 0.05) were also observed when the serum total protein levels of rats in groups 3 and 4, and groups 5 and 6, were compared, and when the serum total protein levels of all groups (1-7) were compared with one another; however, no significant difference (P > 0.05) was observed when the serum total protein levels of groups 1 and 7 rats were compared.
Results are expressed as Mean ± Standard Deviation (n = 3); letters a, b, c, d, and e indicate a significant difference (P < 0.05) when group 2 was compared with groups 1, 3, 4, 5, 6 and 7, respectively; HMZM - Ziziphus mauritiana (hydro-methanolic fruit extract); HMZS - Ziziphus spina christi (hydro-methanolic fruit extract); GBC - glibenclamide.

Figure 7 shows the results obtained on the 7th day of administration of HMZM and HMZS: there was a significantly lower (P < 0.05) level of serum albumin in group 2 rats (positive control) (1.26 g/dl ± 0.19) compared to rats in group 1 (negative control), group 3 (200 mg/kg of HMZM), group 4 (400 mg/kg of HMZM), group 5 (200 mg/kg of HMZS), group 6 (400 mg/kg of HMZS), and group 7 (0.2143 mg/kg of glibenclamide). Significant differences (P < 0.05) were also observed when the serum albumin levels of rats in groups 3 and 4, and groups 5 and 6, were compared, and when the serum albumin levels of all groups (1-7) were compared with one another. However, there was no significant difference (P > 0.05) when the serum albumin levels of groups 7 and 5, groups 1 and 6, and groups 3 and 4 were compared. The results obtained on the 14th day of administration show an increase in the serum albumin of rats in groups 1, 3, 4, 5, 6 and 7 compared to rats in group 2. Significant differences (P < 0.05) were also observed when the serum albumin levels of rats in groups 3 and 4, and groups 5 and 6, were compared, and when the serum albumin levels of all groups (1-7) were compared with one another. However, no significant difference (P > 0.05) was observed when the serum albumin levels of groups 3 and 7, and groups 4 and 5, were compared.
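The group-wise comparisons reported above (P < 0.05) are given without the underlying analysis code. The following is a minimal, hypothetical sketch of how such comparisons could be carried out, assuming a one-way ANOVA followed by Tukey's HSD post-hoc test; the group values are invented placeholders, not data from this study.

```python
# Hypothetical illustration only: the values below are invented placeholders,
# not data from the study (n = 3 rats per group).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Example serum albumin values (g/dl) for groups 1-7, three rats each.
groups = {
    1: [3.9, 4.1, 4.0],   # negative control
    2: [1.1, 1.3, 1.4],   # positive (diabetic) control
    3: [2.8, 3.0, 2.9],   # 200 mg/kg HMZM
    4: [3.3, 3.4, 3.2],   # 400 mg/kg HMZM
    5: [3.0, 3.1, 2.9],   # 200 mg/kg HMZS
    6: [3.6, 3.7, 3.5],   # 400 mg/kg HMZS
    7: [3.8, 3.9, 4.0],   # glibenclamide
}

# One-way ANOVA across all seven groups.
f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD for all pairwise comparisons at alpha = 0.05.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 3)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```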
DISCUSSION
From the results of the acute toxicity study (Tables 1 and 2) of Ziziphus mauritiana and Ziziphus spina christi, no mortality was recorded in any of the experimental groups after oral administration of 5000 mg/kg of each extract, which indicates that these extracts are practically non-toxic and therefore safe to use. According to the toxicity classification of 25, any compound with an oral LD50 of 5000 mg/kg or more should be considered practically harmless. It has been argued that even if LD50 values could be measured exactly and reproducibly, knowledge of the precise numerical value would barely be of practical importance, because an extrapolation from the experimental animals to man is hardly possible 19. However, it serves a great purpose as a first pointer to the safety or toxic potential of a substance whose toxicity profile is not yet known.
Diabetes mellitus is a chronic disease characterized by a high blood glucose level due to an absolute or relative deficiency of circulating insulin or to insulin resistance. Although there are various types of hypoglycemic agents for the treatment of diabetes, diabetic patients tend to consume natural products with antidiabetic activity to avoid the side effects and toxicity of chemical drugs. Herbal antidiabetic drugs are used because they are effective, low in cost and have fewer side effects 26. The data (Figure 1) obtained in this study with regard to the control values are in some ways similar to a study carried out by 16, who studied the antihyperglycemic, antihyperlipidemic and antioxidant effects of Ziziphus spina christi and Ziziphus jujuba in alloxan-induced diabetic rats, with differences mainly in the lipid profile. The significantly higher (P < 0.05) mean serum level of glucose in the diabetic control rats compared to the normal control rats may be due to damage to the pancreatic β-islet cells by alloxan. Alloxan selectively destroys the islets of Langerhans and decreases insulin production, which results in diabetes 27. Alloxan also increases free radical production and causes pancreatic injury 18. Glibenclamide, in contrast, exerts its hypoglycemic action by stimulating insulin secretion and inhibiting glucagon release. Previous studies carried out by 28 on the phytochemistry of various Ziziphus species have shown that HMZM and HMZS contain saponins, tannins, carbohydrates and flavonoids. Thus, the regression in blood glucose level may be attributed to the presence of phytochemicals such as saponins and tannins, which reduce the blood glucose level by increasing insulin levels owing to the stimulating effect of the extracts on the β cells remaining in the pancreas after alloxan injection. The tannins in Ziziphus fruits have an antioxidative effect. Oxidative stress is one of the important factors in tissue injury in diabetes mellitus 29. These potent antioxidants may protect beta cells and increase insulin secretion in diabetic patients. Tannins may also inhibit insulin degradation and improve glucose utilization by stimulating the GLUT4 (glucose transporter 4) protein content of muscle 30.
The significantly lower (P < 0.05) mean serum level of glucose observed in the glibenclamide-treated rats in comparison to group 2 rats could be due to the fact that glibenclamide exerts its hypoglycemic effect by binding to and inhibiting the ATP-sensitive potassium channel (K_ATP) inhibitory regulatory subunit, sulfonylurea receptor 1 (SUR1), in pancreatic beta cells. This inhibition causes cell membrane depolarization, opening voltage-dependent calcium ion channels, resulting in an increase in intracellular calcium in the beta cell and subsequent stimulation of insulin release 31. Hence it can be inferred that both HMZM and HMZS can be used as safe potential natural functional food ingredients or therapeutic agents in the treatment of diabetes. In addition, they are effective in reducing both the hyperglycemia and the oxidative stress accompanying diabetes 16; and although glibenclamide exerted stronger hypoglycemic and hypolipidemic effects than HMZM and HMZS, the Ziziphus spina christi extract had more pronounced effects than Ziziphus mauritiana.
The significantly higher (P < 0.05) serum lipid levels observed in the group 2-7 rats might be a result of disturbance in the regulation of the activity of the enzyme hormone-sensitive lipase by insulin, due to its deficiency or absence caused by the alloxan-induced destruction of β-islet cells 31. Insulin deficiency in diabetes induces the synthesis of lipase, which enhances lipolysis and increases the concentration of free fatty acids in plasma and liver. The glucagon level also increases in diabetes, which enhances the release of fatty acids. Excess fatty acids in serum promote their conversion into cholesterol and TG, with a concomitant increase in LDL 32. Moreover, insulin deficiency elevates the LDL level and consequently the level of cholesterol 33.
Several researchers have reported the use of plant fruits in reducing LDL, TC and TG, exerting lipid-lowering effects 34. The decrease in the mean serum levels of cholesterol, low-density lipoprotein (LDL) and triacylglycerol (TG) in this study tends to support the claim for the use of plant fruits in the management of the hyperlipidemia resulting from diabetes. This may also be due to the presence of saponins, which have been reported to have hypolipidemic effects by reducing total cholesterol, triglycerides, HDL and LDL cholesterol. This could be because saponins form an insoluble complex with cholesterol and increase fecal lipid excretion 35. They also increase liver LDL receptor activity and decrease the synthesis of triglycerides 36. The obtained results are also in agreement with 37, which demonstrated the antidiabetic effect of some Ziziphus species.
Hence it can be inferred that both hydro-methanolic fruit extracts of Ziziphus mauritiana and Ziziphus spina christi are effective in reducing the hyperlipidemia accompanying diabetes; however, the Ziziphus spina christi fruit extract has more pronounced effects than Ziziphus mauritiana.
The significantly lower (P < 0.05) serum total protein level observed in the diabetic control rats might be a result of increased breakdown of protein to generate ketogenic amino acids via gluconeogenesis for energy production. With insulin deficiency, the oxidation of branched-chain amino acids in muscle and the uptake of alanine by the liver are accelerated, resulting in increased gluconeogenesis and augmented protein catabolism 38. The results of the present study demonstrated that the treatment of diabetic rats with hydro-methanolic extracts of Ziziphus mauritiana and Ziziphus spina christi resulted in a noticeable elevation in plasma total protein and albumin levels towards their normal values. It has been established that insulin stimulates the incorporation of amino acids into proteins 33.
CONCLUSION
In conclusion, this study has revealed that HMZM and HMZS could be used as safe potential natural functional food ingredients or therapeutic agents in the treatment of diabetes. In addition, they are effective in reducing both the hyperlipidemia and the hyperglycemia accompanying diabetes. Although the most effective dose, 400 mg/kg of Ziziphus spina christi extract, had more pronounced antidiabetic effects than Ziziphus mauritiana, its effect was found to be less than that produced by the reference drug (glibenclamide). Hence it can be inferred that both hydro-methanolic fruit extracts of Ziziphus mauritiana and Ziziphus spina christi can be used as therapeutic agents in the treatment of diabetes.
Letters a, b, c, d, e and f indicate a significant difference (P < 0.05) when group 2 was compared with groups 1, 3, 4, 5, 6 and 7, respectively. HMZM - hydro-methanolic Ziziphus mauritiana; HMZS - hydro-methanolic Ziziphus spina christi; GBC - glibenclamide.

The results of the effects of the hydro-methanolic fruit extracts of Ziziphus mauritiana and Ziziphus spina christi on cholesterol, high-density lipoprotein (HDL), low-density lipoprotein (LDL) and triacylglycerol are presented in Figures 2, 3, 4 and 5, respectively. The decrease in the mean serum levels of cholesterol, HDL-C, LDL-C and triglycerides was found to be dose dependent, occurring with increasing doses of the hydro-methanolic Ziziphus mauritiana and Ziziphus spina christi extracts, respectively.
CORPORATE NETWORKS PROTECTION AGAINST ATTACKS USING CONTENT-ANALYSIS OF GLOBAL INFORMATION SPACE
Urgency of the research. Further improvement of the security of corporate networks under the massive influence of computer attacks requires an increase in the probability of detecting new computer attacks and a decrease in the recognition time for the signs of known attacks. Target setting. Analysis of the texts of the global information space reduces the time needed to detect possible threats. Actual scientific researches and issues analysis. Recent publications on systems of defense from attacks and on the use of text analysis in detecting threats were considered. Uninvestigated parts of general matters defining. It is necessary to improve the methods of processing data sets drawn from the bodies of network packets, the content of Internet pages, and information from mass media and social networks, which in turn raises the problem of semantic and syntactic processing of natural language texts. The research objective. The aim of the paper is the organization of collective protection of corporate networks via the introduction of threat monitoring systems and active intelligence activities in the global information space in order to search for, collect and analyze data about attacks, abnormal behavior, and the content of Internet resources. The statement of basic materials. The requirement on security systems to reduce the time of threat detection leads to the need for active intelligence assessment aimed at continuous monitoring of the surrounding cyberspace, which consists of a variety of individual users' and organizations' computer networks. The purpose of such monitoring is to determine the characteristics, interests, and features of the security policy of a particular corporate network in the global information space. In this context, particular importance attaches to the analysis of text information from both fully and partially open digital sources. A rational solution to this task is the establishment of threat monitoring centers aimed at the organization of collective protection for the corporate networks related to them. Conclusions. The proposed method of protection allows both the detection of cyber threats in the global information space and the tuning of corporate network security systems in accordance with their characteristic threat vectors.
Introduction.
In the modern world, problems related to the use and spread of malicious software, information attacks and other types of cyber threats, which have received the general name "cybercrime" are becoming more and more relevant.
During its development, the information technology sector has accumulated various types of cybercrime, which cause great damage to both companies and individuals. According to the ISTR report [1] provided by Symantec (one of the leading developers of information security software), 2017 was a highly active year for attackers and was marked by significant incidents in Europe, the United States and the Middle East. The damage caused significantly exceeded the figures for 2012, when the total loss inflicted by IT offenders amounted to $388 billion.
It is clear that IT specialists were first to realize that there were some problems with the fight against cybercrime. According to the survey, most incidents in the field of information security lead to a loss of payment data (13 %), intellectual property (13 %), customer bases (12 %) and staff information (12 %) [2]. Of course, the problem of improving the methods for analyzing network security and preventing violations in order to fight cybercrime remains relevant. Thus, in today's society, cybersecurity issues have become the defining task of protecting the global information space.
Analysis of recent studies and publications. Traditional approaches to detecting malware are either limited to the use of signatures -byte sequences that identify malicious software, or heuristic algorithms, but these methods are not capable of detecting new attacks in real time [3].
These days, content analysis of text information is used to prevent threats, along with the analysis of the network traffic characteristics, the behavior of corporate networks and their security policy. Existing systems of text analysis and modeling include different kinds of search engines and information-analytical systems. They are capable of solving such tasks as classification of documents by its subject matter, author identification, detection of plagiarism, modeling representations of the knowledge about the subject area and the content of text, classification and filtering of documents by specified queries, and much more [4; 5; 6].
Highlighting the previously unsolved parts of the problem. Enhancement in the effectiveness of security systems and reduced time of threats detection requires a further development in the methods of processing the data arrays of the network packets' body, content of Internet pages and information from mass media and social networks, which raises the problem of semantic and syntactic processing of text, written in natural language.
The purpose of this paper. Applying a wider range of information for assessment of cyber threat's level of danger and creation of collective protection for corporate networks through introduction of threat monitoring systems and active intelligence assessment in the global information space of the Internet.
Main text. 1. Global level of corporate networks security.
The IT community has a considerable amount of experience in solving the tasks of providing information security (cybersecurity) for computer systems. A number of freely distributed and commercial systems of defense from attacks (SDA) was developed and became widely accepted in the field of corporate computer networks building [7][8][9][10][11].
Typical components of SDAs are (Fig. 1):
- a control module, designed to configure the system as a whole and issue control commands to its components;
- a sensor block for collecting the output data of network packets, settings, system states, events, messages in system logs, etc.;
- a subsystem of analysis, which identifies the facts of computer attacks and/or abnormal behavior in the information and telecommunication system of the corporate network;
- a storage, which holds the primary information from sensors as well as the signatures and templates of attacks generated by the subsystem of analysis;
- a response module, which is responsible for visualizing the results of the analysis, generating warnings and, in the case of resistance, executing the instructions of the selected security methods.

It is known that there are two types of basic requirements for an SDA:
1. Requirements for detecting non-standard behavior of the computer network and attacks, with the aim of minimizing errors of the first and second kind (signaling non-standard behavior or an attack when it is absent, or missing an attack or unusual network behavior when it takes place);
2. Requirements for detecting attacks in real time.

Earlier, the main efforts of developers were directed at creating effective detection algorithms satisfying the first type of requirements. These detection algorithms have used different mathematical bases: statistical methods, methods of automata theory, methods of the calculus of interacting sequential processes, methods of mathematical logic, neural networks, fuzzy logic, and other formalisms.
Some detection algorithms, in particular algorithms based on neural networks, have cyberspace-adaptive properties. However, the rapid dynamics of environmental change (the variety of network structures, the variety of types of attacks, etc.) often reduce the designers' efforts to nothing.
As a rule, the main "bottleneck" of all previous approaches is the violation of the time limits adopted for real-time systems. In the case of neural networks, adaptation of the detection process is done by the procedure of neural network learning, which is very time consuming. Thus, reinforcing the adaptive capabilities of detection algorithms slows down the overall detection process.
The way to avoid this dead-end situation is the following:
- for the SDA of a corporate information system, to use a broader set of analyzed information about the environment, which permits predicting the behavior of the IT system and its environment;
- to perform risk analysis and estimate the current or predicted level of danger to corporate networks from known attackers;
- to provide the time and the possibility for the corporate system's SDA to be ready to repel the most probable attacks.
The first two items of the above list belong to the field of intelligence or counterintelligence activity. According to [12; 13], in the modern world the terms political, economic and scientific-technical intelligence mean active actions aimed at collecting, storing and processing valuable information that is closed to outsiders.
A similar definition can be given for counterintelligence activities. Concerning the protection of a corporate computer network from unauthorized access to information, a model of an attack on a computer network always contains a step of intelligence activity, just as protecting the computer network includes counterintelligence activities.
Consider possible approaches to the implementation of the above opportunities.
If the defender has such information about the attacker as his address and qualification, his preferences regarding the use of certain types of harmful actions, and his degree of activity, this often gives the opportunity to build both passive and active protection. While the management of passive protection comes down only to varying one's own vulnerabilities, active protection, in contrast, allows counterattacks against the source of the intrusion.
In modern SDAs, there are three levels of protection from attacks that have access to the processed information (Fig. 2):
1. The network layer;
2. The operating system layer;
3. The application layer.

The application layer is responsible for interacting with the end user, the OS layer is responsible for supporting application software and the DBMS, and the network layer is responsible for the interaction of the units of the information and telecommunication system. Each level has its own vulnerabilities.

Fig. 2. Levels of protection of computer networks by traditional SDA

At the network layer, the bottleneck is the sharing protocols used between the corporate network and the outside environment, which tend to be oriented towards packet delivery of the information. The packets have a fixed structure; TCP and IP packets are an illustration of such structures (Fig. 3).
Fig. 3. Structure of TCP and IP headers
The analysis of the structure of the packets circulating in the corporate network is the essence of the analysis at the network layer of protection in an SDA. As a rule, the packet flags, the port addresses of network nodes, the time intervals between specific events and so on are analyzed here.
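As a concrete illustration of this kind of network-layer analysis, the sketch below extracts the port fields and TCP flags from a raw IPv4/TCP packet. It is a minimal, hypothetical example (the packet bytes and the port-scan comment are invented for illustration, not taken from any particular SDA).

```python
# Minimal sketch: extracting IP/TCP header fields for network-layer analysis.
import struct

def parse_ipv4_tcp(raw_packet: bytes) -> dict:
    # IPv4 header: byte 0 holds version/IHL; source and destination addresses
    # occupy bytes 12-15 and 16-19 respectively.
    version_ihl = raw_packet[0]
    ihl = (version_ihl & 0x0F) * 4          # IP header length in bytes
    src_ip = ".".join(str(b) for b in raw_packet[12:16])
    dst_ip = ".".join(str(b) for b in raw_packet[16:20])

    # TCP header starts right after the IP header; flags byte is at offset 13.
    tcp = raw_packet[ihl:]
    src_port, dst_port = struct.unpack("!HH", tcp[0:4])
    flags = tcp[13]                          # CWR ECE URG ACK PSH RST SYN FIN
    return {
        "src_ip": src_ip, "dst_ip": dst_ip,
        "src_port": src_port, "dst_port": dst_port,
        "syn": bool(flags & 0x02), "ack": bool(flags & 0x10),
        "fin": bool(flags & 0x01), "rst": bool(flags & 0x04),
    }

# Invented example packet: a bare SYN to port 22. Repeated SYNs without ACKs
# to many ports could be counted by an SDA sensor as a port-scan pattern.
ip_header = (bytes([0x45, 0, 0, 40]) + bytes(4) + bytes([64, 6, 0, 0])
             + bytes([192, 168, 0, 10]) + bytes([10, 0, 0, 5]))
tcp_header = struct.pack("!HHIIBBHHH", 44321, 22, 0, 0, (5 << 4), 0x02, 8192, 0, 0)
print(parse_ipv4_tcp(ip_header + tcp_header))
```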
The packet contains information about the sender, which is often represented as a DNS address. This information is definitely of great value, as it can point clearly to the source of the attack. However, the authenticity of the address information about the source of the attack is often questionable, since it can easily be altered by the sender of the packet. For some protocols, such as mail, the address of the attacker may also be explicitly stated; however, as in the previous case, the sender's address can easily be changed.
As a result, there is a need to introduce one more level at which protective methods are realized: the level of the global network.
At this level the information, which is contained in the text documents on web-sites, global network portals, social networks or other legitimate objects of the information space can be analyzed and both the sources of attacks and their information characteristics can be indirectly identified.
The concept of a text document here is multivalued: it is text information from websites and portals, and emails, and program codes that are entered into the computing environment of the victim's computer. In any case, this level is characterized by, on the one hand, methods used in intelligence activities, including business or competitive intelligence [12], and, on the other hand, methods of text processing [14].
In the latter case, studies in this area have produced significant scientific results and a settled set of text-processing tasks. These include:
- the task of determining the topic of texts in information-analytical and information retrieval systems; the essence of the task is the automatic classification of texts by thematic categories;
- the task of analyzing patents in information systems;
- the task of identifying the author of a text; this is the task of determining the authorship of an unknown text by selecting features of the author's style and comparing these features with the peculiarities of other documents whose authorship is known;
- the task of detecting plagiarism and incorrect borrowing in order to protect copyright; its solution consists in comparing the proposed text with the texts of already known authors in order to determine the degree of coincidence;
- the task of automatic annotation and abstracting; an annotation is a brief characterization of a document that shows its main content and is an important component of automatic text processing systems. Most existing annotation systems are based on the detection of words and vocabulary units, the calculation of their weights in the sentence, and the determination of the sentences with the largest total weight; the abstract is compiled from these sentences.
In the IT area, text analysis tasks acquire a specific sense. In particular, some of the most popular are:
- the task of analyzing Internet texts and identifying user characteristics;
- text mining, including tasks concerning the informational impact on the emotional state of social media users;
- the task of analyzing source program code texts, etc.

IT professionals very often have problems with viruses and other malware. Actual threats include the spread of spam, phishing, and network attacks on enterprise infrastructure, including targeted and DDoS attacks, which exploit potentially dangerous software vulnerabilities.
These and other similar examples show a close relationship between cybersecurity systems and word processing systems: when detecting spam, data loss, detecting and tracking potentially dangerous messages, etc.
As it is pointed out in [14], the main source of the text data in the IT industry are posts of users in social networks, blogs, forums, etc.
Processing of the flows of text messages serves different purposes:
- tracking undesirable, potentially harmful messages and identifying the people behind them;
- determining the emotional dimension (tone) of text messages, which is used during advertising campaigns, including the creation of contextual advertising;
- configuring the interfaces of information search systems for each specific user.

A relevant task is the authorship identification of small texts, which occurs much more frequently than the task of authorship identification of texts of significant size [14]. This is mainly due to the widespread use of instant messenger programs for message exchange over the Internet, the increasing role of email in business communication, and the vast popularity of Internet forums and blogs.
Users have an opportunity to send messages without completing the registration forms and without inputting any kind of information about themselves; in this case, the registration is more a formality and the address of the sender can be changed easily.
The tasks of the creator identification of the software, including the identification of the malware creator are closely knitted with the tasks of the information security.
This field of research has been actively evolving lately. On the one hand, it is connected with intellectual property protection; on the other, it is connected with the necessity of preventing the cyber threats that arise from the use of malware. In the latter case, it is hard to overestimate the possible damage that can be caused to the control systems of key infrastructure, including military targets. Because new kinds of malware are being created all over the globe, there is a need to identify the creators of malicious code and bring them to justice.
As was mentioned before, we can benefit from the methods and approaches used by competitive intelligence for securing computer networks, as well as from its automation approaches [12; 15], such as:
1. Classifiers of objectives (questions, topics, avenues for enquiry).
2. Groups of search bots (in the Ukrainian segment of the Internet using the Ukrainian language, in the international web using the main European languages).
3. Programs for automatic ranking of information by classifiers.
4. Classifiers of employees and units.
5. Programs for automatic distribution of information to consumers.
6. Interactive reference books on information-based topics, collected at the present time.
These tools, as well as the presence in the arsenal of cyber security software for word processing tasks, combined with powerful tools for searching information on the Internet, allow the automated support of a number of competitive intelligence scenarios for the purpose of protecting computer networks.
For example, Fig. 4 presents one of the possible scenarios for determining the address of an attacker on the corporate computer network and the possible automated support for it.
The formal models of texts representation.
The basis of all above-mentioned tasks of text processing is the formal models of text representation.
Let us consider a text as a sequence of characters of an alphabet A, whose structure is set by a formal grammar G that defines its syntactic construction. Furthermore, words and their forms, such as objects, subjects, verb constructions, simple sentences, complex sentences, etc., are highlighted. All the sequences of characters described by the grammar form a language. Even the involvement of grammar in the description of a text makes it possible to characterize it, since the entry of every next element depends on the previous elements. The statistical dependency between elements of the text can be described with the help of an informational portrait of the text, which is built on the basis of the mutual information between elements of the text. This is pointed out in the works of A. Kolmogorov [16] and R. Piotrovsky [17], where the definition of the amount of information in one object relative to another is introduced.
The statistical models of the text.
Speaking about models of texts founded on the statistical and informational approach, the view of C. Shannon on the source of information [18] can be used. If we consider the text as a sequence of symbols or other elements, their occurrence is not random. Any meaningful words or phrases, which together form the text, have a statistical structure, and in text-analysis tasks this must be taken into account.
This approach, which relies on the views of C. Shannon and on fundamental concepts of information theory, was developed in a probabilistic direction in the works of A. Kolmogorov [16].
It can be used if we consider the text as a holistic complex system. Any text has a certain meaning that is invariant to the methods of text presentation. As a complex system, a text has a semiotic (linguistic) nature of informational relations between its subsystems [14].
Let $x_i$, $i = 1, \dots, n$, be the elements of the text, where $n$ is the number of different values that an element $x_i$ can take. Then $p(x_i)$ is the probability of occurrence of the element $x_i$ in the text, and $p(x_i, x_j)$ is the probability of occurrence of the pair of elements $x_i$ and $x_j$. For well-known texts $T_1, T_2, \dots, T_m$ by the authors $A_1, A_2, \dots, A_m$ one finds the value of the selected parameter, namely the number of occurrences of the selected elements separately and in combination, and then calculates the probabilities of their appearance in the text, which can be written as the matrix of probabilities of co-occurrence of pairs of elements:

$$ P_k = \begin{pmatrix} p(x_1, x_1) & \dots & p(x_1, x_n) \\ \dots & \dots & \dots \\ p(x_n, x_1) & \dots & p(x_n, x_n) \end{pmatrix}, \qquad k = 1, \dots, m. $$
Then, to each pair of elements a quantitative measure of the mutual information between them can be put into correspondence. The results are presented in the form of the matrix $I_k = [I_{ij}]$ (the information portrait of the text $T_k$) of mutual information between the elements, where $I_{ij} = I(x_i, x_j)$ denotes the mutual information between the elements $x_i$ and $x_j$, calculated by the formula

$$ I(x_i, x_j) = \log \frac{p(x_i, x_j)}{p(x_i)\, p(x_j)}. $$

Informational portraits can be constructed for each text $T_k$ over a variety of different text elements, for each level of the structural-hierarchical model of the text.
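The paper gives no implementation for the information portrait; the following is a minimal sketch, assuming adjacent word pairs as the co-occurring elements and simple frequency estimates of the probabilities (the sample sentence and all names are illustrative, not from the source).

```python
# Minimal sketch: an "information portrait" as pointwise mutual information
# between adjacent word pairs, estimated from raw frequency counts.
import math
from collections import Counter

def information_portrait(text: str):
    words = text.lower().split()
    unigrams = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

    portrait = {}
    for (w1, w2), c in bigrams.items():
        p_xy = c / n_bi
        p_x, p_y = unigrams[w1] / n_uni, unigrams[w2] / n_uni
        portrait[(w1, w2)] = math.log(p_xy / (p_x * p_y))
    return portrait

sample = "the attacker scans the network and the attacker sends packets"
for pair, mi in sorted(information_portrait(sample).items(), key=lambda kv: -kv[1])[:3]:
    print(pair, round(mi, 3))
```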
In the work [4] the notion of informational portrait is defined as a set of words and phrases selected automatically, which are important for the chosen sample within a framework of general array of documents.
Informational portrait in this case is based on the identification of the relationship of terms and calculation of the weight coefficients of these terms.
There are two algorithms evaluating the relationship between concepts [14]: 1) the algorithm of joint occurrence, which is based on the calculation of the common occurrence of concepts in the same documents (I type); 2) the context proximity algorithm, which is based on the calculation of the correlations of the sets of keywords included in the documents in which the concepts where mentioned (II type).
Different methods of cluster and factor analysis can be used to regularize the concepts and identify their relationships. As a result of their functioning, the relationship tables will take the form of block-diagonal matrices. Thus, the informational portrait of a text can be regarded as its formalized model.
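As an illustration of the first (joint occurrence) algorithm mentioned above, the sketch below counts how often pairs of concepts appear in the same documents; the document list and concept set are invented placeholders, and a real system would of course work over a much larger corpus.

```python
# Minimal sketch: type-I relationship strength between concepts, measured as
# the number of documents in which both concepts occur together.
from itertools import combinations
from collections import Counter

documents = [                      # invented example corpus
    "phishing email with malicious attachment",
    "ddos attack on corporate network",
    "phishing campaign targets corporate network users",
]
concepts = {"phishing", "ddos", "corporate", "network"}

cooccurrence = Counter()
for doc in documents:
    present = {c for c in concepts if c in doc.split()}
    for a, b in combinations(sorted(present), 2):
        cooccurrence[(a, b)] += 1

for pair, count in cooccurrence.most_common():
    print(pair, count)
```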
Markov models of texts.
A text is not a random sequence of independent usages of its elements. There are syntactic, semantic, and other dependencies between the elements of a coherent text. An extension of the approach in which symbols are used independently of each other (the probabilistic model of the text) is the Markov model of the generation of text elements [5]. The probability of the appearance of an arbitrary element in a text presented in the form of a Markov chain depends on the previous element.
Consider some arbitrary text $T$ as a system. Its elementary units (letters, letter combinations, words) are $v_i$, $i = 1, \dots, n$. Let $s_t$ denote the state of the system at time $t$. The simplest Markov chain is determined by the set of transition probabilities $P(s_t = v_j \mid s_{t-1} = v_i)$. With the complication of this model, the probability of occurrence of an element is considered to depend on a group of previous elements. If the appearance of some element $v$ depends on the $k$ previous elements, then the transition probabilities take the form $P(s_t = v \mid s_{t-1}, \dots, s_{t-k})$. A similar model allows a more complete characterization of the structure of the text.
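A minimal sketch of such a model is given below; it estimates first-order transition probabilities between words from bigram counts (the training sentence is an invented example).

```python
# Minimal sketch: first-order Markov model of a text over words,
# with transition probabilities estimated from bigram counts.
from collections import Counter, defaultdict

def transition_probabilities(text: str):
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return {
        prev: {nxt: c / sum(following.values()) for nxt, c in following.items()}
        for prev, following in counts.items()
    }

model = transition_probabilities("the attacker scans the network then the attacker waits")
print(model["the"])   # e.g. {'attacker': 0.666..., 'network': 0.333...}
```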
Relational Model of Text.
In much of the text-processing literature a formalized model of a text is seen as a pair $\langle E, R \rangle$, where $E$ is the set of entities that establish the construction of the text and $R$ is a set of finitary relations, which are usually expressed by verb forms in the text. On the basis of this model, ontologies are built [19] which comprise descriptions of subject areas. The latter are sometimes regarded as a way of presenting the knowledge that is enshrined in the text.
In the IT sector practice of using relational model of text is quite extensive: from designing applications to the use of information search mechanisms.
Logical and linguistic model of text.
The logic-linguistic model of the text is widely used in mathematical linguistics [6, 20]. It allows an arbitrary sentence to be presented as the conjunction of atomic predicates, each of which describes an indivisible piece of the content of the sentence:

$$ \Pi_S = \bigwedge_{p \in P_S} \bigwedge_{h \in H_S^p} \bigwedge_{x \in X_p(h)} \bigwedge_{q \in Q_p(x,h)} \bigwedge_{y \in Y_p(x,q,h)} \bigwedge_{t \in T_p(x,q,y,h)} \bigwedge_{z \in Z_p(x,q,y,t,h)} \bigwedge_{u \in U_p(x,q,y,t,z,h)} \pi_p(x,q,y,t,z,u,h), $$

where
- $S$ is a sentence of natural language;
- $p$ is a relation that connects the actors, objects and items of relations in the sentence $S$, and $P_S$ is the set of relations contained in $S$;
- $h$ is a characteristic of the $p$-th relation of $S$, and $H_S^p$ is the set of characteristics of the $p$-th relation in $S$;
- $x$ is a subject of the sentence $S$, and $X_p(h)$ is the set of entities connected with the objects of $S$ by the $p$-th relation with characteristic $h$;
- $q$ is a characteristic of the subject $x$, and $Q_p(x,h)$ is the set of characteristics of the subject $x \in X_p(h)$;
- $y$ is an object of the sentence $S$, and $Y_p(x,q,h)$ is the set of entities connected with the subjects of $S$ by the $p$-th relation with characteristic $h$;
- $t$ is a characteristic of the object $y$, and $T_p(x,q,y,h)$ is the set of characteristics of the object $y \in Y_p(x,q,h)$;
- $z$ is an item of the $p$-th relation of the sentence $S$, and $Z_p(x,q,y,t,h)$ is the set of items of the $p$-th relation with characteristic $h$ between the subject $x$ with characteristic $q$ and the object $y$ with characteristic $t$;
- $u$ is a characteristic of the item $z$, and $U_p(x,q,y,t,z,h)$ is the set of characteristics of the item $z \in Z_p(x,q,y,t,h)$;
- $\pi_p(x,q,y,t,z,u,h)$ is a simple, atomic predicate describing a part of the sentence with finished content: the $p$-th relation with the $h$-th characteristic between the subject $x$ with characteristic $q$ and the object $y$ with characteristic $t$, whose item $z$ has the characteristic $u$.

The logic-linguistic model $\Pi_S$ of the sentence $S$ is thus formally described by the sequence of the eight conjunctions included in the formula above. The transition from the general formula $\Pi_S$ to the predicates $\pi_p(x,q,y,t,z,u,h)$ is a decomposition of the problem of the formal description of an arbitrary sentence of natural language and reflects a systematic approach to its solution. Therefore, the complex expression $\Pi_S$ is true if and only if all the elementary predicates of the type $\pi_p(x,q,y,t,z,u,h)$ included in it are true.
Multidimensional text model. Every text object can be described by a set of values of certain features. The selection of features depends on the texts being processed, on the aims and tasks of the data analysis, and on other factors. The character of the features can also differ: qualitative and quantitative, binary (dichotomous), ordinal, etc. In any case their collection can be treated as an n-dimensional feature space, and the given objects as points of this space. In some tasks, including text information analysis tasks, the data are often presented not by the values of separate features but by the values of some quantity $\rho(x_i, x_j)$ that characterizes the pairwise mutual correspondence of the objects $x_i$ and $x_j$. Depending on the aims of the task, either the degree of similarity or the degree of difference is examined; in the latter case such a description denotes the distance between objects. In any case, when solving data analysis problems, the geometrical closeness of two or more points in this n-dimensional space means the closeness of the corresponding objects, i.e. their homogeneity. The separate classes (clusters) of objects will be represented by coherent regions in this space.
As an example, it is possible to point the next possible signs of every level.
For the level of letters as signs can come forward: frequencies of separate letters appearance, frequencies of separate syllables and signs appearance, frequencies of n-gram subsequences of characters from text appearance. For the level of words: frequencies of appearance of separate words, word-parts, bases of words or a few words.
For the level of sentences: frequencies of appearance of sentences with the fixed amount of words, with a certain grammatical construction, using special turns, etc.
In the semantic representation of the text, the values of different attributes can likewise be defined at all levels of the semantic hierarchy. A collection of documents can then be presented in the form of an "object-sign" matrix $M = [x_{ij}]$, in which the rows correspond to texts ($i = 1, \dots, m$), the columns to signs ($j = 1, \dots, w$), and the matrix elements to the value of a sign for each text. The "term-document" matrix is a particular case of the transposed "object-sign" matrix.
To reduce the dimension of the "text-sign" matrix and to detect the most informative features, the singular value decomposition (SVD) of the matrix can be used. An arbitrary matrix can be represented as

$$ M = U \Sigma V^{T}, $$

where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix whose elements, the singular values, are sorted in descending order.
The columns and rows of the matrices $U$ and $V^{T}$ that correspond to small singular values make the smallest contribution to the final text, so excluding them allows the dimension of the matrix $M$ to be reduced without significant losses for further calculations [21]. The large singular values are the main information characteristics; the others contain random noise.
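A minimal sketch of this reduction is shown below, assuming a small invented term-document count matrix and a rank-k truncation; it is illustrative only and not the authors' implementation.

```python
# Minimal sketch: truncated SVD of a term-document ("object-sign") matrix.
import numpy as np

# Invented counts: rows = terms, columns = documents.
M = np.array([
    [3, 0, 1, 0],
    [0, 2, 0, 2],
    [1, 0, 2, 0],
    [0, 1, 0, 3],
], dtype=float)

U, s, Vt = np.linalg.svd(M, full_matrices=False)

k = 2                                  # keep only the k largest singular values
M_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print("singular values:", np.round(s, 3))
print("rank-%d approximation error: %.3f" % (k, np.linalg.norm(M - M_k)))
```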
When methods such as principal component analysis, factor and discriminant analysis and others are used in classical multidimensional data analysis, the "object-sign" matrix is converted into a covariance (correlation) matrix. In this case the covariance matrix is a square matrix of the "sign-sign" type and characterizes the degree of proximity (similarity) of the signs. However, in practice a representation in the form of an object proximity matrix (a matrix of the "object-object" type) is often used to describe text objects.
The "object-object" correlation matrix defines the degree of similarity of the objects, and its elements are determined by the formula

$$ r_{kl} = \frac{\sum_{j=1}^{w} (x_{kj} - \bar{x}_k)(x_{lj} - \bar{x}_l)}{\sqrt{\sum_{j=1}^{w} (x_{kj} - \bar{x}_k)^2 \sum_{j=1}^{w} (x_{lj} - \bar{x}_l)^2}}, \qquad \bar{x}_k = \frac{1}{w} \sum_{j=1}^{w} x_{kj}, $$

where $\bar{x}_k$ is the average value. Rank correlation coefficients are used instead when the values of the signs are not quantitative. At the same time, when using the developed methods of multidimensional data analysis, it is necessary to take into account the features of the text as a real object, and it is essential to consider the process of formation of text structures when compiling models and presenting texts in the form of a multidimensional object.
Evaluation of cyberspace from the perspective of threats to corporate computer networks.
Certainly, active intelligence of cyberspace in the interests of the cybersecurity of corporate computer networks requires calculating certain threat indicators. For corporate computer networks these indicators can be considered as a vector of threats from different attacks:

$$ R(t) = \langle r_1(t), r_2(t), \dots, r_N(t) \rangle, \qquad r_i(t) = P_i(t) \cdot L_i, $$

where $r_i(t)$ is the risk of an attack of type $i$ during the time $t$, $P_i(t)$ is the probability of the corporate network being attacked by an attack of type $i$ during the time $t$, and $L_i$ is the cost of the losses caused by attacks of type $i$. Calculating the risks from various attacks requires identifying the sources of attacks by indirect signs, determining their inclination to attacks or undesirable influences of one kind or another, determining the characteristics of attack activity, calculating predictive activity indicators on the basis of time series analysis, and the like.
Ordering the elements of this vector by descending risk value reduces to the construction of the vector $R^*(t)$, whose first elements indicate the attacks against which the protection of the computer network should be strengthened.
This protection becomes possible either by configuring the corporate network's SDA to prepare the activation of attack detection algorithms in accordance with the vector $R^*(t)$, or by eliminating the vulnerabilities used by this type of attack. Given the time limitations of the attack detection process, such actions should be performed on the basis of predictions of the activity of potential attack sources, the detection of which is the task of the global-network level of protection of the corporate network.
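The following sketch illustrates this ordering step; the attack types, probabilities and loss figures are invented placeholders, and a real SMU would estimate $P_i(t)$ from monitoring data rather than hard-code it.

```python
# Minimal sketch: building and ordering the threat vector R(t),
# with r_i(t) = P_i(t) * L_i for each attack type.
attack_types = {
    # type: (probability of attack during t, expected loss if it succeeds)
    "ddos":          (0.30, 50_000),
    "phishing":      (0.60, 20_000),
    "sql_injection": (0.10, 80_000),
    "ransomware":    (0.05, 200_000),
}

risks = {name: p * loss for name, (p, loss) in attack_types.items()}

# R*(t): attack types ordered by descending risk; the first entries are the
# ones the SDA should be configured to detect first and whose vulnerabilities
# should be eliminated.
r_star = sorted(risks.items(), key=lambda kv: kv[1], reverse=True)
for name, risk in r_star:
    print(f"{name:14s} risk = {risk:,.0f}")
```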
Collective protection of corporate networks against computer attacks.
As can be seen from the previous arguments, the task of text processing and the task of assessing cyber threats indicators for corporate networks, inherent to the global network level, are complex resource-intensive tasks.
Given the time requirements on the SDA, it can be assumed that including these tasks in the latter would entail a slowdown in the performance of its basic functions and an unjustified increase in resource consumption. At the same time, in our opinion, a promising solution is to assign the functions of the global-network level of protection to a separate computer complex that manages this level of protection for several corporate networks and determines the threat indicators for each of them. We call this complex a System Monitoring Unit (SMU).
In addition to the parallelism in performing certain functions of the SMU and the SDA, this solution allows for the collective protection of the subordinate corporate computer networks against computer attacks. The essence of this protection is that corporate computer networks conduct self-diagnostics with the help of their SDAs and exchange information with partners about attacks, non-standard behavior and interference in their work. Here one can solve the problem of determining the speed at which external interventions spread and of coordinating the parameters of the SDAs, including coordinating efforts to analyze unknown intrusions.
The structure of the SMU complex is shown in Fig. 5.

Fig. 5. Architecture of the SMU

In our opinion, the rational use of the proposed complex is to support the activities of a regional cybersecurity center, which is designed not only to perform the functions of operational protection of the corporate networks in its care but also to support their audit.
Conclusions and suggestions. Further improvement of the security and stability in functioning of the information and telecommunication systems of corporate networks in the conditions of massive influence of computer attacks requires an increase in the probability of detection of new computer attacks and a decrease in the recognition time for the signs of known attacks.
To solve this problem, it is not enough to use only traditional methods that utilize identification characteristics of network traffic and information about the work of corporate networks and security devices. The processing of data sets of the body of network packages, content of Internet pages, information from mass media and social networks is very valuable in this area.
Processing, careful analysis and synthesis of information collected from Internet resources is made using content and/or rapid analysis methods, bibliometric and/or cluster analysis, as well as expert and/or situational methods. However, a tight time limit for the search, collection, extraction and processing of information circulating in the global information space of the Internet, its accumulation, classification by certain attributes, further analysis, synthesis, compilation and making it accessible to the concerned users, as well as transformation into synthesized conclusions and recommendations necessitates some arrangements. First, the automation of all measures in the complex of risks monitoring system associated with these processes. Second, the configuration of SDAs subordinate to the SMUs of corporate networks according to their risk vectors.
The development of a corporate networks protection model with a collective SMU defense module, methods for detecting and identifying computer attacks with help of content analysis of the global information space and the architecture of SDA, related to it, will provide a basis for the synthesis of a reliable and high-performance adaptive cyber threats detection systems and will shorten the detection time of the computer attacks of the new generation.
Casimir force for electrolytes
The Casimir force between a pair of parallel plates filled with ionic particles is considered. We use a statistical mechanical approach and consider the classical high temperature limit. In this limit the ideal metal result with no transverse electric (TE) zero frequency mode is recovered. This result has also been obtained by Jancovici and Šamaj earlier. Our derivation differs from the latter mainly in the way the Casimir force is evaluated from the correlation function. By our approach the result is easily extended to electrolytes more generally. Also we show that when the plates are in contact the Casimir force is in accordance with the bulk pressure, as follows from the virial theorem of classical statistical mechanics.
I. INTRODUCTION
It is a pleasure to contribute this work to a festschrift volume for Professor Iver Brevik. We have had an extensive collaboration through many years on problems connected to the Casimir effect. In our works we have fruitfully utilized methods from different fields of research. In particular we have explored the statistical mechanical aspects of the Casimir problem. The present contribution is a work that continues in the statistical mechanical direction.
A pair of metallic or dielectric plates attract each other. This is the well known Casimir effect, and it is commonly regarded as being due to fluctuations of the quantum electrodynamic field in vacuum. However, Høye and Brevik considered this in a different way, by regarding the problem as a statistical mechanical one of interacting fluctuating dipole moments of polarizable particles. In this way the Casimir force between a pair of polarizable point particles was recovered [1]. To do so, the path integral formulation of quantized particle systems was utilized [2]. Before that, this method was fruitfully utilized for a polarizable fluid [3]. With this approach the role of the electromagnetic field is to mediate the pair interaction between polarizable particles. Later this type of evaluation was generalized to a pair of parallel plates, and the well known Lifshitz result was recovered [4]. Similar evaluations were performed for other situations [5,6].
The statistical mechanical approach opens new perspectives for evaluations of the Casimir force. Instead of focusing upon the quantization of the electromagnetic field itself one can regard the problem as one of polarizable particles interacting via the electromagnetic field. It is found that these two viewpoints are equivalent [1,4,6,7].
Metals are materials that have electrons that can be regarded as free. When deriving the Lifshitz formula they are regarded as dielectric media with infinite dielectric constant at zero frequency. Jancovici and Šamaj realized that it should be possible to evaluate the Casimir force for metals by regarding an electron plasma. Thus they considered parallel plates filled with charged particles at low density in a neutralizing background [8,9,10]. Further, they considered the classical case, i.e. the high temperature limit. In this situation the Debye-Hückel theory of electrolytes is fully applicable. They then use the Ornstein-Zernike (OZ) equation and utilize its equivalence with the differential equation for the screened Coulomb potential to obtain the pair correlation function. This function is used to obtain the local ionic density at the surfaces of the plates. The difference between local and bulk densities is attributed to the Casimir force in accordance with the ideal gas law. The result obtained coincides with a result for ideal metals in the high temperature limit. The latter has been a matter of controversy [11]. The ionic plasma result coincides with the one where there is no transverse electric mode at zero frequency. This is also in accordance with Maxwell's equations of electromagnetism.
The ionic plasma has also been extended to the quantum mechanical case by use of the path integral formalism from a statistical mechanical viewpoint, and it has been shown that magnetic interactions do not contribute in the classical high temperature limit [12].
In the present work we reconsider the ionic plasma in the classical limit. We arrive at the same pair correlation function as in Ref. [8], but we use a different approach to obtain the Casimir force. As we see it, our method better utilizes the methods of classical statistical mechanics, especially with regard to possible further developments. Thus we use the correlation function to directly evaluate the average force between pairs of particles in the two plates and then integrate to obtain the total force. This is the method used in Refs. [1,4]. In this way the result of Ref. [8] is recovered. A notable feature of this comparison is that it demonstrates that the modification of the density profile at the surface is a perturbing effect that can be neglected to leading order in our approach.
With our approach the evaluations are extended in a straightforward way to electrolytes of more arbitrary density. To do so, known properties of the direct correlation function are utilized. The main change with this extension is that the large-distance inverse shielding length is modified, while the Casimir force remains unchanged for large separations.
An additional result of our approach is that it is shown that when the plates are in contact the Casimir pressure is, more generally, nothing but the contribution to the bulk pressure (with opposite sign) that follows from the virial theorem of classical statistical mechanics.
II. GENERAL EXPRESSIONS
Consider a pair of harmonic oscillators with static polarizability $\alpha$. They interact via a potential $\psi s_1 s_2$, where $s_1$ and $s_2$ are fluctuating polarizations. This interaction creates a shift in the free energy of the system, which is easily evaluated [1]; here $T$ is the temperature and $k_B$ is Boltzmann's constant. The last sum is the expansion performed in Ref. [4], where the two particles were replaced with two plane parallel plates. In the latter case the terms can be interpreted as the sum of graph contributions due to the mutual interaction $\psi$. The $\alpha$ represents correlations within each plate separately, each $\psi$ gives a link between the plates, and $2n$ is the symmetry factor of the graphs that form closed rings. With plates, the endpoints of each link $\psi$ should be integrated over the plates. In the quantum mechanical case there is also a sum over Matsubara frequencies, upon which $\alpha$ and $\psi$ may depend. The parallel plates are separated by a distance $a$. Due to the interaction there will be an attractive force $K$ between the plates; this force is found from the expression of Ref. [4]. The fraction in the middle of that expression represents the graph expansion of the pair correlation function with the endpoints in separate plates. These graphs form chains where each $\psi$ forms a link between the plates. Thus we can write the force as the integral (2.3), where $\rho$ is the number density, $h(\mathbf{r}_2, \mathbf{r}_1)$ is the pair correlation function, and $\psi'_z(\mathbf{r}_2 - \mathbf{r}_1) = \partial\psi/\partial a$, with the z-direction normal to the plates. For polarizable particles the integral (2.3) will also contain integrations with respect to polarizations [4].
For infinite plates the integral (2.3) diverges, so as usual we will consider the force $f$ per unit area, expressed as an integral (2.4) in which the hat denotes the Fourier transform with respect to the x- and y-coordinates. (Here we have used $\int f g \, dx\, dy = \int \hat{f}\hat{g}\, dk_x dk_y/(2\pi)^2$ and translational symmetry along the xy-plane.) Introducing new variables with $z_2 = u_2 + a$ and $z_1 = -u_1$, one then obtains expression (2.6). An interesting feature of result (2.6) or (2.4) is that it is fully consistent with the virial theorem of statistical mechanics. This means that when the plates are in contact, for $a = 0$, the Casimir force equals the contribution to the pressure from the virial integral with pair interaction $\psi$. With $a = 0$ translational symmetry is also present in the z-direction, so the force $f$ takes the form of a virial integral proportional to $\rho^2$. With the new variable $z = z_2 - z_1$ one can first integrate with respect to $z_2$, which is then confined to the region $0 \le z_2 \le z$. Thus, with $\int_0^z dz_2 = z$, we obtain the virial form of the pressure ($\mathbf{r} = \mathbf{r}_2 - \mathbf{r}_1$), where symmetry with respect to the x-, y-, and z-directions and with respect to positive and negative $z$ is used. (It may be noted that the above is correct if the average of $\psi$ is zero; otherwise the pair distribution function $1 + h$ should be used. But for neutral plates, as for dielectric plates with dipolar interaction, this average will be zero.)
III. PAIR CORRELATION FUNCTION
To obtain the correlation function we use the Ornstein-Zernike (OZ) equation, here extended to non-homogeneous fluids,

h(r₂, r₁) = c(r₂, r₁) + ∫ c(r₂, r₃) ρ(r₃) h(r₃, r₁) dr₃,   (3.1)

where c(r) is the direct correlation function. For weak long-range forces [13], or to leading order, c(r) is related to the interaction in a simple way,

c(r₂, r₁) = −βψ(r₂ − r₁), ψ(r) = q_c²/r,   (3.3)

where q_c is the ionic charge, assuming one component for simplicity. (Here Gaussian units are used.) To keep the system neutral a uniform background is assumed. As noted in Ref. [8], the OZ equation is now equivalent to Maxwell's equations of electrostatics. A similar situation was utilized in Ref. [4] for dipolar interactions. Since ψ is the electrostatic potential from a charge, one has

∇²ψ(r) = −4πq_c² δ(r).   (3.4)

With this, Eq. (3.1) can be rewritten as

∇²h(r, r₀) = 4πβq_c² [δ(r − r₀) + ρ(r) h(r, r₀)],   (3.5)

where r₂ and r₁ have been replaced by r and r₀ respectively. In the present case of parallel plates the number density is ρ(r) = ρ(z), with equal densities ρ = const. on both plates and zero between them. By Fourier transform in the x- and y-directions, Eq. (3.5) becomes Eq. (3.7), where the hat denotes the Fourier transform and, with κ² = 4πβq_c²ρ,

κ_z² = κ² for z < 0; 0 for 0 < z < a; κ² for a < z.   (3.8)

Here κ is the inverse Debye-Hückel shielding length in the media. The solution of Eq. (3.7) can be written in the form (3.9) in terms of the function Φ̂, with

q = |k_⊥|, q_κ = (k_⊥² + κ²)^{1/2}.

(For z < z₀ the solution is the first line of Eq. (3.9) with the sign of the exponent of the first exponential changed.) Requiring Φ̂ and ∂Φ̂/∂z to be continuous, one finds the coefficient of interest D, Eq. (3.10). With this, the pair correlation function for z₀ < 0 and z > a is

ĥ(k_⊥, z, z₀) = −2πβq_c² D e^{−q_κ(z−z₀)}.   (3.11)
IV. CASIMIR FORCE
Besides ĥ, the quantity ψ̂′_z is needed to obtain the Casimir force f. In accordance with Eq. (3.3) the ionic pair interaction is ψ = q_c²/r. Its full Fourier transform is ψ̂ = 4πq_c²/k², which is consistent with Eq. (3.4). With k² = k_⊥² + k_z² this can be transformed back in the z-coordinate to obtain

ψ̂(k_⊥, z) = (2πq_c²/q) e^{−q|z|}.   (4.1)

This is consistent with solution (3.9) for Φ. The derivative of (4.1) with respect to z is, together with expression (3.11), inserted in Eq. (2.6) (with z − z₀ → z₂ − z₁ = u₂ + u₁ + a) to obtain the force (4.2). One can first note that this result is precisely result (3.44) in Ref. [8]; this is seen by some rearrangement of the latter result with the substitutions κ₀ → κ, k → q/κ, and d → a for dimensionality ν = 3. Expression (3.10) and result (4.2) may be simplified further with the new variable of integration q = κ sinh t, dq = κ cosh t dt. With this we have q_κ + q = κ(cosh t + sinh t) = κe^t, q_κ − q = κe^{−t}, and A = e^{−4t}, by which the Casimir force becomes

f = (k_B T κ³/(2π)) ∫₀^∞ [sinh²t cosh t / (e^{g(t)} − 1)] dt,   (4.3)

where g(t) = 4t + 2κa sinh t.
For large separation a only small values of t contribute, and one can put g(t) ≈ (2κa + 4)t and sinh²t cosh t ≈ t².
With this and expansion of the denominator, the force becomes

f = ζ(3) k_B T / [8π (a + 2/κ)³].   (4.4)

Here ζ(3) is the Riemann zeta function, ζ(p) = Σ_{n=1}^∞ 1/nᵖ. As noted earlier [8,9,10], this is the ideal-metal result for high temperatures, where the transverse electric mode is absent. One also sees that for large a the effective separation between the plates is increased by twice the Debye shielding length, i.e., a → a + 2/κ. Thus for semiconductors the influence of free ions vanishes, owing to the increase of the effective separation with decreasing ionic density. The small conductivity of semiconductors has been an issue of some controversy [14], and it has been argued that small concentrations of free ions in semiconductors should be neglected [15]. However, result (4.4) suggests that the lack of influence at small ionic concentration is due to the increased effective separation as κ vanishes.
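The asymptote follows from term-by-term integration after expanding the denominator of the reconstructed Eq. (4.3); a sketch:

% Expand 1/(e^g - 1) = sum_{n >= 1} e^{-n g} with g ~ (2 kappa a + 4) t
% and sinh^2 t cosh t ~ t^2 for small t:
\begin{align*}
  \int_0^\infty \frac{t^2\, dt}{e^{(2\kappa a + 4)t} - 1}
    &= \sum_{n=1}^{\infty}\int_0^\infty t^2 e^{-n(2\kappa a + 4)t}\, dt
     = \sum_{n=1}^{\infty}\frac{2}{n^3 (2\kappa a + 4)^3} \\
    &= \frac{2\,\zeta(3)}{(2\kappa a + 4)^3}
     = \frac{\zeta(3)}{4\,\kappa^3 (a + 2/\kappa)^3}.
\end{align*}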
When the plates are in contact, a = 0, the integral (4.3) can be evaluated explicitly. With 1 − e^{−4t} = 4e^{−2t} sinh t cosh t one finds

f = k_B T κ³/(24π).   (4.5)

For an ionic system at low density this is precisely the contribution to the pressure (with opposite sign) from the ionic interaction (beyond the ideal gas pressure), in accordance with the virial integral (2.8).
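As a numerical cross-check of the reconstructed expressions above (not part of the original paper), the t-integral in Eq. (4.3) can be evaluated by quadrature: at contact it should give 1/12 in the stated units, and at large κa it should approach the asymptote ζ(3)/(4(κa + 2)³):

import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

def force_integral(kappa_a):
    """Dimensionless t-integral of Eq. (4.3): f / (k_B T kappa^3 / (2 pi))."""
    def integrand(t):
        g = 4.0 * t + 2.0 * kappa_a * np.sinh(t)
        # exp(-g) underflows harmlessly to 0 for large t; -expm1(-g) is a
        # numerically stable form of 1 - exp(-g) near t = 0.
        return np.sinh(t) ** 2 * np.cosh(t) * np.exp(-g) / (-np.expm1(-g))
    value, _ = quad(integrand, 0.0, 30.0)
    return value

print(force_integral(0.0), 1.0 / 12.0)        # contact value, cf. Eq. (4.5)
for ka in (5.0, 10.0, 20.0):                   # approach to Eq. (4.4)
    print(ka, force_integral(ka), zeta(3) / (4.0 * (ka + 2.0) ** 3))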
V. ELECTROLYTES IN GENERAL
For higher densities and lower temperatures, the direct correlation function c given by Eq. (3.3) will be modified. The crucial point, however, is that for r → ∞ this expression is still valid, while for small r there will be changes. On the scale of the plate separation this change is a term that can be regarded as a δ-function in r-space, such that c(r) = c₀(r) + τδ(r), where c₀(r) = −βq_c²/r and τ is a constant that depends upon the local density. When the local density varies, the OZ equation (3.1) is modified correspondingly; the only change in the resulting pair correlation function ρhρ is that ρ is replaced by an effective density ρ_e on the right-hand side. In this way only the inverse shielding length is affected, and we get κ² = 4πβρ_e q_c². But for large plate separations the Casimir force (4.4) does not depend upon κ, so the ideal-metal result is generally valid at large separations for any electrolyte.
VI. SUMMARY
The Casimir force between a pair of parallel plates filled with ionic particles has been evaluated in the classical high-temperature limit. To do so, methods of classical statistical mechanics have been used: the pair correlation function is evaluated, from which the average force between pairs of particles in different plates is found. When the plates are at contact, the magnitude of the force equals the contribution to the pressure from the virial theorem; this makes the force consistent with the bulk pressure. The force found is the same as the one found earlier in Ref. [8] for charged particles at low density, where the force was evaluated on the basis of the difference between surface and bulk densities. By the present approach it thus follows that this difference in densities can be neglected to leading order.
Erudition after neonatal gastric transposition for esophageal atresia at 10 years of follow-up
Results: Four children (three male and one female) were included in the study. The mean age at ER was 5.3 ± 2.2 days, with a mean birth weight of 2.43 ± 0.13 kg. Two children had primary GT, while the other two had GT following a leak in the primary anastomosis. During the mean follow-up of 180.25 ± 43.5 months, none of the children required esophageal dilatation or any other surgical intervention or procedure. All children were below the 3rd centile for weight-for-age, while all except one were below the 50th centile for height-for-age. There was no stricture on oral contrast study; however, one child had grade III reflux on the GER scan. Persistent duodenogastric reflux on HIDA scan was seen in one child. Three children had restrictive parameters on spirometry. Symptomatically, all reported poor weight gain, one had left vocal cord palsy with hoarseness, and one had a chest-wall protuberance.
INTRODUCTION
The management of esophageal atresia has always been a challenge. The survival of these babies depends on birth weight, early diagnosis, associated anomalies, pre-operative stabilization, and postoperative care.
The contemporary approach is to preserve the native esophagus by attempting a primary anastomosis, even under moderate to severe tension. [1] However, when primary anastomosis is not possible, either a serial lengthening procedure followed by delayed primary repair is chosen, or esophageal diversion with subsequent esophageal replacement (ER) at a later date is considered. Various options are available for ER, including gastric transposition (GT), gastric tube, colonic interposition, and ileal or jejunal microanastomosis. [2,3] However, attempts at neonatal esophageal replacement are sparsely reported. [4,5] We report the characteristics of children who underwent GT in the neonatal age at our center and have a minimum follow-up of ten years. With this study, the authors aim to emphasize the long-term results of neonatal ER and thus define the safety and feasibility of neonatal ER.
METHODS
All children who had undergone neonatal ER by a single senior surgeon (DKG) for esophageal atresia, with or without tracheo-esophageal fistula, and had completed a minimum of ten years of follow-up were included. They were enrolled from the Pediatric Surgery clinic at the All India Institute of Medical Sciences, New Delhi, India, from 01 July 2018 to 30 June 2019, after ethical approval by the Institutional Review Board. Informed consent was obtained from all parents and assent from all participants. Refusal to participate in the study or failure to visit the clinic for follow-up were taken as exclusion criteria.
The records of all participants were accessed for documentation of their baseline characteristics, and the children underwent follow-up assessments during the study period. Anthropometric assessment, including height in centimeters, weight in kilograms, and body mass index (BMI) in kg/m², was done. IAP (Indian Academy of Pediatrics) growth charts were used as a reference, and values between the 3rd and 97th centiles were considered normal. [6] Oral contrast swallow was done using diluted diatrizoate meglumine (1:1) to evaluate the characteristics of the conduit, including narrowing or stricture, hold-up of contrast, and distal drainage. Hepatobiliary scintigraphy (HIDA) with 99m-Tc mebrofenin was used to assess duodenogastric reflux (DGR), while gastroesophageal reflux (GER) and gastric emptying test (GET) studies with 99m-Tc sulphur colloid nuclear tracers were done to assess GER and gastric emptying, respectively. Pulmonary function testing was performed using a PC spirometer, and forced vital capacity (FVC) and forced expiratory volume in 1 second (FEV1) were documented. A blood sample (approximately 5 ml) was drawn to assess hemoglobin, total protein, albumin, ferritin, transferrin, serum folate, and vitamin B12 levels. The Functional Oral Intake Scale (FOIS) by Crary et al was utilized to assess eating habits and dysphagia. [7] Descriptive statistics were applied in the study. Data entry was done using Microsoft Excel version 16.50, and statistical analysis was done using SPSS 24 (SPSS, Inc., Chicago, Illinois, USA). Data are expressed as mean ± standard deviation and median (with interquartile range, IQR1 and IQR3) values.
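For illustration only (the study used SPSS), here is a minimal Python sketch of the descriptive statistics reported in this paper, mean ± SD and median with IQR, using hypothetical values:

import pandas as pd

# Hypothetical values for the four children (illustrative only).
df = pd.DataFrame({
    "age_years": [15.0, 14.5, 16.1, 15.4],
    "weight_kg": [24.0, 25.0, 26.5, 25.0],
    "height_cm": [149.0, 151.0, 154.0, 151.0],
})

for col in df.columns:
    mean, sd = df[col].mean(), df[col].std()       # mean +/- SD
    median = df[col].median()
    q1, q3 = df[col].quantile([0.25, 0.75])        # IQR1, IQR3
    print(f"{col}: {mean:.1f} +/- {sd:.1f}; "
          f"median {median:.1f} (IQR {q1:.1f}-{q3:.1f})")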
Baseline characteristics:
Four children (male:female 3:1), with a mean birth weight of 2.43 ± 0.13 kg, underwent ER with gastric transposition (GT) in the neonatal period at a mean age of 5.3 ± 2.2 days (Table 1). One of them had associated patent ductus arteriosus (PDA) and pyloric atresia, while another had associated PDA only. Two children, one with pure EA (Gross type A) and the other with EA-TEF (Gross type C), had undergone primary GT. The other two children with EA-TEF (Gross type C) had undergone primary anastomosis and had a major leak in the mediastinum, following which ER with GT was done on postoperative days 3 and 4, respectively. The trans-hiatal route was used in three children, while the retrosternal route was chosen in one. One child had a cervical anastomotic leak in the postoperative period and required prolonged mechanical ventilation and diagnostic bronchoscopy with bronchial lavage in the post-replacement period. None of the children required esophageal dilatation in the postoperative period.
Follow-up characteristics:
The mean duration of follow-up in our series was 180.25 ± 43.5 months following ER. No other surgical procedure or intervention was required for any child during the follow-up period. The following characteristics were observed during follow-up:

a) Anthropometry: The median height was 151 cm and the median weight was 25 kg at a median age of 15.2 years (Table 2). All the children were below the 3rd centile for weight-for-age (Fig. 1). One child (25%) had height-for-age above the 50th centile, while the other three were below the 50th centile but above the 3rd centile for height-for-age. All had BMI below the 3rd centile for age.

b) Oral contrast swallow study: A healthy conduit was noted in all. There was no stricture or narrowing in any child. However, two children showed hold-up of contrast in the intrathoracic stomach with delayed complete clearance from the transposed stomach (Fig. 2).

c) GER study: There was prolonged accumulation of tracer in the intrathoracic stomach in two of the children. One other child had grade III GER.

d) HIDA scan: One child demonstrated persistent DGR on sequential HIDA scans. However, another child who had shown DGR on a previous scan had resolution of DGR on the latest follow-up scan.

e) Gastric emptying time study: All the children had normal gastric emptying on follow-up, with a mean percentage of emptying of 93.5 ± 6.8% in a mean half-time of 28.7 ± 13.1 minutes.

f) Pulmonary function test: Three children (75%) revealed a restrictive pattern on spirometry. However, none of them had any grade of dyspnea, nor required bronchodilators.

g) Nutritional assessment: Hemoglobin, total protein, albumin, serum folate, and vitamin B12 were normal for all four children (Table 3). However, the mean serum ferritin level of the series was 14.0 ± 10.8 ng/ml, with two children having serum ferritin below 10 ng/ml.

h) Symptoms at the most recent follow-up visit: Parents of all children reported poor weight gain in comparison with their siblings. Three children had no dysphagia or dietary restrictions and were thus rated FOIS level 7, while one patient occasionally complained of difficulty in swallowing food of hard consistency and preferred soft-consistency food (FOIS level 5). By academic recall, two children were good in academics, while the other two were average in their studies. One child had hoarseness, recurrent cough, and coryza due to left vocal cord palsy; however, overall activity was good and this child was excelling in sports. Another child, with a chest wall bony protuberance following surgery, had body-image concerns.
DISCUSSION
Neonates with long-gap esophageal atresia (with or without fistula) are managed with common procedures such as primary repair, diversion, or delayed primary repair with or without lengthening procedures. [8,9] However, in the setting of a major leak following primary anastomosis, refractory stricture formation, or recurrent tracheoesophageal fistula, a diversion procedure with subsequent ER is the accepted strategy. [1] Alternatively, these neonates can also be considered for neonatal ER, provided the leak is detected early and the general condition (especially the lung condition) of the child is favorable for major reconstructive surgery.
In our case series, all the children underwent gastric transposition in the neonatal period. GT was initially described by Lewis Spitz. [10] The favorable points for neonatal GT are: [11] 1. Due to the good vascularity and musculature of the stomach, the fundus can be mobilized to the cervical region with ease.
2. This procedure utilizes a single anastomosis, i.e., the cervical esophagogastric anastomosis, thus reducing the chances of leak from long suture lines and multiple anastomoses.
3. The non-distended stomach in esophageal atresia can substitute for the esophagus as a conduit of similar caliber, with fewer long-term respiratory problems from a distended intrathoracic stomach.
4. This procedure can be done by laparotomy and cervical exploration; thoracotomy and its complications are thus avoided.
Primary ER has been done in esophageal atresia repair, with neonatal GT being the preferred method at various centers. [4,5,12,13] It is favored in cases of long-gap esophageal atresia and pure esophageal atresia. [14] A healthy newborn with a favorable chest condition, along with a well-experienced surgeon and the availability of an advanced neonatal intensive care system, remain the prerequisites for the success of the procedure. [15,16] Long-term survivors of esophageal atresia face numerous challenges of the aerodigestive system. In our previous experience with GT, we have shown satisfactory long-term outcomes in children. [17] Gastroesophageal reflux (GER) is a common postoperative complication in patients with esophageal atresia, ranging from 27% to 75%, while with gastric transposition the incidence of reflux reaches as high as 100%. [18,19] The contributory factors include the innate condition of the esophagus, i.e., its innate dysmotility, damage to the vagus nerve, and anatomical changes due to surgical repair. Dysphagia has been documented in up to around half of patients with esophageal atresia and those who underwent gastric transposition. [18,20] Swallowing problems and dysphagia are common post-ER complications, often attributed to anastomotic stricture or GER. A meta-analysis documented a 17.7% incidence of anastomotic stricture in gastric transposition. [21] While significant reflux was noted in one patient, none of the patients had dysphagia or anastomotic stricture in our series.
Abnormalities of gastric motility (delayed gastric emptying) are a known delayed complication of GT. [22] To address this concern, pyloromyotomy or pyloroplasty is frequently performed as a drainage procedure for efficient gastric emptying. [3,23] All our children with GT underwent a simultaneous drainage procedure with pyloromyotomy or pyloroplasty, as the operating surgeon is a proponent of providing drainage procedures in GT because of the small lumen of the pylorus in children. This additional step also helped in the simultaneous management of the patient with pyloric atresia. In our series, while gastric emptying was normal for all the children, duodenogastric reflux (DGR), a known complication of GT attributed to pyloroplasty or pyloromyotomy, was seen in one child. Other gastrointestinal complications associated with GT, i.e., dumping or rapid gastric emptying, were not seen in any of the patients in our study.
Persistent pulmonary dysfunction in patients with esophageal atresia is thought to be secondary to airway and lung damage from repeated respiratory tract infections and GER. [24,25] Respiratory morbidity is well documented with all procedures of ER, with a higher incidence in children who had undergone GT (24.6%) compared with other methods of ER. [20] This can be attributed to the transposition of the stomach into the mid-thorax and straightening of the gastroesophageal junction, thus promoting reflux. While these children may develop chronic pulmonary disease, lung function remains normal in 23% to 48% of them, and overall exercise tolerance is normal, with little or no limitation, in most. [24] Growth restriction and underweight are seen in patients with esophageal atresia beyond childhood. While weight averages around the 25th percentile for age, these patients do catch up in height to the normal distribution for age. [22] Similar growth patterns were observed during the long follow-up period in our series. Anemia and low iron stores are documented with GT. [26] However, all patients in our series maintained normal hemoglobin levels, while two of them had low ferritin levels. Very few reports document post-ER laryngeal nerve injury. [22,27] We had one case of laryngeal nerve injury following GT. Around 10% of patients with esophageal atresia have associated congenital skeletal anomalies, while, following thoracotomy, chest wall deformity and vertebral abnormalities may vary from 14% to 47%. [24] In our series, one child had rib prominences, but none had spinal deformities.
Patient selection is paramount when attempting ER in neonates. The chest condition and cardiac anomalies should be well investigated, and their stabilization should precede neonatal ER. Neonates with a good chest condition in whom an early primary anastomotic leak has been detected can also be candidates for neonatal ER. [16] Neonatal ER is a feasible option, the success of which depends on the expertise of the surgeon and the availability of neonatal care. There is a dearth of literature on the long-term outcomes of neonatal ER. However, the overall long-term functional outcome of GT remains good to excellent, with minimal complications and unimpaired long-term quality of life. [28] Given the assessment over a 10-year follow-up period, the small sample size of the study remains its limitation. A multi-center trial with a larger number of children needs to be conducted before definite conclusions are drawn.
With our study, we demonstrated satisfactory 10-year long-term outcomes with neonatal GT, and we thus contemplate and promote the implementation of neonatal ER. In a developing country like ours, where patients are often lost to follow-up, this single-stage procedure, with an outcome comparable to that of staged repair in long-gap esophageal atresia, can serve as a boon.
AlphaFold2 models of the active form of all 437 catalytically competent human protein kinase domains
Humans have 437 catalytically competent protein kinase domains with the typical kinase fold, similar to the structure of Protein Kinase A (PKA). Only 155 of these kinases are in the Protein Data Bank in their active form. The active form of a kinase must satisfy requirements for binding ATP, magnesium, and substrate. From structural bioinformatics analysis of 40 unique substrate-bound kinases, we derived several criteria for the active form of protein kinases. We include requirements on the DFG motif of the activation loop but also on the positions of the N-terminal and C-terminal segments of the activation loop that must be placed appropriately to bind substrate. Because the active form of catalytic kinases is needed for understanding substrate specificity and the effects of mutations on catalytic activity in cancer and other diseases, we used AlphaFold2 to produce models of all 437 human protein kinases in the active form. This was accomplished with templates in the active form from the PDB and shallow multiple sequence alignments of orthologs and close homologs of the query protein. We selected models for each kinase based on the pLDDT scores of the activation loop residues, demonstrating that the highest scoring models have the lowest or close to the lowest RMSD to 22 non-redundant substrate-bound structures in the PDB. A larger benchmark of all 130 active kinase structures with complete activation loops in the PDB shows that 80% of the highest-scoring AlphaFold2 models have RMSD < 1.0 Å and 90% have RMSD < 2.0 Å over the activation loop backbone atoms. Models for all 437 catalytic kinases are available at http://dunbrack.fccc.edu/kincore/activemodels. We believe they may be useful for interpreting mutations leading to constitutive catalytic activity in cancer as well as for templates for modeling substrate and inhibitor binding for molecules which bind to the active state.
INTRODUCTION
Protein kinases regulate most cellular processes in eukaryotes. In humans, their dysregulation is often involved in disease, and they are therefore frequent targets in drug development, especially in cancer (Cohen, Cross et al. 2021). A large majority of human protein kinases take on a common fold first determined by Susan Taylor and colleagues in 1991 (Knighton, Zheng et al. 1991), consisting of an N-terminal domain of five beta strands and the C-helix, and a largely helical C-terminal domain. The residues involved in catalytic activity are contained in the catalytic and activation loops that form a pocket for ATP binding and a groove for substrate binding between the N- and C-terminal domains. Humans have 481 genes which contain at least one typical full-length protein kinase domain; 13 of these have two kinase domains, for a total of 494 kinase domains [NB: since that paper was published, three kinases have been determined to be pseudogenes]. Of these, 437 are likely catalytic kinases, participating in phosphorylation of Ser, Thr, or Tyr residues on proteins, and 57 are likely pseudokinases. Currently in the PDB there are structures for 292 human typical kinase domains, of which 268 are catalytic kinases and 24 are pseudokinases (Modi and Dunbrack 2022).
Active and inactive conformations of typical kinases have been classified in several ways (Jacobs, Caron et al. 2008, Hari, Merritt et al. 2013, Ung, Rahman et al. 2018, Kanev, de Graaf et al. 2021). The active form is generally very similar across kinases because of the requirements of binding ATP, magnesium ions, and substrate, which impose constraints on the conformation of the activation loop (Johnson, Noble et al. 1996). Early in the history of structure determination of kinases, a classification of structures into "DFGin" and "DFGout" was described (Levinson, Kuchment et al. 2006). In DFGin structures, the Asp side chain of the DFG motif is "in" the ATP binding site and the Phe side chain of the DFG motif is in a pocket under or adjacent to the C-helix of the N-terminal domain. In DFGout structures, the Asp side chain is "out" of the active site and the Phe side chain is removed from the C-helix pocket, allowing for the binding of Type 2 inhibitors such as imatinib that span both the ATP site and the C-helix pocket (Schindler, Bornmann et al. 2000).
There are, however, additional requirements for substrate binding and kinase activity (Johnson, Noble et al. 1996, Lowe, Noble et al. 1997). We previously used the presence of bound ATP, magnesium ion, and a phosphorylated activation loop to identify a set of 24 "catalytically primed" structures of 12 different kinases in the PDB. We found that in addition to being "DFGin," these structures possess specific backbone and side-chain dihedral angles for the DFG motif ("BLAminus"), including the backbone dihedral angles of the residue immediately preceding DFG and the side-chain χ1 dihedral angle of the DFG-Phe residue. They also possess a well-characterized salt bridge between a conserved glutamic acid residue in the C-helix and a conserved lysine residue in beta strand 3 of the N-terminal domain (Yang, Wu et al. 2012). These structures are often referred to as "C-helix-in." Using these criteria for active kinases, only 183 of 437 catalytic typical human kinase domains are currently represented in the PDB with active structures. Additional criteria on the positions of the N-terminal and C-terminal segments of the activation loop (see Results) reduce this number to 155 kinases, or 35%. Only 130 of 437 catalytic human kinases (30%) possess active structures with complete coordinates for the activation loop.
The program AlphaFold2 from DeepMind is a deep-learning program for highly accurate protein structure prediction and is trained on a large number of structures from the PDB (Jumper, Evans et al. 2021). It uses as input the query sequence, a multiple sequence alignment (MSA) of homologs of the query, and optionally template structures related to the query. DeepMind has provided models of nearly all human proteins produced by AlphaFold2, which are available on a website provided by the European Bioinformatics Institute (Varadi, Anyango et al. 2022). However, only 209 of the 437 (48%) catalytic human protein kinases have a fully active model in the EBI data set.
Because of the importance of knowing the active-state structures of kinases for understanding such features as substrate recognition, the effects of activating mutations in cancer, and drug development, in this paper we describe a pipeline for producing active models of typical protein kinases using the program AlphaFold2. Several groups have found that using MSAs of reduced depth and templates in specific conformational states coerces AF2 into producing conformationally variable models, including some models in the conformational state of the templates (Del Alamo, Sala et al. 2022, Heo and Feig 2022). We use similar techniques to compute predicted structures of active kinases.
A key aspect of this work is that we utilize structural bioinformatics of 40 non-redundant substrate-kinase complexes from the PDB to define strict criteria for identifying catalytically active protein kinase structures, including both experimental structures and models predicted by AlphaFold2. We impose criteria on the position of the Phe residue and the dihedral angles of the DFG motif, on the formation of the N-terminal domain salt bridge (in kinases that possess the appropriate residues), and on the positions of the N- and C-terminal halves of the activation loop necessary for the formation of a substrate binding cleft.
In addition to reduced MSAs from various sources and active templates from the PDB, we use catalytically active models of kinases produced by AF2 as additional templates for kinases which are more recalcitrant in producing active models and for additional sampling for all kinases. We refer to these as "distillation templates" in analogy with predicted structures that AF2 was trained on ("the distillation training set" (Jumper, Evans et al. 2021)).
We benchmark our protocol with 22 substrate-bound kinase structures in the PDB with complete activation loops and a set of 130 kinase structures from the PDB with complete activation loops that satisfy our active criteria. We show that the pLDDT scores for the activation loop are inversely correlated with the RMSD of the activation loop for well-characterized kinases. With these methods, we have produced active models of all 437 catalytic human protein kinase domains and made these models available at http://dunbrack.fccc.edu/kincore/activemodels.
Catalytic protein kinases
To make catalytically active models of all human kinases with typical protein kinase domains, we need to distinguish between catalytic protein kinase domains and non-catalytic protein kinase domains, or pseudokinases. Catalytic protein kinase domains are those able to phosphorylate proteins on Ser, Thr, or Tyr residues. Pseudokinases are domains that possess the typical protein kinase fold but lack protein kinase activity, although they may have other catalytic activity (e.g., PAN3, POMK). We previously published an alignment of all 497 human kinase domains from 484 genes annotated by UniProt. This list excludes atypical kinases, such as ADCK, PI3/PI4, Alpha, FAST, and RIO kinases (https://www.uniprot.org/docs/pkinfam.txt). Since that time, three kinase genes have been identified as likely pseudogenes (SIK1B, PDPK2P, and PRKY) (Frankish, Carbonell-Sala et al. 2023), leaving us with 481 genes and 494 domains.
On our Kincore website (http://dunbrack.fccc.edu/kincore) and in the text that follows, we use the family name as a prefix in front of the HUGO gene name (e.g., TYR_EGFR) (Seal, Braschi et al. 2023).
The characteristics of active protein kinase domains
To identify structural features of the active form of catalytic protein kinases, we created two sets of structures that constitute likely catalytically active structures. The first consists of structures in the Protein Data Bank of kinases with peptide or protein substrates bound at the active site (Table 1). The second consists of 391 structures of catalytic protein kinases (comprising 74 different kinases) with bound ATP, ADP, or an ATP analog that are also in the DFGin "BLAminus" conformational state of the DFG motif that we found characteristic of "catalytically primed" kinase structures. The example shown in Figure 1 is an active form of human AKT1 bound to a substrate, PDB: 4ekk (Lin, Lin et al. 2012). To determine what features are important to catalytic activity, we compared the structures in these data sets to all available structures of kinases in the PDB (without ATP and/or not in the DFGin-BLAminus state).

Table 1 presents a list of unique kinase-substrate and kinase-pseudosubstrate complexes in the PDB and some structural parameters that will be considered below. In Table 1, autophosphorylation complexes are marked with "A" in column 4, and the phosphorylation site is in bold red type in the sequence in column 7. In other columns, outliers are shown in red (non-ATP structures, non-BLAminus structures, longer distances for some parameters). The absence of ATP is correlated with a broken salt bridge. DFG6 is the shorter backbone-backbone hydrogen bond distance between the sixth residue of the activation loop (DFGxxX) and the residue before the HRD motif (Xhrd). APE9 is the distance between the Cα atom of the 9th residue from the end of the activation loop (XxxxxxAPE) and the backbone carbonyl oxygen of the Arg residue of the HRD motif. The Max Spine distance is the largest of the three spine distances of the regulatory spine (defined below); each spine distance is the closest atom-atom distance of any pair of side-chain atoms in two neighboring spine residues.

Some of the "substrates" are in fact substrate-mimicking inhibitors, which bind similarly to substrates. Some kinases are represented more than once if they contain different bound substrates in the active site. Eleven of the 40 complexes are "autophosphorylation complexes," which we previously identified as homodimeric complexes in crystals of protein kinases in which a known autophosphorylation site of one monomer sits in the active site and substrate-binding groove of another monomer in the crystal (Xu, Malecka et al. 2015). These include autophosphorylation complexes of sites in the activation loop (STE_PAK1, 4zy4; TYR_IGF1R, 3lvp) and the kinase insert loop (TYR_FGFR1, 3gqi; TYR_FGFR3, 4k33). The remainder are N- or C-terminal tails (CAMK_CAMKII, 3kk9; CMGC_CLK2, 3nr9; TYR_CSF1R, 3lcd; TYR_KIT, 1pkg; TYR_EPHA2, 4pdo; TYR_FGFR2, 3cly).
Three other complexes are with larger proteins which are either direct substrates or inhibitors or both (AGC_PRKACA:KAP2, 2qvs; AGC_PRKACA:KAP3, 3idb; TKL_BAK1:HPAB2, 3tl8). The last of these is a plant kinase/pathogen-inhibitor complex (Cheng, Munkvold et al. 2011). The autophosphorylation complexes (marked with "A" in column 4 of Table 1) and inhibitor protein complexes provide insight into how kinases phosphorylate amino acids in the context of folded protein domains, as opposed to intrinsically disordered regions (IDRs) (Xu, Malecka et al. 2015).
We previously identified several criteria for active structures of catalytic protein kinase domains in the PDB (Modi and Dunbrack 2019):

1) the spatial label must be DFGin;
2) the dihedral label must be BLAminus; this indicates that the X, D, and F residues of the XDFG motif are in the "B", "L", and "A" regions of the Ramachandran map, respectively, and the χ1 rotamer of the Phe side chain is g- (~ -60°);
3) there must be a salt bridge between the C-helix glutamic acid side chain and the beta strand 3 lysine side chain (the WNK kinases are an exception to this rule).

In this paper, we validate these criteria and extend them to include:

4) the activation loop must be "extended," as determined by the presence of a backbone-backbone hydrogen bond between the sixth residue of the activation loop (X in DFGxxX) and the residue before the HRD motif (X in XHRD);
5) the C-terminal segment of the activation loop must be positioned for binding a substrate, as determined by the position of the residue 9 positions from the end of the activation loop.

We also consider the presence of the regulatory spine (see below). We review each of these in turn.
DFGin conformation
The position of the DFG-Phe residue determines, in part, the position of the catalytic DFG-Asp residue. We defined DFGin by the distances between the DFG-Phe Cζ atom and the Cα atoms of two residues in the N-terminal domain: the Lys residue of the N-terminal domain salt bridge in the β3 strand, and the "Glu4" residue in the C-helix (Figure 1), which is the residue four positions after the Glu residue of the salt bridge. Based on these distances, structures are labeled as follows: DFGin, where the DFG-Phe residue is near the C-helix Glu4 residue but far from the Lys residue; DFGout, where the Phe residue is far from the C-helix Glu4 residue and close to the Lys residue; and DFGinter, where the Phe residue is not far from either the Glu4 or Lys residues. These distances are plotted for ATP- and non-ATP-bound structures in Figure 2. The vast majority of ATP-bound structures (defined as having ligands with PDB 3-letter codes ATP, ADP, ANP, or ACP in the active site) are DFGin, with a Lys(Cα)-Phe(Cζ) distance > 11 Å and a Glu4(Cα)-Phe(Cζ) distance < 11 Å. All of the substrate-bound structures listed in Table 1 are DFGin (required for the BLAminus and ABAminus conformations of the XDF motif).
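To make the two-distance rule concrete, here is a minimal sketch (not the authors' code; the 11 Å cutoffs follow the description of ATP-bound DFGin structures above, and the DFGout/DFGinter branches are simplified assumptions):

import numpy as np

def classify_dfg_spatial(lys_ca, glu4_ca, phe_cz):
    """Spatial DFG label from the beta-3 Lys(Calpha) and C-helix Glu4(Calpha)
    distances to the DFG-Phe Czeta atom, as described in the text."""
    d_lys = np.linalg.norm(np.asarray(lys_ca) - np.asarray(phe_cz))
    d_glu4 = np.linalg.norm(np.asarray(glu4_ca) - np.asarray(phe_cz))
    if d_lys > 11.0 and d_glu4 < 11.0:
        return "DFGin"       # Phe under/adjacent to the C-helix
    if d_lys <= 11.0 and d_glu4 >= 11.0:
        return "DFGout"      # Phe swapped toward the beta-3 Lys
    return "DFGinter"        # neither placement clearly satisfied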
BLAminus conformation and Salt bridge formation
The conformation of the XDFG motif and the formation of the salt bridge in the N-terminal domain work together to form an active site capable of binding ATP and magnesium ions for the phosphorylation reaction. These interactions are shown in Figure 1, where the Asp of the DFG motif interacts with the active-site magnesium ions, which chelate ATP. The carbonyl oxygen of the residue before the DFG motif (X of XDFG, T291) forms a hydrogen bond with the Tyr residue of the YRD motif (usually HRD, but Tyr in AKT1). This hydrogen bond helps position the catalytic aspartic acid residue of the Y/HRD motif, which interacts with the Ser or Thr hydroxyl atoms of substrate residues to be phosphorylated. The BLAminus conformation is required for these interactions. ABAminus structures involve a "peptide flip" of the X-D residues (Hayward 2001), such that the carbonyl of the X residue points in the opposite direction.

While 69% of ATP-bound and 65% of non-ATP-bound catalytic kinase structures are in the BLAminus conformation, the role of the BLAminus configuration becomes clearer when combining it with the formation of the N-terminal domain salt bridge. In Figure 4A, the density of distances of the salt bridge atom pairs (Nζ of the β3 Lys residue with Oε1 or Oε2 of the C-helix Glu residue, whichever is shorter) is plotted for BLAminus and non-BLAminus structures with and without ATP. When BLAminus structures are bound with ATP, the salt bridge is strongly favored, with a mean distance of about 3.0 Å (upper left of Figure 4A). However, ATP-bound structures that are not in the BLAminus state have a broken salt bridge, with most structures having a Lys/Glu distance greater than 10 Å (lower left panel of Figure 4A). Even in the absence of ATP, the BLAminus conformation encourages the formation of the salt bridge (upper right vs. lower right panels of Figure 4A).
Conversely, if we require salt bridge formation ("SaltBr-In") with a cutoff of 3.6 Å for the Nζ/Oε distance, 99% of ATP-bound structures are in the BLAminus conformation. When the salt bridge is not formed ("SaltBr-Out"), only 19% of the structures are BLAminus (Figure 4B).
ActLoopNT
We examined the substrate-bound structures listed in Table 1 for further characteristics of the activation loop structure that may be required for binding substrates, by determining contacts of the substrate with residues in the activation loop. These residues must be in the appropriate position to form a substrate binding groove. Examples from four families are shown in Figure 5, with the activation loops in magenta, phosphorylated residues in the activation loop in pink, ATP (or analogs) in green sticks, and the substrates in blue.
Besides the conformation of the DFG motif (the Phe/Tyr residue of DFG is shown in orange sticks), two other features are evident in the substrate-bound structures. The first is that the first few residues of the activation loop, up to at least the sixth residue (yellow in each figure), have similar conformations and positions across the members of each family. The second is that the C-terminal segment of the activation loop, up to at least 9 residues from the end of the activation loop, also shares a common conformation and position across family members. This segment is referred to as the "P+1 loop," since it binds the side chain of the substrate residue immediately after the phosphorylation site (Lowe, Noble et al. 1997, Kornev and Taylor). In Figure 5, residues 8 and 9 from the end of the activation loop are shown in cyan. In Ser/Thr kinases, the conformation of residues 8-11 from the end of the activation loop resembles the hull of an upside-down, round-bottom boat. Residues 8-9 in TYR kinases are also in a common position, although the structure diverges at residues 10 and 11 more than in the Ser/Thr kinase members. In TYR kinases, the substrate binds directly to these residues in the form of a short beta sheet (blue lines in Figure 5, lower right); the conformation in TYR kinases may diverge to accommodate substrates with larger or smaller side chains. In other kinase families, the substrate binds to a groove between the N- and C-terminal segments of the activation loop.
To investigate potential requirements for substrate binding, we determined which residues within the activation loop form direct contacts with substrate residues (any atom-atom contact within 5 Å between substrate residues and the DFG…APE sequence). The results are shown in Table 2. Most substrates have a contact with one or more of the DFG residues as well as the fourth residue of the activation loop, while a small number have contacts with residues 5 and 6. By looking at the structures, we identified backbone-backbone hydrogen bonds between residue 6 of the activation loop (DFGxxX) and the residue immediately preceding the HRD motif (XHRD) (Figure 6A). This backbone-backbone hydrogen bond is contained in two very short anti-parallel beta strands (3 residues each), labeled beta strand 6 (comprising the three residues preceding the HRD motif) and beta strand 9 (comprising residues 6-8 of the activation loop) in protein kinase structures. We used this hydrogen bond previously to characterize active structures in the PDB. This hydrogen bond is present in all of the substrate-bound structures in Table 1. Even without ATP, 97% of BLAminus structures contain the DFG6/XHRD hydrogen bond (Figure 6B, upper right). In the non-BLAminus state, only 24% of structures contain this hydrogen bond (Figure 6B, bottom panels).
Table 2. Contacts between activation loop residues and substrate
For each kinase, contacts between substrate and activation loop residues (≤ 5 Å) are marked with an "X". A contact with any residue of the DFG motif is listed under "DFG." Residues 4, 5, and 6 of the activation loop are in the adjacent columns. Contacts for the C-terminal region of the activation loop are to the right of the shaded area, starting with residues 15, 14, 13, …, from the end of the activation loop, which typically ends with the sequence motif "APE" (sometimes "SPE" or "PPE").
ActLoopCT
The conformation of the C-terminal end of the activation loop is critical for binding substrate. Most substrate-bound structures in Table 2 contain contacts between the substrate and residues 4-11 from the end of the activation loop, which ends in the sequence motif "APE." From examination of the substrate-bound structures, we identified a contact that is consistent with substrate binding and which is absent in structures that likely block substrate binding: a contact (or near contact) between the APE9 Cα atom and the backbone carbonyl oxygen of the Arg residue of the HRD motif. This contact is shown for 23 non-TYR kinase structures from Table 2 in Figure 7A. The Cα-O distance is ≤ 4.2 Å in all of these structures.
Aurora A kinase (AURKA) is a good example of the utility of these contacts. In the BLAminus state, there are two dominant conformations of the entire activation loop of AURKA. Figure 7B (left panel) shows five structures that contain these contacts. The full set comprises seven structures of AURKA: five with TPX2 (PDB: 1ol5, 3e5a, 3ha6, 5lxm, 6vpg) and two very similar structures bound with MYCN (PDB: 5g1x, 7ztl; not shown). Both proteins are known to activate AURKA by binding to the N-terminal domain and the tip of the activation loop (Bayliss, Sardon et al. 2003, Richards, Burgess et al. 2016). Most BLAminus structures of AURKA, however, resemble the structures shown in Figure 7B (right panel). In these structures, the C-terminal end of the activation loop (APE6-APE10) deviates significantly from the TPX2- and MYCN-bound structures and from the structures of substrate-bound kinases in the AGC and CAMK families. In the active structures, the Cα-O distances are about 3.6 Å, while in the inactive structures the distance is more than 10 Å.
In Table 1, the APE9(Cα)-hRd(O) distance ranges from 3.4 to 4.2 Å in the substrate complexes of the Ser/Thr kinases (all families except TYR). This suggests that the Cα-O interaction is a CH-O hydrogen bond, a type of interaction that has been observed in proteins (Derewenda, Lee et al. 1995). In 271 of 355 non-TYR catalytic kinases, the APE9 residue is a glycine, which forms Cα-O hydrogen bonds more readily than other amino acids, likely for steric reasons. In the substrate-bound TYR family kinases in Table 2, the APE9(Cα)-hRd(O) distance is longer, ranging from 6.5 to 7.4 Å.
We examined the distributions of this distance in ATP-bound and non-ATP structures in the BLAminus and other conformational states (Figure 8). For non-TYR kinases, the APE9-hRd distance is typically less than 6 Å in BLAminus/ATP-bound structures (Figure 8A, upper left panel), while the distance is much greater than 6 Å in a majority of non-BLAminus structures (Figure 8A, lower panels). As with the substrate-bound structures, this distance is somewhat longer for BLAminus/ATP-bound structures of TYR kinases than for non-TYR kinases, ranging from 5 to 8 Å (Figure 8B, upper left panel). The large peak at 5 Å consists entirely of structures of FGFR2; for other TYR kinases, the distance is typically between 6 and 8 Å. One third of non-BLAminus, non-ATP-bound TYR kinase structures have an APE9-hRd distance greater than 8 Å (Figure 8B, lower right panel).
Regulatory spine
Finally, we evaluated the utility of the regulatory spine for identifying active structures. The regulatory spine consists of four amino acids: 1) the His residue of the HRD motif (His in 393 catalytic kinases; Tyr in 38 AGC kinases and in CK1_CSNK1G1/2/3, OTHER_SBK2, and TKL_LRRK2; Leu in OTHER_PKDCC; Phe in TKL_LRRK1); 2) the Phe residue of the DFG motif; 3) the Glu4 residue of the C-helix; and 4) the HPN7 residue.
These four residues define three distances: Spine1 (HRD-His/DFG-Phe), Spine2 (DFG-Phe/Glu4), and Spine3 (Glu4/HPN7). When the residues are small or polar, there may not be a contact between the side chains, and such a contact may not be necessary for constructing an active kinase structure. When we apply the five active criteria described in the previous sections (DFGin, BLAminus, SaltBr-in, ActLoopNT-in, ActLoopCT-in), there are 3013 human catalytic kinase chains in the PDB that are "Active." If we define a broken spine as a structure with one or more spine residue pairs at a distance greater than 6 Å, there are only 14 structures in this set with an unformed regulatory spine, all of them PAK4 with a twisted end of the C-helix. There are only 11 active structures with a spine distance between 5 and 6 Å. For the sake of simplicity, we therefore do not use the regulatory spine as a criterion for active structures.
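Here is a sketch of the spine-distance computation described above (the side-chain coordinate arrays are assumed inputs; the 6 Å broken-spine threshold follows the text):

import numpy as np

def min_sidechain_distance(res_a, res_b):
    """Closest atom-atom distance between side-chain atoms of two residues.
    res_a, res_b: numpy arrays of side-chain atom coordinates, shape (n, 3)."""
    diffs = res_a[:, None, :] - res_b[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

def spine_is_broken(hrd_his, dfg_phe, glu4, hpn7, cutoff=6.0):
    """True if any of Spine1 (HRD-His/DFG-Phe), Spine2 (DFG-Phe/Glu4),
    or Spine3 (Glu4/HPN7) exceeds the cutoff distance."""
    distances = [
        min_sidechain_distance(hrd_his, dfg_phe),  # Spine1
        min_sidechain_distance(dfg_phe, glu4),     # Spine2
        min_sidechain_distance(glu4, hpn7),        # Spine3
    ]
    return max(distances) > cutoff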
Active structures of catalytic kinases in the Protein Data Bank
From the considerations above, we define probable "Active" structures of kinases as those capable of binding ATP, Mg ions, and substrate, with the following criteria:

1. DFGin spatial state
2. BLAminus dihedral angle state
3. SaltBr-in state (Nζ/Oε distance < 3.6 Å)
4. ActLoopNT-in (DFG6-Xhrd backbone hydrogen bond < 3.6 Å)
5. ActLoopCT-in (APE9-Cα/hRd-O distance < 6 Å in non-TYR kinases and < 8 Å in TYR kinases)

We made certain exceptions for some kinases. The salt bridge criterion is skipped for OTHER_WNK1, WNK2, WNK3, and WNK4 kinases (WNK, "With No Lysine") and for TKL_MAP3K12 and TKL_MAP3K13. In the experimental structures of TKL_MAP3K12 (e.g., PDB: 5cep), the residue equivalent to the salt bridge Glu is Asp161 and is turned outwards, with a break in the alpha C-helix, which is shorter than in other kinases. The AlphaFold2 models with all of Uniref90 as the MSA sequence database reproduce this unusual feature even in BLAminus structures. The presence of the Asp makes the salt bridge less likely to form, so we omitted it as a criterion for these two kinases. Finally, OTHER_HASPIN, OTHER_TP53RK, and OTHER_PKDCC do not have APE motifs (Modi and Dunbrack 2019), and their C-terminal activation loop regions do not fold into the same structures as those of other kinases. Thus, there is no ActLoopCT requirement for these kinases.
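The five criteria and the stated exceptions translate directly into a simple filter; here is a minimal sketch over precomputed labels and distances (the function and argument names are ours, not the authors'):

def is_active(spatial, dihedral, saltbr_dist, dfg6_xhrd_dist,
              ape9_hrd_dist, family, gene=""):
    """Apply the five 'Active' criteria to precomputed labels/distances.
    Exceptions: WNK1-4 and MAP3K12/13 skip the salt bridge requirement;
    HASPIN, TP53RK, and PKDCC have no APE motif, so no ActLoopCT test."""
    skip_saltbr = gene in {"WNK1", "WNK2", "WNK3", "WNK4",
                           "MAP3K12", "MAP3K13"}
    skip_actloop_ct = gene in {"HASPIN", "TP53RK", "PKDCC"}
    ct_cutoff = 8.0 if family == "TYR" else 6.0
    return (spatial == "DFGin"                        # criterion 1
            and dihedral == "BLAminus"                # criterion 2
            and (skip_saltbr or saltbr_dist < 3.6)    # criterion 3
            and dfg6_xhrd_dist < 3.6                  # criterion 4
            and (skip_actloop_ct or ape9_hrd_dist < ct_cutoff))  # criterion 5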
We calculated the relevant data for all human protein kinase domain structures in the PDB. The results are shown in Table 3 for catalytic kinases. Of 437 catalytic kinase domains in the human proteome, only 155 (35.5%) have active structures in the PDB. Of these, only 130 have complete sets of coordinates for the backbone of the activation loop, comprising less than 30% of catalytic kinases in the human proteome. We therefore chose to see whether we could use AlphaFold2 to produce active structures of all 437 catalytic typical kinase domains in the human proteome.
Generation of active models of catalytic protein kinase domains
To generate active models of the 437 human protein kinase domains, we created sequence sets for the multiple sequence alignments (MSAs) required by AlphaFold2, along with template data sets in the active form. Sets of orthologous sequences (or near paralogs) for each kinase were created from UniProt such that each sequence in an ortholog set for a given kinase was greater than 50% identical to the target and aligned to at least 90% of the target kinase domain length with fewer than 10% gaps. Each ortholog set was then filtered with CD-HIT so that no two sequences in a given set were more than 90% identical to each other. This was done to create diversity within the ortholog sets for each kinase. We also created "Family" sequence sets consisting of all the human kinase domains within each kinase family.
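Here is a sketch of the threshold logic for building the ortholog sets, followed by redundancy removal with CD-HIT (the pairwise-alignment inputs and file names are hypothetical; the cd-hit flags shown are the standard identity-threshold options):

import subprocess

def keep_ortholog(aln_query, aln_hit, domain_len):
    """Thresholds from the text: >50% identity, >=90% coverage, <10% gaps.
    aln_query/aln_hit: equal-length aligned strings with '-' for gaps."""
    query_cols = [(q, h) for q, h in zip(aln_query, aln_hit) if q != "-"]
    matched = [(q, h) for q, h in query_cols if h != "-"]
    identity = sum(q == h for q, h in matched) / max(len(matched), 1)
    coverage = len(matched) / domain_len
    gap_frac = 1.0 - len(matched) / max(len(query_cols), 1)
    return identity > 0.50 and coverage >= 0.90 and gap_frac < 0.10

# Redundancy removal at 90% identity (word size 5 is appropriate for c >= 0.7):
subprocess.run(["cd-hit", "-i", "orthologs.fasta", "-o", "orthologs_nr90.fasta",
                "-c", "0.9", "-n", "5"], check=True)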
To create a template set, we identified all active structures of catalytic kinases in the PDB (including non-human kinases) using the criteria given above and selected two structures from different PDB entries (if available) with the largest number of coordinates for the activation loop residues (to select in favor of complete activation loops). If more than two structures were available with the same number of ordered residues in the activation loop, those with the highest resolution were selected. This resulted in a set we named "ActivePDB," consisting of 165 kinase domains from 278 PDB entries.
We applied AlphaFold2 to all 437 human catalytic kinase domains, using the ortholog and family sequence sets and the ActivePDB template set. Different depths of the sequence alignment were utilized, ranging from 5 to 90 sequences. Only two of AlphaFold2's five models ("model 1" and "model 2") utilize templates, so only these models were run when templates were included in the calculations. If a structure of a target sequence was present in the template data set, it was removed when predicting the structure of that target. The models were relaxed with AMBER and the standard AlphaFold2 protocol, and we assessed the activity state of both the unrelaxed and relaxed models. In many cases, hydrogen bonds that were broken in the unrelaxed models were formed properly in the relaxed models.
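How the MSA depths were enforced is not specified in the text; one simple way, sketched here, is to truncate the A3M alignment to the query plus the first N homologs before handing it to AlphaFold2:

def truncate_a3m(path_in, path_out, depth):
    """Keep the query plus the first `depth` aligned sequences of an A3M file."""
    records = []
    with open(path_in) as fh:
        name, seq = None, []
        for line in fh:
            if line.startswith(">"):
                if name is not None:
                    records.append((name, "".join(seq)))
                name, seq = line.rstrip(), []
            else:
                seq.append(line.strip())
        if name is not None:
            records.append((name, "".join(seq)))
    with open(path_out, "w") as fh:
        for name, sequence in records[: depth + 1]:  # query + depth homologs
            fh.write(f"{name}\n{sequence}\n")

truncate_a3m("ortholog_msa.a3m", "ortholog_msa_depth10.a3m", depth=10)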
We downloaded structures of all 437 kinases from the EBI website of AlphaFold2 models. The EBI site contains only one model per protein, and only 208 of the 437 kinase domains (48%) have active structures within this set. When we ran all five models within AlphaFold2 with default parameters (with and without templates, Uniref90 as the sequence database), we obtained active models of 281 catalytic kinases (out of 437) using the PDB70 template database and active models of 298 catalytic kinases using no templates. By comparison, using the ActivePDB templates and distillation templates, the ortholog and family sequence databases, and different MSA depths, we obtained between 371 and 421 active kinases for each setup (Figure 9), depending on the input template and MSA data sources.

No one set of inputs (MSA source, MSA depth, template database) produces active models of all 437 catalytic targets, but combining the approximately 200 models from the different sets for each kinase achieved active models of 435 out of 437 targets. For two kinases, we needed special procedures. For the second kinase domain of obscurin (CAMK_OBSCN-2), the C-terminal segment of the activation loop formed an α-helix of residues 7825-7829 in all models, which blocked access to the substrate binding site.
We made the mutation D7929G (residue APE9, which is conserved as Gly in 73 of 83 catalytic CAMK kinases), which helped to unfold this helix. It is possible that OBSCN-2 is a pseudokinase.
For LMTK2, all AlphaFold2 models formed a folded activation loop containing a strand-turn-strand motif that would be inconsistent with substrate binding; this structure forms in many DFGout structures of TYR kinase family members. We added additional distillation templates to the distilled AF2 template set: active structures of TYR_AATK (also known as LMTK1) and TYR_LMTK3. This produced active models of LMTK2 with very shallow sequence alignments (1-3 sequences from the ortholog data set). LMTK2 also has an asparagine residue in place of the C-helix glutamic acid residue of the N-terminal domain salt bridge. Nevertheless, LMTK2 has been shown to phosphorylate CFTR and other substrates involved in neuronal activity (Luz, Cihil et al. 2014).
In Table 4, we show the number of active kinase domains produced by different combinations of template database and MSA source, summed over the MSA depths run for each combination shown in Figure 9. Using all the models with Uniref90 sequences produced active models of only 308 kinase domains. The ActivePDB template set plus the Family models and Ortholog models combined produced active models of 435 kinases. The only two kinases that required the distillation templates were LMTK2 and LMTK3; they only formed active models with 5 or fewer sequences from the ortholog set. As noted above, LMTK2 required the LMTK3 model, effectively a redistillation template. However, the quality of the models in terms of activation-loop pLDDTs is improved by including the distillation-set models, as we show in the next section.

Table 4. Number of active kinase domains by template set and MSA source.

Templates                    MSA source          Active kinases
EBI AlphaFold2 database      n/a                 208
ActivePDB+ActiveAF2          Family              432
ActivePDB+ActiveAF2          Ortholog            435
ActivePDB                    Family+Ortholog     435
All                          All                 437

The first line of the table is derived from data from the EBI database of AlphaFold2 structures.
"ActiveAF2" represents the active distillation templates produced by AlphaFold2.
Picking the best model with pLDDT scores of the activation loop
The structures of substrate-bound kinases (Figure 5) show that in active kinases, the activation loop is generally situated against the kinase domain, extending from the DFG motif towards the right edge of the kinase domain (as generally oriented and shown in Figure 5). It then turns around, moves leftward, and concludes in the APE motif, roughly below the DFG motif. This open U shape is characteristic of substrate-bound structures and of AlphaFold2 models produced by our pipeline. Dozens of examples of active AlphaFold2 models produced in this study are shown in Figure 10 for the AGC, CMGC, STE, and TYR kinase families.
To benchmark the behavior of our pipeline in modeling active structures of catalytic kinases, we first compared the collection of AlphaFold2 models for the 22 kinases listed in Table 1 that have complete activation loops within their experimental structures. When the same kinase is listed more than once in Table 1, we picked a single example since the activation loop structures were all very similar (<0.5 Å RMSD).
These experimental structures contain substrates, so they likely represent one (of possibly several) substrate-binding-capable conformations of the activation loop of each kinase. Both experimental and computed structures that pass our "Active" tests still exhibit some heterogeneity in the structure of the activation loop, especially for residues far from the beginning or the end of the loop. This may be natural structural variation, and it is possible or even likely that multiple conformations are compatible with substrate phosphorylation. In any case, we explored the ability of the pLDDT values of the activation loop to pick out good models that pass our "Active" tests described above. We also wanted to know if the distillation templates produced better models of active structures in some cases.
In Figure 11, we show scatterplots of RMSD vs pLDDT of the activation loop for these 22 kinases.
The results demonstrate that for most of the kinases, the highest pLDDT for the activation loop (defined as the minimum pLDDT value over the activation loop residues in the model) also produced the best or very close to the best RMSD to the structures listed in Table 1. The distillation templates ("ActiveAF2") produced significantly better models than the ActivePDB templates for CMGC_CDK2 and CAMK_PIM1, and higher pLDDTs for most kinases. Thus, it seems likely that the extra sampling with the distillation templates may produce better models or more confident models for active structures of kinases.
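As an illustration of this selection rule, the following minimal Python sketch ranks a directory of model files by min(pLDDT over the activation loop); it relies on AlphaFold2's convention of writing per-residue pLDDT into the B-factor column, and the loop bounds and function names are our own, not part of the published pipeline:

```python
from pathlib import Path

from Bio.PDB import PDBParser


def min_plddt(pdb_path, loop_start, loop_end, chain_id="A"):
    """Minimum per-residue pLDDT over the activation loop.

    AlphaFold2 writes per-residue pLDDT into the B-factor column,
    so we read it from the CA atom of each loop residue.
    """
    structure = PDBParser(QUIET=True).get_structure("model", str(pdb_path))
    scores = [residue["CA"].get_bfactor()
              for residue in structure[0][chain_id]
              if "CA" in residue and loop_start <= residue.id[1] <= loop_end]
    return min(scores) if scores else float("-inf")


def pick_best_model(model_dir, loop_start, loop_end):
    """Return the model whose activation loop has the highest min(pLDDT)."""
    return max(Path(model_dir).glob("*.pdb"),
               key=lambda p: min_plddt(p, loop_start, loop_end))
```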
To extend the benchmark, we picked out at least one structure for each of the 130 human catalytic kinases with active structures in the PDB with complete activation loops. When all or almost all of the active structures for a particular kinase were similar (except for perhaps a few outliers), we picked out only one structure as a representative. When more than one conformation was represented in multiple PDB entries, we picked out a representative from each, labeling them "conf1," "conf2," etc. The structures labeled "conf1" were generally those that most closely resembled the substrate-bound structures in Table 1. The distribution of RMSD for the highest scoring models (highest min(pLDDT over the activation loop)) and the distribution of min_pLDDT values for the conf1 structures are shown in Figure 12. The results show that 104 (80%) of the 130 kinases are represented by a model with less than 1 Å backbone atom RMSD (N, Cα, C, O) over the whole activation loop (after superposition of the C-terminal domains of each kinase). A total of 117 (90%) are less than 2.0 Å.
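For reference, the RMSD protocol just described (superpose on the C-terminal domain, then measure activation-loop backbone RMSD without refitting) can be sketched with Biopython as below; the residue ranges are per-kinase inputs, and both structures are assumed to contain the same backbone atoms:

```python
import numpy as np
from Bio.PDB import Superimposer

BACKBONE = ("N", "CA", "C", "O")


def atoms(chain, residue_numbers, names):
    """Collect named atoms from the given residues (assumed present in both)."""
    return [chain[r][n] for r in residue_numbers for n in names if n in chain[r]]


def activation_loop_rmsd(ref_chain, model_chain, cterm_residues, loop_residues):
    """Superpose on C-terminal-domain CA atoms, then compute backbone
    (N, CA, C, O) RMSD over the activation loop without refitting."""
    sup = Superimposer()
    sup.set_atoms(atoms(ref_chain, cterm_residues, ("CA",)),
                  atoms(model_chain, cterm_residues, ("CA",)))
    sup.apply(list(model_chain.get_atoms()))
    ref = np.array([a.coord for a in atoms(ref_chain, loop_residues, BACKBONE)])
    mod = np.array([a.coord for a in atoms(model_chain, loop_residues, BACKBONE)])
    return float(np.sqrt(((ref - mod) ** 2).sum(axis=1).mean()))
```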
We can show that when multiple conformations of the activation loop of a given kinase are considered "Active", our AlphaFold2 models are generally close to one of them, and this structure most closely resembles the substrate-bound structures in Table 1. For example, by visually clustering the structures of human CDK2 that pass our active-kinase criteria, we identified four predominant conformations (Figure 13A). The conf1 benchmark structure (PDB: 1QMZA) is a substrate-bound structure listed in Table 1. PDB structures very similar to the conf1 structure are also the only ones that are phosphorylated on residue T160 in the activation loop. For conformations conf2 (2BZKA), conf3 (5UQ1A), and conf4 (1FINA), the closest structures among the AlphaFold2 models have RMSD of 2.59 Å, 1.09 Å, and 2.78 Å respectively, all with min_pLDDT of less than 50.0. This contrasts with the best model of conf1, which has an RMSD of 0.48 Å to PDB:1QMZA and min_pLDDT of 84.2. SRC presents another interesting example (Figure 13B). The human SRC structures which are "Active" by our criteria and contain fully ordered activation loops (PDB: 1Y57A, green in Figure 13B; 1Y16A and 1Y16B, orange in Figure 13B) do not resemble substrate-bound structures of TYR kinases in Table 1, such as ABL1 (PDB:2G2I). However, there is a structure of chicken SRC in the PDB (PDB: 3DQW, chains A-C, blue in Figure 13B) that is quite similar to the ABL1 structure (PDB: 2G2I) listed in Table 1 (Figure 13B).

MAPK1 has two main conformations in our benchmark derived from the PDB (Figure 13C). One of them resembles the substrate-binding structures in Table 1 (Figure 13C, blue, left panel). The other has a bulge in the C-terminal half of the activation loop that places the loop over the APE motif and in contact with the G-helix (Figure 13C, orange, left panel). AlphaFold2 reproduces both of these structures almost exactly (Figure 13C, right panel) with RMSD of ~0.3 Å in both cases. The active models are produced by the ActivePDB templates, while the alternate-conformation models are produced by the ActiveAF2 distillation templates. The distillation templates included a structure of the closely related CMGC_MAPK3, which has the same bulge as the orange structures in Figure 13C. The benchmark structure of MAPK3 (PDB: 4QTBA) in fact has the same bulge. However, the highest scoring AlphaFold2 model resembles substrate-bound structures, with an RMSD of 5.1 Å to the bulged benchmark structure 4QTBA, and is one of the RMSD outliers in Figure 12A. We believe this model more accurately reflects the likely substrate-binding conformation of MAPK3.
In addition to CMGC_MAPK1 and CMGC_MAPK3 just discussed, there are 11 other kinases where the highest-pLDDT models have more than 2 Å RMSD to the structural representative we chose. In TKL_ACVR2B, there is a change in the position of residues 10-17 of a 30-residue activation loop, while residues 1-9 and 18-30 are very similar in the benchmark structure (2QLUA) and the AF2 models.
In some other cases, the AF2 structure appears capable of binding substrate while the PDB structure does not. This can be demonstrated by comparing the benchmark structure to that of closely related kinases in the PDB and in our AF2 models. For example, the CMGC_HIPK3 and CMGC_HIPK2 benchmark structures are quite different in the C-terminal region of the activation loop. The AF2 models of HIPK2 and HIPK3 closely resemble the HIPK2 experimental structure (PDB:7NCFA) but not the HIPK3 experimental structure (PDB: 7O7IA). The activation loop sequences of HIPK2 and HIPK3 are 96% identical (25 of 26 positions). Similarly, for TKL_ACVR1A, the PDB structure (6UNSA) blocks the active site, while the AF2 models resemble the TKL kinase BAK1 (PDB:3TL8A) from Table 1, which contains a substrate peptide.
For some other kinases, the AF2 models have poor pLDDT scores in the activation loop. This occurs for some kinases that are remotely related to other kinases in the human proteome or that have particularly long activation loops. For all three kinases in the RAF family (ARAF, BRAF, and RAF1, also known as CRAF), the min_pLDDT scores for the activation loop are below 40. For BRAF, the top scoring AF2 models are not very similar to the benchmark structure, with an RMSD of 2.53 Å (PDB:4MNEB, the only structure of BRAF with a complete activation loop that passes our "Active" criteria). It is unclear whether this PDB structure is fully capable of binding substrates or whether the AF2 models are in fact better models of substrate-capable structures.
DISCUSSION
We have developed a structural bioinformatics approach to identifying structures of typical protein kinases that are likely capable of binding ATP, metal ions, and substrates and catalyzing protein phosphorylation, which is involved in nearly all cellular processes in eukaryotes. We applied these criteria to experimental structures, which enabled us to develop a set of templates that could be used to model all 437 catalytic protein kinases in their active form with AlphaFold2. The same criteria enabled us to distinguish active structures among the models produced by AlphaFold2, which we cycled back into the protocol as templates for producing additional models with improved pLDDT scores. We refer to these as distillation templates, in analogy to the distillation models that the team at DeepMind used as additional training data for the original implementation of AlphaFold2. We demonstrated that the models with the highest values of pLDDT for the activation loop residues also most closely resemble substrate-bound structures of kinases in the PDB. In all, we generated approximately 90,000 models to identify active model structures for all 437 catalytic human kinases.
While much attention has been given to the structure of the active site residues surrounding ATP, including the DFG motif and the N-terminal domain salt bridge, we examined 40 substrate-bound structures of protein kinases in the PDB to define criteria that ensure the presence of a substrate binding site necessary for the phosphorylation reaction. This is a far larger set of substrate-bound structures than has been previously analyzed, since it includes known autophosphorylation complexes contained within crystals (Xu, Malecka et al. 2015). In substrate-bound structures, the activation loop is extended away from the ATP binding site, lying against the surface of the kinase domain. To accomplish this, the activation loop interacts with the relatively fixed positions of residues in the catalytic loop in and around the HRD motif. This occurs near the N-terminus of the activation loop in a short beta sheet (β6 + β9) and can be identified by backbone-backbone hydrogen bonds of residue 6 of the activation loop with the residue that immediately precedes the HRD motif. It also occurs near the C-terminus of the activation loop, where the Cα atom of residue 9 from the end of the loop makes a short contact with the backbone carbonyl of the Arg residue of the HRD motif. While other distances could also be used as criteria, we found that all substrate-bound structures in the PDB satisfy these two rules and that the vast majority of experimental and computed structures that satisfy these criteria appear to form a functional substrate binding site. For some kinases, there remains some conformational diversity of the activation loop among PDB structures after satisfying these criteria. It is likely that multiple conformations of the outer portion of the activation loop may be capable of phosphorylating substrates in some kinases.
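To make the two activation-loop criteria concrete, the sketch below checks them with Biopython; the anchor residue numbers must be supplied per kinase (here we use the AKT1 numbering from Figures 1 and 7), and the distance cutoffs are illustrative assumptions rather than the exact thresholds used by Kincore:

```python
from Bio.PDB import PDBParser


def act_loop_nt_ok(chain, xhrd_num, dfg6_num, cutoff=3.6):
    """N-terminal criterion: backbone-backbone hydrogen bond between DFG6
    and the residue before the HRD motif (minimum of N-O and O-N distances).
    The 3.6 A cutoff is an illustrative assumption."""
    xhrd, dfg6 = chain[xhrd_num], chain[dfg6_num]
    d = min(xhrd["N"] - dfg6["O"], xhrd["O"] - dfg6["N"])
    return d <= cutoff


def act_loop_ct_ok(chain, hrd_arg_num, ape9_num, cutoff=4.5):
    """C-terminal criterion: short contact between the CA of APE9 and the
    backbone carbonyl oxygen of the HRD arginine (cutoff illustrative)."""
    return (chain[ape9_num]["CA"] - chain[hrd_arg_num]["O"]) <= cutoff


structure = PDBParser(QUIET=True).get_structure("kin", "model.pdb")
chain = structure[0]["A"]
# AKT1 example: V271 precedes the YRD motif, K297 is DFG6,
# R273 is the HRD/YRD arginine, G311 is APE9.
print(act_loop_nt_ok(chain, xhrd_num=271, dfg6_num=297),
      act_loop_ct_ok(chain, hrd_arg_num=273, ape9_num=311))
```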
In other cases, some conformations that satisfy our criteria may block substrate binding.
Unfortunately, there does not seem to be a readily identifiable criterion that would be applicable across kinases to identify such situations. This phenomenon does seem to be rare. For example, MAPK1, MAPK3, and MAPK7 share an alternate conformation in experimental structures that would block substrate binding. AlphaFold2 produces these structures but also substrate-capable structures that resemble substrate-bound structures in the same CMGC family. These latter models are the ones we have made available in a set of models of active structures of all 437 catalytic typical protein kinases in the human proteome (http://dunbrack.fccc.edu/kincore/activemodels).
We believe our models will be useful in understanding the structural basis of kinase substrate specificity, since they place substrate-binding residues in the activation loop and the active site of the kinase domain in suitable positions for catalysis. There remain challenges, however. We have found that AlphaFold-Multimer is in some cases capable of making models of substrate-bound structures of typical protein kinases when given a peptide substrate and Uniref90 as a sequence database. But it is not always able to make an active model of the kinase activation loop without appropriate templates and shallow sequence alignments, and supplying these sometimes disrupts its ability to place the substrate in the active site, probably due to the lack of sequence information for the substrate MSA. It will take additional study and implementation to develop a robust protocol that reliably makes models of kinase-substrate complexes from suitable choices of templates and multiple sequence alignments for AlphaFold-Multimer.
This work is ongoing.
Ortholog sequence sets
We first searched UniProt for Pfams PF00069 and PF07714 to collate a set of 1.68 million sequences in UniRef100 with typical protein kinase domains. For each of 437 catalytic kinase domain sequences from our earlier alignment of all human kinase domains (Modi and Dunbrack 2019), we used PSI-BLAST to get a list of the top 25,000 closest kinases to each human kinase domain. The queries used were 8 residues longer on each end of the kinase domain than our published alignment. The hit regions in the PSI-BLAST output were then filtered for sequences more than 50% identical to the query, coverage greater than 90% of the query length, and gap percentage in the alignment of less than 10%.
We then applied CD-HIT (Fu, Niu et al. 2012) to create lists of orthologs (or close paralogs) with no more than 90% sequence identity to each other. These sequences were used as query databases in AlphaFold2 calculations.
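A minimal sketch of this filtering and clustering step is shown below; the PSI-BLAST tabular output format string and file names are assumptions, while the thresholds are those given above:

```python
# Filter PSI-BLAST hits, then cluster at 90% identity with CD-HIT.
# Assumes PSI-BLAST was run with -outfmt "6 sseqid pident length gaps qlen".
import subprocess


def filter_hits(psiblast_tsv):
    """Keep hits with >50% identity, >90% query coverage, <10% gaps."""
    keep = set()
    with open(psiblast_tsv) as fh:
        for line in fh:
            sseqid, pident, length, gaps, qlen = line.split()[:5]
            identity = float(pident)
            coverage = 100.0 * int(length) / int(qlen)
            gap_pct = 100.0 * int(gaps) / int(length)
            if identity > 50.0 and coverage > 90.0 and gap_pct < 10.0:
                keep.add(sseqid)
    return keep


# Cluster the retained sequences so no pair exceeds 90% identity
# (-n 5 is the CD-HIT word size appropriate for -c 0.9).
subprocess.run(["cd-hit", "-i", "filtered.fasta", "-o", "orthologs.fasta",
                "-c", "0.9", "-n", "5"], check=True)
```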
Data Input and Preparation:
Three sets of sequence databases were used to create multiple sequence alignments: the default UniRef90 database, a kinase family-focused sequence database (all 496 human kinases in the human proteome, separated into families), and a kinase ortholog-focused sequence database (described above). Templates for the calculations were obtained from the default PDB70 set, a curated selection of active PDB models identified through our Kincore criteria ("ActivePDB"), and a distilled set of AlphaFold2 models that passed the Kincore criteria with activation-loop pLDDT scores of 60 or higher ("ActiveAF2" or "distilled").
Model Configuration and Implementation:
Calculations with AlphaFold2 were conducted using the recommended configurations provided by DeepMind. The multiple sequence alignment was prepared using the hh-suite package (Steinegger, Meier et al. 2019) and subsequently fed into the model for structure prediction. When using templates, we used only AlphaFold2 models 1 and 2, since they utilize templates and the MSA data while models 3, 4, and 5 do not use templates (Jumper, Evans et al. 2021); this was done by commenting out models 3-5 (lines 39-61 in /alphafold/model/config.py). We ran AlphaFold2 with specific sequence data sets by replacing ./uniref90/uniref90.fasta with our sequence sets (Uniref90, Ortholog, and Family) for the MSA-building step, and with specific template data sets to predict protein structures. Template implementation consisted of two parts: .cif files of the structures in ./pdb_mmcif/mmcif_files and their sequence data in the ./pdb70 folder. For each set of templates (PDB70, ActivePDB, ActiveAF2), these files need to be changed for AF2 to use the desired template set. When predicting the structure of a given target, any structures of the target in the PDB template sets were removed.
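For concreteness, the edit might look like the following sketch against the open-source AlphaFold2 repository; the exact contents and line numbers of alphafold/model/config.py vary by release, so this is illustrative rather than a verbatim diff:

```python
# alphafold/model/config.py (sketch; exact contents vary by AF2 version).
# Restricting the monomer preset to models 1 and 2 keeps only the two
# template-aware models when templates are supplied.
MODEL_PRESETS = {
    'monomer': (
        'model_1',
        'model_2',
        # 'model_3',  # no-template model, disabled for template runs
        # 'model_4',  # no-template model, disabled for template runs
        # 'model_5',  # no-template model, disabled for template runs
    ),
}
```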
We introduced a variable MSAlimit that controls the number of sequences in the multiple sequence alignment used by AF2 model building, by modifying the class DataPipeline (in /alphafold/data/pipeline.py); when AF2 has too many sequences in the MSA, it tends to ignore any templates provided to it. We also disabled the other sequence databases (mgnify, bfd, small bfd, uniref30):

def __init__(self, jackhmmer_binary_path: str, hhblits_binary_path: str, uniref90_database_path: str, ...
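The pipeline.py modification itself is truncated above; as an illustration of the same idea, here is a standalone, minimal sketch that caps the depth of an A3M alignment before featurization (the file handling and function names are our own, not AlphaFold2's):

```python
# msa_truncate.py: a minimal sketch of the MSAlimit idea (names hypothetical).
# Keeps the query plus the first (limit - 1) aligned sequences, since deep
# MSAs cause AlphaFold2 to down-weight any supplied templates.

def truncate_a3m(a3m_text: str, limit: int) -> str:
    """Return the A3M text truncated to its first `limit` entries."""
    entries = []
    header, seq_lines = None, []
    for line in a3m_text.splitlines():
        if line.startswith(">"):
            if header is not None:
                entries.append((header, "".join(seq_lines)))
            header, seq_lines = line, []
        elif line.strip():
            seq_lines.append(line.strip())
    if header is not None:
        entries.append((header, "".join(seq_lines)))
    kept = entries[:limit]  # entry 0 is the query; always retained
    return "\n".join(f"{h}\n{s}" for h, s in kept) + "\n"


if __name__ == "__main__":
    import sys
    path, limit = sys.argv[1], int(sys.argv[2])
    with open(path) as fh:
        sys.stdout.write(truncate_a3m(fh.read(), limit))
```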
Benchmarking
The predicted structures were validated by comparing them to benchmark PDB structures of kinases; the validation relied on the pLDDT score (Mariani, Biasini et al. 2013). Two benchmarks were constructed. One contained the substrate-bound structures from Table 1 with complete coordinates for the activation loop in the PDB structure (22 kinases). The other consisted of 170 structures of 130 kinases with complete activation loops that passed our active criteria in the PDB. For some kinases, there were multiple conformations that passed our criteria. We labeled the structure that most closely resembled substrate-bound structures "conf1," with the others labeled "conf2," "conf3," etc.
Update to Kincore Database and Website and Kincore_Standalone2
We have updated the Kincore database and website (http://dunbrack.fccc.edu/kincore) to include the additional active criteria defined in this study (Saltbridge, ActLoopNT, ActLoopCT) for all structures in the PDB. Each structure is marked "Active" or "Inactive" based on these criteria. We have also added the highest-scoring AlphaFold2 active model for each of the human catalytic kinases to Kincore. These are labeled with the prefix "AF-" and the suffix "-K1" attached to the UniProt accession ID (e.g., the CAMK_AURKA model is AF-O14965-K1). The updated standalone program for assessing the conformational state of protein kinases, Kincore-standalone2, is available at https://github.com/DunbrackLab/Kincore-standalone2/.
Data Availability and Reproducibility
To ensure the reproducibility of our study, all data, including input sequences, ortholog sequence sets, and predicted structures, are accessible at http://dunbrack.fccc.edu/kincore/activemodels.

Figure 1. Active site of human AKT1 (PDB:4ekk, chain A). Residues making hydrogen-bonding interactions with ATP (green sticks), Mg2+ (purple spheres), the catalytic aspartic acid residue of the HRD motif (in AKT1, this is YRD; residues 272-274, cyan), and the aspartic acid residue (yellow) of the XDFG motif (residues 291-294, magenta) are shown with dashed lines. These include the salt-bridge residues of the N-terminal domain (K179, E198, gold). Residue K297, which is the sixth residue of the activation loop (light pink), makes backbone-backbone hydrogen bonds with V271, which immediately precedes the YRD motif. In the stick representations, oxygen atoms are in red and nitrogen atoms are in blue. A substrate peptide is present in this structure but not shown in this figure.
Structures from Table 1; BLAminus structures are shown in magenta and ABAminus structures are shown in blue.
Figure 5. Substrate-bound kinase structures from Table 1. In each figure, the substrate peptides (or pieces of longer proteins) are in blue and the activation loop is in magenta. ATP or any analogue is shown in green sticks. The Phe of the DFG motif is shown in orange sticks and phosphorylated residues in the activation loop are in pink. The sixth residue of the activation loop (DFGxxX) is in yellow, while the 8th and 9th residues from the end of the activation loop are in cyan (XXxxxxAPE).
Figure 6. Interactions of residue 6 of the activation loop and the residue before the HRD motif ("XHRD"). A. Beta-bridge hydrogen bonds between DFG6 and XHRD residues in the kinase-substrate complex structures listed in Table 1. The carbon atoms are colored as follows: DFG5 (gold), DFG6 (yellow), DFG7 (orange), Xxhrd (blue), xXhrd (green), xxHrd (magenta, including the side chain, which is sometimes Tyr). Oxygen atoms are in red, hydrogen atoms are in white (modeled with PyMOL), and nitrogen atoms are in blue. Hydrogen bonds in a few selected structures are marked with dashes.
B. Distribution of the XHRD-DFG6 backbone-backbone hydrogen-bond distance in ATP-bound and unbound structures in the BLAminus and other states. The distance plotted is the minimum of the N-O and O-N distances between these two residues. The DFG6 residue is identified with the last X in the DFGxxX sequence, where x is any amino acid.
A. Contact between the Cα of the APE9 residue (XxxxxxAPE) and the backbone carbonyl oxygen of the Arg residue of the HRD motif. His/Tyr (green), Arg (yellow), and Asp (orange) of the HRD/YRD motif are shown in sticks including side chains, numbered according to AKT1 residues 272-274. Residues APE11 (AKT1 F309, magenta), APE10 (C310, magenta), APE9 (G311, cyan), APE8 (T312, magenta), and APE7 (P313, magenta) are shown in sticks without side chains. Structures from the AGC, CAMK, CMGC, STE, and TKL kinases in Table 1 are shown.
B. Two conformations of human AURKA. Left: active structures with bound TPX2 (orange): PDB: 1ol5, 3e5a, 3ha6, 5lxm, 6vph. Right: inactive structures without TPX2: PDB: 4dee, 5dt3, 5oro, 5oso, 6i2u, 6r49, 6r4d and others. The Gly291 Cα (cyan sticks) is in contact with the Arg255 backbone carbonyl O (yellow sticks) in the active structures (average distance 3.6 Å), while there is no contact in the inactive structures (average distance > 10 Å). The activation loop C-terminal region in the active structures resembles the structures of substrate-bound complexes in Figure 5, while the inactive structures would block substrate binding.
Figure 10. Examples of AlphaFold2 models of the active forms of 51 AGC kinases, 65 CMGC kinases, 31 STE kinases, and 77 TYR kinases. For clarity, some structures with long disordered regions within the activation loop are not shown in each family.
Figure 11. RMSD values for 22 substrate-bound structures from Table 1 versus the minimum value of pLDDT across the activation loop of each model. Models from different template data sets are shown in different colors: active structures from the PDB ("ActivePDB"), active models from AlphaFold2 ("ActiveAF2"), no templates provided to AF2 ("notemplate"), and AlphaFold2's default template database ("PDB70").
Figure 13. Structures of activation loops from the benchmark and corresponding AF2 models.
A. CMGC_CDK2 has four dominant conformations among structures that pass our "Active" criteria in the PDB. The Conformation 1 cluster contains the substrate-bound structures listed in Table 1, and is also the only cluster that contains phosphorylated activation loops. The AlphaFold2 models most closely resemble Conformation 1.
B. For TYR_SRC, we used a chicken SRC structure (PDB: 3DQW) as the benchmark structure, since it most closely resembled substrate-bound structures of other TYR kinases such as ABL1 (e.g., PDB: 2G2I in Table 1). The human structures which pass our criteria (PDB: 1Y57 and 1Y16, in orange) are quite different in much of the activation loop away from the first few and last few residues of the DFG...APE sequence. PDB: 1Y57 is often used as the basis of molecular dynamics simulations of "active SRC" even though it is not likely the substrate-binding conformation.
C. CMGC_MAPK1 has two dominant conformations in the PDB (left panel), one of which resembles substrate-bound structures in Table 1 ("conf1") while the other has a bulge towards the C-terminus of the activation loop. AlphaFold2 reproduces both conformations (right panel), conf1 from ActivePDB templates and conf2 from ActiveAF2 ("distillation") templates.
Supplementary Figure 1. Distribution of the Spine1, Spine2, and Spine3 distances. The Spine distances are defined as the closest distance among all side-chain atom pairs between the two residues.
The ActivePDB AF2 model and the no-template AF2 model more closely resemble a substrate-bound PDB structure of TKL_BAK1 from Arabidopsis (PDB: 3TL8, magenta; substrate not shown). It is likely that the ActivePDB AF2 model (green) is a substrate-binding structure, while the PDB structure is not.
Supplementary Figure 4. Benchmark structures with large RMSD to the best-scoring AlphaFold2 models. CMGC_MAPK3 and CMGC_MAPK7 show the same bulge toward the C-terminal end of the activation loop that is present in CMGC_MAPK1 (Figure 13C, main paper) in PDB structures (blue). The AF2 structures (magenta) more closely resemble rat CMGC_MAPK1 (PDB: 2ERKA, orange), shown in both figures. It is likely that the AF2 structures are correct substrate-binding forms of MAPK3 and MAPK7. For STE_MAP3K5, the two dominant PDB conformations (conf1 and conf2) are not typical of substrate-binding structures of STE kinases in the PDB. They also differ substantially from each other in the outer portions of the activation loop. The AF2 models (magenta) are closer to substrate-binding structures of STE kinases in Table 1 (PDB: 2q0nA, 4zy4A). For STE_MAP3K14, the AF2 structures are quite different from either of the dominant PDB conformations, for reasons that are unknown. TKL_BRAF is not that close to other kinases in the PDB other than RAF1, and the AF2 models are quite different from the one benchmark active structure with a complete activation loop in the PDB (4MNEB). OTHER_BUB1 is also distantly related to all other kinases; in the PDB there are two conformations, one of which has a large bulge blocking the substrate binding site (orange). Most of these structures are not phosphorylated on Ser969, although one of them is (shown in the figure). The other PDB structures (e.g., PDB: 4QPM, blue) are more likely to be substrate-capable, and most of these structures are phosphorylated on Ser969. The ActiveAF2-template-based model is quite different from both PDB conformations, while the PDB70 and no-template AF2 models (magenta) are much closer to the active PDB structure (4QPM) and are most likely correct, even though they do not have pLDDT values as high as the ActiveAF2-template structures (green).
Forest Conservation and Renewable Energy Consumption: An ARDL Approach
Deforestation reflects the constant environmental degradation occurring worldwide as a result of growing economic activity and population. This research examines the causal link between renewable energy consumption, GDP, GDP², the non-renewable energy price, population growth and forest area in high-, middle- and low-income countries. Based on data obtained from the World Development Indicators, the autoregressive distributed lag (ARDL) model with time series is used to examine the long-term cointegration relationship between the variables. The results confirm the existence of a joint long-term relationship between the variables analysed for middle-income and low-income countries. When the forest area is not at its equilibrium level, the speed of adjustment is slow (0.44% and 8.7%), which is typical of the nature of this natural resource. An increase in the consumption of renewable energy is associated with increases of 0.04 and 0.02 square kilometres of forest cover, respectively. The research finds no evidence of a short-term equilibrium relationship. Growth in renewable energy consumption is one of the main drivers for preserving forest area; therefore, those responsible for economic policy should aim their measures towards the use of clean energy.
Introduction
Global demand for goods and services is directly related to the demand for natural resources [1]. The role that forests play in the environment is fundamental, since they contribute to the oxygen balance and help protect hydrographic basins, the areas from which water for human consumption comes [2]. Some of these highly demanded resources are non-renewable resources from forests. According to the World Economic Forum (WEF) [3], in 2019, 3.8 million hectares of forest cover were lost from primary humid tropical forests, areas of mature tropical forest that constitute essential elements for biodiversity and air purification. This loss of primary forest is associated with 1.8 megatons of CO2 emissions. Compared to previous years, 2019 registered an increase of 2.8% over 2018, although this value is lower than in 2016 and 2017.
In addition, the WEF [3] notes that deforestation differs depending on the income level of countries. In developed countries, such as Spain, Greece and Italy, the forest area has registered increases of 9%, 6% and 6%, respectively, since 1990, owing to government subsidies. In contrast, in countries like Brazil, the Congo or Bolivia, the forest area has continued to decline.
Literature Review
Preserving forest area, or reducing deforestation, is a global concern due to the constant demand for forest services [14] and increasing rates of environmental degradation. In some countries, governments established incentives to avoid deforestation, given that there is competition for the use of forest area [15]; in others, the measures taken were incipient. Deforestation has therefore been widely studied to learn more about its determinants and to design measures to mitigate its spread. Over the last few years, various studies on the subject have evidenced a long-term relationship between deforestation and energy consumption [12]. Molion [10] is one of the pioneers in relating deforestation to energy consumption, noting that renewable energy can reduce the greenhouse-gas CO2 emissions caused by the consumption of fossil fuels. Lettau et al. [11], another highly cited study examining the same relationship, use the hydrological cycle and atmospheric recycling to study deforestation; these authors indicate that the construction of dams, urbanisation, increased irrigation capacity, growing energy demand and unsustainable economic growth are determinants of a decrease in forest area. Several studies have since examined the factors that cause deforestation; this study focuses on the consumption of renewable energy, economic growth and the non-renewable energy price as determinants of forest area. There is evidence that deforestation shows a long-term equilibrium relationship with its determinants [12], for which the following hypothesis is established:
Hypothesis 1. (H1)
There is a long-term equilibrium relationship between forest area, GDP, renewable energy consumption and the price of non-renewable energy.
The empirical evidence can be divided into three groups. The first group includes studies that examine the effect of renewable energy consumption on deforestation. Tanner and Johnston [13] found that governments can reduce deforestation rates by applying an ecological policy that expands rural access to renewable energy, so that households no longer rely on biomass for their daily needs. Nazir et al. [16] study the development of a wind energy atlas as a partial solution to the problem, confirming a strong relationship between the use of clean energy and deforestation. By contrast, in Northern Europe, Enevoldsen [17] highlights that developing wind projects in forest areas reduces deforestation: installing wind turbines enhances the performance of renewable energy and reduces its cost, allowing low-cost access to clean energy and making it possible to give up the consumption of polluting energy.
Brazil launched the Clean Development Mechanism (CDM), taking into account that 60% of its energy comes from sustainable sources. Along this line, Moutinho et al. [18] show that deforestation rates are related to the energy crisis caused by drought. Stigka et al. [19] confirm the need to replace fossil fuels with clean or renewable energies when producing electricity.
In the same vein, in China, Bhattacharyya and Ohiare [20] found a very close long-term relationship between access to electricity and deforestation, leading them to conclude that state efforts to ensure rural access to electricity will help reduce deforestation rates significantly. In northern Angola, Temudo, Cabral and Talhinhas [21], combining interviews with heads of households and observation of changes in vegetation cover, found that deforestation in rural Zaire province is comparatively small; since the use of biomass for the population's basic needs has fallen, the government has intervened by boosting the production of renewable energy.
On the Asian continent, Ahmed et al. [12] conducted a study in Pakistan, the fifth most populated country in the world. Using time series data from 1980-2013, these authors find cointegration, in both the short and the long term, between deforestation and renewable energy consumption. This is one of the studies that most strongly reinforces the hypothesis raised in this research on the links between deforestation, economic growth and energy consumption. Relatedly, Houghton and Nassikas [22] suggest that good forest management could stabilise CO2 emissions and support a successful transition from the use of fossil fuels to the use of energy from renewable resources.
In Colombia, using General Circulation Models (GCM), Poveda and Mesa [23] show that a decrease in renewable energy generation caused by reduced river flows leads to increased consumption of forest resources. This, in turn, increases deforestation and consequently raises surface temperature and atmospheric pressure and, above all, decreases rainfall in the medium and long term; lower rainfall further reduces river flows, which is reflected in severe failures of hydroelectric power systems. This circular phenomenon is corroborated by Rojas [24], who confirms that in Colombia deforestation causes 2.5% of the losses in hydroelectric plants. The evidence thus shows that the consumption of renewable energy is positively related to forest area [13,16]: when there is greater access to clean energy, there is less demand for forest products to use as fuel. The following hypothesis captures this relationship:
Hypothesis 2. (H2)
The increase in the consumption of renewable energy is related to the increase in forest cover.
The second group comprises studies that examine the relationship between the non-renewable energy price and deforestation. Eisner et al. [25] found a positive relationship between the rate of global forest loss (and the resulting biodiversity loss) and the inelastic supply of oil after 2005. These authors state that while changes in oil supply and price do cause changes in forest cover, the relationship is complex, since many other factors also influence forest cover; this is most evident in Southeast Asia and Central America. They recommend examining other clean energy options with more elastic prices that do not cause a decrease in forest cover, recommendations similar to those made by Scheidel and Sorman [26]. Similarly, Abbaspour and Ghazi [27] build a pilot model for two rural communities in Iran, Yakhkesh and Pechet, and find that one of the main drivers of deforestation is the growing consumption of fossil fuels as the main source of energy; they therefore recommend that this scenario be considered within the Kyoto Protocol, which encourages reducing environmental pollution and deforestation. Furthermore, Czúcz et al. [28] note that worldwide oil reserves will be depleted and their price will increase, with consequences for forest conservation, since natural resources from forests will be used as oil substitutes. Based on the above, the price of non-renewable energy is a determinant of forest area [25,27]; consequently, the following hypothesis is proposed:
Hypothesis 3. (H3)
An increase in the price of non-renewable energy is related to a decrease in forest cover.
The third group includes studies that relate economic growth to deforestation. Research on climate change has also established strong links between economic growth, trade and deforestation, positioning the first two as main drivers of the latter; this topic has occupied the scientific community since the closing years of the last century. In the 1990s, the environmental Kuznets curve was proposed, which posits a relationship between environmental degradation and economic growth. Since then, economists such as Grossman and Krueger [29], Panayotou [30], Selden and Song [31] and Vincent [32] have used this hypothesis to test for an inverted-U relationship between economic activity and various forms of environmental degradation. The study by Cropper and Griffiths [33] was among the first to examine the Kuznets hypothesis for the relationship between deforestation and economic growth. However, despite the various investigations of deforestation and economic growth, there is no definite consensus on the form of this relationship [34].
The antecedents presented by the FAO in 1954 and growing concern about environmental degradation led the academic community to consider deforestation one of the key indicators of environmental degradation. Authors such as Andrée et al. [1] have studied this relationship, finding inverted U-shaped relationships between per capita income and indicators of environmental degradation and concluding that a country's development and economic growth encourage the consumption of non-renewable resources, which is directly related to deforestation.
It is important to highlight that deforestation is advancing extremely quickly, mainly in South America, yet there is little awareness of the environmental problem generated by economic activity. This is the case of Brazil, which hosts much of the world's flora and fauna biodiversity; despite this, human and economic activity is replacing this biodiversity with commercial land use. Using a linear fixed-effects model with a balanced panel of 3168 observations, Santiago and Couto [35] found a long-term relationship between deforestation and socioeconomic conditions in Brazil between 2000 and 2010. Their results suggest that investment in agricultural research should be improved to achieve sustainable economic growth and thus reduce deforestation rates, especially in the Brazilian Amazon.
In the same country, Arima et al. [14] note that economic growth continues to advance as investments continue to be made in hydroelectric energy and road paving, which are associated with a high deforestation rate; Brazil aimed to achieve an 80% reduction in deforestation by 2020, especially in the Amazon. Carvalho et al. [36] investigated the trade-off between environmental conservation and economic growth using an equilibrium model for 30 Amazonian regions and found that family farms would be the most affected; to compensate for this loss and obtain profits each year, they estimate that an additional 1.4% of the land would have to be brought into production. Also in Brazil, Tritsch and Arvor [37] conducted a sub-municipal analysis of socioeconomic development and deforestation in the Brazilian Amazon; their results confirm a positive relationship between deforestation and economic development, following an environmental Kuznets curve.
In Chile, Apablaza [38] shows the relationship between economic growth and pollution using linear regressions, together with a dummy variable that identifies the effectiveness of environmental policies, following the conceptual behaviour of the Kuznets [39] environmental curve; these results coincide with those of Turner [40]. In Ecuador, Sierra [41] estimates a spatial model and determines that an increase in economic activity accelerates deforestation to very high rates; likewise, when growth decreases, deforestation rates also decrease.
Caravaggio [42] studied 114 countries and found that in high-income and middle-income countries the boom in economic activity is reflected in the conservation of forest cover. Cuaresma and Heger [43] found that sub-Saharan Africa and low-income countries have a higher development-deforestation elasticity. Similar results were found by Bhattarai and Hammig [44], who, using a panel of 66 countries from Asia, Africa and Latin America, applied quasi-experimental and difference-in-differences approaches to assess the changes in deforestation produced by economic activity. Tritsch et al. [45], in turn, propose that a Forest Management Plan (FMP) with logging concessions should be mandatory; their results suggest that applying an FMP would help counteract deforestation significantly, enabling logging companies to carry out extraction cycles that avoid overexploitation. Afawubo and Noglo [46] mention that reducing deforestation rates requires focusing not only on economic development but also on countries' institutional quality, which is confirmed by Miyamoto [47], who reveals that poverty has a strong relationship with change in forest area. For these reasons, economic growth is considered to generate greater demand for land [27,36] for other economic and human activities, with which the forest area decreases. Thus, the following hypothesis is established:
Hypothesis 4. (H4)
The increase in economic activity is negatively related to the forest area.
Data Sources
This research examines the relationship between renewable energy consumption, GDP, GDP², the non-renewable energy price, population growth and forest area during the period 1990-2018. The time period examined was defined by the availability of information, especially for the forest cover variable, which the World Bank [48] reports up to 2018. Aggregate series of countries are used according to their income level: high-income countries (HIC), middle-income countries (MIC) and low-income countries (LIC). Data from the World Bank Development Indicators [48] are used, in which forest area is the dependent variable and renewable energy consumption and GDP are independent variables. The variable GDP² is included to evaluate the environmental Kuznets curve [39]. The non-renewable energy price, which refers to the international price of a barrel of oil and plays an important role in economic activity, is used as an explanatory variable, taken from the BP Statistical Review of World Energy [49]. Additionally, population growth is used as an explanatory variable to measure the variation in annual population growth. According to the World Bank [48], the classification of countries is based on Gross National Income (GNI) per capita in United States dollars: HIC have a GNI per capita greater than $12,055, MIC between $996 and $12,055, and LIC $995 or less.
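As a concrete illustration of this variable construction, the following pandas sketch builds the model series; the input file and column names are hypothetical stand-ins for the WDI and BP downloads:

```python
# A minimal sketch of the variable construction; all series are logged
# except the dependent variable and population growth, as described above.
import numpy as np
import pandas as pd

raw = pd.read_csv("wdi_raw.csv", index_col="year")  # hypothetical extract

df = pd.DataFrame(index=raw.index)
df["FAP"] = raw["forest_area_km2"] / raw["population"]   # kept in levels
df["REC"] = np.log(raw["renewable_energy_consumption"])
df["GDP"] = np.log(raw["gdp_constant_usd"])
df["GDP2"] = df["GDP"] ** 2                              # EKC squared term
df["EP"] = np.log(raw["oil_price_usd_per_barrel"])       # BP Statistical Review
df["POP"] = raw["population_growth_pct"]                 # kept in levels
```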
Appendix A Table A2 lists the countries examined according to their income level. The description of the variables used in the model is shown in Table 1. All the variables are expressed in logarithmic form to reduce their measurement scale, with the exception of the dependent variable, which has relatively low values, and population growth. The descriptive statistics and the correlation matrix are shown in Table 2. At 5% significance, there is a strong negative relationship between renewable energy consumption, GDP, GDP², the non-renewable energy price and forest area, except in MIC, where a positive relationship between REC and FAP is seen. In addition, POP shows a positive and significant relationship with forest area. Figure 1 shows the annual evolution of per capita forest area, measured in square kilometres, for each group of countries; it can be seen that in 2018 the per capita forest area in HIC was approximately double that in MIC and LIC.
Econometric Model
The main objective of this document is to find the relationship between renewable energy consumption, GDP, GDP², the non-renewable energy price, population growth and forest area in groups of countries classified by income level. The model specification can be written as:

FAP_t = β_0 + β_1 REC_t + β_2 GDP_t + β_3 GDP²_t + β_4 EP_t + β_5 POP_t + ε_t    (1)

In Equation (1), FAP_t represents the forest area at time t = 1990, 1991, 1992, ..., 2018; REC_t represents renewable energy consumption; GDP_t is the gross domestic product; GDP²_t is the square of GDP; EP_t is the price of non-renewable energy; POP_t represents population growth; β_i denotes the coefficients of the explanatory variables; and ε_t is the error term.
Next, various econometric strategies are applied, according to what is described in the following sections.
Stationarity Tests
To fulfil the objective of the study, the stationarity of the series must be examined. One of the most widely used formal methods to assess stationarity is the Augmented Dickey-Fuller [50] unit root test. The null hypothesis (H0: ρ = 0) assumes that the variable contains a unit root, while the alternative hypothesis (H1: ρ < 0) states that it does not. To examine the long-term relationship, the variables may have different orders of integration, I(0), I(1) or a mixture of the two, so the ARDL approach is suitable for the cointegration analysis [51-53]. A limitation of the ARDL approach, however, is that it cannot be used with variables integrated of order I(2); the maximum order is I(1). In addition, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test is performed [54]. Equation (2) formalises the ADF regression, where t represents the year and i the number of lags of the variable:

Δy_t = α + βt + ρ y_{t-1} + Σ_{i=1}^{k} γ_i Δy_{t-i} + ε_t    (2)
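A minimal sketch of this testing step with statsmodels is shown below, using the DataFrame df from the earlier data-construction sketch:

```python
# Unit-root tests for each model variable. Note the opposite null
# hypotheses: ADF H0 is a unit root, KPSS H0 is stationarity.
from statsmodels.tsa.stattools import adfuller, kpss

for col in ["FAP", "REC", "GDP", "GDP2", "EP", "POP"]:
    series = df[col].dropna()
    adf_stat, adf_p = adfuller(series, autolag="AIC")[:2]
    kpss_stat, kpss_p = kpss(series, regression="c", nlags="auto")[:2]
    print(f"{col}: ADF p={adf_p:.3f}, KPSS p={kpss_p:.3f}")
```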
Cointegration Method
The time series cointegration method is used to examine the long-term relationship between the variables. Consequently, following Pesaran, Shin and Smith [55], the Autoregressive Distributed Lag (ARDL) model is applied, since it analyses cointegration between variables with different orders of integration, controls for endogeneity [7,56] and can be used for short periods, even with fewer than 30 observations [57,58]. An indispensable requirement is that the order of integration be at most I(1); otherwise the analysis is invalid [59-62]. The relationship is formalised in the following equation:

ΔFAP_t = α_1 + α_2 FAP_{t-1} + α_3 REC_{t-1} + α_4 GDP_{t-1} + α_5 GDP²_{t-1} + α_6 EP_{t-1} + α_7 POP_{t-1} + Σ_{i=1}^{k} β_1 ΔFAP_{t-i} + Σ_{i=0}^{k} β_2 ΔREC_{t-i} + Σ_{i=0}^{k} β_3 ΔGDP_{t-i} + Σ_{i=0}^{k} β_4 ΔGDP²_{t-i} + Σ_{i=0}^{k} β_5 ΔEP_{t-i} + Σ_{i=0}^{k} β_6 ΔPOP_{t-i} + ε_t    (3)

In Equation (3), Δ is the difference operator; α_1 is the constant term; α_2, ..., α_7 are the long-term coefficients; β_1, ..., β_6 represent the error-correction (short-run) dynamics; ε_t is the error term; and k is the number of lags for each variable. The ARDL model uses the Wald test (F-statistic) on the lagged level terms to determine the existence of a long-term relationship. The null hypothesis establishes no cointegration between the variables (H0: α_2 = α_3 = α_4 = α_5 = α_6 = α_7 = 0), against the alternative hypothesis of cointegration (H1: at least one of these coefficients differs from zero). For the cointegration analysis, Pesaran, Shin and Smith [55] establish critical values of the F-statistic with two bounds, lower and upper. If the F-statistic is less than the lower bound, the null hypothesis of no cointegration is accepted; if it is greater than the upper bound, the null hypothesis is rejected, that is, there is long-term cointegration between the variables. If the value falls between the lower and upper bounds, the results are inconclusive. Finally, the Akaike [63] criterion is used to determine the optimal lag of each variable.
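The bounds-testing procedure can be sketched with statsmodels (version 0.13 or later provides the ARDL and UECM classes); variable names follow the earlier sketch, and the case-3 specification (unrestricted constant, no trend) is an illustrative assumption:

```python
# A minimal sketch of the ARDL bounds test; df is the DataFrame built above.
import pandas as pd
from statsmodels.tsa.ardl import UECM, ardl_select_order

endog = df["FAP"]
exog = df[["REC", "GDP", "GDP2", "EP", "POP"]]

# Choose lag orders by AIC, as in the paper.
sel = ardl_select_order(endog, maxlag=2, exog=exog, maxorder=2, ic="aic")

# Re-express the chosen ARDL as an unrestricted error-correction model and
# apply the Pesaran-Shin-Smith bounds F-test.
uecm_res = UECM.from_ardl(sel.model).fit()
print(uecm_res.bounds_test(case=3))
print(uecm_res.summary())  # the FAP(-1) coefficient is the adjustment speed
```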
Error Correction Term
Once long-term cointegration has been verified, the Error Correction Term (ECT) is examined. The model specification is described below:

$$\Delta FAP_t = \alpha_1 + \sum_{i=1}^{k} \alpha_{2i}\Delta FAP_{t-i} + \sum_{i=0}^{k} \alpha_{3i}\Delta REC_{t-i} + \sum_{i=0}^{k} \alpha_{4i}\Delta GDP_{t-i} + \sum_{i=0}^{k} \alpha_{5i}\Delta GDP_{t-i}^2 + \sum_{i=0}^{k} \alpha_{6i}\Delta EP_{t-i} + \sum_{i=0}^{k} \alpha_{7i}\Delta POP_{t-i} + \gamma ECT_{t-1} + \varepsilon_t \quad (4)$$

In Equation (4), ECT_{t-1} represents the error term calculated from the cointegration equation, reflecting the disequilibrium error, i.e. the deviation from the long-term equilibrium relationship. γ describes the adjustment parameter, the speed at which the variables return to the long-term equilibrium relationship.
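The short sketch below illustrates the error-correction step of Equation (4). An Engle-Granger-style two-step estimate is used purely as a stand-in for the paper's ARDL-derived ECM: the long-run residuals proxy the ECT, and γ is the coefficient on ECT_{t-1}. File name and columns are assumptions carried over from the earlier sketches.

```python
# Illustrative two-step error-correction estimate (not the paper's exact ARDL ECM).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel_mic.csv")  # hypothetical file, as in the earlier sketch

# Step 1: long-run levels regression; its residuals serve as the ECT series.
long_run = smf.ols("FAP ~ REC + GDP + GDP2 + EP + POP", data=df).fit()
df["ECT_1"] = long_run.resid.shift(1)

# Step 2: regress the first difference on short-run terms plus lagged ECT.
df["dFAP"] = df["FAP"].diff()
for v in ["REC", "GDP", "GDP2", "EP", "POP"]:
    df[f"d{v}"] = df[v].diff()

ecm = smf.ols("dFAP ~ dREC + dGDP + dGDP2 + dEP + dPOP + ECT_1",
              data=df.dropna()).fit()

# A gamma in (-1, 0), e.g. -0.087, would mean roughly 8.7% of a deviation
# from the long-run equilibrium is corrected within the first year.
print(ecm.params["ECT_1"])
```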
Finally, the stability of the model is checked using diagnostic tests, which verify that the model is free of serial autocorrelation and heteroscedasticity. Likewise, correct specification, normality and stability are verified using the Ramsey RESET test, the Jarque-Bera (JB) test and the cumulative sum of squares of recursive residuals proposed by Brown et al. [64], respectively. Figure 2 summarises the methodology used in this investigation.
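A sketch of this diagnostic battery, applied to a fitted statsmodels OLS result such as `ecm` from the previous sketch, might look as follows; the 5% threshold matches the convention used later in the text.

```python
# Hedged diagnostics sketch: serial correlation, heteroscedasticity,
# normality and functional form, as described above.
from statsmodels.stats.diagnostic import (
    acorr_breusch_godfrey,  # serial correlation
    het_breuschpagan,       # heteroscedasticity
    linear_reset,           # Ramsey RESET (functional-form misspecification)
)
from statsmodels.stats.stattools import jarque_bera

_, bg_p, _, _ = acorr_breusch_godfrey(ecm, nlags=2)
_, bp_p, _, _ = het_breuschpagan(ecm.resid, ecm.model.exog)
_, jb_p, _, _ = jarque_bera(ecm.resid)
reset_p = linear_reset(ecm, power=2, use_f=True).pvalue

print(f"Breusch-Godfrey p = {bg_p:.3f}   (> 0.05: no serial correlation)")
print(f"Breusch-Pagan   p = {bp_p:.3f}   (> 0.05: homoscedastic)")
print(f"Jarque-Bera     p = {jb_p:.3f}   (> 0.05: residuals normal)")
print(f"Ramsey RESET    p = {reset_p:.3f} (> 0.05: correctly specified)")
# CUSUM/CUSUMQ stability plots (Brown et al.) can be built from recursive
# residuals, e.g. via statsmodels' recursive_olsresiduals helper.
```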
Discussion of Results
Prior to the long-term analysis, the stationarity of the variables was examined using the Augmented Dickey-Fuller (ADF) unit root test [50]. The results in Table 3 reject the null hypothesis that assumes the existence of a unit root; that is, the series are stationary. One of the main advantages of the ARDL approach is that it can use variables with integration order I(0), I(1) or a mixture of both [65]. Complementarily, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test [54] is carried out. Thus, the forest area and population growth variables for all groups are stationary at levels, I(0), and the rest of the independent variables at I(1).
After checking the stationarity of the series, Table 4 presents the results of the ARDL cointegration test. To determine the optimal lag length of each variable, the Akaike information criterion (AIC) is used. In MIC and LIC, the calculated F-statistics are higher than the upper-bound value proposed by Pesaran, Shin and Smith [55]. Consequently, at the 1% significance level, the alternative hypothesis of a long-term cointegration relationship between the study variables is accepted, which means that the variables move jointly over time. On the contrary, for HIC, the results show no equilibrium relationship in the model studied.
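For the AIC-based lag selection, recent statsmodels versions ship a dedicated ARDL order-selection helper; a sketch under the same assumed DataFrame follows. The exact API (statsmodels.tsa.ardl.ardl_select_order, available from statsmodels 0.13) is a plausible route rather than the authors' actual toolchain.

```python
# Hedged sketch: AIC-based ARDL lag selection with statsmodels' ARDL API.
import pandas as pd
from statsmodels.tsa.ardl import ardl_select_order

df = pd.read_csv("panel_mic.csv")  # hypothetical file with the model variables

sel = ardl_select_order(
    df["FAP"],                                   # dependent variable
    maxlag=2,                                    # max lags of FAP to consider
    exog=df[["REC", "GDP", "GDP2", "EP", "POP"]],
    maxorder=2,                                  # max lags of each regressor
    ic="aic",                                    # Akaike criterion, as in the text
)
res = sel.model.fit()
print(sel.model.ardl_order)  # chosen (p, q1, ..., q5) lag structure
print(res.summary())
```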
The findings of the cointegration test establish the long-term relationship between the forest area, GDP, GDP², renewable energy consumption, the non-renewable energy price and population growth. Thus, the ARDL approach is used to estimate the long-term coefficients between the variables. Table 5 shows the results obtained. FAP_{t-1} represents the error correction term (ECT); in MIC and LIC it is negative and statistically significant, as expected, whereas in HIC its value is positive and not significant, which reflects the long-term non-cointegration mentioned above. The values of FAP_{t-1} range from 0 (no adjustment) to −1 (immediate adjustment). The estimated values are small, which is reasonable, since increasing forest cover is a time-consuming process inherent to its nature. That is, when the forest cover area is far from its equilibrium level, it adjusts by 0.44% and 8.7%, respectively, within the first year; the speed of convergence to equilibrium is slow but significant. In MIC and LIC, an increase of 1% in renewable energy consumption represents an increase of 0.041512 km² and 0.027512 km² of forest area, respectively. That is, energy consumption from renewable sources contributes to reducing deforestation. The increase in the consumption of renewable energy represents an alternative for access to clean energy, instead of wood from forests being used as an energy source. These results are consistent with those reported by Tanner and Johnston [13], Nazir et al. [16] and Bhattacharyya and Ohiare [20], who affirm that the State can generate policies so that the rural population can have access to electricity and give up the consumption of products from forests.
Regarding economic activity in MIC and LIC, an increase in GDP decreases the forest area. The increase in economic activity brings with it externalities, such as increased urbanisation and the expansion of cropland to provide food, which are generally related to a greater demand for land; to meet this demand, spaces destined for forests are occupied. These findings also reflect the fact that increased economic activity demands resources found in forests, which leads to a process of deforestation [43,66]. On the other hand, population growth is negatively related to forest area coverage in both groups of countries (Table 5). That is, as the population increases, a change in land use is generated in which the forest area is turned over to other types of human activity, such as growing food, spaces for housing and resources (wood) for the construction of houses. These results coincide with those found by Ahmed, Shahbaz, Qasim and Long [12], who mention that increasing population density demands more forest resources for the construction of housing in the rural sector. Moreover, Tritsch and Le Tourneau [67] find that one third of deforestation in the Amazon region of Brazil is associated with 1.5% of the population. The findings described in this section provide sufficient information to verify the fulfilment of hypotheses H1, H2 and H4 raised in Section 2. Additionally, it is observed that the price of non-renewable energy is not significant, thus ruling out hypothesis H3. Similarly, GDP² is not significant; that is, non-compliance with the environmental Kuznets curve is corroborated [39].
Following the long-term analysis, the short-term relationships between the model variables are evaluated. Table 6 shows that variations in renewable energy consumption, GDP, GDP², the non-renewable energy price and population growth are not related to immediate changes in the forest area in the short term in any of the three models. Subsequently, the error-correction term (ECT) of the Granger causality test is used to detect the direction of long-term causality between the study variables. Table 7 shows that, in the long term, renewable energy consumption, GDP, GDP², the non-renewable energy price and population growth cause the forest area in MIC and LIC. Additionally, Table 8 shows the diagnostic tests used to validate the model fit [12,52,53,56] for all groups of countries. The p-value of the Ramsey test, greater than 0.05, confirms that the models are correctly specified. A p-value greater than 0.05 rules out the presence of serial correlation in the estimated models. The p-value of the heteroscedasticity test, also greater than 0.05, confirms that the models are homoscedastic. Moreover, the Jarque-Bera normality test, with probabilities of 0.8204, 0.5734 and 0.7683, confirms that the residuals are normally distributed. Finally, the coefficients of determination of 77.45, 83.87 and 84.89, respectively, indicate the good fit of the models. To conclude the study, following Brown et al. [64], the stability of the parameters is evaluated. The cumulative sum (CUSUM) and cumulative sum of squares (CUSUMQ) are shown in Figures 3-5 for all groups of countries. In all three models, the graphs show that the lines remain within the 95% critical limits, which indicates the stability of the coefficients. The diagnostic tests confirm that the ARDL model is reliable for defining policies at the nexus of forest area, renewable energy consumption, GDP, GDP², the non-renewable energy price and population growth.
Conclusions and Policy Implications
Deforestation is a global economic and environmental problem, so trying to understand its determinants is essential to mitigate its accelerated pace. This research examined the long-term equilibrium relationship between renewable energy consumption, GDP, GDP², the non-renewable energy price, population growth and forest area in high-, middle- and low-income countries, using the ARDL econometric approach.
The results confirm a long-term equilibrium relationship between the mentioned variables for MIC and LIC. The ECT indicates that the speed of forest cover adjustment is slow when it is not at its equilibrium point; it adjusts by approximately 0.44% and 8.7%, respectively, within the first year. Furthermore, the consumption of renewable energy is positively related to the forest area. In contrast, population growth maintains a negative relationship with the forest area. The results obtained provide valuable information confirming the fulfilment of hypotheses H1, H2 and H4 of this investigation. On the contrary, H3 is not fulfilled.
Those responsible for establishing public and environmental policy measures must consider that encouraging the consumption of renewable energy allows for an alternative to the use of forest products and services. In MIC and LIC, the boom in economic activity must take place in scenarios in which environmental sustainability and the care of forests are on the horizon. Population growth must be associated with sustainable measures on land use, thereby ensuring that deforestation does not increase.
One of the main limitations of this research is the lack of information on the price elasticity of demand for agricultural products throughout the period analysed, to include
Hafnium Bismuth Erbium Co-Doped Fiber Based Dark Pulses Generation With Black Phosphorus As Saturable Absorber
Dark pulse generation is demonstrated in a fiber laser configured with a 20 cm long HBEDF and multilayer Black Phosphorus as the gain medium and saturable absorber, respectively. Dark-pulse fiber laser operation in the 1.5 µm region was obtained when the pump power exceeded the threshold of 147 mW. The spectrum of the dark pulse is centred at 1556.40 nm with a 3 dB bandwidth of 0.12 nm, and the separation between adjacent pulses is 1145 ns, corresponding to the cavity length of 211 m. The pulse width was measured to be around 320 ns. The radio-frequency spectrum of the dark pulse was measured within a 20 MHz range. More than 17 harmonics were observed within this range, which indicates the mode-locking operation of the laser. The fundamental frequency was obtained at 1.1 MHz, which agreed with the oscilloscope trace. Furthermore, it shows a signal-to-noise ratio of about 36.58 dB, which indicates good stability. The maximum output power of 0.78 mW and pulse energy of 0.78 nJ were obtained at 187 mW pump power.
1. Introduction
The compactness and flexibility of fiber lasers have drawn widespread attention, and they are employed in many applications such as optical communication, fiber sensor technologies, micromachining and military systems [1][2][3]. Q-switching and mode locking are mostly used in these applications. Compared to the conventional active technique, passive mode locking through the use of a saturable absorber (SA), which produces an ultrashort pulsed fibre laser, is desirable [4]. This is due to the rapid modulation of the resonator compared to any electronic modulator, which is necessary for an actively mode-locked laser [4]. Recently, numerous types of saturable absorber materials have been used for pulsed laser generation, including semiconductor saturable absorber mirrors (SESAMs) [5], single-walled carbon nanotubes (SWCNTs) [6] and graphene [7,8]. Most researchers have used SESAMs for passively generating mode-locked lasers even though their fabrication process is complicated and expensive [9]. In addition, the operating wavelength depends on the semiconductor materials. Nowadays the popular passive techniques are SWCNTs and graphene. However, the SWCNT SA operating wavelength is determined by the nanotube diameter, and the bandgap engineering is complicated, which normally leads to an uncontrollable non-saturable loss. Meanwhile, graphene has high electron mobility and a zero energy bandgap, which allows broadband operation. However, its optical absorption at the 1.5 µm region is weak, so its application in the optical communication area may be limited. Apart from SWCNTs and graphene, many other nanomaterials have been investigated in fiber laser systems as SA devices, such as topological insulators, transition metal dichalcogenides and black phosphorus (BP). BP has a narrow direct bandgap and a wide optical response spanning the infrared to the mid-infrared, corresponding to the bulk and single-layer forms of the material [4].
There is also interest in the generation of dark pulse lasers, which are less sensitive to fiber loss compared to bright pulses [9,10]. A dark pulse laser emits a steady beam of light with periodic dips in the continuous bright background. This type of laser is new, so its applications are still under investigation [11]. Optical frequency combs and optical atomic clocks are some of the applications for dark pulse lasers. The frequency of dark pulse waves could be used to transfer information, while the continuous-wave background may provide a single strong comb line that could be used to probe a quantum transition. In telecommunications, some applications can be exploited due to the lack of dispersion and the linearity during the transmission of dark pulses [12].
Many works have previously reported the generation of dark pulses. For instance, Wang et al. demonstrated the generation of dark pulses in a mode-locked Erbium-doped fiber laser (EDFL) based on a molybdenum disulfide film-based saturable absorber (SA), where the laser operated at a 1.7 MHz repetition rate [13]. In another work, Zhao et al. produced dark pulses by using rhenium disulfide (ReS2) as a saturable absorber [14].
In this work, mode-locked fiber laser operation at the 1.5 µm region is demonstrated using a 20 cm long Hafnium Bismuth Erbium Doped Fiber (HBEDF) as the gain medium in conjunction with a newly developed passive saturable absorber. The HBEDF was realised from a yttria-alumina-silica glass based preform co-doped with Hf, Bi and Er, with an absorption loss at 980 nm of 100 dB/m, equivalent to 12 500 wt ppm [15,16]. Here, the pulse generation was realised using Black Phosphorus as the saturable absorber.
Pulse generation with BP SA
Among 2D nanomaterials, BP has gained much interest in recent years as a potential candidate for many applications. Multilayer BP has good optical characteristics such as wideband absorption and ultrafast carrier dynamics. Furthermore, it comprises only the element phosphorus, and thus it can easily be peeled off by mechanical exfoliation. The BP SA was prepared by transferring multilayer BP onto a fiber ferrule tip using a mechanical exfoliation method. At first, thin flakes were moderately peeled off from a big block of commercially available BP crystal (purity of 99.995%) using clear scotch tape. Then, the flakes were repeatedly pressed so that they adhered onto the scotch tape to form a thin layer of BP. Afterwards, the end surface of a fresh standard FC/PC fiber ferrule tip was pressed down on the scotch tape to transfer the multilayer BP onto it. The BP transfer process is described in Figure 1(a). Next, the ferrule with multilayer BP was connected to another fresh FC/PC fiber ferrule via a fiber adapter to form an all-fiber BP based SA device. A tiny amount of index-matching gel was applied at the connector to minimise the SA device loss. Energy dispersive spectroscopy (EDS) analysis was used to examine the composition of the multilayer BP tape [17]. The presence of BP material on the scotch tape adhesive surface was confirmed by the high phosphorus peak in the spectroscopy, as shown in Figure 1(b). Figure 1(c) shows the FESEM image of the multilayer BP tape, which confirmed the existence of a uniform phosphorus multilayer on the tape.
Laser Cavity Configuration
The laser cavity has a typical ring configuration, as shown in Figure 2. It uses the passive BP SA for mode-locking. A 20 cm HBEDF piece was pumped with a 980 nm laser diode via a 980 nm/1550 nm wavelength division multiplexer (WDM). It produced amplified spontaneous emission (ASE) photons, which oscillated in the laser cavity to produce lasing in the 1550 nm region. The SA functions as a mode-locker to convert the continuous-wave lasing into nanosecond pulses. A polarisation-insensitive optical isolator was incorporated inside the ring cavity to lock the propagation of light in one direction and thus prevent any detrimental effects due to spurious reflections inside the resonator. A 198 m long single-mode fiber (SMF) was also inserted to increase the nonlinearity so that enough phase shift per round trip could be achieved in the cavity to assist the mode-locking operation. A 10 dB output coupler was used to split the power in portions of 90% and 10%. The 90% portion was channelled back into the cavity for further oscillation, while the 10% portion was tapped out as the output. The spectral and temporal analyses of the mode-locked laser were carried out using a 0.02 nm resolution OSA (Yokogawa AQ6370C) and a high-speed photodetector linked to an oscilloscope (GWINSTEK: GDS-3352), respectively. A 7.8 GHz Radio Frequency (RF) spectrum analyser (Anritsu MS2683A) was used to measure the repetition rate and evaluate the stability of the pulsed laser. The average laser power was measured with an optical power meter (Thorlabs PM 100D) coupled to an InGaAs power head operating between 800-1700 nm (Photodiode Power Sensor S145C Integrating Sphere).

Results and Discussion
In this experiment, the dark pulses were obtained when the pump power exceeded the threshold of 147 mW. As shown in Figure 3, the spectrum of the dark pulse is centred at 1556.40 nm, with a 3 dB bandwidth of 0.12 nm. Compared to CW operation (without the SA), the operating wavelength was slightly shifted to a shorter wavelength due to the insertion loss of the SA. Figure 4 displays the mode-locked pulse train, which shows obvious dark pulses. The separation between adjacent pulses is 1145 ns, corresponding to the cavity length of 211 m. The pulse width was measured to be around 320 ns. The formation of dark pulses is most probably due to the deployment of the highly nonlinear HBEDF in a long cavity. The cavity design has a polarisation-independent nature; thus, two orthogonal polarisation components inevitably exist and couple to each other, inducing the generation of the dark pulses. Figure 5 illustrates the radio-frequency spectrum of the dark pulse, which was measured within a 20 MHz range. More than 17 harmonics were observed within this range, which indicates the mode-locking operation of the laser. The fundamental frequency was obtained at 1.1 MHz, which agreed with the oscilloscope trace. It shows a signal-to-noise ratio of about 36.58 dB, which indicates good stability.
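As a rough cross-check of these figures, the fundamental repetition rate of a ring cavity follows from f = c/(nL) and the pulse energy from E = P_avg/f. A short sketch follows; the silica group index n ≈ 1.468 is an assumption, while all other numbers are taken from the text.

```python
# Back-of-the-envelope consistency check on the reported cavity figures.
c = 2.998e8   # speed of light in vacuum, m/s
n = 1.468     # assumed group index of standard single-mode silica fiber
L = 211.0     # ring-cavity length, m

f_rep = c / (n * L)  # fundamental repetition rate of a ring cavity
print(f"f_rep ~ {f_rep / 1e6:.2f} MHz")  # ~0.97 MHz, near the measured 1.1 MHz

p_avg = 0.78e-3          # maximum average output power, W
e_pulse = p_avg / f_rep  # pulse energy = average power / repetition rate
print(f"E_pulse ~ {e_pulse * 1e9:.2f} nJ")  # ~0.8 nJ, close to the reported 0.78 nJ
```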
Conclusion
This research demonstrates dark pulse generation in a fiber laser configured with a 20 cm long HBEDF and Black Phosphorus as the gain medium and saturable absorber, respectively. The dark-pulse fiber laser at the 1.5 µm region was obtained when the pump power exceeded the threshold of 147 mW. The spectrum of the dark pulse is centred at 1556.40 nm, with a 3 dB bandwidth of 0.12 nm. The separation between adjacent pulses is 1145 ns, corresponding to the cavity length of 211 m, and the pulse width was measured to be around 320 ns. The radio-frequency spectrum of the dark pulse was measured within a 20 MHz range. The maximum output power of 0.78 mW and pulse energy of 0.78 nJ were obtained at 187 mW pump power. Dark pulse lasers are new, so applications for them are
Mycobacterial Cultures Contain Cell Size and Density Specific Sub-populations of Cells with Significant Differential Susceptibility to Antibiotics, Oxidative and Nitrite Stress
The present study shows the existence of two specific sub-populations of Mycobacterium smegmatis and Mycobacterium tuberculosis cells differing in size and density, in the mid-log phase (MLP) cultures, with significant differential susceptibility to antibiotic, oxidative, and nitrite stress. One of these sub-populations (~10% of the total population) contained short-sized cells (SCs) generated through highly-deviated asymmetric cell division (ACD) of normal/long-sized mother cells and symmetric cell divisions (SCD) of short-sized mother cells. The other sub-population (~90% of the total population) contained normal/long-sized cells (NCs). The SCs were acid-fast stainable and heat-susceptible, and contained a high density of membrane vesicles (MVs, known to be lipid-rich) on their surface, while the NCs possessed a negligible density of MVs on the surface, as revealed by scanning and transmission electron microscopy. Percoll density gradient fractionation of MLP cultures showed the SCs-enriched fraction (SCF) at lower density (probably indicating lipid-richness) and the NCs-enriched fraction (NCF) at higher density of percoll fractions. While live cell imaging showed that the SCs and the NCs could grow and divide to form colonies on agarose pads, the SCF and NCF cells could independently regenerate MLP populations in liquid and solid media, indicating their full genomic content and population regeneration potential. CFU based assays showed the SCF cells to be significantly more susceptible than NCF cells to a range of concentrations of rifampicin and isoniazid (antibiotic stress), H2O2 (oxidative stress), and acidified NaNO2 (nitrite stress). Live cell imaging showed significantly higher susceptibility of the SCs of SC-NC sister daughter cell pairs, formed from highly-deviated ACD of normal/long-sized mother cells, to rifampicin and H2O2, as compared to the sister daughter NCs, irrespective of their comparable growth rates. The SC-SC sister daughter cell pairs, formed from the SCDs of short-sized mother cells and having comparable growth rates, always showed comparable stress-susceptibility. These observations, and the presence of M. tuberculosis SCs and NCs in pulmonary tuberculosis patients' sputum reported earlier by us, imply a physiological role for the SCs and the NCs under the stress conditions. The plausible reasons for the higher stress susceptibility of SCs and lower stress susceptibility of NCs are discussed.
Bacterial strains
Name — Reference or Source
Mycobacterium smegmatis mc²155 — William R. Jacobs (Snapper et al., 1990)
Mycobacterium tuberculosis

The 64% and 66% percoll fractions mostly contained SCs and hence were termed short-sized cells' fractions 1 and 2 (SCF1 and SCF2), based on their frequency of length distribution (Figure 1E,F), respectively. The 78% percoll fraction was mostly composed of NCs and was called the normal-sized cells' fraction (NCF). All the other percoll fractions of Msm cells were called mixed-sized cell fractions (MCF).

Table legend: the lines in the first column indicate the percoll fractions; the second column gives the average lengths ± SD of cells in the SCF1, SCF2 and NCF fractions, measured under bright-field (BF, top lines) and DIC (bottom lines); the third column gives their respective proportions; the fourth column gives the proportion of outliers, i.e. cells that were counted to determine the average length but whose lengths were much lower or much higher than the average ± SD, in spite of having the same density as the average-sized cells. n ≥ 300 cells from each fraction.
The percoll step-gradient used for Mtb cells was 60-76%. The 68-76% percoll fractions of the Mtb sample contained very few or no cells. The 60%, 62% and 64% Mtb percoll fractions were also mostly composed of shorter cells, based on their frequency of length distribution (Figure 8C). However, since the average sizes of cells from the 60% and 62% fractions were comparable, they were pooled together and termed Mtb SCF1. Subsequently, the 64% Mtb percoll fraction was termed Mtb SCF2. The 66% percoll fraction was mostly composed of longer cells and was called Mtb NCF (Figure 8C).

Supplementary table legend: as above, the first column indicates the percoll fractions; the second column gives the average lengths ± SD of cells in the SCF1, SCF2 and NCF fractions, measured under bright-field (BF, top lines) and DIC (bottom lines); the third column their respective proportions; and the fourth column the proportion of outliers. n ≥ 300 cells from each fraction.
CFU Determination of Msm and Mtb SCF1, SCF2 and NCF
The cells in the Msm 64% (SCF1), Msm 66% (SCF2) and Msm 78% (NCF) fractions, obtained following analytical-scale percoll gradient centrifugation, were resuspended in 400 µl of 1x PBS, 0.5% Tween 80 or Middlebrook 7H9 medium, while the cells in the NCF were further diluted 250 times with Middlebrook 7H9 medium (as mentioned under 'MATERIALS AND METHODS'). Subsequently, 200 µl from each of the respective cell suspensions was added into 25 ml of Middlebrook 7H9 medium taken in a 100 ml flask, to obtain a cell density of 10³ cells/ml, followed by exposure to stress. In order to obtain 10⁵ cells/ml of the preparative-scale percoll gradient fractionated SCF and NCF [the NCF visually adjusted (by dilution with the medium) to the same cell density as that of the SCF (the 400 µl of SCF1 + SCF2 mixture prepared)], 100 µl from each of the respective cell suspensions was added into 25 ml of Middlebrook 7H9 medium taken in a 100 ml flask, followed by exposure to the stress (as mentioned under 'MATERIALS AND METHODS'). It may be noted here that the visual comparison of the cell density of the fractions for matching cfu was verified for accuracy by plating performed multiple times using independent samples prepared from multiple cultures at different times by different people. Following the addition of cells from the cell suspensions, 100 µl was taken from each of these samples in the 25 ml Middlebrook 7H9 medium, at 0 hr (before exposure) and at the times mentioned post-exposure to the stress agents, for serial dilution followed by plating to determine the cfu.
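The arithmetic behind these target densities is simple volumetric dilution; a hedged sketch follows, where the volumes come from the text and the implied stock density is an inference, not a value stated in the original.

```python
# Hedged dilution arithmetic for the quoted cell densities.
def final_density(stock_per_ml: float, v_added_ml: float, v_total_ml: float) -> float:
    """Cell density after adding v_added_ml of stock into a final volume v_total_ml."""
    return stock_per_ml * v_added_ml / v_total_ml

# To end at 1e3 cells/ml after adding 0.2 ml into ~25 ml of medium, the
# resuspended fraction must hold roughly 1e3 * 25.2 / 0.2 = 1.26e5 cells/ml.
stock = 1e3 * 25.2 / 0.2
print(f"implied stock density ~ {stock:.3g} cells/ml")
print(f"check: {final_density(stock, 0.2, 25.2):.3g} cells/ml")
```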
Since the cells in the percoll fractions were found to elongate when kept in PBS or Middlebrook 7H9 medium after removal of percoll, it was not possible to take a cell count of the fractions or to determine cfu (which takes 3 days) in order to obtain almost equal cell numbers of the SCF1, SCF2 and NCF for exposure to the stress agents. Therefore, a large number of sets (n = 80) of the SCF1, SCF2 and NCF samples, prepared independently on multiple occasions, were plated to establish the consistency and reproducibility of the size range of the cells fractionating into the 64%, 66% and 78% percoll fractions, corresponding to the SCF1, SCF2 and NCF samples, respectively. From these experiments, the volumes of the SCF1, SCF2 and NCF that reproducibly gave consistent cfu were determined and used.
The averages (± standard deviations) observed for the Msm cfu in the SCF1, SCF2 and NCF were as follows: 110.4 (± 43.3), 202.6 (± 80.03) and 124.6 (± 94.7). Though these variations were observed in the cfu, the technical triplicates within each set of experiments were consistent, and the nature and trend of the response of the individual fractions were reproducible and consistent.
The cell densities of the Mtb SCF2 (64%) and Mtb NCF (66%) were visually adjusted (by dilution with the medium) to the same cell density as that of the Mtb SCF1 (60% + 62%). In order to obtain 10⁴ cells/ml of Mtb SCF1, SCF2 and NCF for exposure to stress, 100 µl of each of the respective cell suspensions was added into 25 ml of Middlebrook 7H9 medium taken in a 100 ml flask, followed by exposure to the stress.
The averages (± standard deviations) observed for the Mtb cfu in the SCF1, SCF2 and NCF were as follows: 42.8 (± 7.6), 57.2 (± 8.5) and 45.4 (± 21.9). Even though these variations were observed in the cfu, as with the Msm cell samples, the technical triplicates within each set of experiments were consistent, and the nature and trend of the response of the individual fractions were reproducible and consistent.
It was not possible to obtain 100% enrichment of either the SCs or the NCs of Msm or Mtb cells in any percoll fraction, for the following probable reasons. The cell size of Msm mother cells varied from an average of 4 µm (normal size) to 9 µm (longer size) in length; with the cells elongating prior to division, the length varied from 8 µm to 18 µm. Therefore, although difficult to determine, it is possible that the longer cells, whose sizes exceed the average cell size + SD in the 64% Msm SCF1, might have come from the asymmetric division of longer mother cells. Similarly, the shorter cells, whose sizes are less than the average cell size − SD in the 78% Msm NCF, might have come from the asymmetric division of shorter mother cells. Similar explanations may well apply to the enrichment of Mtb cells in the respective percoll fractions.
We speculate that the low proportion of normal-sized cells in the SCF and of short cells in the NCF could probably be due to the buoyant density of these minor populations being comparable to that of the majority of cells in the respective fractions. Nevertheless, the majority of the shorter cells fractionated into the percoll fractions of low buoyant density. Differential buoyant density has been found in M. tuberculosis cells subjected to multiple stress conditions and has been suggested to be due to differential lipid (triglyceride) content (Deb et al., 2009). Thus, the heterogeneity in the population seems not to be confined to cell size alone but extends to density as well, indicating that the high levels of heterogeneity found in mycobacterial populations (Deb et al., 2009; McCarthy, 1974; Khomenko, 1987; Anuchin et al., 2009; Ghosh et al., 2009; Ryan et al., 2010; Farnia et al., 2010; Markova et al., 2012; Aldridge et al., 2012) seem to be based on several parameters, operating through multiple mechanisms, under diverse growth and stress conditions.
Equal cell densities (10³, 10⁴ or 10⁵ cells/ml) of the respective Msm and Mtb SCF1, SCF2 and NCF cells were exposed individually to a range of concentrations of rifampicin and isoniazid (antibiotic stress) and H2O2 (oxidative stress) (Milano et al., 2001) for different durations. Likewise, the Msm SCF1, SCF2 and NCF cells at the same cell density (10³ cells/ml) were also exposed to 7.5 mM NaNO2 (pH 5) (nitrite stress) (Colangeli et al., 2009) for 30 min (as mentioned under 'MATERIALS AND METHODS'). The percentage survival of the different samples against the four stress agents, in terms of cfu, was determined by plating the respective stressed cells and the unstressed cells on stress-agent-free plates. Using this experimental rationale and strategy, we investigated whether the SCF1, SCF2 and NCF cells showed differential survival against these stress agents.
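A minimal sketch of this percentage-survival readout follows: survival is the cfu of stressed cells relative to the unstressed control, both plated on stress-agent-free plates. The colony counts and dilution factors in the example are hypothetical.

```python
# Hedged sketch of the cfu-based percent-survival calculation.
def percent_survival(cfu_stressed: float, cfu_control: float,
                     dil_stressed: float = 1.0, dil_control: float = 1.0) -> float:
    """Percent survival, correcting each plate count for its serial dilution."""
    return 100.0 * (cfu_stressed * dil_stressed) / (cfu_control * dil_control)

# e.g. an SCF1 plate with 22 colonies against an unstressed control plate with
# 110 colonies at the same dilution corresponds to 20% survival.
print(percent_survival(22, 110))  # -> 20.0
```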
Rationale for the Range of Rifampicin, Isoniazid and H2O2 Concentrations Used
In order to establish the robustness of the stress response exhibited by the Msm SCF1, SCF2 and NCF cells, the cells were exposed to a range of rifampicin, isoniazid and H2O2 concentrations. Subsequently, the response of the cells in these percoll fractions to acidified nitrite stress was also determined. Concentrations that caused either very low or very high lethality were not selected for the stress exposure; the ranges of rifampicin, isoniazid and H2O2 concentrations used are those that yield survival between 0% and 100%. For example, since the survival of the Msm NCF cells was ~80% (data not shown) when exposed to < 25 µg/ml rifampicin, and < 10% at 100 µg/ml rifampicin, these extreme rifampicin concentrations were not used for the experiments. In the same manner, exposure of Msm NCF cells to < 2.5 µg/ml isoniazid showed ~80% survival (data not shown), whereas exposure of Msm SCF cells to 15 µg/ml isoniazid resulted in < 10% survival. Likewise, when the Msm NCF cells were exposed to 0.4 mM H2O2, the percentage survival was > 90%, while exposure to 1 mM H2O2 resulted in < 10% survival. Hence, these extreme concentrations of rifampicin, isoniazid and H2O2 were not used for the experiments. Similarly, exposure of Msm NCF cells to 10 mM NaNO2 (pH 5) was observed to be lethal (data not shown); hence, the Msm cell samples were exposed to 7.5 mM acidified sodium nitrite for 30 min. The durations of exposure to the stress agents for the chosen concentration ranges were also standardised keeping the range of survival in view.