| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
262105760 | pes2o/s2orc | v3-fos-license | Case report: dermoscopic and histological aspects of skin graft and perigraft hyperpigmentation in acral location
Little is known about the use of dermoscopy in skin grafting. We describe the case of a patient with a skin graft and surrounding pigmentation in an acral region. The dermoscopic findings were similar to those of benign acral lesions (lattice-like pattern) and reactive pigmentations (fine striae). Histopathology revealed pigment leakage and an increased number of melanocytes. It is believed that this phenomenon occurred as the result of an inflammatory stimulus.
INTRODUCTION
Dermoscopy is a valuable instrument in dermatological examination, a non-invasive technique that helps distinguish between benign and malignant melanocytic lesions by identifying morphological structures not visible to the naked eye. The test is particularly useful in the acral region, where it is simple, easy to interpret and particularly important because this is the most frequent area of melanoma in non-Caucasians. 1 Skin grafting is a surgical technique widely used to correct tissue loss, but little is known about the graft behavior and its dermoscopic features. 2 We describe a case of graft and perilesional area hyperpigmentation in a patient submitted to skin grafting on the second right finger. We discuss the dermoscopic and histopathological aspects of this phenomenon.
CASE REPORT
A sixty-nine-year-old male farm worker with skin type VI reported an episode of trauma to the second finger of the right hand 34 years earlier, when he underwent a total skin graft, with the chest as the donor area. The patient did not recall when hyperpigmentation in the perigraft area first started.
On examination, we identified a black plaque with hair growth associated with a peripheral brown macule (Figure 1). The plaque corresponded to the graft itself, and the macule to the area around the graft. At dermoscopy, the graft area presented homogeneous pigmentation, and the macular perigraft area had a lattice-like pattern (Figures 2 and 3). The area of graft insertion presented a hypochromic scar region, fine striae perpendicular to the scar, and dots/globules (Figure 4). We performed a biopsy in the graft-hyperpigmentation transitional area and in the perigraft hyperpigmentation area, following the algorithm proposed by Saida et al. 3,4 Two other areas were biopsied for academic purposes and with the patient's informed consent: the normal acral skin and the graft itself.
In the grafted area, histopathological examination showed hyperorthokeratosis in basket weave (a pattern also found, although more tenuously, in the normal acral skin), acanthosis, papillomatosis, and melanic epidermal hyperpigmentation mostly visible on the tips of epidermal ridges and pigmentary incontinence (Figure 5).
In the hyperpigmentation area surrounding the graft, the biopsy revealed the same histopathological features found in the graft area, with a higher number of melanocytes, less melanic hyperpigmentation and pigmentary incontinence (Figure 6). In addition to these characteristics, the graft insertion area showed fibrosis in the superficial dermis.
DISCUSSION
The evolution and clinical aspects of skin grafts are rarely discussed in the literature. In clinical practice, it is observed that skin grafts acquire, over the years, the phenotypic aspects of the receiving region.
In this case report, we observed that the graft had aspects both of its non-glabrous (hair-bearing) donor skin, represented by the presence of hair growth and active melanocytes, and of acral skin, illustrated by the hyperorthokeratosis in basket weave and the melanin distribution pattern. Regarding the melanocyte count, we noted that the number of melanocytes was greater in the grafted area and hyperpigmented macule than in the normal acral skin (Table 1).
A recent study demonstrated that fibroblasts stimulate dopamine oxidase activity in melanocytes, which would be one of the explanations for the periscar pigmentation. 5 The authors believe that the increased number of melanocytes in the region around the graft is due to two mechanisms: a post-inflammatory stimulus and melanocyte migration from the graft.

From the dermoscopic point of view, the homogeneous black pigmentation in the graft can be justified by epidermal hyperpigmentation and pigment leakage, and the increase in pigmentation in the furrows can be explained by the predominant location of melanin in the epidermal ridges. 6,7 In the graft-acral skin transitional area we identified hypochromic scarring, justified by the insertion of the graft in the acral skin, fine and homogeneous striae, and some spots. According to the study by Botella-Estrada et al., the presence of fine and homogeneous striae is associated with reactive pigmentation (p = 0.026), and the spots with lesion recurrence (p < 0.0001). 8 Applying the latter concept to the present case, the predominant structures are the striae, and because this is not a melanocytic tumor excision scar, we cannot affirm whether this is a reactive pigmentation or a de novo melanocytic proliferation event, which justifies the regional biopsy. 9

The region corresponding to the macule around the graft shows a dermoscopic pattern similar to that found in benign melanocytic lesions, a lattice-like pattern. 7 Dermoscopy of the acral region visualizes melanin granules arranged in columns in the stratum corneum, so benign pigmented lesions in this anatomical area may mimic melanocytic nevi. 6 In the case reported here, dermoscopy was critical to discuss diagnostic hypotheses for this phenomenon, indicate the best approach, and guide the biopsy, improving the performance of the histopathological examination. 10

Despite numerous articles on dermoscopy, little is known about the use of this technique in skin grafts. Here, we could glimpse a new use of dermoscopy; however, studies that include more patients are still necessary. Then we could define clinical and dermoscopic patterns of skin graft pigmentation and know when we are facing a high-risk lesion.

FIGURE 1: Clinical aspect of the lesion. Clinical photography of the second right finger skin graft
FIGURE 2: Dermoscopic exam of the graft. Dermoscopic photography of the graft
Table 1: Correlation between observed area and histopathological findings | 2017-06-18T09:21:41.499Z | 2014-05-01T00:00:00.000 | {
"year": 2014,
"sha1": "0acae50c3f25e0ddd489ed9a88462637c4f2e1e9",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/abd/a/jXkMP63GTWTL4shRFtN6gPc/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e623c95d44d8ff286522780740c454ef212fa90f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1720351 | pes2o/s2orc | v3-fos-license | Mining the Web for Relations between Digital Devices using a Probabilistic Maximum Margin Model
Searching and reading the Web is one of the principal methods used to seek out information to resolve problems about technology in general and digital devices in particular. This paper addresses the problem of text mining in the digital devices domain. In particular, we address the task of detecting semantic relations between digital devices in the text of Web pages. We use a Naïve Bayes model trained to maximize the margin and compare its performance with several other comparable methods. We construct a novel dataset which consists of segments of text extracted from the Web, where each segment contains pairs of devices. We also propose a novel, inexpensive and very effective way of getting people to label text data using a Web service, the Mechanical Turk. Our results show that the maximum margin model consistently outperforms the other methods.
Introduction
In the digital home domain, home networks are moving beyond the common infrastructure of routers and wireless access points to include application-oriented devices like network attached storage, Internet telephones (VOIP), digital video recorders (e.g., Tivo), media players, entertainment PCs, home automation, and networked photo printers. There is an ongoing challenge associated with domestic network design, technology education, device setup, repair, and tuning. In this digital home setting, searching the Web is one of the principal methods used to seek out information and to resolve problems about technology in general and about digital devices in particular (Bly et al., 2006). This paper addresses the problem of automatic text mining in the digital networks domain. Understanding the relations between entities in natural language sentences is a crucial step toward the goal of text mining. We address the task of identifying and extracting the sentences from Web pages which express a relation between two given digital devices, in contrast to sentences in which these devices merely co-occur.
As an example, consider a user who is looking for information on digital video recorders (DVR), in particular, on how she can use a DVR with a PC. This user will not be satisfied with finding Web pages that simply mention these devices (such as the many products catalogs or shopping sites), but rather, the user is interested in retrieving and reading only the Web pages in which a specific relation between the two devices is expressed. The user is interested to learn that, for example, "Any modern Windows PC can be used for DVR duty" or that it is possible to transfer data from a DVR to a PC ("You can simply take out the HD from the DVR, hook it up to the PC, and copy the videos over to the PC"). 1 The specific task addressed in this paper is the following: given a pair of devices, search the Web and extract only the sentences in which the devices are actually involved in an activity or a relation in the retrieved Web pages.
1 In italics are real sentences extracted from Web pages.

Note that we do not attempt to identify the type of relationship between devices but rather we classify sentences into whether the relation or activity is present or not, and thus we frame the problem as a binary text classification problem. 2

We propose a directed maximum margin probabilistic model to solve this classification task. Maximum margin probabilistic models have received a lot of attention in the machine learning and natural language processing literature. These models are trained to maximize the smallest difference between the probabilities of the true class and the best alternative class. Approaches such as maximum margin Markov networks (M3N) (Taskar et al., 2003) have been considered in prediction problems in which the goal is to assign a label to each word in a sentence or a document (such as part-of-speech tagging). It has also been shown that training Bayesian networks by maximizing the margin can result in better performance than M3N in flat-table structured domains (simulated and UCI repository datasets) and a structured prediction problem (protein secondary structure) (Guo et al., 2005). Given this background, we draw our attention to the application of maximum margin probabilistic models to a text classification task. We consider a directed model, where the parameters represent a probability distribution for words in each class (the maximum margin equivalent of a Naïve Bayes). We evaluate the maximum margin model and compare its performance with the equivalent joint likelihood model (Naïve Bayes), conditional likelihood model (logistic regression) and support vector machines (SVM) on the relationship extraction task described above, as well as several other classification methods. Our results show that the maximum margin Naïve Bayes outperforms the other methods in terms of classification accuracy.

To train such a model, manually labeled data is required, which is usually slow and expensive to acquire. To address this, we propose a novel, inexpensive and very effective way of getting people to label text data using the Mechanical Turk, an Amazon website 3 where people earn "micro-money" for completing tasks which are simple for humans to accomplish.

The paper is organized as follows: in Section 2 we discuss related work. In Section 3 we review joint likelihood and conditional likelihood models and maximum margin Naïve Bayes. In Section 4 we describe the collection of the training sentences, and how Mechanical Turk was used to construct the labels for the data. Section 5 introduces the experimental setup and presents performance results for each of the algorithms. We analyze Naïve Bayes, maximum margin Naïve Bayes and logistic regression in terms of the learned probability distributions in Section 6. Section 7 concludes with discussion.
Relation extraction
There has been a spate of work on relation extraction in recent years. However, many papers actually address the task of role extraction: (usually two) entities are identified and the relationship is implied by the co-occurrence of these entities or by some linguistic expression (Agichtein and Gravano, 2000; Zelenko et al., 2003).
Several papers propose the use of machine learning models and probabilistic models for relation extraction: Naïve Bayes for the relation subcellular-location in the bio-medical domain (Craven, 1999) or for person-affiliation and organization-location (Zelenko et al., 2003). Rosario and Hearst (2005) used a more complicated dynamic graphical model to identify interaction types between proteins and to simultaneously extract the proteins.
Maximum margin models
Probabilistic graphical models and different approaches to training them have received a lot of attention in application to natural language processing. McCallum and Nigam (1998) showed that Naïve Bayes can be a very accurate model for text categorization.
Since probabilistic graphical models represent joint probability distributions whereas classification focuses on the conditional probability, there has been debate regarding the objective that should be maximized in order to train these models. Ng and Jordan (2001) compared a joint likelihood model (Naïve Bayes) and its discriminative counterpart (logistic regression), and they showed that while for a large number of examples logistic regression has a lower error rate, Naïve Bayes often outperforms logistic regression for smaller data sets. However, Klein and Manning (2002) showed that for natural language and text processing tasks, conditional models are usually better than joint likelihood models. Yakhnenko et al. (2005) also showed that conditional models suffer from overfitting in text and sequence-structured domains.
In recent years, the interest in learning parameters of probabilistic models by maximizing the probabilistic margin has developed. Taskar et al. (2003) have solved the problem of learning Markov networks (undirected graphs) by maximizing the margin. Their work has focused on likelihood based structured classification where the goal is to assign a class to each word in the sentence or a document. Guo et al. (2005) have proposed a solution to learning parameters of the maximum margin Bayesian Networks.
Surprisingly, little has been done in applying probabilistic models trained to maximize the margin to simple classification tasks (to the best of our knowledge). Therefore, since the Naïve Bayes model has been shown to be a successful algorithm for many text classification tasks (McCallum and Nigam, 1998), we suggest learning the parameters of the Naïve Bayes model to maximize the probabilistic margin. We apply the Naïve Bayes model trained to maximize the margin to a relation extraction task.
Joint and conditional likelihood models and maximum margin
We now describe the background in probabilistic models as well as different approaches to parameter estimation for probabilistic models. In particular, we describe Naïve Bayes, logistic regression (analogous to conditionally trained Naïve Bayes) and then introduce Naïve Bayes trained to maximize the margin. First, we introduce some notation. Let D be a corpus that consists of training examples, and let T be the size of D. We represent each example with a tuple ⟨s, c⟩, where s is a sentence or a document, and c is a label from the set of all possible labels, c ∈ C = {c_1, ..., c_m}. Let D = {⟨s^i, c^i⟩}, where the superscript 1 ≤ i ≤ T is the index of the document in the corpus, and c^i is the label of example s^i. Let V be the vocabulary of D, so that every document s consists of elements of V. We will use s_j to denote the word of s in position j, where 1 ≤ j ≤ length(s).
Generative and discriminative Naïve Bayes models
A probabilistic model assigns to each instance s a joint probability of the instance and the class, P(s, c). If the probability distribution is known, then a new instance s_new can be classified by giving it the label which has the highest probability:

c* = argmax_{c ∈ C} P(s_new, c).

Joint likelihood models learn the parameters by maximizing the probability of an example and its class, P(s, c). Multinomial Naïve Bayes, for instance, assumes that all words in the sentence are independent given the class, and computes this probability as

P(s, c) = P(c) ∏_{j=1}^{length(s)} P(s_j | c).

Each of P(s_j | c) and P(c) is estimated from the training data using relative frequency estimates. From here on we will refer to the joint likelihood Naïve Bayes multinomial as NB-JL.
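To make the estimation concrete, here is a minimal Python sketch of the NB-JL training and classification rule just described. It is an illustration, not the authors' implementation; the add-alpha smoothing and all variable names are assumptions, and documents are assumed to be pre-tokenized with out-of-vocabulary words already mapped to a shared token (as done in Section 5).

```python
# A minimal sketch of NB-JL: multinomial Naive Bayes with relative-frequency
# (maximum joint likelihood) parameter estimates.
from collections import Counter
import math

def train_nb_jl(docs, labels, vocab, alpha=1.0):
    """docs: list of token lists; labels: parallel list of class labels."""
    classes = sorted(set(labels))
    prior = {c: labels.count(c) / len(labels) for c in classes}   # P(c)
    counts = {c: Counter() for c in classes}
    for doc, c in zip(docs, labels):
        counts[c].update(doc)
    # P(w|c) by relative frequency, with add-alpha smoothing over the vocabulary
    # (an assumption; the paper maps rare words to a smoothing token instead).
    cond = {}
    for c in classes:
        total = sum(counts[c].values()) + alpha * len(vocab)
        cond[c] = {w: (counts[c][w] + alpha) / total for w in vocab}
    return prior, cond

def classify(doc, prior, cond):
    # argmax_c log P(s, c) = log P(c) + sum_j log P(s_j | c)
    def joint(c):
        return math.log(prior[c]) + sum(math.log(cond[c][w]) for w in doc)
    return max(prior, key=joint)
```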
Since the conditional probability is needed for the classification task, it has been suggested to solve the maximization problem and train the model so that the choice of the parameters maximizes P(c|s) directly. One can use a joint likelihood model to obtain the joint probability distribution P(s, c) and then use the definition of conditional probability to get

P(c|s) = P(s, c) / Σ_{c_k ∈ C} P(s, c_k).

The solutions that maximize this objective function are searched for by using gradient ascent methods. Logistic regression is a conditional model that assumes the independence of features given the class, and it is a conditional counterpart to NB-JL (Ng and Jordan, 2001).
We will now introduce a probabilistic maximum margin objective and describe a maximum margin model that is analogous to Naïve Bayes and logistic regression.
Maximum margin training of Naïve Bayes models
The basic idea behind maximum margin models is to choose model parameters that, for each example, make the probability of the true class and the example as high as possible while making the probability of the nearest alternative class as low as possible. Formally, the maximum margin objective is

max_w min_{1 ≤ i ≤ T} γ^i,   where γ^i = log P(s^i, c^i) − max_{c ≠ c^i} log P(s^i, c).

Here P(s, c) is modeled by a generative model, and parameter learning is reduced to solving a convex optimization problem (Guo et al., 2005). In order for the example to be classified correctly, the probability of the true class given the example has to be higher than the probability of any alternative class,

P(c^i | s^i) > P(c_j | s^i) for all j ≠ i,

where c^i is the true label of example s^i. The larger the margin γ^i is, the more confidence we have in the prediction. We consider a Naïve Bayes model trained to maximize the margin and refer to this model as MMNB. Using exponential family notation, let P(s_j | c) = e^{w_{s_j|c}}. The likelihood is

P(s, c) = e^{w_c} ∏_{j=1}^{len(s)} e^{w_{s_j|c}}.

Then the log-likelihood is

log P(s, c) = w_c + Σ_{j=1}^{len(s)} count(s_j) w_{s_j|c} = w · φ(s, c)    (4)

where w is the weight vector for all the parameters that need to be learned, and φ(s, c) is the vector of counts of words associated with each parameter, φ(s, c) = (..., count(s_j, c), ...), in s for class c.
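As a concrete illustration, the sketch below computes the log-joint score w · φ(s, c) of Eq. (4) and the resulting margin γ^i for one example. It is a hedged sketch, not the paper's code; the weight layout (one bias per class plus one weight per word-class pair) is an assumption.

```python
# A minimal sketch of the quantities behind MMNB, assuming the log-linear
# parameterization log P(s, c) = w . phi(s, c) from Eq. (4).
from collections import Counter

def log_joint(doc, c, w_class, w_word):
    """log P(s, c) = w_c + sum over word types of count(s_j) * w_{s_j|c}."""
    counts = Counter(doc)
    return w_class[c] + sum(n * w_word[(tok, c)] for tok, n in counts.items())

def margin(doc, true_c, classes, w_class, w_word):
    """gamma^i: log-ratio of the true class to the best alternative class."""
    true_score = log_joint(doc, true_c, w_class, w_word)
    best_alt = max(log_joint(doc, c, w_class, w_word)
                   for c in classes if c != true_c)
    return true_score - best_alt  # positive iff the example is classified correctly
```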
The general formulation for Bayesian networks was given in Guo et al., and we adapt their formulation for training a Naïve Bayes model. The parameters are learned by solving a convex optimization problem. If the margin γ is the smallest log-ratio, then γ needs to be maximized, where the constraint is that for each instance the log-ratio of the probability of predicting the instance correctly and predicting it incorrectly is at least γ. Such a formulation also allows for the use of slack variables ξ so that the classifier "gives up" on the examples that are difficult to classify.

This problem is convex in the variables γ, w, ξ. B is a regularization parameter, and δ(c^i, c) = 1 if c^i = c and 0 otherwise. The inequality constraint for probabilities is needed to preserve convexity of the problem, and in the case of Naïve Bayes, the probability distribution over the parameters (the equality constraint) can be easily obtained by renormalizing the learned parameters.
The minimization problem is somewhat similar to an ℓ2-norm support vector machine with a soft margin (Cristianini and Shawe-Taylor, 2000). The first constraint imposes that, for each example, the log of the ratio between the probability of the example under the true class and under some alternative class is greater than the margin, allowing for some slack. The second constraint enforces that the parameters do not get very large and that the probabilities sum to less than 1 to maintain a valid probability distribution (the inequality constraint is required to preserve convexity, and the probability distribution can be obtained after training by renormalization).
Following Guo et al. (2005), we find parameters using a log-barrier method (Boyd and Vandenberghe, 2004): the sum of the logarithms of the constraints is subtracted from the objective and scaled by a parameter µ. The problem is solved sequentially, using a fixed µ and gradually lowering µ to 0. The solution for a fixed µ is obtained using (typically) a second-order method to guarantee faster convergence. This solution is then used as the initial parameter values for the next µ. In our implementation we used a limited-memory quasi-Newton method (Nocedal and Liu, 1989).
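The following Python sketch outlines this log-barrier continuation scheme with an L-BFGS inner solver (scipy's L-BFGS-B standing in for the limited-memory quasi-Newton method). The objective and constraint interfaces, the initial µ, and the shrinking schedule are illustrative assumptions, not the authors' settings.

```python
# A minimal sketch of log-barrier continuation, assuming margin constraints of
# the form g_k(w) >= 0 collected in `constraints`.
import numpy as np
from scipy.optimize import minimize

def barrier_objective(w, objective, constraints, mu):
    # Subtract mu * sum(log g_k(w)) from the (negated, since we minimize) objective.
    g = np.array([c(w) for c in constraints])
    if np.any(g <= 0):
        return 1e12  # outside the feasible region: return a large penalty
    return objective(w) - mu * np.sum(np.log(g))

def solve_log_barrier(objective, constraints, w0, mu0=1.0, shrink=0.5, n_outer=20):
    w, mu = w0, mu0
    for _ in range(n_outer):              # gradually lower mu toward 0
        res = minimize(barrier_objective, w,
                       args=(objective, constraints, mu),
                       method="L-BFGS-B")  # limited-memory quasi-Newton inner solver
        w = res.x                          # warm-start the next, tighter problem
        mu *= shrink
    return w
```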
Data and labels

The problem of labeling data
One major problem of natural language processing is the sparsity of data; to accurately learn a linguistic model, one needs to label a large amount of text, which is usually an expensive requirement. For information extraction, the labeling process is particularly difficult and time consuming. Moreover, in different applications one needs different labeled data for each domain. We propose a creative way of convincing many people to label data quickly and at low cost to us by using the Mechanical Turk. Similarly, Luis von Ahn (2006) creates very successful and compelling computer games in such a way that while playing, people provide labels for images on the Web.
Collecting data and label agreement analysis
To collect the data, we identified 58 pairs of digital devices, as well as their synonyms (for example, computer, laptop, PC, desktop, etc.) and different manufacturers for a given device (for example, Toshiba, Dell, IBM, etc.). The devices alone were used to construct the query (for example 'computer, camera'), as well as a combination of manufacturer and device (for example 'dell laptop, cannon camera'). Each of these pairs was used as a query in Google, and the sentences that contain both devices were extracted, resulting in a total of 3624 sentences. We use the word 'sentence' when referring to the examples; however, we note that not all text excerpts are sentences, as some are chunks of text data.
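A minimal sketch of this candidate-extraction step might look as follows; the synonym table, the regex sentence splitter, and the function names are assumptions for illustration, not the authors' pipeline.

```python
# Keep only the text segments in which both query devices (or their synonyms)
# co-occur; these segments become the examples sent out for labeling.
import re

SYNONYMS = {  # hypothetical synonym table in the spirit described above
    "computer": ["computer", "laptop", "pc", "desktop"],
    "camera": ["camera", "camcorder"],
}

def extract_candidates(page_text, device_a, device_b):
    sentences = re.split(r"(?<=[.!?])\s+", page_text)  # rough sentence split
    def mentions(sent, device):
        low = sent.lower()
        return any(term in low for term in SYNONYMS.get(device, [device]))
    return [s for s in sentences if mentions(s, device_a) and mentions(s, device_b)]

# Example: extract_candidates(html_text, "computer", "camera") returns the
# segments that would be posted to Mechanical Turk for labeling.
```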
To label the data we used the Mechanical Turk (MTurk), a Web service that allows you to create and post a task for humans to solve; typical tasks are labeling pictures, choosing the best among several photographs, writing product descriptions, proofreading and transcribing podcasts. After the task is completed the requesters can then review the submissions and reject them if the results are poor.
We created a total of 121 unique surveys consisting of 30 questions each. Each question consisted of one of the extracted statements with the devices highlighted in red. The task for the labeler was to choose between 'Yes', if the statement contained a relation between the devices, 'No' if it did not, or 'not applicable' if the text extract was not a sentence, or if the query words were not used as different devices (as for noun compounds such as computer stereo). 4 Each survey was assigned to 3 distinct workers, thus having 3 possible labels for all 3624 sentences. 5 We used Fleiss's kappa (Fleiss, 1971), a generalization of the kappa statistic which takes into account multiple raters and measures inter-rater reliability, in order to determine the degree of agreement and to determine whether the agreement was accidental. The kappa statistic is a number between 0 and 1, where 0 is random agreement and 1 is perfect agreement.
In order to compute the kappa statistic, since the computation requires that the raters be the same for each survey, we mapped workers into 'worker1', 'worker2', 'worker3', with 'worker1' being the first worker to complete each of the 121 surveys, 'worker2' the second, and so on. The responses are summarized in Table 1.
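For reference, Fleiss's kappa for this 3-rater setting can be computed as in the sketch below, where each row of the input matrix counts how many workers assigned each of the three labels ('Yes', 'No', 'not applicable') to a sentence; the matrix layout is an assumption.

```python
# A minimal sketch of Fleiss's kappa. `ratings` is an (n_items x n_categories)
# matrix of label counts per sentence, e.g. a row [2, 1, 0] means two 'Yes',
# one 'No', zero 'not applicable'.
import numpy as np

def fleiss_kappa(ratings):
    ratings = np.asarray(ratings, dtype=float)
    n_items, _ = ratings.shape
    n_raters = ratings[0].sum()                          # raters per item (3 here)
    p_cat = ratings.sum(axis=0) / (n_items * n_raters)   # category proportions
    # Per-item agreement: fraction of rater pairs that agree.
    p_item = ((ratings ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_item.mean(), (p_cat ** 2).sum()
    return (p_bar - p_e) / (1 - p_e)
```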
The overall Fleiss's kappa was 0.41, 6 and therefore it can be concluded that the agreement between the workers was not accidental.
We had perfect agreement for 49% of all sentences, 5% received all three labels (these examples were discarded), and for the remaining 46% two labels were assigned (the majority vote was used to determine the final label). For these cases, we noticed that some of the labels were wrong (however, in most cases the majority vote results in the correct label) but other sentences were ambiguous and either label could be right. To assign the final label we used majority vote, and we discarded sentences for which 'not applicable' was the majority label.

4 This dataset, including all the MTurk workers' responses, is available at http://www.cs.iastate.edu/˜oksayakh/relation data.html
5 The requirement for the workers to be different was imposed by the MTurk system, which checks their Amazon identity; however, this still allows for the same person who has multiple identities to complete the same task more than once.
6 The kappa coefficients for categories 'Yes' and 'No' were 0.45 and 0.41 respectively (moderate agreement) and for category 'not applicable' was 0.15 (slight agreement).
We rewarded the users with between 15 and 30 cents per survey (resulting in less than a cent per text segment), and we were able to obtain labels for 3594 text segments for under $70. It also took anywhere from a few minutes to half an hour from the time a survey was made available until it was completed by all three users. We find Mechanical Turk to be a quite interesting, inexpensive, fairly accurate and fast way to obtain labeled data for natural language processing tasks.
We used this data to evaluate the classification models as described in the next section.
Experimental setup and results
The words were stemmed, and the data was smoothed by mapping all the words that appeared only once to a unique token, smoothing token (resulting in a total of approximately 2,800 words in the vocabulary). We performed 10-fold cross-validation, with smoothed test data where all the unseen words in the test data were mapped to the token smoothing token. We used the exact same data in the folds for all four algorithms: MMNB, NB-JL, logistic regression and SVM. Since MMNB, SVM, and logistic regression allow for regularization, we used tuning to find the optimal performance of the models. At each fold we withheld 30% of the training data for validation purposes (thus resulting in 3 disjoint sets at each fold). The model was trained on the resulting 70% of the training data for different values of the regularization parameters, and the value which yielded the highest accuracy on the validation set was used to train the model that was evaluated on the test set.
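The protocol just described can be sketched as follows; the parameter grid and the decision to refit on the full training split after tuning are assumptions made for illustration (stemming is omitted for brevity).

```python
# A minimal sketch of the preprocessing and tuning protocol: rare words map to
# a shared smoothing token, and each fold holds out 30% of its training split
# to pick the regularization strength.
from collections import Counter
import random

SMOOTH = "<smoothing_token>"

def build_vocab(train_docs):
    freq = Counter(tok for doc in train_docs for tok in doc)
    return {w for w, n in freq.items() if n > 1}    # keep words seen more than once

def smooth(doc, vocab):
    return [tok if tok in vocab else SMOOTH for tok in doc]

def tune_and_train(train, fit, accuracy, grid=(0.01, 0.1, 1.0, 10.0)):
    random.shuffle(train)
    cut = int(0.7 * len(train))
    sub, val = train[:cut], train[cut:]             # 70% train / 30% validation
    best = max(grid, key=lambda reg: accuracy(fit(sub, reg), val))
    return fit(train, best)   # refit with the chosen regularization value
```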
As a baseline, we consider a classifier which assigns the most frequent label ('Yes'); such a classifier results in 53% accuracy. We compared the accuracies of the maximum margin model with the accuracy of generative Naïve Bayes, logistic regression and SVM, as shown in Table 2 (10-fold cross-validation with tuning data). The MMNB has the highest accuracy, followed by NB-JL and then SVM with RBF kernel. Even after tuning, logistic regression did not reach the performance of MMNB and NB-JL.
Since MMNB is trained to maximize the margin, we compared it with the Support Vector Machine (a linear maximum margin classifier). Counts of words were used as features (resulting in the bag-of-words representation 7). We ran our experiments with linear, quadratic, cubic and RBF kernels. SVM was tuned using the validation set similarly to MMNB. We also experimented with Perceptron and Decision Tree using binary splits with reduced error-pruning, which are methods commonly used for text classification (due to lack of space, we will not describe these methods and their applications, but refer the reader to Manning and Schütze (1999)). Among all the known methods, the maximum margin Naïve Bayes is the algorithm with the highest accuracy, suggesting that it is a competitive algorithm for relation extraction and text classification tasks.
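For a modern reproduction of this comparison, scikit-learn stand-ins for these classifiers could be assembled as below; this is not the authors' setup (MMNB has no off-the-shelf equivalent and is omitted), and `sentences` and `labels` are assumed to hold the labeled segments from Section 4.

```python
# A minimal sketch of the comparison experiment with scikit-learn stand-ins
# for the classifiers named above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

models = {
    "NB-JL": MultinomialNB(),
    "LogReg": LogisticRegression(max_iter=1000),
    "SVM-linear": SVC(kernel="linear"),
    "SVM-rbf": SVC(kernel="rbf"),
    "Perceptron": Perceptron(),
    "DecisionTree": DecisionTreeClassifier(),
}

X = CountVectorizer().fit_transform(sentences)          # bag-of-words counts
for name, model in models.items():
    scores = cross_val_score(model, X, labels, cv=10)   # 10-fold CV accuracy
    print(f"{name}: {scores.mean():.3f}")
```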
Analysis of behavior of Naïve Bayes, maximum margin Naïve Bayes and logistic regression
We analyzed the behavior of the parameters of the probabilistic models (Naïve Bayes, MMNB and logistic regression) on the training data. For each example in the training data we computed the probability P(c = noRelation | s) using the parameters from the model, and examined the probabilities assigned to examples from both classes. We show these plots in Figure 1. As we see, logistic regression discriminates between the majority of the examples by assigning extreme probabilities (0 and 1). However, there are some examples which are extremely borderline, and thus it does not generalize well on the test set. On the other hand, Naïve Bayes does not have such "sharp" discrimination. Maximum margin Naïve Bayes has "sharper" discrimination than Naïve Bayes; however, the discrimination is smoother than for logistic regression. The examples which are more difficult to classify have probabilities that are more spread out (away from 0.5), as opposed to the case of logistic regression, which assigns these difficult examples probabilities close to 0.5. This suggests that maximum margin Naïve Bayes possibly has a better generalization ability than both logistic regression and Naïve Bayes; however, to make such a claim additional experiments are needed.
Conclusions
The contribution of this paper is threefold. First, we addressed the important problem of identifying the presence of semantic relations between entities in text, focusing on the digital domain. We presented some encouraging results; it remains to be seen, however, how this would transfer to better results in an information retrieval task. Second, we considered a probabilistic model trained to maximize the margin, which achieved the highest accuracy for this task, suggesting that it could be a competitive algorithm for relation extraction and text classification in general. However, in order to fully evaluate the MMNB method for relation classification it needs to be applied to other classification and/or relation prediction tasks. We also empirically analyzed the behavior of the parameters learned by the maximum margin model and showed that the parameters allow for better generalization power than Naïve Bayes or logistic regression models. Finally, we suggested an inexpensive way of getting people to label text data via Mechanical Turk. | 2014-07-01T00:00:00.000Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "13c4b88244a00235ce01668b747a6d3ad895f2c8",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "13c4b88244a00235ce01668b747a6d3ad895f2c8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
250928629 | pes2o/s2orc | v3-fos-license | Comparison of Online Sexual Activity Among Iranian Individuals With and Without Substance Use Disorder: A Case-Control Study
The most important practical concerns in addiction medicine include non-substance addictions and related addictive behaviors among individuals with substance use disorder. On the other hand, technological advances and easy access have increased the frequency of online sexual activities (OSAs) as one of these behaviors. This study aimed to compare the prevalence of OSAs, based on Internet Sex Screening Test (ISST) scores, between 60 patients with substance use disorder referred to Iran Psychiatric Hospital and 60 non-dependent individuals. The results showed significant negative correlations between the ISST scores and age, age at the onset of substance use, and substance use duration. There was a significant difference between the ISST scores of the case and control groups (P = 0.001). Patients who start using substances at an early age and have a long duration of substance use are more likely to engage in other addictive behaviors such as OSAs. Therefore, it is critical to consider OSAs and other addictive behaviors in patients with substance use disorder to provide better care for this vulnerable community.
INTRODUCTION
Addiction is a chronic, recurrent psychiatric disorder in which a person has a psychological and physiological need to use a substance and shows withdrawal symptoms if it is discontinued. Many studies have shown that substance use disorder is often associated with other major psychiatric disorders such as anxiety and mood disorders (1). However, the new definition of addiction in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) is not limited to the use of substances (2), and different addictive behaviors have been clustered together, such as sex addiction, gambling addiction, and internet addiction. Internet addiction can manifest as addiction to online shopping, online computer games, cyberspace, social media, and online pornography. It raises growing concerns for clinicians, patients, and their caregivers and needs to be addressed in a clinical context (3). Addictive behaviors are an underexplored area of research in which the co-occurrence of substance use disorder and other addictive behaviors, and their problematic consequences for societies, are discussed in different consensus statements (4).
Addiction is one of the leading health problems in Iran these days, and many studies have been done on different aspects of addiction in the last decades. Still, there is a lack of robust scientific data on different addictive behaviors (5). Addictive behavior activates the reward and pleasure system in individuals. Recent research has shown that the underlying etiology of all types of addiction involves the reward system and the mesolimbic dopaminergic pathway.
The reward system provides a sense of reward and accomplishment; it encourages individuals to engage in healthy behaviors and regulates motivation and emotion. Repetitive exposure to stimuli such as substances or other addictive activities can cause dysregulation at the molecular and cellular levels. This dopamine dysregulation and dysfunctional reward system have been found in all types of addiction. The severe consequences of dysregulation in dopamine and reward circuits are major public mental health issues (6).
Because of the complexity of these brain circuits, destructive behaviors are perceived as a way to escape discomforts and inconveniences, yet they fail to reduce urge and craving. This dysregulation of choice and loss of control lead to a pathological pattern of use, neglect of the personal and professional aspects of life, and devastating consequences for societies (7).
Online sex addiction is one of the latest types of online addictive behaviors. Various factors, such as technological and innovative advancements; user-friendly, inexpensive, anonymous access through millions of websites available 24/7; the absence of the risk of a sexually transmitted disease (STD) and the risk of abortion; less physical trauma; psychosexual dissatisfaction; and relationships without commitment, have increased the prevalence of online sex addiction. Online sex addiction (OSA) is defined as a set of compulsive online sexual activities (OSAs), e.g., watching online pornography, downloading videos with sexual content, engaging in erotic chats and sharing one's most profound sexual fantasies with others, and online role-playing based on another person's sexual fantasies, performed primarily for pleasure. However, over time, this sense of purpose and pleasure is replaced by self-destruction, decreased control over sexual behaviors, and a strong desire to continue the activity (8,9).
According to Young, some warning signs such as spending notable amounts of time on the internet for cybersex, preoccupations with sexual content while using the internet, feeling guilt or shame about using the internet for sexual purposes, masturbating while using the internet, and engaging in erotic chat rooms, and preference of cybersex as the primary form of sexual gratification, can indicate the presence of online sex addiction (10). Online sex addiction can have devastating and irreversible consequences, including feeling tired, ashamed, remorseful, lonely, disappointed in not having a healthy sex life, frustrated with time-wasting, and absent from work (11,12). A U.S. survey found that 13% of internet searches contained sexual terms, 35% of internet downloads were related to pornography, and 30.6% of users engaged in sexual conversations with other people online (13).
From the perspective of Iranian culture, family formation is highly valued and an integral part of people's lives, and marriage is the only official way to have sexual activities (8). On the other hand, many people cannot start a family at a young age due to socio-economic issues. According to society's cultural and traditional model, young individuals often live with their family members until marriage, and sometimes they have a financial dependency on and emotional attachment to their family.
Many studies have shown that one of the leading causes of divorce in society is sexual issues; the lack of formal education, proper sexual knowledge, and sufficient skills in marriage may lead people to curiosity and to learning sex education through informal methods and sexual use of the Internet (14). Also, as with many other underlying causes of people's tendency toward addictive behaviors, maladaptive coping styles and the lack of an efficient system to control impulses against failures, anger, anxiety, and stress in life can be underlying causes of online sex addiction (15). The comorbidity of anxiety and mood disorders and substance use has been shown to have a higher prevalence among people involved in online sex addiction (16). Highlighting the co-occurrence of other addictive behaviors with substance use disorders is crucial to exploring the biological, epigenetic, and environmental factors underlying different addictive behaviors, which helps clinicians identify new strategies for prevention, management, and novel treatment interventions. A limited number of studies have evaluated the prevalence of OSA among individuals with substance use disorder. Their findings have shown that online sex addiction is more prevalent in this group (17,18). To the best of our knowledge, no previous study has investigated OSAs in Iranian patients with substance use disorder. Therefore, this study compared the prevalence of OSAs among Iranian people with and without substance use disorder.
Design and Participants
This case-control study was conducted at Iran Psychiatric Hospital, affiliated with the Iran University of Medical Sciences, in 2020. The research population included 60 individuals with substance use disorder and 60 persons in the control group (a total of 120). The participants were selected by non-random sampling. All persons referred to substance use clinics were interviewed, and those who met the inclusion criteria were recruited. The inclusion criteria for the case group were 18-50-year-old individuals with a substance use disorder diagnosis, regular referral to the clinic in the past 6 months, the ability to read, write, and use the internet, and informed consent for participation. The controls were selected to match the case group in terms of gender and age. The exclusion criteria were major psychiatric disorders, visual and hearing impairment, lack of fluency in the Persian language, and current intoxication or withdrawal symptoms. People who could not ensure constant participation in the study were also excluded.
Demographic Questionnaire
All demographic information was collected through questionnaires developed by the researchers.
RESULTS
A total of 120 individuals were allocated to a case group (diagnosed with substance use disorder; n = 60) and a control group (healthy individuals; n = 60). The characteristics of the participants are summarized in Table 1.
Opium was the most widely used substance in the case group (n = 39; 65%). Independent t-test results showed a significant difference between the two groups in terms of ISST scores (P = 0.001; df = 118; t = 3.56). Furthermore, the Pearson correlation coefficients showed that the ISST scores had a significant negative correlation with age (P < 0.001, r = −0.526), age at the onset of substance use (P = 0.029, r = −0.283), and duration of substance use (P = 0.003, r = −0.378). There were no significant relationships between ISST scores and gender, marital status, or education level in the case or the control group.
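As an illustration only (the study's own analysis software is not stated), these tests could be reproduced in Python roughly as follows; the file name, column names, and DataFrame layout are hypothetical.

```python
# A minimal sketch of the analyses reported above: an independent-samples
# t-test on ISST scores and Pearson correlations with age-related variables.
from scipy import stats
import pandas as pd

df = pd.read_csv("isst_data.csv")            # hypothetical data file
case = df[df.group == "case"]
control = df[df.group == "control"]

t, p = stats.ttest_ind(case.isst, control.isst)   # df = n1 + n2 - 2 = 118
print(f"t = {t:.2f}, P = {p:.3f}")

for col in ("age", "onset_age", "use_duration"):  # hypothetical column names
    r, p = stats.pearsonr(case[col], case.isst)
    print(f"{col}: r = {r:.3f}, P = {p:.3f}")
```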
DISCUSSION
The present study aimed to compare OSAs (based on ISST scores) in individuals with and without substance use disorder. The present study found a significant relationship between OSAs and substance use disorder. The results showed significant negative correlations between ISST scores and age, age at the onset of substance use, and substance use duration. There was no significant correlation between ISST scores and gender, marital status, or education level in the case or the control group. The most commonly used substance was opium (65%). The high prevalence of OSAs was consistent with a study conducted in 2012 to investigate the rate of internet sex addiction (11). Likewise, a systematic review in 2020 showed that OSAs were more common among people with substance use disorder (17). In 2018, Bosma et al. examined sexual behaviors among substance users using a pre-coded questionnaire designed by the researchers. Among 180 patients with substance use disorder, 31.6% mentioned a previous history of using pornography (18).
Moreover, in 2000, Schwartz et al. evaluated obsessive-compulsive behaviors in people with internet sex addiction. Similar to our findings, they detected a higher prevalence of substance use disorder in people with online sex addiction: 73.7% of men and 42.9% of women with online sex addiction used substances, particularly cocaine, alcohol, and drugs such as benzodiazepines (21). Similarly, in 2017, Morelli et al. studied internet addictive behaviors among adults and detected a significant association between internet addiction and high-risk sexual behaviors, alcohol consumption, and substance use disorder (22). Consistent with our study, Najavits et al. evaluated the co-occurrence of addictive behaviors in patients with substance use disorder. Addictive behaviors such as spending-money addiction, sex or pornography addiction, work addiction, exercise addiction, self-harm addiction, computer or internet addiction, and eating addiction were predominant among alcohol and opium users, and pornography and internet addiction were greater among younger male adults (23). In 2013, Sik Lee et al. investigated the prevalence of substance use disorder and sex addiction and the likelihood of an association between the two behaviors. The study included a large population of 13-18-year-old individuals and found that 85.2% of the participants were public users. In addition, 9.11% of the subjects were at potential risk of internet addiction, and 3.0% were high-risk. The rate of substance use disorder differed across these three groups: 1.7, 2.0, and 6.5%, respectively (24). Despite the differences in participants' age, which make comparing their results and ours difficult, their reports emphasize the importance of considering OSAs from earlier generations.
In 2015, Weinstein et al. evaluated the predictive factors of cybersex use among male and female users. Consistent with our study, male users were dominant. They reported that craving for pornography and relationship challenges were the predictive factors for the frequency of engaging in online sexual activities (25).
According to the findings of this study and previous studies, from the psychiatric viewpoint there is a correlation between various pathological behaviors, especially in behavioral addiction; unless these are identified and addressed, complete treatment of one disorder alone will not lead to full remission. A comprehensive look at the patient and the nature of the patient's behavior across their lifetime is needed to prevent relapse, which causes distrust and compassion fatigue for the patients and their families (26).
Attention by addiction therapists to the correlation between addictive behaviors and their various comorbidities with other major psychiatric disorders can be critical. Using effective pharmacological and non-pharmacological therapies such as cognitive-behavioral therapy, relapse prevention, and motivational interviewing programs, focusing on maladaptive cognitions, and improving patients' social support networks can help these patients and reduce the burden of addictive behaviors on the health care system.
Limitations
Our study had several limitations. First, the statistical population was small, and most of the subjects referred to the clinic where we recruited were men, which affected the gender distribution of our sample. Moreover, it was impossible to maintain gender balance, as substance users are generally men. Further studies are recommended to discuss the reasons and factors that cause the gender difference in OSA, such as personality traits, psychosocial factors, the greater social stigma around women's addiction issues, different help-seeking behavior patterns, etc. (27). Since the tool used in this study does not have the power to report subtypes of online sexual activity, it is recommended to increase the sample size, recruit more female participants, and report the different subtypes of online sexual activity in future studies.
On the other hand, due to the cultural characteristics of Iranian society, the low participation rate, and the possibility of incorrect responses (considering the social stigma attached to the issue), there might be a gap between the obtained results and the existing reality. Social stigma leads to adverse social, psychological, and physical outcomes. Fear and shame produced by stigma decrease treatment-seeking behaviors, and families may recommend that substance users discontinue treatment to avoid the consequences of stigma. Further studies need to be done to evaluate the different online sexual sub-activities, such as watching or downloading porn, engaging in erotic chats, etc., and to show the correlation of each activity with gender, ethnicity, age, educational level, and other addictive behaviors.
Implications for Practice and Research
Based on our findings, OSAs are more prevalent among Iranian patients with substance use disorders than among individuals without substance use disorders. This calls for more careful attention from mental healthcare providers to explore addictive behaviors other than substance use in depth. It should be noted that this attention is even more important in traditional religious societies, particularly Islamic societies like Iran, where involvement in any extramarital sexual activities such as OSAs may have many problematic consequences in various dimensions of life.
In recent years, as globalization has expanded and the use of cyberspace and online communications has risen, developing societies like Iran have shifted from traditional to modern societies, and individuals' definitions of the family construct and of sexual activities in the context of marriage have changed (8). Because of these concerns, most clients probably do not report engagement in these activities spontaneously, even when they are distressing and problematic. Therefore, healthcare workers need to ask about these activities when taking the patient's history. However, the confidentiality of documentation of this issue should be carefully considered to protect the patient from unfavorable consequences.
A larger sample size and a larger female target population will be required in future studies on the relationship between addictive behaviors such as OSAs and other psychiatric disorders. The factors involved in exacerbating OSAs and substance use disorder should also be identified to facilitate the formation of appropriate executive solutions, and ultimately to control behavioral addictions and increase people's quality of life.
CONCLUSIONS
According to our findings, OSAs are more prevalent among Iranian patients with substance disorders than individuals without substance use disorders. In addition, individuals who start substance use at an early age and have a long history of substance use are more likely to engage in other addictive behaviors such as OSAs. Therefore, it is critical to consider OSAs and other addictive behaviors in patients with substance use disorder to provide better care for this vulnerable community. Further investigations by multicenter studies with a larger sample are needed to determine related factors.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethical Review Board of Iran University of Medical Sciences, Tehran, Iran (Code: IR.IUMS.FMD.REC.1399.274).
The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
SS, VR, FH, HA, and MSh: conceptualization and design. SS, PH, and MSh: data collection. SS, MSa, and MSh: initial draft preparation. SS, VR, MSa, PH, FH, HA, and MSh: editing and review. All authors contributed to the article and approved the submitted version. | 2022-07-22T13:58:58.168Z | 2022-07-22T00:00:00.000 | {
"year": 2022,
"sha1": "75b8a26d4f8ec22a184f814352ac92518e4e43af",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "75b8a26d4f8ec22a184f814352ac92518e4e43af",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259931612 | pes2o/s2orc | v3-fos-license | Topical combined tranexamic acid and epinephrine versus topical epinephrine in control of intraoperative bleeding of external dacryocystorhinostomy
Purpose To compare the efficacy of gauze soaked with combined tranexamic acid (TXA) (100 mg/ml) and epinephrine 1:200,000 versus gauze soaked with only epinephrine 1:200,000 to guard against intraoperative bleeding in external dacryocystorhinostomy (DCR). Patients and methods The study included 33 patients; only 30 patients fulfilled the inclusion criteria and were divided randomly into 2 groups using a random numbers table, with 15 patients in each group. The first group (Group A) was operated upon using gauze soaked with combined TXA (100 mg/ml) and epinephrine 1:200,000, while the second group (Group B) was operated upon using gauze soaked only with epinephrine 1:200,000. Results The amount of bleeding was significantly lower in group A (29.4 ± 17.1 ml) compared to group B (49.1 ± 18.1 ml), with a P value = 0.005. In addition, the number of gauzes used and the total surgical time were significantly lower in group A compared to group B, with P values = 0.008 and 0.01, respectively. Conclusion External DCR using gauze soaked with combined TXA (100 mg/ml) and epinephrine 1:200,000 showed a significant reduction in the amount of intraoperative bleeding compared to gauze soaked with epinephrine 1:200,000 only. The reduction in the amount of bleeding with the addition of TXA resulted in a clearer surgical field, shorter surgical time and greater surgeon satisfaction.
Introduction
Dacryocystorhinostomy (DCR) is a relatively common procedure performed by oculoplastic surgeons to create an anastomosis between the lacrimal sac mucosa and the nasal mucosa to bypass nasolacrimal duct (NLD) obstruction. External approach DCR remains the gold standard despite the wide use of the endoscopic approach [1].
Control of intraoperative bleeding during DCR is of utmost importance as bleeding may obscure the already narrow operative field, making recognition of the sac wall or nasal mucosa very difficult [2].
Many approaches have been adopted to avoid such a complication, including careful preoperative patient preparation, controlling blood pressure, ruling out blood dyscrasias, and discontinuing anticoagulant and antiplatelet drug intake. Intraoperative measures include the use of epinephrine along with local anesthetics unless contraindicated by a medical cause, appropriate surgical technique to avoid known blood vessels with judicious use of cautery, and raising the head end of the surgical table [7].
TXA (trans-4-aminomethyl cyclohexane carboxylic acid) is a synthetic lysine analogue that prevents breakdown of the blood clot by reversibly blocking the binding sites of plasminogen and prevents plasminogen activation to plasmin and the lysis of polymerized fibrin in the blood clot.Because of its hemostatic activity, wide availability, and limited side effects, it has also been widely studied for the prevention and treatment of haemorrhage in trauma and several types of elective surgery [8].
TXA is used topically during tooth extraction, orthopedic procedures, cardiac surgery, and many other surgical procedures [9]. To our knowledge, this is the first study to investigate the use of topical TXA in external DCR.
The aim of this study is to compare the efficacy of gauze soaked with combined TXA (100 mg/ml) and epinephrine 1:200,000 versus gauze soaked only with epinephrine 1:200,000 for intraoperative bleeding control in external dacryocystorhinostomy.
Study design
This is a single-center, double-blind, prospective, randomized clinical study. The study was conducted in the Ophthalmology Department, Zagazig Faculty of Medicine, between March 2022 and September 2022, treating patients with epiphora caused by primary acquired nasolacrimal duct obstruction with or without a positive regurge test.
This research was approved by the Institutional Review Board of Zagazig University Faculty of Medicine (IRB#9231-27-3-2022) and was adherent to the ethical principles outlined in the Declaration of Helsinki as amended in 2013. Also, approval of this study design was obtained from the Pan African Clinical Trial Registry (ID number PACTR202206674510595), https://pactr.samrc.ac.za/

Participants

Patients aged more than sixteen years with primary acquired nasolacrimal duct obstruction with or without a positive regurge test were included in this study.
Patients with uncontrolled hypertension, known TXA allergy, personal or familial history of bleeding disorder, or on anticoagulant therapy, the diagnosis of coexisting nasal pathologies that could influence the outcome of the surgery, and patients with a history of trauma or laceration to the lacrimal passages were excluded from this study.
The surgical technique, likely post-treatment results, and potential complications were explained to all patients. Written consent was obtained from all patients; the consent included permission to publish their photos.
Given that the mean ± SD amount of bleeding after 60 min reported by Hamed and Hamed [10] was 6.7 ± 4.7 in group A (TXA group) and 11.1 ± 3.6 in group B (epinephrine group), the sample size was calculated with the OpenEpi program to be 30 eyes (15 eyes in each group) at a confidence level of 95% and a test power of 80%. The study included 33 patients with primary acquired nasolacrimal duct obstruction. Two patients refused to sign the consent and did not participate in the study, and one patient was not fit for general anesthesia. Those patients were excluded from the study. The remaining 30 patients were divided randomly into 2 groups using a random numbers table, with 15 patients in each group. The first group (Group A) was operated upon using gauze soaked with combined TXA (100 mg/ml) and epinephrine 1:200,000, while the second group (Group B) was operated upon using gauze soaked only with epinephrine 1:200,000. A CONSORT flow diagram is shown in (Fig. 1).
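For illustration, a comparable two-means sample-size calculation can be sketched with statsmodels (the study used OpenEpi); the pooled-SD effect size and two-sided test are assumptions.

```python
# A minimal sketch of the sample-size calculation described above; this
# statsmodels version should give a figure comparable to OpenEpi's.
from statsmodels.stats.power import TTestIndPower

m1, sd1 = 6.7, 4.7    # group A (TXA) from Hamed and Hamed [10]
m2, sd2 = 11.1, 3.6   # group B (epinephrine)

pooled_sd = ((sd1**2 + sd2**2) / 2) ** 0.5
effect_size = abs(m1 - m2) / pooled_sd            # Cohen's d

n = TTestIndPower().solve_power(effect_size=effect_size,
                                alpha=0.05,        # 95% confidence level
                                power=0.80)        # 80% power
print(f"~{n:.0f} eyes per group")                  # on the order of 15 per group
```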
Preoperative assessment
All patients underwent preoperative assessment including a full ophthalmological examination, assessment of epiphora by the fluorescein dye disappearance test, and assessment of the patency of the lacrimal system by probing and syringing, in addition to an ENT examination and laboratory investigations including bleeding and clotting times.
Surgical technique
All surgeries were performed under general anesthesia by a single surgeon after nasal packing with oxymetazoline 0.025%.
The different mixtures were prepared in coded 20 mL syringes before surgery. The coding list was opened only after the completion of the study. Neither the surgeon nor the patient knew which mixture was used (Fig. 2).
In group A, gauzes (standard size 4 × 4 cm) soaked with combined TXA (100 mg/ml) and epinephrine 1:200,000 were used to control bleeding. A time of two minutes was fixed for each application of soaked gauze (Fig. 3a).
External DCR was performed with the standard technique. A skin incision was made, and blunt dissection was carried down to the periosteum overlying the anterior lacrimal crest. The periosteum was then incised and elevated. An osteotomy was made approximately 10 mm in front of the anterior lacrimal crest and extended inferiorly to expose the upper part of the nasolacrimal duct. Large flaps were fashioned: a long vertical top-to-bottom incision was made with a No. 11 blade on the medial sac wall, and a matching vertical top-to-bottom incision was made on the nasal mucosa opposite that of the sac. Both the upper and lower puncta were dilated with a punctal dilator, then each limb of the bicanalicular Crawford silicone stent was passed through the corresponding canaliculus and drawn out of the nose. Both the lacrimal sac and nasal mucosal flaps were sutured using a 5/0 Vicryl suture. Finally, the orbicularis was closed with 6-0 Vicryl, followed by the skin with 6-0 silk. In group B, the same procedure was performed except that the soaked gauze contained epinephrine 1:200,000 only. A time of two minutes was fixed for each application of soaked gauze (Fig. 3b).
The amount of blood loss in each case was measured at the end of the DCR operation from the suction bottle and quantified using a large 50 cc syringe. To determine the amount of bleeding, the amount of saline used for irrigation was deducted from the total volume collected. The duration of surgery was calculated by recording the times at which the operation started and ended.
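A minimal sketch of this bookkeeping is given below; the function name and the example volumes are illustrative only, not part of the study protocol.

```python
# Blood loss = fluid collected by suction minus the saline used for irrigation,
# as described above; the numbers are made-up example values.
def intraoperative_blood_loss(suction_volume_ml, irrigation_saline_ml):
    return suction_volume_ml - irrigation_saline_ml

print(intraoperative_blood_loss(80.0, 50.0))  # -> 30.0 ml of estimated blood loss
```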
Follow up of patients
All patients were examined on the first postoperative day. The nasal pack, if any, was gently removed and hemostasis was assessed. The wound was cleaned with 5% Betadine, and the patient was discharged on oral amoxicillin/clavulanate (Hibiotic) one-gram tablets (Amoun Pharmaceuticals, Egypt) twice daily, ibuprofen (Brufen) 400 mg tablets (Abbott, Egypt) three times daily, and topical gatifloxacin (Tymer) antibiotic eye drops (Jamjoom Pharmaceuticals, Jeddah, Saudi Arabia) four times daily for 10 days.
One week postoperatively, the skin sutures were removed and oral medications were discontinued. The patients were reviewed at 6 and 12 weeks. Tube removal was usually done at 12 weeks.
Outcome Measures
Our primary outcome measure was the amount of intraoperative bleeding (measured in ml) during external DCR using gauze soaked with combined TXA (100 mg/ml) and epinephrine 1:200,000 versus gauze soaked with epinephrine 1:200,000 only.
Secondary outcome measures were the number of gauzes used in each group, the surgical time, and the degree of surgeon satisfaction regarding the clarity of the surgical field and the ease of the surgery.
Statistical analysis
The statistical analysis included 30 patients, 15 in each group. Data were collected through patient history taking, examination, and recording of intraoperative events, including the amount of intraoperative blood loss, the number of gauzes used, the total surgical time, and the degree of surgeon satisfaction, as well as intra- and postoperative complications. Qualitative variables were represented as numbers and percentages, while continuous quantitative values were expressed as mean ± standard deviation (SD). Pearson's chi-square (χ²) test, the paired-sample t-test, and the ANOVA test were used in the statistical analysis; a p value < 0.05 was considered statistically significant. The data were coded and analyzed using the Statistical Package for the Social Sciences (SPSS) V16 software.
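For illustration, the sketch below shows how tests of this kind could be run in SciPy on placeholder data generated to match the reported group means and SDs. It is not the study's SPSS analysis; an independent-samples t-test is shown here, since the two groups consist of separate patients, and the satisfaction counts are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder blood-loss data (ml) drawn from the reported means/SDs
group_a = rng.normal(29.4, 17.1, 15)   # TXA + epinephrine
group_b = rng.normal(49.1, 18.1, 15)   # epinephrine only

# Continuous outcome: two-sample t-test
t_stat, p_val = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Qualitative outcome: chi-square test on hypothetical counts per satisfaction grade
table = np.array([[6, 5, 3, 1],
                  [3, 4, 5, 3]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```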
Results
Thirty patients were included in this study, 15 in each group, with no statistically significant difference between the two groups in baseline characteristics. Intraoperative bleeding (measured in ml), the number of gauzes used, and the total surgical time were significantly lower in group A than in group B (Table 2).
Surgeon satisfaction regarding the clarity of the surgical field and the ease of surgery was compared between the two groups and graded into four grades. Although a clearer surgical field and greater surgeon satisfaction were observed in group A compared with group B, statistical analysis showed no significant difference between the two groups (Table 2).
Further statistical analysis using the ANOVA test examined the relationship between surgeon satisfaction and the intraoperative parameters. The analysis showed that a lower amount of intraoperative bleeding (in ml) was associated with fewer gauzes used and a clearer surgical field, and that all of these parameters were associated with higher surgeon satisfaction and a shorter surgical time.
Discussion
Minimizing bleeding during external DCR is an important goal, as bleeding can obscure the operative field, especially the lacrimal sac and the nasal mucosa, and subsequently increase the DCR failure rate. Epinephrine has been widely used to reduce perioperative bleeding in many surgeries via its vasoconstrictor effect. Epinephrine also has a procoagulant effect, increasing platelet aggregation via its alpha-adrenergic action [11]. The vasoconstrictor effect of epinephrine varies with vessel type, whether arteries, arterioles, precapillary sphincters, capillaries, venules, or veins [12]. However, delayed intraoperative bleeding may occur once the vasoconstrictive effect has waned, leading to rebound bleeding through several mechanisms, including local tissue hypoxia and acidosis caused by the prolonged vasoconstriction and a β-adrenergic effect causing rebound hyperemia [13]. Therefore, the addition of an antifibrinolytic agent such as TXA has been proposed, with the aim of maintaining the already formed blood clot.
Local application of TXA has been investigated in many surgical interventions to reduce intraoperative bleeding and maintain a dry surgical field, with consequently shorter surgical times. This approach has proved effective in facelift surgery [14], joint replacement [15], minor oral surgeries [16], and many dermatological procedures such as Mohs micrographic surgery [17].
In this study, we investigated the efficacy of adding TXA to epinephrine-soaked gauze in reducing the total amount of bleeding in external DCR, aiming for the clearest possible intraoperative field and a reduction in total operative time. To our knowledge, this is the first study to investigate the use of topical TXA in external DCR. Agrwalla and Dora, in 2016, assessed the efficacy of preoperative systemic administration of oral TXA 500 mg tablets, ethamsylate tablets, or botropase injection versus placebo (vitamin B complex) in reducing the amount of intraoperative bleeding and shortening the surgical time during lacrimal sac surgery. The authors found a significant difference in the mean operating time between the TXA group and the placebo group (30 min and 56 min, respectively). Also, the number of gauze pellets soaked with blood was counted postoperatively and was significantly lower in the TXA group (22 pellets) than in the placebo group (38 pellets) [18]. Our results were comparable with those of this study. The amount of intraoperative bleeding was significantly lower in group A (29.4 ± 17.1 ml) than in group B (49.1 ± 18.1 ml). The mean operative time was 36 min in group A and 46.1 min in group B, and significantly fewer gauzes were used in group A than in group B (2.4 ± 1.1 and 4.2 ± 2, respectively; p = 0.008). However, we think that the topical route of TXA administration may be more convenient than oral administration, avoiding systemic hazards such as gastrointestinal complications, hypersensitivity reactions, and the rare thromboembolic events [18].
Our results nearly matched those of Caesar and McNab, who reported a mean operative time of 36 min with a local anesthetic composed of a 1:1 mixture of 2% lidocaine with 1:100,000 epinephrine and bupivacaine with 1:100,000 epinephrine. Caesar and McNab also reported a mean blood loss of 4.5 ml, while in our study the mean blood loss was 29.4 ± 17.1 ml in group A and 49.1 ± 18.1 ml in group B. The authors themselves noted that the blood losses reported in various studies have ranged from a mean of 6.3 to 250 ml, owing to the variable techniques used in measuring blood loss [3].
In our study, surgeon satisfaction regarding the clarity of the surgical field and the ease of surgery was compared between the two groups and graded into four grades. Although a clearer surgical field and greater surgeon satisfaction were observed in group A compared with group B, statistical analysis showed no significant difference between the two groups. This may be attributable to the low number of patients in each group relative to the number of satisfaction grades. However, no difference in the final clinical outcome was noticed over the follow-up visits.
The results of this study suggest that the roles of epinephrine and TXA are complementary and that the short duration of action of epinephrine can be offset by the clot-stabilizing effect of TXA. This interpretation is supported by the laboratory work of Zilinsky et al., in which lidocaine and adrenaline did not alter the effects of TXA on the stability and fibrinolysis of blood clots [17].
Limitations

The current study has some limitations, including the lack of comparison with other surgical approaches, such as endoscopic DCR, and the small number of patients.
Conclusion
In external DCR, gauze soaked with combined TXA (100 mg/ml) and epinephrine 1:200,000 achieved a significant reduction in the amount of intraoperative bleeding compared with gauze soaked with epinephrine 1:200,000 only. The reduction in bleeding with the addition of TXA resulted in a clearer surgical field, a shorter surgical time, and greater surgeon satisfaction.
Fig. 3 Surgical field in group A (a) versus group B (b), with less bleeding and better visualization and demarcation of anatomical structures in group A.
"year": 2023,
"sha1": "58366494145792ffe8327025c53c3866dc56875e",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10792-023-02789-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "3534c3223a48e9b016ad688af5596c0889c0e1b3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Manganese in dwarf spheroidal galaxies
We provide manganese abundances (corrected for the effect of the hyperfine structure) for a large number of stars in the dwarf spheroidal galaxies Sculptor and Fornax, and for a smaller number in the Carina and Sextans dSph galaxies. Abundances had already been determined for a number of other elements in these galaxies, including α and iron-peak elements, which allowed us to build [Mn/Fe] and [Mn/α] versus [Fe/H] diagrams. The Mn abundances imply sub-solar [Mn/Fe] ratios for the stars in all four galaxies examined. In Sculptor, [Mn/Fe] stays roughly constant between [Fe/H] ~ −1.8 and −1.4 and decreases at higher iron abundance. In Fornax, [Mn/Fe] does not vary in any significant way with [Fe/H]. The relation between [Mn/α] and [Fe/H] for the dSph galaxies is clearly systematically offset from that for the Milky Way, which reflects the different star formation histories of the respective galaxies. The [Mn/α] behavior can be interpreted as a result of the metal-dependent Mn yields of type II and type Ia supernovae. We also computed chemical evolution models for star formation histories matching those determined empirically for Sculptor, Fornax, and Carina, and for Mn yields of SNe Ia assumed to be either constant or varying with metallicity. The observed [Mn/Fe] versus [Fe/H] relation in Sculptor, Fornax, and Carina can be reproduced only by the chemical evolution models that include a metallicity-dependent Mn yield from the SNe Ia.
Introduction
Manganese (Mn) is an iron-peak element that can be produced by both type II and type Ia supernovae (SNe). Theoretical work indicates that the SNe II yields of Mn should increase with metallicity (Woosley & Weaver 1995), which is supported by observations such as the rise in [Mn/O] with [O/H] increasing from −0.5 to 0.0 (e.g., Feltzing et al. 2007). Conversely, the question of the metal dependence of the SNe Ia yields remains a matter of debate. Shetrone et al. (2003) suggested that the SNe Ia yields of both Cu and Mn increase with metallicity, and McWilliam et al. (2003) brought additional arguments in favor of this hypothesis by comparing the Mn abundances in the Milky Way bulge, the solar neighborhood, and the Sagittarius dSph galaxy. These arguments in favor of a metallicity-dependent Mn yield of SNe Ia were, however, challenged by Carretta et al. (2004), who judged that the observational results gathered so far are too complex to allow a clear-cut conclusion to be drawn. Nevertheless, Cescutti et al. (2008), with their chemical evolution model, and Badenes et al. (2008), with their new method for measuring the metallicity of type Ia supernovae, independently found additional evidence of the metal dependence of the SNe Ia Mn yields, which was also suggested by theoreticians such as Ohkubo et al. (2006). Badenes et al. (2008) suggest the following explanation of the phenomenon: during the late evolution of the supernova (SN) Ia progenitor, the ¹⁴N produced by the CNO cycle is converted into ¹⁸F (before being finally transformed into ²²Ne), which is transformed into ¹⁸O through β⁺ decay. This increases the number of neutrons in the stellar core, which is the future white dwarf. The neutron excess η is proportional to the metallicity Z and is essentially preserved until the supernova explosion. Although this neutron excess leaves the production of the most abundant species (e.g., Fe) unaffected, the formation of elements with more neutrons than protons is favored at high Z during the SN Ia explosion. ⁵⁵Mn, with its 25 protons and 30 neutrons, is the most abundant of them; it is produced during incomplete Si burning (first as ⁵⁵Co, which then decays into ⁵⁵Mn). When compared with the abundance of an element insensitive to the neutron excess (especially Cr, which is also built during incomplete Si burning), the resulting Mn abundance can be expected to be an efficient tracer of the progenitor metallicity.

⋆ Based on observations made with the FLAMES-GIRAFFE multi-object spectrograph mounted on the Kueyen VLT telescope.
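As a compact restatement of the neutron-excess argument in the paragraph above, the standard bookkeeping used in SN Ia metallicity studies can be written as follows. This block is our addition, not an equation from the original text.

```latex
% Neutron excess eta of a composition with mass fractions X_i,
% where N_i, Z_i, A_i are the neutron number, proton number, and mass number:
\begin{equation}
  \eta \;=\; \sum_i \frac{N_i - Z_i}{A_i}\, X_i .
\end{equation}
% If the excess is dominated by ^{22}Ne (N - Z = 2, A = 22), inherited from
% the initial CNO nuclei as described in the text, then
\begin{equation}
  \eta \;\simeq\; \frac{2}{22}\, X({}^{22}\mathrm{Ne}) \;\propto\; Z ,
\end{equation}
% which is why eta tracks the progenitor metallicity Z.
```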
To shed light on the production mechanisms of Mn, we clearly need to investigate its abundances in a variety of galaxies with different star formation histories. To date, the number of systems in which this information is available is small, and the number of stars is very limited: besides the Milky Way, about two dozen stars have been analyzed in Sagittarius (Bonifacio et al. 2000; McWilliam et al. 2003; Sbordone et al. 2007), nine stars in Sculptor (Shetrone et al. 2003; Geisler et al. 2005; Tafelmeyer et al. 2010), and up to a maximum of five stars per galaxy in Draco (Shetrone et al. 2001), Sextans (Shetrone et al. 2003; Tafelmeyer et al. 2010), Carina (Shetrone et al. 2003), Fornax (Shetrone et al. 2003; Tafelmeyer et al. 2010), and Leo I (Shetrone et al. 2003).
DART, the Dwarf galaxy Abundance and Radial-velocities Team, allows a real step forward for Sculptor, Fornax, Carina, and Sextans. This ESO Large Program, based on the FLAMES/GIRAFFE spectrograph at the VLT, encompasses stellar samples of up to 80 stars per galaxy with optical spectra at relatively high resolution (R ~ 20,000). The abundances of most elements with measurable lines have already been published, except for manganese: the equivalent widths for this element are available, but the abundance determination is more complicated. Since manganese has an odd atomic number (Z = 25), it has a significant hyperfine structure (hereafter HFS), which broadens the spectral lines. This leads to desaturation of the lines, which cannot be neglected as soon as the equivalent widths exceed a few tens of mÅ. Therefore, reliable abundances cannot be obtained by simply using the equivalent width and total oscillator strength of a given line; all components of the hyperfine structure have to be taken into account. This work provides Mn abundances (with HFS taken into account) in three Local Group dwarf spheroidal galaxies for an unprecedentedly large number of stars. This constitutes by far the largest set of Mn abundances in any galaxy other than the Milky Way, and the size of our sample is comparable to, e.g., the samples of stars in the thin and thick disks of our Galaxy considered by Feltzing et al. (2007). This paper is organized as follows. Section 2 introduces our sample of stars. Section 3 describes how we derived HFS-corrected Mn abundances, while Section 4 discusses the results. Section 5 presents chemical evolution models that reproduce the observations. Finally, Sect. 6 summarizes our results.
Observational material and analysis
In the following, we analyze five different samples. For four of them, the FLAMES/GIRAFFE HR10, HR13, and HR14 grisms were used, centered on 5488, 6273, and 6515 Å, respectively (see Tolstoy et al. 2006). The full abundance analysis papers of the DART FLAMES/GIRAFFE samples in Sextans and Sculptor are being written up. Surveys of the Fornax, Sculptor, and Sextans galaxies have already been performed to search for extremely metal-poor stars (Tafelmeyer et al. 2010). The Mn abundances of these stars, which were previously corrected for HFS, are incorporated in the present work. The results of the analysis of all elemental abundances besides Mn are published in Letarte et al. (2010) for Fornax and in Lemasle et al. (2012) for Carina. In a companion work, Venn et al. (2012) presented the chemical composition of 23 elements in nine bright Carina red giant branch stars observed with either the FLAMES/UVES fibers or the Magellan/MIKE spectrograph. Their Mn abundances were corrected for HFS, and their sample complements ours. In summary, all necessary data, such as equivalent widths and stellar parameters, were available for the present analysis of manganese.
Galaxy and stellar sample
• The Fornax dSph galaxy was studied by Letarte et al. (2010), who provided and discussed the abundances of a large number of elements. There are 72 stars with at least one measurable Mn line, 60 of which have three reliable Mn lines.
• In Sculptor, 76 stars have at least one measurable Mn line, 50 of which have a reliable average Mn abundance based on three lines (Hill et al., in prep.).
• Twenty-one stars constitute the magnitude-limited stellar sample (I < 18) in Sextans (Jablonka et al., in prep.). However, only 5 stars have reliable Mn equivalent widths.
• In Carina, 17 stars have at least one Mn line (Lemasle et al. 2012), but only 6 have detectable Mn i λ5407 Å and λ5420 Å lines, which were finally selected to compute the average Mn abundance.
The detailed analysis of the Mn lines and the composition of the final sample of stars are presented in Section 3.
Stellar atmosphere models and HFS corrections
The abundance analysis was performed with two codes, calrai on the one hand and moog on the other, both used with the new MARCS spherical models of stellar atmospheres (Gustafsson et al. 2003, 2008) under the LTE approximation (for Sculptor, the calrai abundances were determined using plane-parallel MARCS models). The computation of the radiative transfer was still done in plane-parallel geometry.
The stellar effective temperatures, gravities, and turbulence velocities were adopted from the DART general analyses of each galaxy. Temperatures and gravities were determined from photometric data for Fornax, Sextans and Carina, and from spectroscopic data in the case of Sculptor.
In principle, equivalent widths were measurable for up to four Mn lines. All of these lines lie in the wavelength range of the HR10 FLAMES/GIRAFFE setup. One line, Mn i λ5432, belongs to multiplet No. 1 and is a resonance line, while the other three belong to multiplet No. 4. All four lines are significantly broadened by the hyperfine structure.
The Mn HFS-corrected abundances were derived in two steps: ⊲ First, the uncorrected Mn abundances were computed with calrai. The code was initially developed by Spite (1967) (see also the description of the atomic part in Cayrel et al. 1991) and has been continuously updated over the years. calrai was used to analyze all DART data sets. The DART results were partly summarized in Tolstoy et al. (2009a). The homogeneity of these analyses allows us to perform robust comparisons of the chemical patterns across all metallicity ranges and between galaxies.
⊲ Second, an HFS correction was computed with the August 2010 version of Chris Sneden's moog code. On the one hand, for each line we computed the uncorrected Mn abundance (i.e., neglecting the hyperfine structure), using the abfind driver and the total log(gf) value of the line, taken from the Kurucz file gfhy0600.100. The resulting abundances are very close to those given by calrai (see the Appendix for a comparison between the moog and calrai Mn abundances). On the other hand, we computed the abundances with the hyperfine structure, using the blends driver of moog and introducing all hyperfine components listed in the above Kurucz file.
Finally, the HFS correction ∆hfs for each line was defined as the difference between these two abundances. The line parameters and hyperfine components are given in Table 1.
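The arithmetic of this two-step correction can be summarized in a few lines of code. The sketch below is our paraphrase of the procedure (the actual MOOG runs and output parsing are omitted), with made-up abundances for the example.

```python
# Delta_hfs = (abundance from the blends driver, with all hyperfine components)
#           - (abundance from the abfind driver, using the total log(gf) alone)
def hfs_correction(abund_blends, abund_abfind):
    return abund_blends - abund_abfind

# Final Mn abundance: the CALRAI value plus the MOOG-based HFS correction
def corrected_abundance(abund_calrai, abund_blends, abund_abfind):
    return abund_calrai + hfs_correction(abund_blends, abund_abfind)

# Example: a strong, saturated line where HFS desaturation lowers A(Mn) by 0.8 dex
print(corrected_abundance(abund_calrai=3.10, abund_blends=2.28, abund_abfind=3.08))
```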
As an example, the HFS corrections for the 72 stars in the Fornax dSph galaxy are shown in Fig. 1 as a function of equivalent width for the four available lines (for 71 stars only in the case of the λ5516 Å line). The behavior of the correction for the strongest line, Mn i λ5432 Å, is especially noteworthy: the correction becomes increasingly negative as the equivalent width increases, then turns upward beyond 220 mÅ. This behavior reflects the curve of growth: the minimum correction (or the maximum of its absolute value) coincides with the plateau of the curve of growth, while the desaturation effect of the hyperfine structure becomes unimportant in the linear part on the one hand, and in the strongly saturated part on the other. The scatter in the HFS corrections at a given equivalent width is due to the variety of stellar parameter values, especially the microturbulent velocities. In Sculptor, the behavior of the HFS correction is similar, except that the rising branch (for the Mn i λ5432 Å line) is much shorter because of the lower metallicity. In Sextans, the HFS corrections are never larger than 0.35 dex, this maximum being reached for the λ5432 Å line, which is the strongest. In Carina, the HFS corrections are smaller than 0.3 dex for the λ5407 Å and λ5516 Å lines, and smaller than 0.6 dex for the other two lines.
We note that the amplitude of the HFS correction may reach 1.6 dex; Fig. 1 illustrates how inescapable this correction is.
Final line-by-line abundances
The final Mn abundances were obtained by adding ∆hfs to the initial abundances derived with calrai. For Carina, whose data were included later, we used only moog to determine the Mn abundance, because the results of this code perfectly match those of calrai, as shown in Fig. A.1; here we used the same spherical atmosphere models as for the abundance determination of Fe and the other elements. As Fig. 2 shows, the GIRAFFE sample of stars at the center of the Fornax dSph is more metal-rich than that at the center of the Sculptor dSph. Therefore, the equivalent widths of the λ5432 Å line are larger in Fornax than in Sculptor, and above 200 mÅ for most stars. The λ5432 Å line is the most sensitive to non-local thermodynamic equilibrium (NLTE) effects because of its low excitation potential. Furthermore, it is so strong that its profile departs significantly from a Gaussian, thereby severely biasing the equivalent width estimated by the daospec code, which assumes a Gaussian profile. As a consequence, we discarded the λ5432 Å line in the computation of the average Mn abundances.
The λ5407 Å line in Fornax also behaves in a slightly different way with respect to the λ5420 Å and λ5516 Å lines. As for the λ5432 Å line, this is probably due to the large equivalent widths in the most metal-rich stars of this galaxy, which may greatly exceed 200 mÅ. Therefore, when computing the average Mn abundances, we excluded from the Fornax sample all lines with EW > 230 mÅ (the λ5407 Å line, but also the other two). The safer and more stringent criterion EW > 200 mÅ would have left only 25 stars with an average Mn abundance based on three lines. Including stars with 200 < EW < 230 mÅ raises the average [Mn/Fe] ratio by no more than 0.1 dex without substantially biasing the distribution of stars in terms of metallicity, hence this trade-off was deemed acceptable.
Average abundances and compilation of the [Mn/Fe] vs [Fe/H] diagram
To compute the final abundances, we used an average weighted by the inverse variances of the abundances obtained from the individual lines; these variances were propagated from the estimated errors in the corresponding equivalent widths.
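For concreteness, the weighted average and its error take the standard inverse-variance form; the snippet below is a generic sketch with illustrative line-by-line values, not our actual pipeline.

```python
import numpy as np

def weighted_mean(abund, sigma):
    """Inverse-variance weighted mean of line-by-line abundances."""
    w = 1.0 / np.asarray(sigma) ** 2                    # weights = 1 / variance
    mean = np.sum(w * np.asarray(abund)) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))                      # error of the weighted mean
    return mean, err

# e.g. the lambda 5407, 5420, 5516 A lines of one star (made-up numbers)
print(weighted_mean([2.85, 2.95, 2.90], [0.10, 0.15, 0.12]))
```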
⊲ In Fornax, the average abundances were computed from the three lines λ5407 Å, λ5420 Å, and λ5516 Å. Since some stars lack one or more of these lines, or the equivalent width of some lines is larger than 230 mÅ, only 60 stars remain of the initial 72.
⊲ In Sculptor, the average abundances could be computed from the same three lines as in Fornax, for a final sample of 50 stars.
⊲ In Sextans, keeping only those stars with three reliable lines would have left a single object. Therefore, all 5 stars (in addition to the EMP stars of Tafelmeyer et al. 2010) were included in the final sample, even though the average abundances are based on fewer than three lines in most cases.
⊲ In Carina, the initial sample of 17 stars shrinks to 6 objects having at least the two Mn lines λ5407 Å and λ5420 Å. The average Mn abundances are based on these two lines.
The average Mn abundances are computed from three lines in both Sculptor and Fornax but from only two lines in Carina, which might cause a zero-point problem when our results for these galaxies are compared. However, Fig. 2 shows that the λ5516 Å line, which was not included in the average [Mn/Fe] values of Carina, yields [Mn/Fe] ratios that lie in between those derived from the two other lines (see, e.g., the averages for Sculptor), so that neglecting this line does not change the average values by more than a few hundredths of a dex at most. Another kind of zero-point problem does, however, arise between some published values and those of this work because of the different solar abundances adopted. We adopt log(N_Mn) + 12 = 5.39 and log(N_Fe) + 12 = 7.50, Sobeck et al. (2006) adopt 5.39 and 7.52, respectively, and Venn et al. (2012) adopt 5.43 and 7.50. This difference of a few hundredths of a dex remains smaller than the uncertainties and was therefore neglected.
Fig. 1. Hyperfine structure correction (defined as the abundance with HFS correction minus the abundance without it) as a function of equivalent width for the Mn i lines λ5407, λ5420, λ5432, and λ5516 Å for red giants in the Fornax dSph galaxy. The λ5432 line was finally discarded (see text).
Discussion of possible NLTE effects
Whilst we took the line HFS into account, our abundances may still suffer from NLTE effects. Very few studies address this problem for the manganese lines. Bergemann & Gehren (2007) examined the solar atmosphere for a total of 39 lines belonging to ten multiplets, and their line list includes the four Mn i lines we use here. They showed that the NLTE correction (defined as ∆X = log ε_NLTE − log ε_LTE, where ε is the ratio of the number densities of Mn to H) is at most on the order of 0.1 dex in absolute value. The maximum correction, ∆X = +0.11, applies to the λ5432 Å line, closely followed by the other three (+0.09 for λ5420 Å, and +0.085 for λ5407 Å and λ5516 Å). Unfortunately, these corrections cannot be applied directly to our case, because the surface gravities and metallicities of our sample are very different from solar. Bergemann & Gehren (2008)

On the observational side, Feltzing et al. (2007) argued that the excitation balance is unaffected by departures from LTE in their sample, based on the identical behavior of lines with different excitation potentials when plotting the abundance as a function of Teff, log g, and [Fe/H]. However, they did not exclude possible departures from ionization balance. In addition, all their stars are either on the main sequence or the subgiant branch, and none has [Fe/H] < −1. Furthermore, we have only one line in common with Feltzing et al. (2007), Mn i λ5432, which, as argued above, we chose to discard because it is probably the most sensitive to NLTE effects and its equivalent width is biased owing to its large strength. Therefore, the conclusion reached by Feltzing et al. (2007) cannot be generalized to our sample. Sobeck et al. (2006) determined Mn abundances for 200 stars in 19 globular clusters and for a comparable number of field stars with similar stellar parameters. They also neglected NLTE effects, on the grounds that they should be small when considering [Mn/Fe], which involves two neutral species (Ivans et al. 2001). Interestingly, they found an average constant value <[Mn/Fe]> = −0.36 for their halo field stars, which is about 0.15 dex lower than the value found by Feltzing et al. (2007); Sobeck et al. (2006) show the same trend but systematically lower by about 0.1 dex. One possible explanation of this difference is a bias produced by there being only one line, Mn i λ6013 Å, in common with Sobeck et al. (2006). Another explanation might be an NLTE correction that is 0.15 dex larger for giants than for less evolved stars, but this remains to be confirmed on theoretical grounds. The main results of the present study are summarized in Fig. 3. Our comparison sample is composed of the results of i) Sobeck et al. (2006) for the Milky Way globular clusters and field halo stars, ii) Cayrel et al. (2004) for field halo stars, and iii) Feltzing et al. (2007) for the Milky Way thin and thick disk stars. We also display the extremely metal-poor (EMP) stars found by Tafelmeyer et al. (2010) in Sculptor, Fornax, and Sextans, and the nine stars of Venn et al. (2012) in Carina. Finally, we show the stars studied by Shetrone et al. (2003) in the Sculptor, Fornax, Sextans, Carina, and Leo I dSph galaxies.

Fig. 2. Final Mn abundances for each of the four lines available for the stars in the Sculptor (blue), Fornax (red), Sextans (green), and Carina (dark green) dSph galaxies. The black horizontal line indicates the zero value; the dashed lines are the weighted averages for the respective galaxies. Note the strongly discrepant behavior of the λ5432 line.

Fig. 3. … and Carina (model E in dark green) dSphs are followed. The continuous lines show models with metallicity-dependent SNe Ia Mn yields as in Cescutti et al. (2008). The dashed lines follow the evolution of [Mn/Fe] for the same SFHs, but with metal-independent SNe Ia Mn yields.
The extremely metal-poor (EMP) stars
Tafelmeyer et al. (2010) noted that the manganese abundances of their Sextans members, S11-04 and S24-72, were based on only one line, Mn i λ4823.52 Å, which differs from the lines we used. In spite of this difference, their [Mn/Fe] values are in good agreement with those of other Sextans stars of higher metallicity. They also agree with the values found in the Milky Way halo (Sobeck et al. 2006) and with our results in Fornax and Sculptor. The Mn abundance of the most extreme EMP star, Scl07-50, was obtained from the three resonance lines of the triplet at λ ~ 4030 Å, while that of Fnx05-42 (which is only slightly less iron-poor but has the lowest [Mn/Fe] ratio) was obtained from two lines of the same triplet. These resonance lines are expected to be strongly affected by NLTE. Hence, we drew an upward arrow at the position of the two most iron-poor stars, with an amplitude of 0.44 dex matching the NLTE correction of Bergemann & Gehren (2008). Our [Mn/Fe] values agree qualitatively with those of Feltzing et al. (2007) in the thick disk and of Sobeck et al. (2006) in the halo and globular clusters. This agreement can only be considered qualitative, owing to the zero-point issues raised earlier and to the different kind of stars considered (dwarfs instead of giants) in the case of Feltzing et al. (2007). A closer look reveals some interesting features. In Fornax (red dots in Fig. 3), there seems to be a very slight correlation between [Mn/Fe] and [Fe/H], but the trend is essentially due to the small group of 4 stars near [Fe/H] = −1.4. Pearson's correlation coefficient is only 0.17 (for 60 stars), the more robust Spearman correlation coefficient is 0.05, and the Student t-value is 0.38. The relatively large difference between the two correlation coefficients is due to the small group of 4 stars around [Fe/H] ~ −1.4, which lie rather far from the bulk of the data and inflate Pearson's coefficient. In conclusion, even though future observations might confirm the trend suggested here in Fornax, for the time being we can only say that it is not statistically significant. [Mn/Fe] might thus be considered constant with [Fe/H], with an average value <[Mn/Fe]> = −0.32 ± 0.02. Any cosmic dispersion must be smaller than about 0.09 dex, because the scatter of the [Mn/Fe] values around the mean amounts to ~ 0.12 dex, while their average error is ~ 0.07 dex.
Conversely, the 50 Sculptor stars display a global negative trend: Pearson's correlation coefficient is −0.569, Spearman's coefficient is −0.546, and the Student t-value is −4.51 (for 50 stars). This is clearly significant, because the null hypothesis has a probability well below one percent. Zooming into Sculptor in Fig. 3, however, the relation does not appear to be a precisely monotonically declining line, but rather a plateau followed by a decreasing linear function (Fig. 4). If real, a "knee" appears between [Fe/H] = −1.5 and −1.3.
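Correlation statistics of this kind are straightforward to reproduce in outline. The sketch below runs the Pearson and Spearman coefficients and the associated Student-t statistic on synthetic [Fe/H]-[Mn/Fe] pairs with a built-in declining trend; the arrays stand in for the real Sculptor measurements, which are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
feh = rng.uniform(-2.0, -1.0, 50)                           # fake [Fe/H] values
mnfe = -0.35 - 0.3 * (feh + 1.5) + rng.normal(0, 0.1, 50)   # declining [Mn/Fe]

r_p, p_p = stats.pearsonr(feh, mnfe)
r_s, p_s = stats.spearmanr(feh, mnfe)
print(f"Pearson r = {r_p:.3f} (p = {p_p:.3g})")
print(f"Spearman rho = {r_s:.3f} (p = {p_s:.3g})")

# Student-t statistic testing the significance of Pearson's r
n = len(feh)
t = r_p * np.sqrt((n - 2) / (1 - r_p ** 2))
print(f"t = {t:.2f} with {n - 2} degrees of freedom")
```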
The contrasting behaviors of [Mn/Fe] in Fornax and Sculptor seem difficult to explain entirely in terms of NLTE effects, primarily because the average surface gravities are the same (~ 0.7 dex) in both cases. Moreover, while one would expect NLTE effects alone to produce in all galaxies the same monotonic relation with metallicity as seen in the Milky Way, the observed trends instead differ from galaxy to galaxy. An alternative explanation could be a systematic error in the HFS components (splitting, oscillator strengths), because, on average, the lower the [Fe/H], the smaller the HFS correction.
We conducted two different tests to explore the possibility of erroneous HFS corrections. First, we excluded from the sample all Sculptor stars with HFS corrections larger than a given limit. Limiting the sample to the 45 stars with |∆hfs| ≤ 1 for the three lines at 5407 Å, 5420 Å, and 5516 Å still yields a Spearman rank correlation coefficient of −0.5 and a t-value of −3.75, implying a probability well below one percent that the correlation is random. Limiting ourselves further to |∆hfs| ≤ 0.3 (24 stars), the correlation remains, with a probability of random occurrence well below five percent. This suggests that only a very large relative error in ∆hfs, on the order of 50%, could account for the trend we see in Sculptor, which spans almost ~ 0.2 dex in [Mn/Fe]. This seems unlikely.
Second, instead of using the Kurucz line list, we extracted from Tables 1, 5, and 6 of Vitas & Vince (2003) the HFS components of the 5420 Å and 5432 Å lines, which are based on the laboratory measurements of Booth et al. (1983). We retained the uncorrected wavelength and oscillator strength values (the "λ′" and "log(gf)′" ones). We recomputed the HFS correction with these data for the seven Sculptor stars whose original HFS corrections ranged from −0.10 to −1.60 for the 5420 Å line and from −0.34 to −1.07 for the 5432 Å line. For the 5420 Å line, we obtained the same ∆hfs values as for the Kurucz components to within 0.02 dex. For the 5432 Å line, ∆hfs was recovered to within 0.01 dex for six stars and within 0.03 dex for the last one.
Therefore, the HFS corrections appear to be very robust, especially as the uncorrected log(gf)′ values listed in the paper of Vitas & Vince (2003) differ only slightly from Kurucz's (the total log(gf) value is −1.492 instead of −1.462 for the 5420 Å line, and −3.740 instead of −3.795 for the 5432 Å line).
In summary, the variation in [Mn/Fe] can probably be taken at face value and genuinely related to the nucleosynthesis of Mn. The decreasing trend of [Mn/Fe] with increasing [Fe/H] seen in Sculptor had been observed nowhere else, except for giants and subgiants in the globular cluster ω Centauri (Cunha et al. 2010; Pancino et al. 2011), where the anti-correlation is even more pronounced (see Fig. 4). Romano et al. (2011) attempted to interpret these last sets of results, but unsuccessfully, although they also found that a metallicity-dependent yield of SNe Ia would be more realistic than a constant yield.
Manganese and the α elements
Since the α-elements are mostly produced in massive stars while Mn can be produced by both SNe II and SNe Ia, the ratio of Mn to some of the α-elements may reveal at which point manganese is produced by one or the other nucleosynthetic route. Fig. 5 displays the cases of Mg and Ca, two α-elements with slightly different nucleosynthetic origins: Mg is produced in a hydrostatic phase of the evolution of massive stars, while Ca is instead produced during a type II supernova explosion (Woosley et al. 2002). In Figure 5, the Carina stars (6 from Lemasle et al. (2012), 9 from Venn et al. (2012), and 5 from Shetrone et al. (2003)) lie close to the sequence defined by the Sculptor stars. However, the star at [Fe/H] = −1.4 lies outside the general trend defined by the sample of Lemasle et al. (2012), and the stars of Shetrone et al. (2003) do not show any trend. Our 5 Sextans stars define an increasing trend similar to that of Sculptor, and possibly steeper, which needs confirmation by further observations.
Two effects shape the behavior seen in Fig. 5: i) the increase in Mn production with metallicity (Fig. 4), and ii) the decrease in [Mg/Fe] due to SNe Ia, as can be most clearly seen at [Fe/H] > −1.6. The differential behavior of Mn and the α-elements can be attributed to their different nucleosynthetic paths: Mn is produced ever more in increasingly metal-rich core-collapse supernovae, and definitely more than in the metal-poor type Ia supernovae (McWilliam et al. 2003). To further investigate the relative roles of SNe II and SNe Ia, we introduce simple models of chemical evolution in the next section.
The nucleosynthesis of Mn
We now discuss the chemical evolution of the three galaxies of our sample with the largest numbers of stars, Sculptor, Fornax, and Carina, adopting a differential approach in which we compare models with and without metal-dependent SNe Ia Mn yields. Models A, C, and E are set up to follow the observations as closely as possible, whereas models B and D are extreme cases with which we test the influence of the choice of SFH on the results. These five models attempt to bracket the possible SFHs of these galaxies, the true one lying somewhere within these boundaries. Their main characteristics are summarized in Table 2.
Models of chemical evolution
Models A and B refer to the Sculptor dSph. In model A, the SFR is a decreasing exponential function with a timescale of 1 Gyr. In model B, the SFR is also a decreasing exponential function, but with a shorter timescale of 100 Myr. Both models have a low star formation rate tail of 5 × 10⁻⁵ M⊙/yr, stopping 5 Gyr ago. They both form a similar total mass of stars, on the order of ~ 1.5 × 10⁶ M⊙, from a total initial gas mass of 2 × 10⁷ M⊙.
Models C and D refer to the Fornax dSph. Model C assumes an exponentially decreasing SFR with a long timescale of 10 Gyr, whereas model D, with an exponentially decreasing SFR on a short timescale of 100 Myr, has an extended tail with a star formation rate of 3 × 10⁻³ M⊙/yr. The evolution of these models was stopped 1 Gyr ago, and their amplitude of star formation is ten times higher than in the Sculptor models. Both Fornax models form a total stellar mass of ~ 4.5 × 10⁷ M⊙ from a total initial gas mass of 3 × 10⁸ M⊙.
For the SN Ia rate, which is a key component of our analysis, we stress that it was computed following Matteucci & Greggio (1986), and is hence expressed as

\[ R_{\mathrm{Ia}}(t) = A \int_{M_{B,\mathrm{m}}}^{M_{B,\mathrm{M}}} \phi(M_B) \left[ \int_{\mu_{\mathrm{m}}}^{0.5} f(\mu)\, \psi(t - \tau_{M_2})\, d\mu \right] dM_B , \]

where ψ(t) is the SFR, M₂ is the mass of the secondary, M_B is the total mass of the binary system, and φ(M_B) is the IMF, which refers to the total mass of the binary system when computing the SNe Ia rate. f(μ) is the distribution function of the mass fraction of the secondary, μ = M₂/M_B,

\[ f(\mu) = 2^{1+\gamma} (1+\gamma)\, \mu^{\gamma} , \]

with γ = 2, and A is the fraction of systems in the appropriate mass range that can give rise to SNe Ia events. This quantity is fixed to A = 0.05 by reproducing the observed SNe Ia rate at the present time (Cappellaro et al. 1999). The metal-dependent yields of Fe and Mn for SNe II are taken from Woosley & Weaver (1995), with the difference that we halved the iron yields of SNe II, as suggested by Romano et al. (2010). These yields are represented by the red curve in Fig. 7 for SNe II with a 15 M⊙ progenitor, which is taken as representative of the majority of core-collapse SNe. We first implemented the hypothesis of Cescutti et al. (2008) that the metal dependence of the Mn SNe Ia yields is y ∝ (Z/Z⊙)^0.65 (see the black line in Fig. 7), which led to the five models A1, B1, C1, D1, and E1. We then considered the metal-independent Mn yields of Iwamoto et al. (1999) for solar metallicity, taking the SNe Ia yields of iron from the same work. This led to the five additional models A2, B2, C2, D2, and E2.
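A schematic numerical evaluation of this rate is sketched below. The IMF slope and normalization, the lifetime function τ(M), the lower limit on μ, and the example SFH are simplified placeholders rather than the ingredients of models A-E; only the structure of the Matteucci & Greggio (1986) double integral follows the text.

```python
import math
from scipy import integrate

GAMMA = 2.0     # exponent of f(mu), as in the text
A_IA = 0.05     # fraction of binary systems producing SNe Ia

def f_mu(mu):
    # Normalized distribution of mu = M2/MB on [0, 0.5]
    return 2.0 ** (1 + GAMMA) * (1 + GAMMA) * mu ** GAMMA

def phi(mb):
    # Salpeter-like IMF by number, arbitrary normalization (placeholder)
    return mb ** -2.35

def tau(m):
    # Rough main-sequence lifetime in Gyr (placeholder scaling law)
    return 10.0 * m ** -2.5

def psi(t):
    # Example SFH: exponential decline on a 1 Gyr timescale (cf. model A)
    return math.exp(-t / 1.0) if t >= 0.0 else 0.0

def snia_rate(t, mb_min=3.0, mb_max=16.0, mu_min=0.02):
    # Outer integral over the binary mass MB, inner integral over mu = M2/MB;
    # secondaries whose lifetime exceeds t contribute nothing (psi returns 0).
    def inner(mb):
        val, _ = integrate.quad(
            lambda mu: f_mu(mu) * psi(t - tau(mu * mb)), mu_min, 0.5, limit=200)
        return phi(mb) * val
    val, _ = integrate.quad(inner, mb_min, mb_max, limit=200)
    return A_IA * val

print(snia_rate(2.0))   # relative SN Ia rate 2 Gyr after star formation begins
```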
Does the Mn yield depend on metallicity?
Figure 3 unambiguously demonstrates that, regardless of the galaxy and the assumed SFH, models with no metallicity dependence of the Mn SNe Ia yields (dashed lines) predict far too high a [Mn/Fe]. In contrast, all five models with a metal dependence (solid lines) do pass through the observed data points. This may well indicate that the form of the assumed metal dependence of the SNe Ia yields is not fully correct. We did not attempt any fine-tuning at this stage, since our relatively simple models imply clearly enough that metal-poor SNe Ia should produce less Mn than metal-rich ones. In Fig. 3, the decline predicted for Fornax at [Fe/H] > −1.4, at the end of the intermediate-age peak of star formation, is shallower owing to the higher metallicity of the SNe Ia at that time.
Comparison with Lanfranchi's models
The chemical evolution models adopted here are very similar to those computed by Lanfranchi et al. (2003), although there are two major differences: we do not consider galactic winds, and our SFHs are quite different. While we adopt SFHs initially derived from color-magnitude diagrams, Lanfranchi et al. adjust their SF efficiency until the observations are reproduced.
Galactic winds or other dynamical effects, such as tidal and ram-pressure stripping, must have removed the gas in these dSphs, because none is detected. Moreover, as shown by Lanfranchi et al., galactic winds can influence the chemical evolution at the end of the evolution of these galaxies if one considers differential winds, i.e., if different elements can be expelled with different wind efficiencies. Nevertheless, to keep our models as simple as possible, galactic winds were not included in our analysis. This does not affect our conclusions. Indeed, the evidence that the SN Ia Mn yields depend on metallicity does not arise from the latest stages of the galaxy chemical evolution, when winds would play a role, but much earlier. Moreover, given that Lanfranchi's wind efficiency is essentially the same for Fe and Mn, [Mn/Fe] is definitely not expected to change.
Conclusion
On the basis of the three Mn i lines at λ5407, 5420, and 5516 Å, we have derived stellar abundances of manganese in three dSph galaxies: Sculptor (50 stars), Fornax (60 stars), and Carina (6 stars); the Mn abundances in a fourth dSph galaxy, Sextans (5 stars), were based on only one to three Mn lines. These Mn abundances are corrected for HFS, the correction reaching 1.6 dex for strong lines (EW ~ 200 mÅ).
Our analysis of the relation between the [Mn/Fe] and [Mn/α] abundance ratios and [Fe/H] has highlighted the following features:
• The Mn abundances lead to sub-solar [Mn/Fe] ratios for all stars in all four of the studied galaxies, as expected from their low metallicity.
• The variation in [Mn/Fe] with [Fe/H] in Sculptor has two phases: a plateau at [Fe/H] < −1.4, followed by a ~ 0.3 dex decrease at higher metallicity. This decreasing trend of [Mn/Fe] with [Fe/H] had only been observed previously in the globular cluster ω Centauri. In Fornax, there is a marginal suggestion of an increasing trend, but without any statistical significance.
• Our datasets in four different galaxies, and their comparison with the case of the Milky Way, clearly demonstrate that the evolution of [Mn/α] as a function of [Fe/H] depends on the galaxy's SFH. The variation in [Mn/α] can be interpreted in terms of the balance between the metal-dependent yields of type II and type Ia supernovae.
• Three simple chemical evolution models, for Sculptor, Fornax, and Carina, have been developed, and the impacts of the type II and type Ia Mn yields, with and without metal dependence, have been investigated. They unambiguously demonstrate that reproducing the observations requires metal-dependent SNe Ia yields. The successive increase and decrease in [Mn/Fe] as a function of [Fe/H], as well as the amplitude of these variations, result from the SNe II Mn yields increasing with [Fe/H], combined with initially low SNe Ia yields that subsequently rise with metallicity.

References

Tafelmeyer, M., Jablonka, P., Hill, V., et al. 2010, A&A, 524, A58
Tolstoy, E., Hill, V., Irwin, M., et al. 2006, The Messenger, 123, 33
Tolstoy, E., Hill, V., & Tosi, M. 2009a, ARA&A, 47, 371
Venn, K. A., Shetrone, M. D., Irwin, M. J., et al. 2012, ApJ, submitted
Vitas, N. & Vince, I. 2003, Serbian Astronomical Journal, 167, 35
Woosley, S. E., Heger, A., & Weaver, T. A. 2002, Reviews of Modern Physics, 74, 1015
Woosley, S. E. & Weaver, T. A. 1995, ApJS, 101, 181

Appendix A: Comparison between MOOG and CALRAI abundances

The Mn abundances uncorrected for HFS were computed with both codes, calrai and moog, for the same atmosphere models. It is therefore possible to compare the results and check the consistency between the two codes. For Fornax, the raw (i.e., uncorrected for HFS) Mn abundances given by the two codes prove to be perfectly consistent (Fig. A.1).
For Sculptor, however, there is a systematic shift of about 0.1 to 0.2 dex, in the sense that the moog abundances are lower than the calrai ones for all four lines. The slopes are very close to 1, but tend to be slightly above unity.
The reason why the systematic zero-point shift is much larger in Sculptor than in Fornax lies in the atmosphere models used. While spherical models were used in connection with the moog spectral synthesis code for both galaxies, plane-parallel models were used in connection with the calrai code in the case of Sculptor, leading to the overestimated abundances seen in Fig. A.2.

Fig. A.1. Comparison between the Mn abundances (not corrected for HFS) obtained for the Fornax dSph galaxy using the moog code and those obtained using the calrai code, for each of the four lines Mn i λ5407, λ5420, λ5432, and λ5516. In both cases, the abundances were determined using spherical atmosphere models.

Fig. A.2. Same as Fig. A.1, but for the Sculptor dSph galaxy. Here the calrai abundances were determined using plane-parallel atmosphere models, while the moog abundances are based on spherical models.
"year": 2012,
"sha1": "c328f1982ae7478a0f5c8e329686ed93ddcb39a0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1203.4491",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c328f1982ae7478a0f5c8e329686ed93ddcb39a0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Effect of Different Exercise Loads on Testicular Oxidative Stress and Reproductive Function in Obese Male Mice
This study aimed to investigate the effect of different exercise loads on the reproductive function of obese male mice and the underlying mechanisms. Male mice with high-fat diet-induced obesity were divided into obesity control (OC), obesity moderate-load exercise (OME), and obesity high-load exercise (OHE) groups. The OME and OHE groups performed swimming exercise 5 days per week for 8 weeks, with the exercise load progressively increased to 2 h per day in the OME group and 2 h twice per day in the OHE group. In the OC group mice, which followed no exercise regimen, we observed a decrease in the mRNA expression of antioxidant enzymes, an increase in free radical products, upregulation of the mRNA and protein expression of nuclear factor-κB and proinflammatory cytokines, inhibition of the mRNA and protein expression of testosterone synthases, a decrease in the serum testosterone level and sperm quality, and an increase in sperm apoptosis. Although both moderate-load and high-load exercise reduced body fat, only moderate-load exercise effectively alleviated obesity-induced oxidative stress, downregulated the expression of nuclear factor-κB and proinflammatory cytokines, and reversed the decreases in the mRNA and protein expression of testosterone synthases, serum testosterone level, and sperm quality. These changes were not observed in the OHE group. Obesity-induced testicular oxidative stress and inflammatory responses decreased testosterone synthesis and sperm quality. Moderate-load exercise alleviated the negative effect of obesity on male reproductive function by decreasing testicular oxidative stress and inflammatory responses. Although high-load exercise effectively reduced body fat, its effects on alleviating oxidative stress and improving male reproductive function were limited.
Introduction
Over the last four decades, the number of people with obesity worldwide has increased rapidly, from 105 million in 1975 to 641 million in 2014 [1]. In addition, infertility rates have increased in parallel with obesity rates [2,3]. In some countries with a high incidence of obesity, monitoring of total sperm count and sperm motility in males has indicated an annual decrease of 1.5% [4,5]. Increasing evidence suggests that obesity damages reproductive health in males and causes late-onset male hypogonadism [6,7], which is characterized by low serum testosterone levels and related symptoms (poor libido, erectile dysfunction, diminished sperm quality parameters, and reproductive dysfunction) [8-10]. The mechanisms through which obesity affects male reproductive function are complex. Previous reports indicate that oxidative stress [11] and inflammatory responses [12-14] are associated with impaired function of Leydig cells. Furthermore, according to human [15] and animal [16] studies, when oxidative stress and inflammatory responses in the semen of obese males are increased, sperm motility is reduced, morphological defects are increased, and DNA damage and the apoptotic rate of germ cells are increased [17]. However, it remains unclear whether there is a correlation between obesity, oxidative stress, and the inflammatory response.
The effects of exercise on weight loss and body fat reduction are well known, and exercise load is positively correlated with body fat reduction; however, reports on the effects of exercise-mediated body fat reduction on male reproductive function are inconsistent [18,19]. We previously reported that 8 weeks of moderate- or high-load exercise effectively reduced body fat, but the negative effects of obesity on male reproductive function were alleviated only by moderate-load exercise and not by high-load exercise [20]. Studies have shown that exercise load is closely related to oxidative stress [21]. Low-load exercise does not cause oxidative stress injury. Moderate-load exercise increases free radicals in association with increased oxygen intake; as a positive adaptive response, it also stimulates the expression and activity of antioxidant enzymes and enhances the body's antioxidant capacity [22]. However, owing to the greatly increased oxygen consumption during heavy-load exercise, a large number of free radicals is produced through various mechanisms. Their excessive accumulation exceeds the body's capacity to resist oxidative stress, attacks biological macromolecules and membrane structures, and causes oxidative damage that may be linked to exercise-related hypoandrogenemia and diminished sperm quality [23]. Therefore, we hypothesized that the inconsistent effects of different exercise loads are related to oxidative stress and the inflammatory response. This study provides an experimental basis for determining the mechanisms by which exercise and obesity affect male reproductive function, as well as a theoretical basis for developing effective prevention methods.
Materials and Methods
2.1. Animals. Fifty male C57BL/6L mice (age, 4 weeks; weight, 16-19 g) were purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd. (Beijing, China) under permit number SCXK (Beijing) 2016-0006. All mice were housed under controlled experimental conditions (22 ± 5°C, 50 ± 10% relative humidity, and 12 h light/12 h dark cycle) and provided with food and water ad libitum. Each cage contained no more than five mice. All animal experiments in this study were carried out in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and approved by the Animal Ethics Committee of Shenyang Sport University.
Obese Mouse Model
Ten mice were randomly selected to form the normal control (NC) group and were provided a normal diet (ND), while the remaining 40 mice were fed a high-fat diet (HFD). The ND and HFD were formulated according to previously reported nutrient formulations [16], and the feeds were supplied by Jianmin Company Ltd. (Shenyang, China). After 10 weeks of feeding, six obesity-resistant mice were removed from the HFD group; the remaining mice had reached body weights of more than 120% of the mean body weight of the NC group, thereby satisfying the criteria for an obese animal model [16]. The qualifying mice were stratified by body weight and randomly assigned to the following three groups: obesity control (OC), 10 mice; obesity moderate-load exercise (OME), 12 mice; and obesity high-load exercise (OHE), 12 mice. Differences in body weight among the three groups were not significant (P > 0.05).
Exercise Intervention.
The mice in the OME and OHE groups underwent 8 weeks of exercise intervention, consisting of free swimming without interference in a plastic pool (diameter 45 cm, water depth 60 cm, water temperature 32 ± 1°C). A previously described exercise program [16] was adopted, consisting of 2 days of acclimatization training followed by 8 weeks of formal swimming training. The exercise load was progressively increased during the training period, with an initial duration of 20 min once per day in the OME group and 20 min twice per day (with a 6 h interval between sessions) in the OHE group. During weeks 1 and 2, the training time was increased in increments of 10 min until reaching 120 min per day in the OME group and 120 min twice per day in the OHE group at the end of week 2. These exercise loads were maintained for the subsequent 6 weeks of training.
2.4. Sample Collection. To observe the adaptive responses of the mice to long-term exercise, samples were collected 36-40 h after the last exercise session in the OME and OHE groups. The mice in both groups were fasted for 12 h before sample collection to eliminate the effects of exercise-induced stress responses and diet on the various indicators. Each mouse was weighed and subsequently anesthetized by intraperitoneal injection of pentobarbital (50 mg/kg body weight; Sinopharm Chemical Reagent Co., Ltd., Shanghai, China). Blood samples were collected from the orbital venous plexus and centrifuged for 20 min (4°C, 900 g) to separate the serum, which was stored at -80°C until serum testosterone testing. Simultaneously, the testis and epididymis were rapidly separated, and the sperm count, motility, and apoptosis rate in the epididymis were measured and assessed [16]. The separated testis was immersed in liquid nitrogen for rapid freezing and stored at -80°C until further use. The peritesticular, perirenal, and mesenteric fat was separated and weighed on an electronic balance to determine the abdominal fat content of each mouse.
2.5. Cauda Epididymal Sperm Count and Motility Measurements. The epididymis was removed from one side of each mouse and placed in 1.0 mL of HEPES buffer. The epididymis was then cut at the junction between the corpus and the cauda, and the cauda was placed in a well containing 1.0 mL of HEPES buffer. The cauda was cut into several segments with a pair of scissors and gently pressed to release the sperm, which were allowed to mix with the buffer. The number of sperm per microliter was recorded using a hemocytometer (15 μL per side). Sperm count and motility were assessed in accordance with the World Health Organization guidelines (≥200 sperm were counted per sample). Sperm motility was assessed blindly under a light microscope by classifying 200 sperm per animal as progressive motile, nonprogressive motile, or immotile. Motility was expressed as the percentage of total motile sperm (progressive motile plus nonprogressive motile) [16].
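Both measurements reduce to simple arithmetic; a minimal Python sketch is given below. The per-square chamber volume (0.1 μL, standard improved-Neubauer geometry) and the dilution factor are illustrative assumptions rather than values reported in this protocol.

# Sketch only: sperm concentration and WHO-style total motility.
def sperm_per_ul(cells_counted, squares_counted, vol_per_square_ul=0.1, dilution=1.0):
    # Concentration = cells counted / volume examined, scaled by any dilution.
    volume_examined_ul = squares_counted * vol_per_square_ul
    return cells_counted / volume_examined_ul * dilution

def total_motility_percent(progressive, nonprogressive, immotile):
    # Total motile = progressive + nonprogressive, as a percentage of all sperm scored.
    total = progressive + nonprogressive + immotile
    return 100.0 * (progressive + nonprogressive) / total

print(sperm_per_ul(cells_counted=250, squares_counted=5))  # 500.0 sperm per uL
print(total_motility_percent(90, 30, 80))                   # 60.0 (% motile of 200 sperm)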
2.6. Hormone Measurement. Serum total testosterone concentrations were measured using a commercial enzyme-linked immunosorbent assay (ELISA) kit (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's protocol, and absorbance was measured using a Multiskan GO 1510 microplate reader (Thermo Fisher Scientific, Waltham, MA, USA). The detection range of the testosterone kit was 0.75-24 ng/mL. The intra-assay coefficient of variation (CV) was less than 10%, and the interassay CV was less than 15%. All measurements were conducted in the Key Laboratory of Exercise Science of Shenyang Sport University.
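For reference, the quoted coefficients of variation are simple ratio statistics; the replicate values in the sketch below are hypothetical.

import statistics

def cv_percent(replicates):
    # Coefficient of variation: sample standard deviation as a percentage of the mean.
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

print(cv_percent([4.8, 5.1, 5.0, 4.7]))  # ~3.7%, well under the 10% intra-assay limit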
2.7. Measurement of Sperm Apoptosis. After membrane removal, the other epididymis of each mouse was cut into pieces and incubated in saline at 37°C for 10 min to dissociate the sperm. The sample was filtered and centrifuged at 400 g for 5 min, and the supernatant was discarded to collect the cells. Phosphate-buffered saline was added to form a sperm suspension, and 5 μL of Annexin V-fluorescein isothiocyanate (FITC) and 5 μL of propidium iodide (PI) were added. The suspension was then mixed gently and incubated at 20°C in the dark for 10 min. Measurements were performed using a flow cytometer (CytoFLEX; Beckman Coulter, Brea, CA, USA) within 1 h, with a minimum of 10,000 spermatozoa examined per measurement. Forward scatter/side scatter gating was used to eliminate interference from cell debris and cell aggregates. After the spermatozoa and cell debris were sorted using scatter signals, live, apoptotic, and dead cells were distinguished on a bivariate fluorescence scatter plot. The excitation wavelength was 488 nm; green fluorescence (480-530 nm) was detected in the FL1 channel, and red fluorescence (580-630 nm) in the FL2 channel. The positive cell rate and mean fluorescence intensity were analyzed using CellQuest software (BD Biosciences, Franklin Lakes, NJ, USA). The percentage of early apoptotic sperm relative to the total sperm count was calculated for each group.
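Scoring an event as early apoptotic on the Annexin V/PI bivariate plot amounts to quadrant gating on the two fluorescence channels. The sketch below illustrates that logic only; the gate thresholds and simulated intensities are hypothetical placeholders, whereas in practice the gates are set from unstained and single-stained controls.

import numpy as np

ANNEXIN_GATE = 1e3  # FL1, green fluorescence (480-530 nm); hypothetical threshold
PI_GATE = 8e2       # FL2, red fluorescence (580-630 nm); hypothetical threshold

def percent_early_apoptotic(fl1, fl2):
    fl1, fl2 = np.asarray(fl1, float), np.asarray(fl2, float)
    annexin_pos = fl1 > ANNEXIN_GATE
    pi_pos = fl2 > PI_GATE
    # Annexin V+/PI- events are scored as early apoptotic.
    early = annexin_pos & ~pi_pos
    return 100.0 * early.sum() / fl1.size

rng = np.random.default_rng(1)
fl1 = rng.lognormal(6.5, 0.8, 10_000)  # simulated FL1 intensities
fl2 = rng.lognormal(6.0, 0.8, 10_000)  # simulated FL2 intensities
print(percent_early_apoptotic(fl1, fl2))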
Isolation of RNA and Real-Time PCR Analysis. An RNA extraction reagent (Vazyme Biotech Co., Ltd., Nanjing, China) was used to extract total RNA from the testis of each mouse according to the manufacturer's instructions. Subsequently, a reverse transcription kit (Promega, Madison, WI, USA) was used to reverse transcribe 1 μg of total RNA to cDNA in a 96-well thermal cycler (Applied Biosystems, Foster City, CA, USA). The target mRNA content was measured in a real-time PCR amplification system (Applied Biosystems) using a real-time PCR amplification kit (Promega) in accordance with the manufacturer's instructions. All primers were designed and synthesized by Sangon Biotech (Shanghai) Co., Ltd. (Shanghai, China); the primer sequences for the target genes are shown in Table 1. Each sample was amplified in triplicate, GAPDH was used as the housekeeping gene, and the 2^-ΔΔCt method was used to calculate relative expression levels.
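The 2^-ΔΔCt (Livak) calculation itself is compact enough to state directly; a minimal sketch follows, with purely illustrative Ct values.

def livak_relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # dCt normalizes the target gene to the housekeeping gene (GAPDH here);
    # ddCt then normalizes to the control group; relative expression = 2^-ddCt.
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-dd_ct)

# Illustrative Ct values only: a ddCt of -1 corresponds to a doubling of expression.
print(livak_relative_expression(24.0, 18.0, 25.0, 18.0))  # 2.0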
2.10. Western Blotting. After the separated testis from each mouse was weighed, RIPA lysis buffer and phenylmethylsulfonyl fluoride (a protease inhibitor) were added, and the testis tissue was cut into pieces and homogenized in an ice bath. The homogenate was centrifuged, and the supernatant was removed for protein quantification in a microplate reader (Thermo Fisher Scientific) using a BCA protein assay kit (Beijing Dingguo Changsheng Biotechnology Co., Ltd., Beijing, China). The target proteins were separated by sodium dodecyl sulfate gel electrophoresis; 30-50 μL of protein lysate was loaded into each well. The separated target proteins and the internal control β-actin were transferred onto a nitrocellulose membrane and blocked for 1 h in 5% nonfat dry milk blocking buffer. After addition of the primary antibody (rabbit anti-mouse; ABclonal, Wuhan, China), the nitrocellulose membrane was incubated overnight (12 h) at 4°C. The target proteins included NF-κB (A2547; ABclonal), tumor necrosis factor-α (TNF-α; 11948; Cell Signaling Technology, Danvers, MA, USA), IL-1β (12426; Cell Signaling Technology), IL-10 (5261; Cell Signaling Technology), SF-1 (10976; Santa Cruz Biotechnology, Dallas, TX, USA), StAR (58013; Abcam, Cambridge, UK), P450scc (175408; Abcam), and β-actin (sc-1496; Santa Cruz Biotechnology). Subsequently, the membrane was incubated at 20°C for 1 h with a fluorescent dye-labeled secondary antibody (IRDye 800CW-labeled goat anti-rabbit; LI-COR, Lincoln, NE, USA) at 1:15,000 dilution. Finally, the nitrocellulose membrane was scanned in an Odyssey infrared imaging system (LI-COR), and the protein bands were quantified using the Image Studio software provided with the system. The final result was reported as the ratio of target protein content to β-actin content [16].
2.11. Statistical Analysis. Data are expressed as mean ± SE. Multiple-group comparisons were performed by one-way analysis of variance (ANOVA) followed by Student-Newman-Keuls post hoc tests. Results were considered significant at P values of <0.05. Analyses were performed using SPSS 18.0 software (SPSS Inc., Chicago, IL, USA).
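As a rough Python equivalent of this analysis, the sketch below runs a one-way ANOVA with scipy and pairwise post hoc comparisons with statsmodels. The data are random placeholders, and Tukey's HSD stands in for Student-Newman-Keuls, which, to our knowledge, the common Python statistics libraries do not implement.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Placeholder data standing in for one outcome measure across the four groups.
groups = {"NC": rng.normal(10, 2, 10), "OC": rng.normal(6, 2, 10),
          "OME": rng.normal(9, 2, 12), "OHE": rng.normal(7, 2, 12)}

f_stat, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # all pairwise group contrasts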
Results
Effect of HFD and Exercise on Body Weight and Abdominal Fat Content. After 18 weeks of high-fat diet feeding, the body weight (Figure 1(a)), abdominal fat content (Figure 1(b)), and lipid ratio (Figures 1(b) and 1(c)) of the OC group were significantly higher than those of the NC group.
After 8 weeks of exercise intervention, the body weight (Figure 1(a)), abdominal fat content (Figure 1(b)), and lipid ratio of the OME and OHE groups were significantly lower than those of the OC group; the decrease in the OHE group was greater than that in the OME group (Figures 1(a)-1(c)).
Effects of Obesity and Exercise on Testosterone Level and Sperm Quality. Compared with those in the NC group, the OC group had a significantly decreased serum testosterone level (Figure 2).
After 8 weeks of moderate-load exercise, the mRNA expression of SOD and GSH-Px in the OME group showed a significant recovery (Figures 5(a)-5(c)). In the OHE group, there were no significant changes in the three antioxidant enzymes (Figures 5(a)-5(c)); there was a significant difference in the mRNA expression of SOD and GSH-Px between the OHE and OME groups (Figures 5(a)-5(c)).
Figure 4: Effect of high-fat diet and exercise on the testicular oxidative stress products. MDA: malondialdehyde; H2O2: hydrogen peroxide; NOS: nitric oxide synthase; NO: nitric oxide. Data are mean ± SE; NC: normal control; OC: obesity control; OME: obesity moderate exercise; OHE: obesity high exercise. vs. NC: *P < 0.05, **P < 0.01; vs. OC: #P < 0.05, ##P < 0.01; vs. OME: △P < 0.05, △△P < 0.01.
In the OME group, the mRNA and protein expression of IL-10 increased significantly (Figures 6(g)-6(h)). In the OHE group, there were no significant changes in the mRNA and protein expression of NF-κB, TNF-α, IL-1, and IL-10 (Figures 6(a)-6(h)); however, there were significant differences between the expression levels in the OME and OHE groups (Figures 6(a)-6(h)).
Effect of Obesity and Exercise on mRNA and Protein Levels of SF-1, StAR, and P450scc in the Testicular Tissue. Figure 7 shows that the mRNA and protein levels of SF-1, StAR, and P450scc in the OC group were significantly lower than those in the NC group (Figures 7(a)-7(f)). The mRNA and protein levels of SF-1, StAR, and P450scc increased significantly in the OME group (Figures 7(a)-7(f)), while those in the OHE group did not change significantly (Figures 7(a)-7(f)). There were significant differences in the mRNA expression of SF-1, StAR, and P450scc and the protein levels of SF-1 and P450scc between the OME and OHE groups (Figures 7(a)-7(f)).
Discussion
To understand the mechanisms by which exercise affects reproductive function in men with obesity, we conducted a series of experiments. Our previous studies indicated that long-term moderate- or high-load exercise could both effectively reduce body fat and alleviate leptin resistance; interestingly, only moderate-load exercise alleviated the negative effects of obesity on male reproductive function [20]. We hypothesized that this phenomenon might be related to oxidative stress and the inflammatory response, as supported by our results, which are as follows: male obesity disrupted the balance between oxidation and antioxidation in the testicular tissue, induced oxidative stress, upregulated NF-κB, and triggered the inflammatory response, which reduced testosterone biosynthesis and sperm quality, thereby negatively affecting male reproductive function. Moderate-load exercise effectively alleviated the high oxidative stress induced by obesity, downregulated the expression of NF-κB and proinflammatory cytokines, and improved testosterone biosynthesis and sperm quality. However, high-load exercise did not alleviate the obesity-induced oxidative stress and inflammatory response in the testicular tissue and did not significantly improve the reduced male reproductive function. The oxidative stress-inflammatory response triggered by high-load exercise may therefore have offset the inhibitory effect of body fat reduction on oxidative stress. Thus, it is speculated that different exercise regimens have different effects on obesity-impaired male reproductive function via the inhibition or stimulation of the oxidative stress-inflammatory response.
In addition to being one of the main factors affecting male infertility [24,25], oxidative stress is closely related to obesity and exercise [26,27]. Studies have shown that oxidative stress markers are positively correlated with body mass index and body fat percentage [28]. Increased oxidative stress induced by excessive fat accumulation is an early promoter and key pathogenic mechanism of obesity-related metabolic syndrome [29]. Obesity can trigger systemic oxidative stress [30], including testicular and sperm oxidative stress, thereby reducing testosterone synthesis, spermatogenesis, and sperm quality [15,26]. This study had similar results: in obese male mice, serum testosterone levels were reduced; sperm quality parameters were decreased; sperm apoptosis was increased; NO, NOS, H2O2, and MDA levels in the testicular tissue were significantly increased; T-AOC concentration was decreased; catalase and GSH activities and mRNA expression were decreased; and SOD mRNA expression was significantly decreased, although SOD activity did not change significantly. The mechanisms through which obesity induces oxidative stress in testicular tissue remain unclear. A previous study showed that elevated levels of glucose and free fatty acids led to an increase in mitochondrial ROS. Obesity induces excessive accumulation of lipids in adipocytes, which increases the substrate load in the mitochondria, promotes the expression of NADPH oxidase subunits, and leads to increased ROS, reduced SOD and GSH-Px activities, and increased oxidative stress in the mitochondria [31]. In obese mice, immunohistochemical results revealed increases in the number of Leydig cells, the number and volume of lipid droplets in these cells [10], and the level of MDA, an oxidative stress marker and lipid peroxidation product. Among the membrane structures of Leydig cells, the mitochondria and endoplasmic reticulum are rich in polyunsaturated fatty acids, which are highly prone to ROS attack, resulting in the production of large amounts of MDA [32-34]. Obesity causes excessive accumulation of lipids in the body; for instance, in the Leydig cells of obese mice, the number and volume of lipid droplets are increased [10], increasing the substrate load in the mitochondria, promoting mitochondrial ROS production, and reducing the activities of SOD and GSH-Px [31]. Free radicals, when accumulated in excess, attack the polyunsaturated fatty acids (PUFAs) in the membranes of the mitochondria and endoplasmic reticulum, producing large amounts of MDA [33,34]. The toxicity of MDA induces a decrease in the cholesterol synthesis, cholesterol transfer, and steroid synthesis capabilities of the endoplasmic reticulum, ultimately resulting in reduced testosterone synthesis and spermatogenesis [10,35]. This association is corroborated by our finding that the mRNA and protein levels of SF-1, StAR, and P450scc in the testis tissue, along with the serum testosterone levels, were significantly lower in the OC group than in the NC group. Similarly, the sperm membrane surface and DNA molecules are rich in unsaturated fatty acids [35] that are also prone to ROS attack, which generates large amounts of lipid peroxidation products.
The lipid peroxidation products can harm membrane integrity, fluidity, and permeability, as well as damage DNA structures and accelerate cell apoptosis, thereby resulting in increased defective sperm counts and reduced sperm motility [17]. These negative effects influence sperm capacitation and the acrosome reaction, thereby affecting the fertilization ability of the sperm [36]. Our experiment showed that the sperm quality parameters decreased and apoptosis increased in the OC group. Therefore, obesity-induced fat accumulation in the testicular tissue triggered oxidative stress, inhibited testosterone synthesis and spermatogenesis, and reduced sperm quality, thereby negatively affecting obese male reproductive function. Obesity-induced oxidative stress causes dysregulated expression of inflammation-related adipokines in the adipose tissue [29], a process promoted by the inflammatory signal transcription factor NF-κB, which plays a key role in oxidative stress-induced dysregulation of adipokine expression and is recognized as a major mediator of oxidative stress-induced signal transduction in adipose cells [37,38]. Studies have shown that ROS can activate IκB kinase, which promotes the degradation of IκB proteins. This results in the release of NF-κB dimers that translocate into the nucleus and control the gene transcription of certain proinflammatory cytokines (IL-1β, IL-6, TNF-α, and IL-8) [39]. Addition of the antioxidant N-acetyl cysteine has been shown to impair NF-κB activation and inhibit TNF-α [40]. Moreover, other studies have reported that proinflammatory cytokines, such as IL-1β and TNF-α, can inhibit the gene expression of the testosterone synthases StAR, 3β-HSD, and P450c17 via the activation of NF-κB, resulting in decreased testosterone synthesis within Leydig cells [13,14]. Based on these findings, it was concluded that cytokines such as TNF-α and IL-1β act simultaneously as downstream targets and stimulants of NF-κB, thereby activating NF-κB and further causing a continuous and amplified inflammatory response [41,42]. In this study, we observed that obesity increased oxidative stress in the testicular tissue, which simultaneously increased the mRNA and protein levels of NF-κB, TNF-α, and IL-1; decreased the mRNA and protein levels of the anti-inflammatory cytokine IL-10; and decreased the mRNA and protein levels of the key testosterone synthases SF-1, StAR, and P450scc. Therefore, a long-term high-fat diet induced the production of large amounts of ROS in the testes, activating NF-κB, triggering the inflammatory response, and inhibiting testosterone biosynthesis [16,17], which may be one of the main mechanisms for the decreased serum testosterone levels in obese male mice.
In addition to reducing the mass of white adipose tissue (WAT), exercise training can also reduce oxidative stress in these tissues. Farias et al. [43] found that exercise training reduced the expression of the NADPH oxidase NOX2 in the WAT and increased the enzyme activity of Mn-SOD, thereby reducing oxidative damage [43,44]. However, very few studies have examined the effect of exercise on oxidative stress in the testicular tissue. In this study, the MDA, H2O2, NOS, and NO levels were significantly reduced in the testicular tissues of obese mice after 8 weeks of moderate-load exercise intervention, whereas the T-AOC levels and the activities and mRNA expression of SOD, GSH, and catalase were significantly increased. These results are consistent with the increased testosterone levels and improved sperm quality achieved by moderate-load exercise. However, these effects were not observed after high-load exercise; this may be related to the exercise load, which is closely associated with oxidative stress. It has been established that the oxygen demand increases during exercise and that oxygen consumption in the skeletal muscles can increase by more than 100-fold compared with sedentary conditions; under exercise conditions, free radical levels also increase [21]. On the other hand, the increase in free radicals can stimulate increased antioxidant enzyme activities, thereby preventing cell damage caused by excessive free radical production [22]. The effect of this positive adaptive response on male reproductive function is typically manifested as a significant increase in the serum testosterone level and in the quality, count, and DNA integrity of the sperm [45]. However, excessive exercise load leads to the production of large amounts of free radicals that exceed the body's antioxidative capacity; this excess of free radicals can damage male reproductive function. In addition, studies have shown that the testicular tissues of male rats subjected to strenuous exercise exhibited increased oxidative stress levels, decreased antioxidant enzyme activities, decreased levels of key steroidogenic enzymes, and decreased testosterone synthesis and spermatogenesis, indicating a correlation between strenuous exercise-induced oxidative stress and reproductive dysfunction [23,46]. In this study, oxidative stress, testosterone synthase expression, serum testosterone levels, sperm quality, and the sperm apoptosis rate in the OHE group were not effectively improved compared with those in the OC group. Thus, oxidative stress induced by high-load exercise may have offset the protective effects of fat reduction against oxidative stress. Nevertheless, the molecular mechanisms underlying this hypothesis are not clear. To find further evidence for this hypothesis, we measured the expression of cytokines related to the inflammatory response.
Interestingly, the expression analysis of testicular tissue in obese mice revealed that moderate-load exercise was associated with decreased mRNA and protein levels of NF-κB, IL-1β, and TNF-α, along with increased mRNA and protein levels of the anti-inflammatory cytokine IL-10. These observations are consistent with the changes in oxidative stress markers, the mRNA and protein expression of the key testosterone synthesis enzymes (SF-1, StAR, and P450scc), the serum testosterone level, and the sperm quality parameters. Our findings in the OME group were similar to the results described by Zhao et al. [45], who demonstrated that early-life or lifelong appropriate exercise effectively alleviated age-induced oxidative damage in the testes, downregulated the expression of the proinflammatory cytokines IL-1β and TNF-α and the inflammatory signaling pathway component NF-κB, and increased the levels of the anti-inflammatory cytokine IL-10, enhancing testosterone biosynthesis, serum testosterone levels, and sperm quality parameters. In addition, previous in vitro studies have established that TNF-α and IL-1β can inhibit testosterone synthesis in rats by inhibiting the mRNA and protein expression of P450scc [47], 17α-hydroxylase/17,20-lyase (P450c17), and 3β-hydroxysteroid dehydrogenase in rat Leydig cells [48]. Conversely, IL-10 can inhibit the synthesis of the proinflammatory cytokines TNF-α, IL-1α, and IL-1β [45]. Based on these previous findings and the outcomes of this study, it is suggested that long-term moderate-load exercise can inhibit the expression of proinflammatory cytokines by reducing oxidative stress while simultaneously promoting testosterone synthesis by enhancing anti-inflammatory cytokine expression.
In this study, we found that long-term high-load exercise did not significantly improve the mRNA and protein expression of NF-κB and proinflammatory cytokines in the testicular tissues of obese mice, which was consistent with our data on oxidative stress, the testosterone synthesis factors SF-1, StAR, and P450scc, and sperm quality. However, another study showed that high-load exercise training increased the levels of IL-1β, IL-6, IL-8, and TNF-α in the seminal plasma [49]. An earlier study in adult males also showed that, after lipopolysaccharide stimulation, long-term high-load exercise training reduced the numbers of blood monocytes, neutrophils, and dendritic cells, along with decreases in the synthesis of IL-1β, IL-6, TNF-α, and macrophage inflammatory protein-1β [50]. The discrepancies between our results and those of previous studies may be related to differences in the study subjects. Because of ectopic lipid deposition, the testicular tissues of obese male mice are in a state of high oxidative stress and high inflammatory response [10]. Although high-load exercise can reduce whole-body and ectopic lipid deposition, thereby reducing energy overload in the mitochondria and alleviating excessive ROS production [51], it can also increase the synthesis of free radicals [52], offsetting these positive effects in obese subjects. A limitation of this study was that the relationship between oxidative stress and inflammation in reproduction was not confirmed in vivo by injection of an antioxidant. Assessing this relationship would provide a theoretical basis for developing treatments to improve reproductive function in men with obesity. This aspect will be explored further.
Conclusions
A long-term high-fat diet induces obesity, causing excessive ectopic deposition of lipids, triggering oxidative stress in the testis tissue, possibly triggering the inflammatory response via NF-κB, and reducing testosterone biosynthesis and sperm quality. Moderate-load exercise can effectively lower body fat, alleviate obesity-induced oxidative stress in the testis tissue, downregulate the expression of NF-κB and proinflammatory cytokines, increase testosterone biosynthesis, and improve sperm quality. Although high-load exercise is more effective in reducing body fat, it has a negligible effect on reversing the high oxidative stress and inflammatory response in the testis tissue and the reduced testosterone biosynthesis and sperm quality in obese male mice. Overall, different exercise regimens may have different effects on obesity-impaired male reproductive function through the inhibition or stimulation of the oxidative stress-inflammatory response.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request. | 2020-01-30T09:15:14.851Z | 2020-01-27T00:00:00.000 | {
"year": 2020,
"sha1": "3ab338522f5d55b1825099a86afbabf0954957f1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2020/3071658",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ec63764338cdfe4978a250a7305fa7f433a15eb4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267004941 | pes2o/s2orc | v3-fos-license | Boundary Spanning in Local Governance: A Scoping Review
Complex societal challenges require collaboration between organizations, often with conflicting priorities and ways of working. Connecting organizations has come to be referred to as boundary-spanning. There is a need to understand the features of boundary-spanning at the local level, since policy-makers and practitioners from different sectors need not only to work together but also to relate to the recipients of their interventions. Addressing this gap, a scoping review was conducted. The review highlights the need to carve out a contextualized conceptualization of boundary-spanning that accounts for the distinctive features of this work when embedded in local community context.
Background
While societal challenges are often rooted in complex, global, and interacting factors, many of these challenges find expression in and require a response at the local level. To consider just a few: poverty, (juvenile) offending, public health crises, and terrorism all can trace causes back to complex global and systemic issues; yet, a significant feature of responding involves practitioners and policy-makers tackling these issues in villages, neighborhoods, and cities. Given the mismatch between the complex and convoluted nature of these challenges and the fragmentation and specialization in policy and practice, most professionals tasked with addressing such challenges find themselves called upon to collaborate, often through "whole-of-government" and "whole-of-society" approaches (Christensen & Laegreid, 2007; Papademetriou & Benton, 2016). Although the need for holistic and complex responses to complex challenges can seem self-evident, it is widely acknowledged that the reality of collaborative governance is problematic.
For example, a significant proportion of offenders suffer from mental health problems; thus, prevention and rehabilitation require, at the very least, collaboration between the justice and health sectors (van Dijk et al., 2021). Similarly, preventing radicalization to extremism calls for collaboration among youth workers, police, social care, and schools: actors operating within different institutional systems and with different responsibilities, for whom the prevention of radicalization typically is not the primary objective (Stephens & Sieckelinck, 2019). The different internal logics and telos of these sectors mean that collaboration has to overcome differences in goals, practices, priorities, and language. For those operating at the frontline, such as youth workers, district nurses, and community police officers, the challenge extends not only to ensuring a smooth collaboration but also to ensuring that the fruits of that collaboration benefit the recipients of their interventions. That is to say, success cannot be judged only by how well information and experience flow across sectoral boundaries, but also by the extent to which the exchanges actually connect with and respond to the challenges faced in a local community (Turrini et al., 2009).
This scoping review complements scholarship employing concepts like "intergovernmental relations," "cross-sector collaborations," and "governance networks" (e.g., Bryson et al., 2006; Klijn & Koppenjan, 2016; Stoker, 1995). Although significant thought and attention have focused appropriately on the models, systems, and structures that can facilitate such collaboration, there is a need to take seriously that, in the end, it is people who are doing the work of crossing sectoral and disciplinary boundaries (van Meerkerk & Edelenbos, 2018; Williams, 2002). This focus on the individual is useful to accentuate several dimensions of local governance, specifically boundary-spanning between and beyond formal organizations. Policy-makers and practitioners from different sectors (e.g., justice, health, education) need not only to work together but also to relate to the people and families that are the recipients of their interventions.
Boundary-Spanners as Local "Fixers"
The concept of boundary-spanning has its roots in organizational studies and business management, addressing the spanning of boundaries within and between companies (e.g., Marrone, 2010; Schotter et al., 2017). A rich and extensive literature has developed examining the characteristics of successful boundary-spanners, the challenges of boundary-spanning, and the type of institutional context and leadership that enable boundary-spanning. More recent work has developed the concept outside of business settings, including examining its application in governance and public management (e.g., van Meerkerk & Edelenbos, 2018; Williams, 2012).
While much can be drawn from the business and organizational literature on boundary-spanning, it is clear from this more recent work that there are distinct features of boundary-spanning in the public context. For example, van Meerkerk and Edelenbos (2018) point to the likelihood of public boundary spanners having less autonomy than in the private sector, being embedded in hierarchical and political environments, and needing to deal with a variety of constituencies, often with conflicting demands.
In their work, van Meerkerk and Edelenbos (2018) suggest that the various contexts give rise to a need for different profiles of boundary-spanners: no single form meets the varying needs and social realities of differing contexts. Of particular relevance to work in bounded geographic settings such as neighborhoods is the notion of "Boundary Spanners as Fixers." They describe the characteristics of these boundary spanners as being rooted in formal institutional organizations while aiming to fit with local communities and neighborhoods; viewing their role as more than just a job; and having strong personal relationships.
Such professionals present an interesting and important category. Not only are they embedded in hierarchical and political work contexts, but they are also embedded deeply in a local context and connected to local communities. Given the central role of this category of boundary spanners in the day-to-day work of local governance, and the extent to which current challenges require "joined-up" responses at the grassroots, it is imperative to understand this role in more depth. That is to say, it is timely to develop a more comprehensive conceptualization of the particularities of boundary-spanning in local contexts. To this end, we aim to build on the work of van Meerkerk and Edelenbos (2018) by mapping the existing knowledge on what may be a nuanced set of elements to consider for boundary-spanning in local contexts. In order to do so, this scoping review addresses two questions: (a) How is boundary spanning conceptualized in relation to local governance? (b) What are the particular characteristics of boundary spanning in local governance arrangements?
First, we outline the method we adopted for this scoping review and set out the process of study selection. The results of the review are then presented, organized around the two research questions. A discussion of the results is followed by a look at the implications for future research.
Method
A scoping review is ideally suited to mapping and identifying gaps within an expansive body of literature. Scoping reviews are marked by their systematic and transparent approach, with each stage clearly documented, aiming for a replicable review process (Arksey & O'Malley, 2005). They differ from systematic reviews in that they do not distinguish studies by their quality and as such cannot assess the strength of evidence on a given topic. This allows, however, for including a broader range of literature, mapping its conceptual as well as empirical contours, and identifying gaps. The term scoping review has been used rather loosely; in order to promote greater consistency in approach, Colquhoun et al. (2014) propose a common definition for such reviews: "a form of knowledge synthesis that addresses an exploratory research question aimed at mapping key concepts, types of evidence, and gaps in research related to a defined area or field by systematically searching, selecting, and synthesizing existing knowledge." In line with this aim for a consistent approach, our scoping review adopted the process outlined by Arksey and O'Malley (2005) and further developed by Colquhoun et al. (2014). This involves (a) identifying the research question, (b) using this as a guide for identifying relevant literature, (c) following an iterative process of study selection, (d) extracting relevant variables from the papers (data charting), and (e) analyzing and reporting the results. Our scoping review was reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) statement (Tricco et al., 2018; prisma-statement.org). In this section we outline the search strategy and the approach to study selection and data charting.
Protocol and Registration
Our protocol was drafted in March 2022, reporting the rationale, research question, search strategy, and eligibility criteria. It was retrospectively registered on the Open Science Framework and can be accessed at: https://osf.io/rkqam/?view_only=bc4b218d1b7b44d4aa43f296abbacc4d.
Eligibility Criteria
A set of eligibility criteria was established in the protocol (Table 1). To be eligible for inclusion, articles had to be peer-reviewed journal articles written in English; the peer-review criterion aimed to ensure a baseline quality for the papers, while the English criterion reflected the capacity of the reviewers. No date limits were set on eligibility, as we were interested in all papers written on boundary spanning in local contexts. Further, to be eligible, articles had to address boundary-spanning in a local context, between public bodies or between public and private bodies. Articles on boundary spanning within organizations or wholly between private bodies were excluded. This reflected our aim to address boundary spanning in the context of public governance: spanning within organizations and between private organizations did not have a direct bearing on crossing organizational boundaries in the context of local public governance. In line with this reasoning, articles were excluded when they were in fields far from public governance, including business, sales, marketing, engineering, and sustainability.
The search strategy was developed with the assistance of a literature researcher (LS). Search terms included controlled terms as well as free-text terms. Synonyms for "local governance" were combined with variants of "boundary spanning." The search was performed without date or language restrictions. Duplicate articles were excluded by a medical information specialist (LS) using Endnote X20.0.1 (Clarivate™), following the Bramer method (Bramer et al., 2016). The full search strategies for all databases can be found in the Supplementary Information. Synonyms for "boundary spanning" were not included in the search string. This decision had two main drivers. First, we were interested in how the boundary-spanning concept itself was being utilized in this context; although there are close synonyms for boundary-spanning, such as "knowledge broker" and "connector," boundary-spanning as a concept has a rather clear and distinct set of features that have emerged through an extensive literature dating back to the 1970s. Second, the inclusion of "boundary-spanning" alone already yielded a significant number of hits for an extensive review.
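For illustration only (the registered, database-specific strings are in the Supplementary Information), a query of this general shape combines a block of local-governance synonyms with the single boundary-spanning phrase; the specific synonyms in the sketch below are hypothetical examples, not the registered terms.

# Hypothetical illustration of the search logic described above; the actual
# registered strings per database are given in the Supplementary Information.
local_governance_terms = ['"local governance"', '"local government"',
                          'municipal*', 'neighbo*rhood', 'city', 'urban']
query = "(" + " OR ".join(local_governance_terms) + ') AND "boundary spanning"'
print(query)
# ("local governance" OR "local government" OR municipal* OR neighbo*rhood
#  OR city OR urban) AND "boundary spanning"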
Study Selection
Consistent with the methodology of a scoping review, study selection was an iterative process. One reviewer (WS) screened all potentially relevant titles and abstracts for eligibility using Rayyan (Ouzzani et al., 2016).
Studies were excluded if they clearly were not relevant because they (i) referred to boundaries in the context of physics or other hard-science fields, (ii) referred to boundary spanning within a single organization, or (iii) referred to boundary spanning only within the context of private companies. Two reviewers (WS & RS) then independently screened the remaining 243 studies. Studies were included if they met the eligibility criteria in Table 1.
After both reviewers had blindly reviewed one hundred articles, a mid-review calibration was conducted using the comparison feature offered by Rayyan. During this calibration, 16 papers were identified as differently categorized. The two reviewers discussed these differences in judgment and identified two ambiguities in the eligibility criteria: the definition of local context and the question of boundary spanning within the context of health organizations. Regarding the first, we developed a shared definition of local context as being within the limits of a city or municipality: the defining feature was that cases fell within a layer of local governance rather than spanning cities or municipal areas. Regarding the second, we noted that a number of boundary-spanning articles addressed boundaries between different layers of the healthcare system at a local level. As we discussed these examples, however, it was clear that many were examples of intra- rather than inter-organizational boundary spanning, since the entities were lodged within an overarching body.
These discussions led to clarification of the definitions of the eligibility criteria. Thus, the criterion "local context" was expanded to include neighborhood, municipality, or city. The initial 100 articles were re-reviewed along with the remaining articles to ensure this more encompassing criterion was applied. After all articles had been through a round of inclusion/exclusion by both reviewers, discrepancies in judgment were resolved through a consensus procedure, discussing the differences until consensus was reached on all articles. Where it was ambiguous from the title and abstract whether an article met the eligibility criteria, the full text was reviewed.
Data Charting
The data from the 38 eligible studies were charted using an extraction grid created with Microsoft Excel. The grid (Table 2) consisted of 15 columns, which can be divided between general data on the study (e.g., title, year, sample size) and data on findings relevant to the research questions. For general features of the study, we collected data on the authors, year of publication, title, journal, methodology, sample size, and country of origin. For findings relevant to the research questions, we abstracted (i) the definition of boundary spanning presented in the paper, (ii) the description of the actors whose boundaries were being spanned (for example, a health organization and a provider of social welfare services, or a school and a local community), and (iii) the individuals with the role of boundary spanner; these were the people the article identified as carrying out the work of boundary spanning.
We also abstracted data on specific local issues. This variable involved a level of interpretation by the reviewers, given that the local context generally was not an explicit focus of the articles. In order to provide standards for this, we charted all references to locally specific issues as defined by (a) references to working in close physical proximity, (b) references to working within a bounded context (e.g., references to the influence of working within one municipality, one city, or one neighborhood), and (c) references to any relationship to a community or a neighborhood. The first author charted the data from the eligible studies, and the second author reviewed the data.
Synthesis and Reporting of Results
First, we summarized the general characteristics of the articles, providing a descriptive account of the research methodologies, sample sizes, and countries of origin of the studies. In order to understand how the concept of boundary spanning is currently being drawn upon in relation to collaboration in local contexts, we examined a number of features of the papers: (1) how boundary-spanning was defined, (2) what kinds of boundaries were identified as being spanned, (3) the features of the boundary-spanning role, and (4) issues specific to boundary-spanning in a local context. In practice this involved reading through each item in the data charting table and grouping those with the same or conceptually similar content. A descriptive account was then written of common elements that emerged in multiple studies, with additional notes on outliers from these common descriptions. The process could be likened to a thematic content analysis. The results are presented as a narrative account in the following section.
Source Selection
The literature search generated a total of 2,607 references: 1,131 in Scopus, 905 in Web of Science, 277 in PsycInfo, and 294 in IBSS. After removing duplicates of references that were selected from more than one database, 1,439 references remained. The initial review of the titles and abstracts led to the removal of 1,184 articles that clearly were not relevant, leaving 255 for retrieval. After searching the resources available to the reviewers, including emailing authors of papers, 12 articles could not be retrieved and thus had to be excluded from the review. This left 243 articles to be assessed for eligibility. After resolving all differences in judgment between the reviewers, 48 articles were included. During the course of data charting, all the articles were read in full. During this reading it became clear that an additional 10 articles referred to boundary spanning only in a tangential manner and did not address the role of a boundary spanner, leaving 38 papers to include in the final review. The flow chart of the search and selection process is presented in Figure 1 (Page et al., 2021; prisma-statement.org).
Study Characteristics
The majority of the studies are case studies of boundary-spanning in a specific context. In addition, there were also a literature review, a survey, and three conceptual and commentary papers. Twenty-eight of the studies used interviews, often in combination with document analysis (13) and observation (12). Seven studies drew on surveys and two on focus groups. The studies included a wide range of sample sizes, from an in-depth ethnographic study of one practitioner (Kovács, 2020), to interviews of over 200 participants (Lindsay et al., 2021), to a survey with 385 participants (McCuaig et al., 2019). The median sample size across the studies is 37.5 (excluding nonempirical papers). By virtue of the selection criteria, a defining feature of all the papers is that they deal with boundary spanning within a locally bounded geographic context: a city, neighborhood, municipality, village, or town. The majority of the studies were based in three countries: the USA (13), the Netherlands (13), and the UK (9).
Conceptualization of Boundary Spanning in Local Contexts
In this section we present the synthesis of the data abstracted from the articles on the definition of boundary spanning, the boundaries identified as being spanned, and the features of the boundary-spanning role.
Defining Boundary-Spanning. The studies varied greatly in the extent to which they paid attention to defining boundary spanning, ranging from no explicit definition to extensive discussions of definitions. Across the definitions, however, a clear core notion of boundary spanning emerged that involved bridging the gap between two or more organizations; most definitions pointed to a key role in translating between the different priorities, logics, or languages of the different organizations. This core definition is encapsulated in Miles's (1980, p. 62) widely cited definition that boundary-spanning refers to "positions that link two or more systems whose goals and expectations are at least partially conflicting." Around this common core, though, are differences over the scope and position of the role. At one end of the spectrum, the scope of the role is primarily one of facilitating the flow of information between organizations: "[they] facilitate information sharing back and forth across the organizational boundaries, and help match needs and resources" (L. K. Bradshaw, 1999, p. 39). Most go somewhat further and emphasize the role of translating between cultures; for example: "Boundary spanners (or collaborative managers) can be defined as individuals who work across different organizational cultures and exercise influence through formal and informal channels in order to strengthen the connections between actors" (Guarneros-Meza & Martin, 2016, p. 240), and "They deal with people on both sides of the boundary and specialize in negotiating the interactions between systems" (Van Hulst et al., 2012, p. 438). Some, however, highlight a more extensive and complex scope of work: "The boundary spanner has been defined as delivering a range of functions, including: providing local coordination as an 'anchor point' between collaborating agencies; linking stakeholder groups within and beyond the boundary spanner's own organization; managing tensions and conflicts between partners; building trust and shared values; demonstrating leadership in pursuing the partnership's goals; promoting innovation in policy solutions that reflect inter-disciplinary approaches; and (crucially) networking to share information and practice" (Lindsay & Dutton, 2012, p. 514).
Another key divergence between definitions concerns the positioning of boundary spanners. In some definitions, the role is clearly defined as one who is embedded within one organization and reaches out to others; for example: "Boundary spanners are individuals who act on behalf of their organization in an interorganizational interaction, by linking their unit to external areas" (Callens & Bouckaert, 2019, p. 1113). In other definitions, however, the rootedness in one organization is less apparent: "people with a foot in both worlds" (Etz et al., 2008, p. 396) and "Boundary spanners work in positions between two or more systems (e.g., the juridical system and the health system, different organizations). They deal with people on both sides of the boundary and specialize in negotiating the interactions between systems" (Van Hulst et al., 2012, p. 438).
The Boundaries Being Spanned. Across the studies, a plethora of organizations and sectors are identified as entities involved in some configuration of boundary-spanning. Here, we use the term entity to capture the range of actors involved in boundary-spanning relationships in these studies, including organizations, communities, and local governmental bodies.
These can be categorized broadly in the fields of health (11), social care/welfare (8), education (7), judicial (4), and local government (4). Other entities include emergency services (1), farming (1), and the private sector (1). In most studies, the boundaries being spanned are between two entities; however, 11 studies involved spanning boundaries among three or more entities.
Outside more clearly defined organizations, 11 studies identified "community" as an entity into which or from which boundaries were spanned. "Community" often seemed to refer to a local population; for example, "boundary spanners who broker connections between the school district and the community" (Brown, 2017, p. 369) or "reaching beyond clinic walls to create community linkages" (Etz et al., 2008, p. 391). At other times, studies more explicitly referred to communities defined by ethnic or cultural connections (Carlsson & Pijpers, 2020).
Boundary Spanning Role Holder. We also examined the nature and positioning of the boundary spanning role, looking at two dimensions (Table 3). The first relates to how the role came to be defined as "boundary-spanning." Three categories were identified: (1) individuals who are boundary-spanners because working across boundaries is inherent to their role, while not necessarily serving as the defining feature of their work (e.g., school principals); (2) roles that were specifically created to bridge between organizations; and (3) individuals who were identified during the course of the research as carrying out boundary-spanning work. Examples of the first category include school principals (L. K. Bradshaw, 1999), central office administrators (Honig, 2006), and frontline workers (Lindsay et al., 2021). The second category (created roles) includes positions such as "health brokers" (Harting et al., 2011), "care sport connectors" (Hermens et al., 2017), and "refugee-student family mentors" (Koyama & Kasper, 2021). The final category contained leaders who demonstrated specific boundary-spanning qualities in carrying out their work (Dudau et al., 2018), and emergency service personnel or managers who demonstrated boundary-spanning capabilities during the course of a crisis (Gil-Garcia et al., 2016).
The second perspective from which the boundary spanning role was considered was the positioning of the role within an organizational hierarchy. Again, three categories were identified: (1) boundary-spanners holding a leadership or management position, (2) boundary spanners working in a frontline role, and (3) boundary spanners positioned externally to the organizations.
In summary, in studies examining boundary-spanning within a local context, a clear core definition of the boundary-spanning role emerged, with a wide spectrum of what the scope of that work looks like. The entities involved in boundary-spanning cluster in the fields of health, social care/welfare, education, and, significantly, the "community." Those holding these roles are in some cases called upon to span boundaries as a result of the nature of their work more generally, while in other cases the central mandate of their role is to span boundaries.
Characteristics of Boundary Spanning in Local Settings
This section presents evidence about specific features of boundary-spanning in local contexts. Five key themes were identified in the articles: (1) the role of physical proximity, (2) the complexity of local conditions, (3) power imbalance, (4) frontline activities, and (5) the nature of relationships.
Physical Proximity. Across the papers, several key issues related to close physical proximity. First, some studies focus on individuals who, because they live in close proximity, interact outside of formal settings, which can shape boundary-spanning possibilities. Miller (2008) and Van Hulst et al. (2012) refer to the deep relationships boundary spanners develop as a result of being embedded within their communities.
In some articles, however, physical proximity was not equated with greater ease of boundary crossing. Being physically proximate does not necessarily provide a clear picture of, or relationships with, other organizations (Etz et al., 2008; Harting et al., 2011). Further, Carlsson and Pijpers (2020) point out that when thinking about spanning into a community, thinking in terms of physical proximity can be a barrier. They note that while a boundary spanner's role may be to connect to the local community of a neighborhood, the reality may be that the communities in which people interact are not geographically bounded, but rather span a wide area, bringing together those with shared culture or interests.
Complex Local Contexts. Among the challenges authors identified for boundary spanners in local settings were the complexity and uniqueness of such contexts. One expression of this complexity was identifying appropriate partners, given the plethora of organizations working on overlapping issues within a neighborhood. Harting et al. (2011, p. 66), for example, describe the challenges local health brokers faced amid myriad fragmented projects operating in one setting: "Developing the content of the role was difficult and hampered by the complexity of health issues and the local situation." In the context of education, successful boundary spanning required working with a whole neighborhood and not just specific organizations (Honig, 2006).
A number of studies identified individuals who were able to successfully cross boundaries in local contexts and pointed to their understanding of the local conditions. Nissen (2010, p. 379), examining successful youth work professionals, reported that: "Their ability to adapt to local circumstances allowed them to gain power and access across diverse groups. Failure to do this risked alienating one of these groups, impeding progress toward the vision." Similarly, Kovács (2020, p. 140) pointed to effective boundary spanners' "extensive knowledge of local conditions and 'holistic problem orientation', which allows them to prioritize the complex and interrelated neighbourhood problems." Further, in a boundary-spanning role aimed at addressing labor market inclusion, Lindsay et al. (2021, p. 932) emphasize the importance of "deep community knowledge." This seems to refer to an in-depth understanding of the various issues and concerns at play in a community as well as of its sources of strength.
This centrality of deep understanding of local conditions evidently also extends to the articulation of a rather different kind of relationship and orientation on the part of the boundary-spanner. Honig (2006) describes the effective boundary spanner as one who works with the whole neighborhood in a "servant or service capacity" (p. 365). Miller (2008) emphasizes the role of trusting and loving relationships in successful boundary-spanning leaders: ". . . not only do they know, respect and believe in their neighbours, their neighbours know, respect and believe in them" (p. 370). Van Hulst et al. (2012) highlight the value of "speaking the local language" and an "intense way of relating" (p. 442). In these particular studies, the role of boundary spanner comes across as a vocation rather than a job, marked by deep sincerity about and commitment to the local area and requiring the ability to earn respect from a wide range of constituencies. It is perhaps in light of this embedded role that a number of studies report on the emotional labor involved in such roles (McCuaig et al., 2019; Needham et al., 2017; Rugkåsa et al., 2007). Boundary spanning roles can require significant emotional labor in building and sustaining trusting relationships; in settings dealing with challenging personal situations, the emotional labor is likely to be higher (Needham et al., 2017).
The Question of Power. Where "the community" is identified as one of the entities in a boundary spanning effort, a number of studies point to the power imbalance that exists, with formal institutions having sway over financial resources and access to information (Nederhand et al., 2016, 2019). Two studies explicitly refer to the notion of "spanning downwards"; a conceptualization of boundary spanning that includes this hierarchical relationship cannot be treated as the same as boundary spanning relationships between two organizations in which the imbalance of power is less central (Roussy et al., 2020; Rugkåsa et al., 2007).
This imbalance of power is addressed in two different ways. In some studies, the role of the boundary-spanner takes on that of an advocate, championing the voice of the less powerful entity. For example, "their primary loyalties were to their community-based constituents, and they both possessed inherent desires to learn from and advocate for those who have traditionally been oppressed" (Miller, 2008, p. 362). Guarneros-Meza and Martin (2016) also explicitly frame the boundary-spanning role in terms of advocacy. Other studies, however, reported that this power imbalance was handled by having an external boundary spanner who sits outside of the power imbalance and can play a role in negotiating the boundaries and issues at play. For example, Miller (2009) describes the creation of the role of "systems advocate" who stood outside of school and homeless shelter institutions, but had enough understanding of both contexts to bridge the gap between them.
Discussion: Theorizing Boundary-Spanning beyond Formal Organizations
The local context is distinguished by the fact that the spanning of boundaries does not happen between organizations alone, but often involves spanning the boundary from organizations into communities and neighborhoods. This is rather a different prospect than can be found in the roots of boundary-spanning scholarship based on examinations of spanning boundaries within or between businesses or other formal organizations. It also occurs in a context in which there is an expectation of inter-sectoral collaborative functioning at the "frontline" in the face of "wicked" problems, such as preventing radicalization, addressing poverty, and dealing with the intersection of crime and mental health. These are challenges that not only require multi-sectoral responses, but also typically are not amenable to simple solutions. Given such circumstances, this scoping review aimed to map how boundary spanning is currently conceptualized and the evident gaps in the current literature.
It is apparent from the number of studies identified both that boundary-spanning is drawn upon as a concept to understand and frame what is happening in local settings and that it is applied to collaborations involving a host of different organizations and sectors. Notably, despite one paper's reference to the "lack of definitional clarity" around boundary spanning (Brown, 2017), a relatively solid core definition surfaced in the research examined, centered on boundary spanning as facilitating some collaboration between systems that often have different languages and priorities. The divergences center mostly on the scope of what this involves. This is not trivial, given the challenging journey other concepts have faced in moving across fields. Indeed, many social scientific concepts by their very nature are contested and the subject of ongoing definitional discussion; one needs only think of the debates surrounding key concepts such as democracy. Given this, boundary-spanning seems rather stable.
This relative stability of the boundary spanning concept enables much of the research to focus on application. It is in its broad and varied application that the divergences in the concept emerge. This suggests that perhaps more nuanced and contextualized understandings of boundary spanning are required. Indeed, this is demonstrated in van Meerkerk and Edelenbos's (2018) profiles of different kinds of boundary-spanners: the fixer, the bridger, the broker, and the innovative entrepreneur. It is notable, however, that within the literature reviewed, a generalized notion and definition of boundary-spanning is called upon, and there is not yet an evident body of scholarship that is grounded in and developing contextualized notions of boundary spanning that address the distinctive features of the work in local and community contexts. That is to say, there seems to be a space for developing a more refined conceptualization of boundary-spanning in the context of its being embedded in local community settings.
Such embeddedness, not just in organizational and political contexts but also in local and community contexts, evidently suggests a more laden role than that of spanning the boundary between a business and government or between two departments in local government. Echoing van Meerkerk and Edelenbos (2018), this review highlighted the significance of the relationships built by those working in particular local contexts. These relationships can be marked by a deep commitment to the local setting, which at least in some cases highlights the role of boundary spanner as being more than doing a job, with deeper motivational roots. Although all forms of boundary-spanning require understanding the different cultures, languages and priorities of the systems involved, a local community seems to stand rather apart from organizations. Although organizations often are heterogeneous and complex, they usually are bound together by overarching organizational priorities and purposes. There are formal statements of organizational aims and missions, and individual participants become acculturated to the language and norms of the system: indeed major streams of human resource work are aimed at helping employees understand and take on the organizational culture (Bellot, 2011).
This rarely is the case with community, and particularly communities of place. Communities of shared interest or shared interaction may be more cohesive (T.K. Bradshaw, 2008); however, much of the frontline work takes place within communities of place, connected by virtue of geographic proximity, and not necessarily much more. A community of place is likely to be quite diverse in composition, priorities, language, and expectations. It is hard to view this through the same lens as an organization. A similar distinction emerges when we consider communities and local contexts as places where everyday living is taking place. The impact of boundary spanning is directly on the lived experience of individuals, rather than on the functioning of an organization. Further, and as highlighted in the studies reviewed, the question of power comes to the fore, with rather intractable imbalances in power between organizations, particularly governmental bodies, and the communities with which they are seeking to span boundaries. If we consider again the cases of the teachers, police officers, and youth workers required to collaborate in the prevention of extremism in an urban neighborhood, the challenge before them is not only to align their own sometimes conflicting roles, responsibilities and priorities (Stephens & Sieckelinck, 2019), but also to build and maintain strong relationships with young people and families living within the neighborhood. The challenges are not insignificant: the very act of spanning the boundary between youth workers and police may work against efforts to span the boundary between youth workers and communities, if community members perceive the relations of youth workers with police as undermining trust in the confidentiality of the youth worker. The priorities of the police to ensure public safety do not necessarily easily cohere with the priority of the pedagogical perspective of the youth worker. The concerns and priorities of the local community include both the fears of some elderly residents and the discontent of some youth, the resolution of which do not necessarily seem immediately compatible.
Taken as a whole, these features highlight a distinctive and complex context in which boundary-spanning local governance is occurring. Indeed, the core definition of boundary-spanning appears to be critical, with individuals required to translate between systems with conflicting priorities and differing languages and cultures. As such, a fuller understanding of what can be learned about those individuals charged with crossing numerous and varied boundaries in myriad contexts is likely to be of value. Yet, given the existing expansive base of research on boundary-spanning generally, carving out a clearer conceptualization and literature on boundary-spanning efforts embedded in local community contexts may provide a valuable basis for those policymakers and practitioners faced with the day-to-day demanding task of collaborating amongst themselves and spanning into local communities around pressing challenges. The foundations of such a conceptualization already exist, for example, with the distinction drawn between vertical and horizontal spanning and the recognition that boundary spanners often have to cross boundaries into other organizations (horizontal) and into systems with a different power status (vertical) (Guarneros-Meza & Martin, 2016). Further developing contextualized conceptualizations of boundary-spanning, while keeping them embedded in the broader literature on boundary-spanning as a whole, offers a promising avenue for assisting those charged with navigating this complex yet essential work.
Limitations
There are a number of limitations to this study. First, only English-language studies were included, meaning that it is conceivable that important scholarship on this topic in other languages has been missed. Going further into the question of boundary-spanning in local communities would benefit from a scan of the non-English literature. Second, our literature search did not include synonyms for boundary-spanning. A number of terms such as "broker" and "connector" have connotations similar to boundary spanning. However, for the purpose of this study, we were specifically interested in uncovering how the specific term boundary-spanning is being conceptualized in relation to collaborations based in local contexts. Third, by selecting a scoping review rather than a systematic review we explicitly do not assess the quality of the evidence. This, however, allowed for the inclusion of a wider range of studies and is more suitable for mapping the conceptualization of boundary-spanning. Finally, only one reviewer (WS) screened titles and abstracts, while the standard is for two reviewers at each level of the scoping review process. Although two reviewers would have been preferable, it is worth noting that at this stage of the review, articles were excluded only when they were unambiguously not relevant because they dealt with very different concepts of boundary spanning such as those found in the natural sciences.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
A comprehensive search was performed in the bibliographic databases Scopus/Elsevier (coverage: 1823-14 March 2022), APA PsycInfo/Ebsco (coverage: 1800-present), the Web of Science Core Collection/Clarivate (coverage: 1900-present), and the International Bibliography of the Social Sciences (IBSS)/ProQuest (coverage: 1951-present) in collaboration with
Table 1. Inclusion and Exclusion Criteria.
Table 3. Conceptualisation of Boundary Spanning Role.
Koyama and Kasper (2021) point to the spontaneous links created by family members or off-duty professionals embedded within the local context who end up taking on boundary spanning roles between health care and the community: "The municipality nurses running to the hospital to fetch medicines or going to the pharmacy to pick up patients' prescriptions, which they do in their free time because they're 'nice'…" (p. 157). Embeddedness in a local context shaping the nature of boundary-spanning efforts is also identified by Koyama and Kasper (2021), who describe the importance of the informal interactions of the boundary spanner in community settings.
"year": 2024,
"sha1": "b3299f5f8cdcfb502077c6b625e61420c76f851f",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/00953997231219262",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "33d6e82282829ee5a085c0fba93406a36fb63c62",
"s2fieldsofstudy": [
"Political Science",
"Sociology"
],
"extfieldsofstudy": []
} |
Complex A is a high-velocity cloud that is traversing the Galactic halo toward the Milky Way's disk. We combine both new and archival Green Bank Telescope observations to construct a spectroscopically resolved HI 21-cm map of this entire complex at a $17.1\lesssim\log{\left({N_{\rm HI},\,1\sigma}/{\rm cm}^{-2}\right)}\lesssim17.9$ sensitivity for a ${\rm FWHM}=20~{\rm km}\,{\rm s}^{-1}$ line and $\Delta\theta=9.1\,{\rm arcmins}$ or $17\lesssim\Delta d_{\theta}\lesssim30~\rm pc$ spatial resolution. We find that Complex A has a Galactic standard of rest frame velocity gradient of $\Delta\rm v_{GSR}/\Delta L=25~{\rm km}\,{\rm s}^{-1}/{\rm kpc}$ along its length, that it is decelerating at a rate of $\langle a\rangle_{\rm GSR}=55~{\rm km}/{\rm yr}^2$, and that it will reach the Galactic plane in $\Delta t\lesssim70~{\rm Myrs}$ if it can survive the journey. We have identified numerous signatures of gas disruption. The elongated and multi-core structure of Complex A indicates that either thermodynamic instabilities or shock-cascade processes have fragmented this stream. We find Rayleigh-Taylor fingers on the low-latitude edge of this HVC; many have been pushed backward by ram-pressure stripping. On the high-latitude side of the complex, Kelvin-Helmholtz instabilities have generated two large wings that extend tangentially off Complex A. The tips of these wings curve slightly forward in the direction of motion and have an elevated H i column density, indicating that these wings are forming Rayleigh-Taylor globules at their tips and that this gas is becoming entangled with unseen vortices in the surrounding coronal gas. These observations provide new insights on the survivability of low-metallicity gas streams that are accreting onto $L_\star$ galaxies.
INTRODUCTION
The star formation in galaxies is dependent on their ability to accrete gas onto their disks. Although both the Milky Way and Andromeda are surrounded by gas (e.g., Wakker & van Woerden 1997;Wakker et al. 2003;Braun & Thilker 2004;Lehner & Howk 2011;Lehner et al. 2015), their star-formation rates appear to be in a decline (see Bland-Hawthorn & Gerhard 2016 for a review). These galaxies may even be transitioning into the "Green Valley" (Mutch et al. 2011;Davidge et al. 2012;Bland-Hawthorn & Gerhard 2016), which is the region between blue star-forming and red quiescent galaxies on a color-magnitude diagram.
The halo clouds that surround the Milky Way are typically put into two different categories that are based on their local standard of rest (LSR) velocities. Intermediate-velocity clouds (IVCs) are a slower population (30 ≲ |v_LSR| ≲ 90 km s⁻¹) that tend to lie near the Galactic disk. The high-velocity cloud (HVC) population (|v_LSR| ≳ 90 km s⁻¹) has multiple origins, including galactic-feedback processes, halo-gas condensations, nearby low-mass galaxies, and intergalactic medium filaments; therefore, many HVCs replenish the star-formation reservoir of our galaxy (see Richter 2017 for a review).
As HVCs travel through galaxy halos, they are heated and ionized by photons that are escaping from the galaxies (e.g., Milky Way: Bland-Hawthorn & Maloney 1999, 2001; Fox et al. 2005; Magellanic Clouds: Barger et al. 2013). Additionally, the hot coronal gas that surrounds them acts as a headwind that compresses their leading material and strips its outer layers through a process known as ram-pressure stripping (e.g., Putman et al. 2011; For et al. 2014). When the surrounding gas rubs against the HVC's surface, it promotes Kelvin-Helmholtz instabilities, a type of shear-driven disturbance, which can cause small cloudlets to fracture off the complex's main body (see Stone & Gardiner 2007; Bland-Hawthorn et al. 2007; Heitsch & Putman 2009). Rayleigh-Taylor instabilities, which are buoyancy-driven disturbances, further disrupt the complex because it is resting on top of less dense halo gas while situated in a galaxy's gravitational field. Combined, these processes can cause the skin of the cloud to become warmer, ionized, and more diffuse than its core. Internal temperature and density variations between these two gas phases can generate thermal instabilities, which can fragment the cloud (see Murray & Lin 2004). Fragmentation can also occur if stripped leading gas, due to ram-pressure stripping, collides with downstream material (see Bland-Hawthorn et al. 2007; Tepper-García et al. 2015). As the surface area of the HVC increases, it becomes more exposed to its environment, which will cause it to evaporate more rapidly into the surrounding coronal gas (e.g., Konz et al. 2002).
Complex A is plummeting towards the Galactic disk and could supply our galaxy with up to M_total ≈ 2 × 10⁶ M⊙ (neutral: Kunth et al. 1994; van Woerden & Wakker 2004; ionized: Barger et al. 2012) of new material (Z = 0.1 Z⊙: Kunth et al. 1994; Schwarz et al. 1995; van Woerden et al. 1999; Wakker 2001; Barger et al. 2012). Its chemical composition indicates that it either originated from a low-mass galaxy or the intergalactic medium. However, as no complementary stellar stream has been found (Belokurov et al. 2010; Newberg et al. 2010), Complex A was not likely stripped from a satellite galaxy. This complex has an elongated morphology with multiple dense cores, dubbed A0-AVI and B, along its ∆L ≈ 6.4 kpc length (∆θ ≈ 35°; Barger et al. 2012). Because this infalling cloud spans 3 ≲ z ≲ 7 kpc above the Galactic disk (Wakker et al. 1996, 2003; Ryans et al. 1997; van Woerden et al. 1999; Barger et al. 2012; Lehner et al. 2012), it probes a range of halo conditions that vary with height above the disk.

Figure 1. H i map of Complex A (see Table 1), with the eight high H i column density cores A0-AVI and core B labeled. The two purple shaded regions mark the locations of our new observations that primarily span the trailing portion of this complex (PI Barger: GBT13B-068). The region highlighted in red spans the leading portion of this gaseous stream (PI Verschuur: GBT10A-003). The yellow (PI Chynoweth: GBT09A-046) and blue (PI Martin: GBT07A-104) regions indicate observations that cover the central region of this HVC. We additionally circle the emission in our surveyed region that is associated with the M81 galaxy at (l, b) = (142.1°, 40.9°).
In this study, we investigate how HVC Complex A is affected by its environment with new and archival Green Bank Telescope (GBT) H i 21-cm observations. We describe these observations and their reduction in Sections 2 and 3. We outline our Gaussian decomposition procedure in Section 4, explore the H i morphology and kinematic structure along the length of the complex in Section 6, and discuss morphological features that are indicative of hydrodynamic instabilities occurring within different regions of Complex A. Finally, we summarize our main conclusions in Section 8.
OBSERVATIONS
Our H i 21-cm emission-line survey of Complex A spans a 600-square-degree area across the sky. This survey is composed of new and archival 100-m Robert C. Byrd Green Bank Telescope (GBT) observations that are spectroscopically resolved over the −230 ≤ v_LSR ≤ −90 km s⁻¹ velocity range and are spatially resolved at ∆θ = 9.1′, or 17 ≲ ∆d_θ ≲ 30 pc at the distance of Complex A (6.3 ≲ d ≲ 11.3 kpc: Barger et al. 2012). The upper velocity limit of this survey (v_LSR ≤ −90 km s⁻¹) is set to avoid contributions from the Milky Way's disk and to reduce the contribution from the neighboring HVC Complex C (see Figure 1). For the core A0 region, which lies nearest to the Galactic disk, we truncated this velocity limit to v_LSR ≤ −130 km s⁻¹ in an effort to avoid Milky Way contamination. It is important to note that we did not search for H i emission associated with Complex A below a Galactic latitude of b < 21.5°, as confusion with the Milky Way becomes too great. Our new observations from program GBT13B-068 (PI Barger) span more than a 175-square-degree region across the sky and survey the trailing half of Complex A (see Figure 1 and Table 1). Each individual observation had a 4 second exposure time, for 50.8 hours of integrated on-target time across all 45,695 sightlines. We centered these L-band (1.15 ≤ ν_L-band ≤ 1.73 GHz) observations on the H i 21-cm line (ν = 1420.4 MHz) and took them in the on-the-fly (OTF) spectral-line mapping mode. These observations span a bandwidth of ∆ν = 12.5 MHz, which corresponds to 16,384 channels that have a ∆v_channel = 0.0583 km s⁻¹ channel width.
The archival GBT H i 21-cm observations sample (1) cores AII and AIII over a ∼215 deg² region on the sky (PI Chynoweth: GBT09A-046), (2) core A0 over a ∼60 deg² region (PI Verschuur: GBT10A-003), and (3) cores AI, AIII, AIV, and AV over a ∼260 deg² region (PI Martin: GBT07A-104, GBT08A-083, and GBT10A-078). Table 1 summarizes the angular extent, angular and spectral resolution, and the sensitivity of these datasets. The observations from the Verschuur and Planck programs were taken with a 4 second exposure time and the ones from the Chynoweth program were taken with 5 second exposures. Together, these archival datasets stretch over a 470-square-degree region across the sky along ∼123,000 sightlines (see Figure 1).

Figure 3. Example H i 21-cm spectra along cores A0-B and a high-latitude wing that is adjacent to core AVI, which we labeled "W1". We include the Galactic coordinates of each spectrum and the reduced chi-squared of our fits at the top of the spectral figures. The light blue Gaussian emission-line profiles represent the component solutions of the Gaussian decomposition routine that is described in Section 4 and the dark blue traces the total fit. The bottom panels display the residuals of our fits (see Equation 4).

Figure 4. Vertically stacked residuals image for each of the ∼1.2 × 10⁵ sightlines explored in this study as a function of velocity, where the residuals are defined as the difference between the H i observations and the Gaussian decomposition fit. We exclude a small region of our survey, shown in Figure 6, that is contaminated by emission from the M81 galaxy system contained between 141° ≲ l ≲ 144° and 40.5° ≲ b ≲ 43°. In the bottom right-hand corner, the column density of the residuals increases where core A0 overlaps with the Milky Way; to avoid confusion with our Galactic disk, we only report on the properties of the H i emission below v_LSR ≤ −130 km s⁻¹ in this region.
DATA REDUCTION
For this project, we used a reduction and calibration procedure that is very similar to the one used by Nidever et al. (2010), with only a few small modifications. We calibrated the antenna temperature (T_A) of our frequency-switched H i 21-cm dataset by comparing the flux of our target with the flux of well-characterized objects, using the relationship

T_A(ν) = T_sys^ref(ν) [F_sig(ν) − F_ref(ν)] / F_ref(ν). (1)

Here, T_sys^ref(ν) is the system temperature that we determined from the reference spectrum, F_ref(ν) is the reference flux of a calibration target, and F_sig(ν) is the flux of the on-target signal. We performed this calibration with the GETFS program in the GBTIDL (http://gbtidl.nrao.edu/) software. We assumed that the brightness temperature (T_B) is roughly equal to the antenna temperature (i.e., T_B ≈ T_A). Our calibration objects included the standard S7 and S8 reference targets located at (l, b) = (207.00°, −1.00°) and (207.00°, −15.00°) (Williams 1973). We additionally observed the center of core AVI at (160.23°, 43.04°) during each observational run, enabling us to measure the H i emission along this sightline very accurately and to use it as a substitute flux calibration target whenever both S7 and S8 were below the horizon.
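The calibration in Equation (1) reduces to a simple per-channel operation. The actual pipeline applied it with the GETFS program in GBTIDL; the following Python/NumPy fragment is only an illustrative sketch of the same relationship, with all function and variable names ours rather than from that software.

```python
import numpy as np

def calibrate_antenna_temperature(f_sig, f_ref, t_sys_ref):
    """Equation (1): T_A(nu) = T_sys_ref(nu) * (F_sig - F_ref) / F_ref,
    evaluated channel by channel on the frequency-switched spectra."""
    f_sig, f_ref = np.asarray(f_sig, float), np.asarray(f_ref, float)
    return t_sys_ref * (f_sig - f_ref) / f_ref
```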
Once calibrated, we binned the spectra to a ∆v_bin = 0.8 km s⁻¹ velocity spacing to decrease small-scale fluctuations in the signal, further smoothed the spectra with a Gaussian kernel, and then removed the continuum level. We fit the baseline for each integration and polarization separately after masking out the Galactic emission between −100 ≤ v_LSR ≤ +100 km s⁻¹ and emission lines above 2 sigma. For the XX polarization, we fit a 5th-order polynomial to the continuum. We used the same procedure for the YY polarization with the addition of a sinusoidal component in the fit to remove a standing wave that has a period of ν ≈ 1.6 MHz in the GBT spectra (see Nidever et al. 2010). Next, we centered the baseline at T_B(v) = 0 mK by subtracting the median emission-free spectral height of all spectra in an observing session from each polarization. Finally, we averaged the two polarizations together to produce the reduced spectra. The resultant spectra have a typical root-mean-square (RMS) noise of T_B ≈ 75 mK per 0.8 km s⁻¹ channel. This corresponds to a spectral noise sensitivity of log(N_HI,1σ/cm⁻²) = 17.7 for a line with a full width at half maximum of FWHM = 20 km s⁻¹ (see Figures 2 and 4), using

N_HI,1σ = 1.823 × 10¹⁸ cm⁻² (σ_TB/K) √(FWHM ∆v_bin)/(km s⁻¹) (2)

to convert between T_B and N_HI noise sensitivity under the assumption that the spectral noise is Gaussian in nature (Wolfe 2014). Therefore, our 3-sigma detection limit is log(N_HI,3σ/cm⁻²) = 18.2 for a line with a FWHM = 20 km s⁻¹ width. We determined the N_HI of our lines from the T_B under the assumption that the emitting gas is optically thin to self-absorption:

N_HI = 1.823 × 10¹⁸ cm⁻² ∫ (T_B(v)/K) d(v/km s⁻¹). (3)

For the archival datasets, we use the already reduced and calibrated observations that were shared with us by those program leaders. The reduction and calibration procedures outlined above for our new GBT observations from the GBT13B-068 (PI Barger) program were the same procedures used to reduce the GBT10A-003 (PI Verschuur) dataset, which they outline in their study that explored the physical conditions of Complex A's core A0 (Verschuur 2013). The calibration and continuum-level removal techniques for the GBT07A-104 (PI Martin) observations are described in Boothroyd et al. (2011) and Martin et al. (2015).
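Equations (2) and (3) are likewise straightforward to apply per spectrum. The sketch below (Python rather than the IDL actually used; the names are ours) evaluates the optically thin column density of Equation (3) on a binned spectrum and the 1σ sensitivity of Equation (2) for an assumed line width, reproducing the limit quoted above.

```python
import numpy as np

A_HI = 1.823e18  # cm^-2 per K km/s; optically thin conversion constant

def n_hi(t_b, dv=0.8):
    """Equation (3): N_HI = 1.823e18 * sum(T_B) * dv (T_B in K, dv in km/s)."""
    return A_HI * np.sum(t_b) * dv

def n_hi_sensitivity(sigma_tb=0.075, fwhm=20.0, dv=0.8):
    """Equation (2): 1-sigma column density noise for a line of width FWHM,
    assuming Gaussian channel noise of sigma_tb (K) per dv-wide channel."""
    return A_HI * sigma_tb * np.sqrt(fwhm * dv)

print(np.log10(n_hi_sensitivity()))  # ~17.7, matching the quoted limit
```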
The observation and reduction procedures used on the GBT09A-046 (PI Chynoweth) dataset differ substantially from the other datasets. These observations were taken in position-switching mode instead of the frequency-switching mode that was used for all other datasets in our survey. For each observing session, Chynoweth et al. (2011) used a reference spectrum positioned at the edge of their observing grid as the reference signal for flux calibration. Through this calibration scheme, most of the Milky Way zero-velocity emission is removed, but the Complex A features remain essentially intact. The spectra were then binned to ∆v_bin = 5.2 km s⁻¹.
We combined and resampled all of the new and archival datasets, except the GBT09A-046 (PI Chynoweth) dataset, on a large, uniform grid in Galactic coordinates with ∆θ = 3.5′ spatial steps and ∆v = 0.8 km s⁻¹ velocity bins. Because the velocity sampling and angular resolution of the GBT09A-046 (PI Chynoweth) dataset differ the most from the other datasets, we only used those observations when other data were not available. In Figure 2, we show the 1σ sensitivity map of all our observations.
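As a rough illustration of this resampling step, the following sketch averages calibrated spectra into uniform ∆θ = 3.5′ pixels in (l, b). It ignores the cos b convergence of longitude pixels and any beam weighting that a production gridder would include, and all names are ours, not those of the actual pipeline.

```python
import numpy as np

def grid_spectra(l_deg, b_deg, spectra, dtheta=3.5 / 60.0):
    """Average spectra that fall in the same (l, b) pixel of size dtheta
    degrees. Returns a dict mapping pixel indices to a mean spectrum."""
    il = np.floor(np.asarray(l_deg) / dtheta).astype(int)
    ib = np.floor((np.asarray(b_deg) + 90.0) / dtheta).astype(int)
    sums, counts = {}, {}
    for i, j, spec in zip(il, ib, spectra):
        key = (i, j)
        sums[key] = sums.get(key, 0.0) + np.asarray(spec, dtype=float)
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}
```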
GAUSSIAN DECOMPOSITIONS
We determined the component structure of the H i 21-cm emission of Complex A by modeling its emission lines with Gaussian profiles, characterizing the area, center position, and width of each fitted emission line. We determined the number of Gaussians used to model the emission by minimizing the reduced chi-squared with the MPFIT IDL routine (Markwardt 2009); the MPFIT routines are available at http://purl.com/net/mpfit. We selected the fit that used the smallest number of Gaussians while achieving a reduced chi-squared within 0.25 of the best fit, to avoid arbitrarily adding more and more Gaussian profiles that are not of physical significance simply to better match the H i emission.
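Our decomposition used MPFIT in IDL; the Python sketch below (SciPy's curve_fit standing in for MPFIT, with illustrative names of our own) shows the model-selection rule: fit models with an increasing number of Gaussian components and keep the fewest components whose reduced chi-squared lies within 0.25 of the best value found.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(v, *p):
    """Sum of Gaussians; p holds (area, center, fwhm) per component."""
    model = np.zeros_like(v, dtype=float)
    for area, center, fwhm in zip(p[0::3], p[1::3], p[2::3]):
        sigma = fwhm / 2.3548
        model += (area / (sigma * np.sqrt(2.0 * np.pi))
                  * np.exp(-0.5 * ((v - center) / sigma) ** 2))
    return model

def decompose(v, t_b, sigma_tb, guesses):
    """Fit models with 1..N components; keep the fewest components whose
    reduced chi-squared is within 0.25 of the best value found."""
    fits = []
    for n in range(1, len(guesses) // 3 + 1):
        try:
            popt, _ = curve_fit(multi_gauss, v, t_b, p0=guesses[:3 * n])
        except RuntimeError:  # fit failed to converge; try the next model
            continue
        dof = len(v) - 3 * n
        chi2r = np.sum(((t_b - multi_gauss(v, *popt)) / sigma_tb) ** 2) / dof
        fits.append((n, chi2r, popt))
    best = min(chi2r for _, chi2r, _ in fits)
    n, chi2r, popt = next(f for f in sorted(fits, key=lambda f: f[0])
                          if f[1] <= best + 0.25)
    return popt.reshape(-1, 3)  # rows of (area, center, fwhm)
```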
Our initial guesses for the Gaussian fit parameters were determined by iteratively searching for peaks in each spectrum. We identified peaks as the locations with the highest N_HI(v) emission above the 1 standard deviation noise level of a smoothed spectrum. We then masked out all regions within v_peak ± 25 km s⁻¹ and repeated this search for emission lines in the unmasked region. We then fit the spectrum using lines at these velocity positions, with ∆v = 10 km s⁻¹ as the initial guess for the line width.
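A minimal sketch of this peak search follows (again Python; the boxcar smoothing kernel and the rough area estimate are our assumptions, chosen only to make the fragment self-contained).

```python
import numpy as np

def initial_guesses(v, t_b, noise, window=25.0, guess_fwhm=10.0):
    """Iteratively locate emission peaks above the 1-sigma noise of a
    smoothed spectrum, masking +/- 25 km/s around each peak found."""
    smooth = np.convolve(t_b, np.ones(5) / 5.0, mode="same")
    mask = np.ones_like(v, dtype=bool)
    guesses = []
    while np.any(mask) and smooth[mask].max() > noise:
        i = np.where(mask, smooth, -np.inf).argmax()
        area0 = smooth[i] * guess_fwhm   # rough area from the peak height
        guesses += [area0, v[i], guess_fwhm]
        mask &= np.abs(v - v[i]) > window
    return guesses
```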
We checked the quality of each fit by searching the residuals for any remaining unfitted emission lines, where the residuals are the spectral signatures remaining after the fit is subtracted from the H i emission spectrum:

residuals(v) = N_HI,obs(v) − N_HI,fit(v). (4)

We only report results for fitted line profiles that have a signal-to-noise ratio (S/N) greater than 2 and that are above the 3-sigma detection limit of our survey at log(N_HI,3σ/cm⁻²) ≥ 17.7. We defined the area of the signal in the S/N to be the area of the fitted Gaussian line profile and the area of the noise to be equivalent to the area of a rectangle that has a height equal to the standard deviation of the continuum and a width equal to the FWHM of the fitted Gaussian line profile. This step was done to remove unrealistic fits that characterized spikes in the noise and to ensure that the faint emission associated with the diffuse outer envelope of Complex A was kept.
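The resulting component filter can be written directly from these definitions; a sketch follows, where the 10^17.7 limit and the 1.823 × 10¹⁸ constant come from the text and the function name and arguments are ours.

```python
def keep_component(area, fwhm, sigma_continuum, n_limit=10**17.7):
    """Keep a fitted line only if S/N > 2 and N_HI exceeds the survey
    detection limit. The noise 'area' is sigma_continuum * FWHM; the
    Gaussian area is in K km/s."""
    s_to_n = area / (sigma_continuum * fwhm)
    n_hi = 1.823e18 * area  # optically thin conversion, Equation (3)
    return (s_to_n > 2.0) and (n_hi >= n_limit)
```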
In Figure 3, we illustrate representative Gaussian decompositions of three sightlines along cores AIII, AVI, and B and the corresponding residuals of those fits. For each of these sightlines, there is a brighter component that is associated with the H i core. Additionally, we often find fainter and wider components, which trace the diffuse gas in the outer envelope of this complex. The residuals of these fits are displayed in Figure 4 and are typically log(residuals(v)/[cm⁻² km s⁻¹]) < 17.5 per velocity bin.
COMPLEX A COORDINATE SYSTEM
Because Complex A has a long, filamentary structure, it is useful to have a coordinate system whose equator lies along the great circle of this extended H i structure. We define a "Complex A" coordinate system with a pole at (l, b) = (202°, −40°) and the origin of the longitude axis (the ascending node) defined such that the center of the A0 core at (l, b) = (133.9°, +25.1°) corresponds to (l_CA, b_CA) = (0°, 0°). As in the Magellanic (Wakker 2001) and Magellanic Stream coordinate systems (Nidever et al. 2008), l_CA decreases along Complex A towards higher Galactic latitudes. Figure 7 shows the column density of Complex A and Figure 8 shows the position-velocity diagram in this new coordinate system.

Figure 9. Median H i line widths for each of the major core regions and the two high-latitude wings as a function of Complex A longitude. These line widths were determined by decomposing the H i spectra into Gaussian components (see Section 4). We additionally mark the gas temperature that these widths would correspond to in a pure thermal broadening scenario, where the 8,000 ≲ T ≲ 12,000 K band is where Hα emission peaks.
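For reference, the transformation into this "Complex A" frame is a standard spherical rotation: rotate so that the (l, b) = (202°, −40°) pole becomes the new +z axis and zero-point the longitude at the A0 core center. The sketch below implements this with NumPy; the sign convention of l_CA (which should decrease toward higher Galactic latitudes) is an assumption that may need flipping to match the figures.

```python
import numpy as np

POLE_L, POLE_B = np.radians(202.0), np.radians(-40.0)
A0_L, A0_B = np.radians(133.9), np.radians(25.1)

def _rotation():
    """Rotation taking the pole (l, b) = (202, -40) deg to the +z axis."""
    cl, sl = np.cos(POLE_L), np.sin(POLE_L)
    rz = np.array([[cl, sl, 0.0], [-sl, cl, 0.0], [0.0, 0.0, 1.0]])
    th = POLE_B - np.pi / 2.0
    ct, st = np.cos(th), np.sin(th)
    ry = np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])
    return ry @ rz

_R = _rotation()

def _xyz(l, b):
    return np.array([np.cos(b) * np.cos(l), np.cos(b) * np.sin(l), np.sin(b)])

def to_complex_a(l_deg, b_deg):
    """Galactic (l, b) -> (l_CA, b_CA), zero-pointed at the A0 core."""
    x, y, z = _R @ _xyz(np.radians(l_deg), np.radians(b_deg))
    x0, y0, _ = _R @ _xyz(A0_L, A0_B)
    l_ca = np.degrees(np.arctan2(y, x)) - np.degrees(np.arctan2(y0, x0))
    return ((l_ca + 180.0) % 360.0) - 180.0, np.degrees(np.arcsin(z))
```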
6. NEUTRAL GAS MORPHOLOGY AND KINEMATICS
Global Properties
Complex A is an elongated stream with multiple dense H i cores along its length (see Figures 6 and 7). These cores tend to be more compressed on the lower Galactic latitude and longitude side (or higher l_CA side) and more diffuse and elongated on the opposite side, indicating that core A0 represents the leading end of this stream and core AVI represents the trailing end (see more discussion below). This stream is wider at its trailing end (see Figures 6 and 7). Because the leading edge of Complex A is much closer (d ≈ 6 kpc) than the trailing gas (d ≈ 10 kpc; see Barger et al. 2012), the wider angular extent of cores AVI and B corresponds to a much larger physical width, at ∆θ_core A0 ≈ 0.10 kpc/degree vs. ∆θ_core AVI ≈ 0.17 kpc/degree. However, the relatively inline arrangement of the A0-AVI cores suggests that they are part of the main body of Complex A and that core B represents material that fractured off this gas stream.
There is a relatively coherent Galactic standard of rest (GSR) velocity gradient along the length of Complex A, where its leading gas is traveling slower relative to the Milky Way than its trailing gas (see the lower right-hand panel in Figure 6); this indicates that Complex A is decelerating. The GSR velocity gradient along its ∆θ ≈ 33° (or ∆L = 5.7 kpc) body relative to its leading edge is ∆v_GSR/∆θ = 4.2 km s⁻¹/degree (or ∆v_GSR/∆L = 25 km s⁻¹/kpc). Assuming an average velocity of v_GSR ≈ −70 km s⁻¹, it has taken Complex A ∆t = 80 Myrs to travel the length of its body, corresponding to a deceleration of a_GSR = 55 km/yr². At a constant acceleration, this complex will cross the Galactic plane at b = 0° in ∆t ≲ 70 Myrs; this is an upper limit, as this time should decrease due to the increasing gravitational pull as Complex A approaches the Milky Way's disk.
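These bulk-motion numbers follow from unit conversions alone, as the following back-of-the-envelope sketch (all input values taken from the text) verifies.

```python
# Order-of-magnitude check of the quoted bulk-motion numbers.
KM_PER_KPC = 3.086e16
SEC_PER_MYR = 3.156e13

length_kpc = 5.7                    # projected length of Complex A
v_mean = 70.0                       # |<v_GSR>|, km/s
dv = 25.0 * length_kpc              # km/s change along the stream

t_travel_s = length_kpc * KM_PER_KPC / v_mean
print(t_travel_s / SEC_PER_MYR)     # ~80 Myr to travel its own length

a_km_s2 = dv / t_travel_s           # deceleration in km/s^2
a_km_yr2 = a_km_s2 * (3.156e7)**2   # -> km/yr^2 (velocity in km/yr, per yr)
print(a_km_yr2)                     # ~55 km/yr^2, as quoted
```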
We also find that there is a gradual increase in the width of the H i line toward the trailing end of Complex A. In Figure 9, we have plotted the median line widths for each of the major core regions and the two high-latitude wings as a function of Complex A longitude. In the core A0 region, the median FWHM line width is roughly FWHM ≈ 19 km s⁻¹ at l_CA ≈ 0°, but grows to FWHM ≈ 25 km s⁻¹ at l_CA ≈ −28° for cores AVI and B. We have included the histogram distributions of the line widths for all fitted components in each of these regions in Figure 10. In general, the histogram distributions for each core region are relatively well behaved, with an easy-to-identify peak in the number of components at a particular line width, except for core A0. For this leading core, the line widths peak between 10 ≲ FWHM ≲ 23 km s⁻¹ and include a much larger distribution of narrow lines than any other core region. These narrow lines suggest that this core is cooling rapidly, presumably because this low-metallicity core is mixing with the higher metallicity gas near the Milky Way's disk.
Assuming that the H i emission lines are only broadened thermally, the increasing median line widths along the length of Complex A would correspond to a rise in the hydrogen gas temperature of roughly 4,400 K, from T_HI,median = 8,700 K along the leading edge of the complex to 13,100 K along its trailing edge (see Figure 9). Overall, this is relatively in line with the typical gas temperature that Barger et al. (2012) found for the warm ionized phase of this complex, at T_Hα = 12,600 K in the direction of the H i cores, where their WHAM Hα observations were resolved at ∆θ = 1° and have an angular area that is larger than that of the GBT H i observations by a factor of A_θ,WHAM ≈ 43 A_θ,GBT. However, the elevated line widths on the trailing end of Complex A could also signify that this gas is experiencing an increase in non-thermal motions. If that is the case, then the emission lines associated with the trailing end of Complex A would be additionally non-thermally broadened by FWHM_non-thermal = 14.1 km s⁻¹, assuming a thermal broadening of FWHM_thermal ≈ 20 km s⁻¹. This significant non-thermal contribution to the line width would indicate that the trailing gas is being more disrupted.

Figure 12. Same as Figure 11, but for the core AI region over the −230 ≤ v_LSR ≤ −90 km s⁻¹ velocity range.
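The temperatures quoted above follow from the thermal line width relation FWHM² = 8 ln 2 k_B T/m_H, and the non-thermal width from subtracting in quadrature. A short sketch follows; the constants and the FWHM = 24.5 km s⁻¹ trailing-edge width are our assumed inputs, chosen to reproduce the quoted values.

```python
import numpy as np

K_B = 1.3807e-23   # Boltzmann constant, J/K
M_H = 1.6726e-27   # hydrogen mass, kg

def t_from_fwhm(fwhm_kms):
    """Temperature implied by a purely thermally broadened HI line:
    T = m_H * FWHM^2 / (8 ln2 k_B)."""
    w = fwhm_kms * 1e3  # km/s -> m/s
    return M_H * w**2 / (8.0 * np.log(2.0) * K_B)

print(t_from_fwhm(20.0))            # ~8,700 K  (leading edge)
print(t_from_fwhm(24.5))            # ~13,100 K (trailing edge)
print(np.sqrt(24.5**2 - 20.0**2))   # ~14.1 km/s non-thermal width
```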
The higher H i column density cores in this stream are connected by lower column density gas. Many of the H i cores are compressed in the direction of their motion, including cores A0, AI, AII, and AIII. These cores also tend to be moving towards the Galaxy faster (i.e., at larger negative LSR velocities) than the lower column density gas that surrounds and connects them. This global trend is especially apparent in the movie found in Figure 5, which rotates Complex A through 3-dimensional position-position-velocity space, and in Figure 8. We additionally provide two sets of zoomed-in position-position and position-velocity maps that are scaled by the H i column density and FWHM line width of each core region in Figures 11-20. This global morphology is characteristic of ram-pressure stripping, in which the surrounding coronal gas and incident Galactic photons act as a headwind that compresses the leading gas and strips the outer layers of this stream to form a lagging diffuse tail that travels in the anti-direction of motion.
The fragmented morphology of Complex A could be a result of thermal cooling instabilities or a slow "shock cascade." Cooling instabilities often arise as a result of density inhomogeneities, in which the high density gas cools more efficiently. As the complex descends toward the disk, it sweeps up coronal gas, compressing the leading gas (Kereš & Hernquist 2009). Further, the gas that the complex sweeps up near the Milky Way's disk will have a higher metallicity than the complex itself (Z_CA = 0.1 Z⊙: Kunth et al. 1994; Schwarz et al. 1995; van Woerden et al. 1999; Wakker 2001; Barger et al. 2012) and will promote cooling. Fragmentation is expected to occur once the stream sweeps up roughly its own mass in ambient material (Murray & Lin 2004), indicating that Complex A has already accreted substantial material during its journey. Unfortunately, the sparseness of metallicity measurements along the length of the complex means that the level of metal mixing and accretion cannot currently be constrained. In a "shock cascade" scenario, leading material that is stripped via ram pressure will be slowed by non-conservative forces and can then collide with downstream gas (Bland-Hawthorn et al. 2007; Tepper-García et al. 2015). This shock cascade can disrupt and fragment the downstream gas. The rapidly varying line widths with position and column density, between 10 ≲ FWHM ≲ 40 km s⁻¹ (see Figures 11-20), are a strong indicator that the low and high column density H i gas is either not in thermodynamical equilibrium or that the low column density gas is experiencing more severe non-thermal motions.

Figure 13. Same as Figure 12, but for the core AII region.
Interestingly, the gas in the core AII region has a much lower H i column density than the gas in the adjacent cores that connect to it. Further, this relatively wispy core region is moving much more slowly toward the Galactic disk than cores AI and AIII, offset by ∆v_LSR ≈ 80 km s⁻¹ from core AI and by ∆v_LSR ≈ 30 km s⁻¹ from core AIII (see Figure 8), indicating that it is much more influenced by coronal-gas interactions. However, although core AII is morphologically much more disrupted, its higher column density gas still has a narrow line profile (10 ≲ FWHM ≲ 20 km s⁻¹; Figure 13). This suggests that the gas in the core AII region is still able to remain relatively cool and compact. Using mapped Hα observations, Barger et al. (2012) found that roughly half of Complex A is ionized. This warmer and lower density phase acts as a skin that shields the H i cores from direct interactions with the surrounding coronal gas. The gas in core AII could also be "drafting" behind the leading gas in core AI, such that it is not experiencing a direct headwind, though we do not know the locations of these cores in 6-dimensional position and velocity space and therefore cannot tell how well aligned core AII is behind core AI.
Numerous H i structures protrude or have fractured off Complex A's main body, an indication that its gas is subject to hydrodynamic instabilities. As these offset structures still have a relatively cohesive structure in H i, they were likely stripped off the complex recently. This displaced gas is now more exposed to the incident ionizing radiation and the surrounding coronal gas, as this material now has a larger surface area and is no longer "drafting" behind the cloud. This increased exposure will cause it to be heated and ionized more quickly, which will lead to it rapidly evaporating (Konz et al. 2002). We identify these offset structures and discuss how Rayleigh-Taylor and Kelvin-Helmholtz instabilities are working with ram-pressure stripping to produce these structures in the following subsections.
Rayleigh-Taylor Instability Structures
Complex A is surrounded by hot coronal gas, which means that it is essentially resting on top of a lower density medium while being influenced by the Milky Way's gravitational field. This is an unstable configuration that can drive buoyancy-related disturbances known as Rayleigh-Taylor instabilities. If these instabilities are strong enough, they can generate globules and spikes ("finger"-like structures) that drip through the warmer coronal medium that lies below Complex A toward the center of the Milky Way's gravitational field. These globules will therefore form on the lower Galactic latitude edge of this complex (see Figure 7). Further, their morphology will include a compressed edge that forms when globules push through the lower density coronal gas below them.

Figure 14. Same as Figure 12, but for the core AIII region.
There is an H i arch that hangs off core A0 at (l, b) ≈ (132°, 23°) by a thin filament (see Figure 11). The high-latitude portion of this arch is more compressed, indicating that it was the material that initially pushed through the coronal gas when the globule began its departure from core A0. This gas arches in Complex A's direction of motion, which is unusual, as ram-pressure stripping should have pushed this material in the other direction. Instead, as core A0 is only z ≈ 2.7 kpc above the Galactic plane (Barger et al. 2012), this arched morphology in the direction of motion was likely created when the globule interacted with the denser gaseous medium near the Milky Way's disk. The thin connecting filament further indicates that this globule will soon fracture off core A0.
On the lower latitude edge of core AI, there is an H i structure that looks like a skewed "loop" at (l, b) ≈ (141°, 26.5°) (see Figure 12). This material connects to core AI at (140°, 27°), extends in the anti-direction of Complex A's motion, and then curves back up toward core AI. This loop appears to represent a Rayleigh-Taylor spike that was elongated and pushed backward due to ram-pressure stripping by the surrounding coronal gas. A Rayleigh-Taylor spike also lies on the lower latitude side of core AIII at (l, b) ≈ (149°, 31°) (see Figure 14). However, this relatively shorter and wider structure projects downward in Galactic latitude and does not curve backward, indicating that this gas only recently "dripped" off core AIII.
There is a mini stream that branches off core AIV's lower latitude edge and points in the anti-direction of Complex A's motion (see Figure 15). This mini stream represents a complex Rayleigh-Taylor instability structure that is strongly influenced by ram-pressure stripping. The gas that is positioned directly under core AIV has only recently "dripped" off this core. Below the beginning of this mini stream, there is a mini H i knot at (l, b) ≈ (154°, 33°) that appears to be the start of a new Rayleigh-Taylor spike. At (151°, 35°), there is a small H i knot that branches off in the direction of Complex A's motion, which would occur if this globule is flowing into a low pressure pocket behind core AIII. Along this stream, there are two H i knots at (156°, 35°) and (159°, 36°) that indicate that smaller Rayleigh-Taylor fingers are forming off other fingers.
The gas associated with core B (Figure 18) does not align with the main body of Complex A (i.e., cores A0-AVI; see Figure 6). The gas that connects core B to core AVI has a relatively lower column density (log(N_HI/cm⁻²) ≲ 18.5) and is more diffuse compared to the A0-AVI cores. The entire core B region likely represents a very large globule that is being swept away via ram-pressure stripping. However, this core region might be more diffuse than the other Rayleigh-Taylor fingers if it is being further disrupted by a turbulent wake that trails behind Complex A as it travels through the Galactic halo.

Figure 15. Same as Figure 12, but for the core AIV region.
Kelvin-Helmholtz Instability Structures
In addition to Rayleigh-Taylor instabilities, Kelvin-Helmholtz instabilities are also influencing Complex A. As the outer layers of the complex "rub" against the surrounding coronal gas, small tangential perturbations from shear-flow disturbances can form on its surface. If they become amplified, then some of the affected gas will rise tangentially off the complex in the ±b_CA directions. Elevated material can then be more easily swept away by the surrounding coronal gas through ram-pressure stripping, as high pressure zones form on the leading edge of these structures and low pressure zones on the trailing edge. This elevated gas can additionally be influenced by Rayleigh-Taylor instabilities in the direction of the host galaxy's gravitational potential well if it is able to maintain a gas density that is greater than the halo density and if it is not overpowered by ram-pressure stripping.
Kelvin-Helmholtz instabilities can affect all portions of the complex that are directly sliding against the surrounding coronal medium, but their signatures are more difficult to identify on the lower Galactic latitude half (or higher b_CA half) of Complex A. This is because they are occurring in tandem with Rayleigh-Taylor instabilities and ram-pressure stripping, which have a stronger morphological impact on this HVC, as evidenced by the numerous globules that hang from it (see Figure 7). We therefore only identify Kelvin-Helmholtz instability structures on the higher latitude side of Complex A. However, we stress that these Kelvin-Helmholtz instabilities could be exacerbating the Rayleigh-Taylor structures that form on the lower latitude edge of this complex.
Three small Kelvin-Helmholtz structures branch off of cores A0 and AI, which are marked in Figure 7. All of these structures point roughly perpendicularly off the surface of Complex A with a slight tilt in the direction of Complex A's motion. This is interesting, as ram-pressure effects should cause these structures to tilt in the anti-direction of motion, but interactions with the denser gas near the Milky Way's disk may have affected their orientation. This unusual orientation is also shared by the low-latitude globule that hangs off of core A0. As core A0 is the leading core, its leading edge is being heated and eroded away by direct interactions with denser material near the Milky Way's disk. These interactions assisted in the formation of the Rayleigh-Taylor globule at (l, b) ≈ (132°, 23°) and the Kelvin-Helmholtz structure at (133°, 27°) (see Figure 11). All three of the Kelvin-Helmholtz structures attached to cores A0 and AI are connected by thin filaments, indicating that they will soon detach and evaporate into the surrounding coronal medium (see Figures 11 and 12). Additionally, the higher H i column density sub-cores that have formed at the tips of these structures might indicate that they are developing or will develop Rayleigh-Taylor fingers.

Figure 16. Same as Figure 12, but for the gas distribution of the core AV region.
Two high-latitude "wings" protrude from Complex A (see Figure 6), one between cores AIII and AIV at (l, b) ≈ (147 • , 41 • ) (see Figure 20) and another off core AVI at (160 • , 44 • ) (see Figure 19). Because the stems of these wings extend perpendicularly off of Complex A, this indicates that they were formed by Kelvin-Helmholtz instabilities. These structures subsequently became elongated due to interactions with the surrounding coronal as this HVC fell through the Galactic halo. Interestingly, sub-H i cores have formed in the tips of these wings which may indicate that they are starting to form Rayleigh-Taylor fingers. The odd forward leaning morphology of these wings could be a result of buoyancy instabilities that are causing this higher density gas to fall faster toward the disk. However, in the case of wing 1, unseen eddies or a low pressure zone in the turbulent wake that lies behind wing 2 could also be causing this wing to curl forward (see Figure 7).
In a hydrodynamical simulation of gas streams, Murray & Lin (2004) found that wings can form as a result of evolving thermal and Kelvin-Helmholtz instabilities. They found that as the wings grow, they can curve in the direction of motion of the main cloud due to a combination of Rayleigh-Taylor instabilities and entanglement with vortices that formed in the surrounding coronal gas, which erode away the middle of the wing on its leading side. The numerous cloud fragments that lie behind these wings indicate that there is substantial turbulent mixing behind them, presumably caused by a wake that follows these wings.
DISCUSSION
The HVCs that are infalling onto the Milky Way will generally need to travel for tens to hundreds of millions of years to reach the Galactic disk, as they typically lie |z| ≲ 10 kpc above or below the disk (van Woerden et al. 1999; Wakker 2001; Wakker et al. 2007, 2008; Thom et al. 2006, 2008; Smoker et al. 2011; Richter et al. 2015) and are moving with speeds of 50 ≲ |v_z| ≲ 200 km s⁻¹ relative to the disk. While they are traversing the Galactic halo, they are gradually eroding away into the surrounding coronal medium. Heitsch & Putman (2009) and Bland-Hawthorn et al. (2007) predict that HVCs with M_HI < 10^4.5 M⊙ will become fully ionized through Kelvin-Helmholtz instabilities within τ_KH ≲ 100 Myr and therefore will not typically reach the Milky Way's disk. Kwak et al. (2011) project that up to 70% of the hydrogen in HVCs with masses of M_HI ≳ 10⁵ M⊙ can remain neutral for a few hundred million years, which means that the large complexes are likely to survive their journey. However, higher mass HVCs that have a stream morphology, or that have a fractured surface, are similarly vulnerable to rapid evaporation due to their increased surface area. Additionally, HVCs can become even more vulnerable to their surroundings if they become fragmented as a result of thermal instabilities (see Murray & Lin 2004) or "shock cascade" processes (see Bland-Hawthorn et al. 2007; Tepper-García et al. 2015).

Figure 17. Same as Figure 12, but for the gas distribution of the core AVI region.
While hydrodynamical instabilities assist in the destruction of HVCs, heat conduction (Vieser & Hensler 2007; Armillotta et al. 2017), self-gravity, and magnetic fields (Chandrasekhar 1961; Grønnow et al. 2018) are all processes that can suppress them (see Plöckinger & Hensler 2012 and Grønnow et al. 2018). As HVCs move through the hot halo gas, they are heated via thermal conduction, advection, and ionizing radiation, which means that these instabilities are at least partially suppressed due to conduction. Although self-gravity would help stabilize HVCs, it is unlikely that these complexes are embedded within dark matter halos, as their corresponding H i virial distances would place them millions of parsecs away (Oort 1966; Freeman & Bland-Hawthorn 2002).
The net effect that magnetic fields have on shaping HVCs is uncertain, as hydrodynamic instabilities have been found to be mildly (Banda-Barragán et al. 2016) and strongly (McCourt et al. 2015; Goldsmith & Pittard 2016) suppressed and even enhanced (Grønnow et al. 2017) in magnetohydrodynamic simulations (see Grønnow et al. 2018). It may be the case, however, that magnetic fields affect each kind of hydrodynamical instability differently. For instance, Banda-Barragán et al. (2016) and Grønnow et al. (2018) found that magnetic fields inhibit Kelvin-Helmholtz instabilities, which helps to protect clouds against ablation by reducing their contact with halo material. In the case of thermal instabilities, Ji et al. (2018) found that magnetic fields appear to promote thermal instabilities, which aids in cloud fragmentation. Similarly, Gregori et al. (1999), Grønnow et al. (2017), and Grønnow et al. (2018) found that magnetic fields enhanced Rayleigh-Taylor instabilities in the z direction. Regardless, hydrodynamical effects dominate over magnetohydrodynamics in shaping clouds during most of their journey through the Galactic halo, with the exception of when they are near the Galactic plane, because the compressed leading edge of these clouds amplifies their magnetic field strength (see Grønnow et al. 2017).
While there is uncertainty as to whether or not all HVCs have a magnetic field, the Smith Cloud (Hill et al. 2013), the Magellanic Bridge (Kaczmarek et al. 2017), and the Leading Arm (McClure-Griffiths et al. 2010) all have a detected magnetic field. However, these three HVCs all represent material that has been displaced from a galaxy (Mao et al. 2012). It is unknown if the HVCs that originate from an intergalactic medium filament or through halo gas condensations will have a magnetic field.
No magnetic field has been directly measured for Complex A. Nonetheless, Verschuur (2013) placed indirect constraints on the strength of Complex A's toroidal magnetic field by assuming that its broad H i lines are caused by a combination of thermal broadening and magnetic turbulence that results from Alfvén waves. In that study, they surmised that the lack of Hα emission detected in the Wisconsin Hα Mapper (WHAM) Northern Sky Survey (NSS) of the Milky Way (Haffner et al. 2003) was an indication that this complex is colder than T_HI < 7 × 10³ K and that therefore the H i emission should only have narrow emission-line profiles of FWHM < 25 km s⁻¹. Assuming a distance of d ≈ 200 pc to Complex A, they derived a field strength of B ≈ 5 µG with their model. However, there are two major issues with their assumptions: (1) The WHAM NSS does not span the kinematic extent of Complex A (WHAM NSS: −100 ≲ v_LSR ≲ +100 km s⁻¹), so no Hα emission from this complex would be present in this survey. Barger et al. (2012) mapped the Hα emission in Complex A using the WHAM telescope and detected Hα emission from the entire complex, which varied in strength from 30 ≲ I_Hα ≲ 100 mR. Therefore, broad H i line profiles with 25 ≲ FWHM ≲ 35 km s⁻¹ are not surprising, as Hα emission peaks between 8,000 ≲ T_Hα,peak ≲ 12,000 K.
(2) Complex A is much farther away than d = 200 pc. Barger et al. (2012) also found that the inferred level of ionization based on the Hα emission could be produced by photoionization from the Milky Way and the extragalactic background if core A0 lies at roughly 6.3 ≲ d ≲ 6.5 kpc, which is in agreement with distances derived via absorption-line studies (Wakker et al. 1996; Ryans et al. 1997; van Woerden et al. 1999; Wakker et al. 2003). At this much larger distance, the Verschuur (2013) model predicts that the magnetic-field strength would be on the same order as the external field, at B ≲ 1 µG.
While the predictive power of hydrodynamical simulations continues to improve, they are currently unable to fully resolve the detailed physics that are influencing HVCs. Smoothed particle hydrodynamics simulations struggle to produce high resolution models that incorporate ram-pressure stripping, Kelvin-Helmholtz and Rayleigh-Taylor instabilities, turbulent motions, and magnetic fields (see Mocz et al. 2015), as these codes can suppress entropy generation, underestimate vorticity generation, and impede efficient gas stripping (Sijacki et al. 2012). Adaptive mesh refinement codes have difficulty modeling diffusion as two mediums rub past each other at supersonic bulk velocities (Mocz et al. 2015), which often leads to a suppression of shear instabilities, as HVCs are moving through the Galactic halo at subsonic, transonic, and supersonic speeds (Kwak et al. 2011). Moving Voronoi mesh simulations can introduce noise on small spatial scales that are associated with the mesh's motion (Bauer & Springel 2012; Hopkins 2013), which can result in second order instabilities in shear flows (Mocz et al. 2015). Theoretical efforts are further hindered by having few observationally resolved examples of these processes to anchor their models.
Until now, only one complete high-resolution (∆θ ≈ 9.1′) and high-sensitivity (log(N_HI/cm⁻²) ≈ 17.2) H i 21-cm dataset existed for one of the Galaxy's HVC complexes: the Smith Cloud. This infalling complex is presently located at a Galactic height of z ≈ −3 kpc and is estimated to impact the Galactic disk in roughly 27 Myr (Lockman et al. 2008). The chemical composition of this HVC (Z = 0.53^{+0.21}_{−0.15} Z⊙: Fox et al. 2016) indicates that it likely originated from a Galactic fountain. Although the present mass of this complex, M_total ≈ 2 × 10⁶ M⊙ (neutral: Lockman et al. 2008; ionized: Hill et al. 2009), is much larger than anticipated from the energetic processes occurring within the Galactic disk (Fox et al. 2016), hydrodynamical simulations suggest that when its high metallicity gas mixes with the surrounding coronal gas, it can provide an avenue for the halo gas to cool and condense onto the complex (e.g., Marinacci et al. 2010; Marasco et al. 2013; Fraternali et al. 2015). The cooled coronal gas can accrete onto the HVC, enabling it to grow as it travels through the halo (Armillotta et al. 2016). Because this HVC has a measured average magnetic field strength along the line of sight (LOS) of B_LOS ≳ 8 µG (Hill et al. 2013), it could be at least partially resistant to Kelvin-Helmholtz instabilities. Nonetheless, Rayleigh-Taylor instabilities have been observed in the H i of this complex (Betti et al. 2019).
However, low metallicity halo clouds that are inefficient at cooling will instead more easily erode into the halo (e.g., Joung et al. 2012). HVC Complex A is one such low metallicity cloud (Z = 0.1 Z_⊙: Kunth et al. 1994; Schwarz et al. 1995; van Woerden et al. 1999; Wakker 2001; Barger et al. 2012). This HVC further has no detected magnetic field, so it is unknown whether magnetic fields are suppressing or enhancing hydrodynamic instabilities along its length. In this study, we have resolved the H i morphology of this complex in unprecedented detail, which enables us to identify morphological structures that are associated with ram-pressure stripping and thermal, Rayleigh-Taylor, and Kelvin-Helmholtz instabilities. This study provides the first opportunity to trace all of these hydrodynamic instability signatures in a low metallicity HVC that may have originated from an intergalactic-medium filament.

[Figure caption] Same as Figure 12, but for wing 2 (W2), a high-latitude cloud fragment that lies off the core AIV region at (l, b) ≈ (147.°6, 41.°6). Just offset from the tip of this wing, there is a high column density (log(N_HI / cm^-2) ≈ 20) cloudlet at (142°, 41°) that is associated with the M81 galaxy; it is enclosed within a red rectangle in the left-hand panels. This M81 emission has been removed from the center and right-hand panels.
Resolution matters. Previous observations of this gas stream using the Leiden/Argentine/Bonn (LAB) survey with a ∆θ = 0.°6 resolution (Hartmann & Burton 1997) severely spatially smoothed the emission so that only its bulk structure is resolved. Giovanelli et al. (1973) and Davies et al. (1976) presented high angular resolution observations of Complex A at 10′ ≤ ∆θ ≤ 20′, but they were much less sensitive at 0.5 ≲ T_B,1σ ≲ 1 K and therefore provided limited information as to how this complex is interacting with its environment. Although the H i 4-PI (HI4PI) survey does have ∆θ = 16.′2 resolution, with a 1-sigma sensitivity of T_B,1σ = 43 mK per ∆v_bin = 1.29 km s^-1 channel, or a 3-sigma column density sensitivity of log(N_HI,3σ / cm^-2) = 18.1 for a line with FWHM = 20 km s^-1 (HI4PI Collaboration et al. 2016), the beam of the GBT observations we present in this study spans a solid angle that is a factor of 3.2× smaller. At the angular resolution and sensitivity of the HI4PI survey, Rayleigh-Taylor instability fingers are difficult to identify (see Figure 3 of Westmeier 2018). Without high resolution observations of HVCs, like the ones presented in this study, we will not be able to identify the morphological and kinematical signatures of hydrodynamic instabilities, which are needed to understand the survivability of these complexes as they traverse the Galactic halo and to anchor simulations.
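For reference, the quoted 3-sigma HI4PI column density sensitivity can be reproduced from the channel noise under the standard optically thin conversion N_HI = 1.823 × 10^18 ∫ T_B dv (cm^-2, with T_B in K and v in km/s). A minimal sketch (the assumption of uncorrelated channel noise is ours; the survey numbers are from the text):

```python
import math

def log_nhi_limit(sigma_tb_k, chan_kms, fwhm_kms, nsigma=3.0):
    """N-sigma H I column density limit [log10 of cm^-2] for a line of a
    given FWHM, assuming optically thin emission and uncorrelated channel
    noise: sigma_integrated = sigma_channel * dv * sqrt(n_channels)."""
    n_chan = fwhm_kms / chan_kms
    sigma_int = sigma_tb_k * chan_kms * math.sqrt(n_chan)  # [K km/s]
    return math.log10(1.823e18 * nsigma * sigma_int)

# HI4PI: 43 mK noise per 1.29 km/s channel, for a FWHM = 20 km/s line
print(f"{log_nhi_limit(0.043, 1.29, 20.0):.1f}")  # -> 18.1
```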
SUMMARY
In this study, we explore the kinematics and morphology of the neutral hydrogen gas of Complex A. We present a kinematically resolved H i 21-cm map of this HVC over a −230 ≤ v_LSR ≤ −90 km s^-1 velocity range that spans a 600-square-degree area of the sky. This survey has a sensitivity of 17.1 ≲ log(N_HI,1σ / cm^-2) ≲ 17.9 for lines with a FWHM = 20 km s^-1 width. We finish with the main conclusions of our study:

1. Bulk Motion: There is a Galactic standard of rest frame velocity gradient of ∆v_GSR/∆L = 25 km s^-1 / kpc along the ∆L ≈ 6.4 kpc length of Complex A. This corresponds to a deceleration rate of a_GSR = 55 km/yr^2, which will place this complex at the Galactic plane in ∆t ≈ 70 Myr.
2. Ram-Pressure Stripping: Numerous H i cloudlets along Complex A exhibit morphological signatures that are shaped by ram-pressure stripping. The cores A0, AI, AII, and AIII tend to be compressed in the direction of motion with diffuse trailing gas (see Figures 6-14). Much of the gas that extends off the low-latitude edge of the complex is tilted away from the direction of motion. This includes an "H i loop" that extends off of core A0, wispy gas that hangs from core AII, a multi-core filament and a small filament that branch off of core AIV, and the entire core B region (see Figure 7).
3. Rayleigh-Taylor Instabilities: We have identified numerous Rayleigh-Taylor fingers that hang from the lower latitude edge of the Complex A stream. This includes a finger that hangs off core A0 and curves upward in Complex A's direction of motion (see Figure 11), suggesting that this low-latitude gas is interacting with higher density gas near the Galactic disk. Fingers also extend off cores AI, AII, AIII, and AIV (see Figures 12, 14, and 15). The entire elongated and diffuse core B region has the morphology of a large globule that branches off core AVI and has been pushed backward by ram-pressure stripping (see Figures 6 and 18). Additionally, the high density H i subcores at the tips of the two high-latitude wings suggest that they could be forming globules.
4. Kelvin-Helmholtz Instabilities: Because both Rayleigh-Taylor instabilities and Kelvin-Helmholtz instabilities are simultaneously affecting the gas on the lower latitude edge of Complex A, the Kelvin-Helmholtz signatures are difficult to isolate on this edge of the complex. On the high-latitude edge, there are two wings that extend tangentially from Complex A that were formed through Kelvin-Helmholtz instabilities. After their initial formation, ram-pressure stripping elongated this gas, and a combination of Rayleigh-Taylor instabilities and/or eroding vortices in the surrounding coronal gas caused them to curl slightly in the direction of motion.
"year": 2021,
"sha1": "04da4783dce7b28067a542877d09a68adabdc2a9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2101.11746",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "02550a1cfd345e3b4adfc10eaf037fff45df0fe1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Abstract: Anatomic and Histologic Investigation of Nasolabial Rejuvenation with Wire Subcision and Adjunctive Filler Injection
Methods and Materials: Sprague Dawley rats were injected subcutaneously with two HA fillers, VYC-20L [20 mg/mL] or HYC-24L+ [24 mg/mL], to create a projecting bolus. Four days post-injection, recombinant human hyaluronidase (HX) or ovine hyaluronidase (VIT) was administered at varying dose levels (5 U/0.1 mL bolus, 10 U/0.1 mL bolus, and 30 U/0.1 mL bolus). 3D images were captured to quantify the loss of projection at six time points over 72 hours. Histology was performed to confirm degradation at 2 weeks post-administration.
Anatomic and Histologic Investigation of Nasolabial Rejuvenation with Wire Subcision and Adjunctive Filler Injection
Avery C. Capone, MD; Ahmed Hashem, MD; James E. Zins, MD

Introduction: Nasolabial complex (NLC) rejuvenation with injectables is limited by densely adherent perioral and nasolabial crease tissues. Release of myodermal attachments may create a potential space for filler deposition, attenuating the deep nasolabial creases associated with aging. Incisionless separation of these attachments has been described using subcision wires. 1 Adjunctive filler injection may promote a youthful nasolabial contour. 2 The anatomic basis for these techniques is not fully defined. This study histologically describes nasolabial wire subcision, with and without filler placement, compared to filler injection alone.
Methods:
Of fourteen NLCs in seven fresh cadavers, eleven NLCs were subcised (SurgiWire Incisionless Dissector, Coapt Systems, Inc.); eight of these also underwent filler injection. One NLC was injected without subcision. Two were controls (no intervention). Injectable silicone (Dragon Skin, Smooth-On, Inc.) simulated dermal filler, and 2 mL were injected per NLC. Full thickness portions of the lip and cheek containing the NLC were excised. Specimens were sectioned perpendicular to the nasolabial crease, stained with Masson's trichrome, then assessed in thirds (upper, middle, and lower).

Results: Mean cadaver age was 72.7 years. Five (71%) were female. Mean length of the nasolabial crease was 41.2 mm. Subcision/filler cavities were localized to a plane superficial to the facial mimetic musculature in 80.6% of sections. When compared to subcision alone, subcision combined with silicone filler generated larger, smooth-walled subcision cavities with division of myofascial elements. Filler injection without subcision resulted in irregular silicone deposition amongst multiple filler cavities. Vessels in excess of 300 µm diameter were disrupted in 3 specimens (25%) and 13 sections (14.1%). Vessel disruption was more frequent in the middle and lower thirds of the NLC, and 61.5% of vessel disruptions were observed during filler injection without subcision. Vessels exceeding 1000 µm diameter were identified in 5 specimens (35.7%) and 13 sections (8.4%). These larger vessels were always inferior or lateral to the subcision/filler plane, and in the middle/lower thirds of the NLC. No large vessel disruptions or intravascular filler were observed.
Conclusions:
Wire subcision reproducibly divides muscular and connective tissue attachments to the nasolabial crease. Vessel disruption during subcision was uncommon, and more frequently observed in the middle/lower thirds of the NLC. Vessels exceeding 1000 µm diameter were more frequently observed in the lateral aspect of the lower third of the NLC, which is considered a vascular danger zone.
Jason M. Weissler, MD; Jose Maria Serra-Mestre, MD; Javier Beut, MD; Oren Tepper, MD
Introduction: Two-dimensional (2D) photography has traditionally facilitated preoperative analysis and surgical planning for plastic surgeons. While this has historically been the standard of care, recent technological advances have propelled plastic surgery innovation forward, transitioning from traditional 2D photography to a more comprehensive and realistic modality using three-dimensional (3D) imaging and printing. With the advent of 3D imaging in facial aesthetic surgery, the plastic surgery community has primarily focused on its utility in preoperative surgical simulation and marketing; however, the application of 3D photography extends well beyond virtual simulation. This study highlights the clinical value of 3D printed models in helping to align patient and surgeon goals in the preoperative and consultative setting, and focuses on the value of custom surgical templates for use as operative blueprints to facilitate intraoperative decision making in rhinoplasty.
Materials and Methods:
Patients undergoing rhinoplasty had standard 3D photographs (Canfield Vectra H1) taken as part of their preoperative visit. Using Vectra, 3D digital renderings of the simulated postoperative result were created. Finally, both baseline and ideal simulated 3D printed models were created as individualized surgical templates for intraoperative guidance during rhinoplasty surgery.
Results: 3D printed individualized surgical models have been successfully implemented for use during cosmetic rhinoplasty. The intraoperative application of 3D printed models surpasses not only traditional 2D photography but also simple 3D computer renderings. The realistic facial prototypes enable the surgeon to have a more intuitive perception of patient-specific soft tissue and bony contours to help achieve superior aesthetic results.
Conclusion: 3D printing is an emerging technology in aesthetic surgery, and as it permeates the aesthetic market, there is an opportunity for surgeons to incorporate personalized models of patients into their practice for use as intraoperative guides. Realistic facial prototypes enable the surgeon to interact directly with models of patient-specific soft tissue and bony contours to facilitate nasal reconstruction while optimizing aesthetic outcomes. The introduction of 3D photography as an adjunct to surgical planning has demonstrated impressive applicability and provides a unique opportunity for aesthetic plastic surgeons to replace traditional 2D photographs while better aligning patient and surgeon desires. Additional randomized controlled studies are needed to further elucidate the benefits of this technology; however, we believe this technique represents a paradigm shift and will become standard of care in the years to come.
Neil Tanna, MD; P. Niclas Broer, MD; Paul Heidekrueger, MD; Milomir Ninkovic, MD
Background: Facial defects with loss of hair-bearing regions can be caused by trauma, infection, tumor excision, or burn injury. Several techniques, including local, loco-regional, and free flap transfers, have been described. This analysis evaluates different surgical approaches with a focus on male beard reconstruction, emphasizing the role of tissue expansion of regional and free flaps.
Methods: Loco-regional and free flap reconstructions were performed in 11 male patients with 14 facial defects affecting the hair-bearing bucco-mandibular or perioral region. In order to minimize donor site morbidity and …
"year": 2016,
"sha1": "831b898a9f129c3fca56fb731067b5fb414d298d",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/01.gox.0000502950.77770.e9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "831b898a9f129c3fca56fb731067b5fb414d298d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Early detection of lung cancer in Czech high-risk asymptomatic individuals (ELEGANCE)
Abstract Background: Lung cancer screening in a high-risk population increases the proportion of patients diagnosed at a resectable stage. Aims: To optimize the selection criteria and quality indicators for lung cancer screening by low-dose CT (LDCT) in the Czech population of high-risk individuals. To compare the influence of screening on the stage of lung cancer at the time of the diagnosis with the stage distribution in an unscreened population. To estimate the impact on life-years lost according to the stage-specific cancer survival and stage distribution in the screened population. To calculate the cost-effectiveness of the screening program. Methods: Based on the evidence from large national trials (the National Lung Screening Trial in the USA (NLST) and the NELSON study) and on the recent recommendations of the Fleischner society, the American College of Radiology, and the I-ELCAP action group, we developed a protocol for a single-arm prospective study in the Czech Republic for the screening of high-risk asymptomatic individuals. The study commenced in August 2020. Results: The inclusion criteria are: age 55 to 74 years; smoking: ≥30 pack-years; smoker or ex-smoker <15 years; performance status (0-1). The screening timepoints are at baseline and 1 year. The LDCT acquisition has a target CTDIvol ≤0.5 mGy and effective dose ≤0.2 mSv for a standard-size patient. The interpretation of findings is primarily based on nodule volumetry and volume doubling time (and the related risk of malignancy). The management includes follow-up LDCT, contrast-enhanced CT, PET/CT, and tissue sampling. The primary outcome is the number of cancers detected at a resectable stage; secondary outcomes include the average cost per diagnosis of lung cancer, the number, cost, and complications of secondary examinations, and the number of potentially important secondary findings. Conclusions: A study protocol for the early detection of lung cancer in Czech high-risk asymptomatic individuals (ELEGANCE) using LDCT has been described.
Introduction
The annual incidence rate of lung cancer in the Czech Republic is high, 86 and 43 cases per 100,000 men and women a year, respectively. [1] With more than half of the cases diagnosed in stage IV, the relative 5-year survival is only 10%, making it the most common cause of death among oncological diagnoses. Cigarette smoking is a well-documented cause of lung cancer, and about 90% of lung cancers are directly caused by smoking. The relationship between the number of cigarettes smoked per day, the depth of inhalation, the age of the smoker, and the development of lung cancer has been documented. [2] Vast resources have been dedicated to shifting the diagnosis of bronchogenic cancer to its early stages. Poor survival can be largely attributed to delayed diagnosis. Only small resectable stage I tumors offer a favorable prognosis, with 5-year survival rates of 70% to 90%. [3] In the United States, where lung cancer screening is an established and recognized tool, smoking prevalence is only 15%. In the Czech Republic, where every fourth adult and about 12% of primary school pupils are active smokers, there is no screening for lung cancer yet. Because of the unsatisfactory results of anti-smoking intervention programs and the fact that more and more lung tumors are being diagnosed in former (non-active) smokers, secondary prevention by early detection of lung cancer by screening in a selected population is proposed.
Based on previous studies, including the National Lung Screening Trial in the USA (NLST), [4] the NELSON study, [5] and on the recent recommendations of the Fleischner society, [6] the American College of Radiology [7] and I-ELCAP action group, [8] we developed a protocol for an optimization study in the Czech Republic for the screening of high-risk asymptomatic individuals by low-dose CT (LDCT).
Study protocol
The study was approved by the Ethics Committee of the General University Hospital in Prague (12/19 Grant AZV VES 2020 VFN). The study adheres to the principles of the Declaration of Helsinki. Written, informed consent to participate will be obtained from all participants. Patients may discontinue at any time. The participants are being recruited by family physicians and pneumologists, or are self-referred through an advertising campaign. The study is designed as a single-arm prospective study conducted in an academic hospital. The study is registered under ClinicalTrials.gov ID: NCT04627350. Protocol modifications will be announced to the trial registry and the Ethics Committee.
Study aims
The aims of this project are:
1. to optimize selection criteria and quality indicators for the target population for lung cancer screening in the Czech population;
2. to compare the influence of screening on the stage of lung cancer at the time of the diagnosis with the stage distribution in an unscreened population;
3. to estimate the impact on life-years lost according to the stage-specific cancer survival and stage distribution in the screened population;
4. to calculate the cost-effectiveness of the screening program;
5. to assess the potential for opportunistic screening of noncommunicable diseases (pulmonary fibrosis, aortic aneurysm, compression fracture and signs of osteoporosis, [9,10] myocardial scar) previously unknown.
The study is designed as 2 screening rounds, at baseline and at 1 year; the estimated study duration is 4 years, with 2 years of enrolment to baseline screening, 1 year for second-round screening, and 1 year for follow-up of the second round. The study commenced in August 2020. Study data are entered into a secured enterprise database, which can be accessed by all investigators. The first author is responsible for data monitoring, integrity, and auditing on a monthly basis. Nominal data will be presented as numbers and percentages and will be analyzed using the Fisher test. Ordinal and continuous data will be reported as mean ± standard deviation or 95% confidence intervals. The study results will be published in a peer-reviewed journal. Authorship will be based on the ICMJE guidelines.
Sample size
In comparison with the NELSON study, where 2.1% of scans were positive and cancers were detected in 0.9% of participants in the first round, we define a study population at greater risk and expect that the number of detected cancers will be greater. In the NELSON study, 58.6% of cancers were detected at stage I compared to 13.5% in the control group. According to the Czech cancer registry, only 10.5% of lung cancers are detected in stage I. [1] To compare these outcomes, the study would require 15 patients with detected cancer, that is, about 1500 screened patients, to achieve a power of 80% at a 0.05 significance level. For an expected adherence of 90% and the optimization of input values, we estimate the sample size at 2500 participants. This study is designed as observational; therefore it has no control arm and no ambition to assess the effect of lung screening on mortality, unlike the NELSON trial with a sample size calculated between 17,300 and 27,900 participants. [11]
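The enrolment arithmetic can be laid out explicitly. A minimal sketch (the ~1% first-round detection yield is our reading of the stated expectation of a yield slightly above NELSON's 0.9%; the final target of 2500 adds further margin for optimizing input values):

```python
import math

cancers_needed = 15      # cancers required for 80% power at alpha = 0.05
detection_yield = 0.01   # assumed first-round cancer detection rate
adherence = 0.90         # expected adherence to the protocol

screened_needed = cancers_needed / detection_yield        # 1500 screened
enrolled_needed = math.ceil(screened_needed / adherence)  # 1667 enrolled
print(screened_needed, enrolled_needed)  # 1500.0 1667
```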
Inclusion and exclusion criteria
The pretest risk of lung cancer depends especially on smoking duration, age, sex, and family history of lung cancer. [12,13] The inclusion criteria should define a population of participants with the highest risk of lung cancer who are capable of undergoing curative treatment if cancer is found. The threshold recommended by the UK Lung Cancer Screening Trial was a predicted risk of at least 5% of developing lung cancer in the following 5 years. [12] The highest incidence of lung cancer in the Czech Republic is between 60 and 79 years (Fig. 1). [1] The lung screening trials with the highest sample sizes included patients aged approximately 55 to 75 years. [14] The CHEST Guideline and Expert Panel Report recommends optimal screening at 55 to 77 years. [15] The U.S. Preventive Services Task Force recommends annual screening up to 80 years of age, while the Centers for Medicare & Medicaid Services (CMS) end coverage at the age of 77, corresponding to the oldest age at the time of the final annual screen in the NLST. The optimal age span for screening was set to 55 to 74 years to precede the diagnosis of advanced lung cancer and to take into account the shorter life expectancy in the Czech Republic. The male gender carries a higher risk of lung cancer by a factor of 1.5 to 2. [16] This fact is related to the smaller proportion of women who are heavy smokers and can therefore be equalized by the pack-years inclusion criterion.
The best predictor of lung cancer is smoking duration and intensity. It is expressed as the number of cigarettes per day for a given number of years ("pack-years"; 1 pack = 20 cigarettes, 1 pack-year = 20 cigarettes daily for 1 year). Various thresholds have been used, ranging from 15 pack-years up to 30 pack-years. [14,15,17] Greater risk means better efficacy of screening. The interval since smoking cessation was set between 10 and 15 years in various studies, for it is known that after this period the risk of developing lung cancer decreases. The recommendations of the CHEST Guideline and Expert Panel Report specify a threshold of 30 pack-years and <15 years from the cessation of smoking. [15,18] Further risk factors include obstruction on spirometry (FEV1 below 70% carries an OR of 2.9), a criterion that would be difficult to implement. [19] Moreover, patients with greater obstruction have lower vital capacity limits and higher perioperative risk if resection were considered.
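Since the 30 pack-year threshold is the study's central smoking criterion, the computation is worth making explicit. A minimal sketch (the function name is ours):

```python
def pack_years(cigarettes_per_day, years_smoked):
    """1 pack = 20 cigarettes; 1 pack-year = 20 cigarettes daily for 1 year."""
    return (cigarettes_per_day / 20.0) * years_smoked

# 15 cigarettes/day for 40 years meets the >=30 pack-year inclusion criterion:
print(pack_years(15, 40))  # 30.0
```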
Patients who are not amenable to curative treatment due to their poor performance status are unlikely to benefit from the early detection of lung cancer. The performance of the patients can be assessed by their ability to undergo physical exertion such as climbing the stairs. In the DLCST trial, the threshold was the ability to climb 36 steps without pause. [20] We define good performance status as the ability to climb stairs at least 1 floor without any difficulty or pause.
Inclusion criteria.
- Age 55 to 74 years
- Smoking: ≥30 pack-years; smoker or ex-smoker <15 years
- Performance status (0-1): can climb at least 1 floor without any difficulty or pause
Exclusion criteria.
- Body weight above 140 kg
- Malignant disease within the last 10 years (except non-melanoma skin cancer)
- Chest CT less than 1 year ago, or chest x-ray less than 6 months ago
- Clinical signs suspicious of lung cancer (weight loss, new cough, hemoptysis)
- Recent (2 months) bronchopneumonia or pneumonia
Screening interval
The selection of the optimal screening interval, which can be informed by the NELSON study data, is important as well. It appears that extending the screening interval (in the NELSON trial it was 1 year, 2 years, and then 2.5 years) leaves the incidence of diagnosed tumors per screening round stationary (0.8%) but worsens the average stage of newly diagnosed tumors in the last round of screening (i.e., at 2.5 years). [21] Thus, the 2.5-year interval was too long. The screening interval was therefore set at 12 months.
Low-dose CT (LDCT)
The CT settings for screening protocols varied between 80 and 140 kVp with a minimum of 20 mAs tube current-time product. [15] The reported effective doses ranged from <0.4 mSv in lean patients to 2 mSv. More recent trials adopted automated kV selection and mAs modulation based on the attenuation profile of the patient. Sufficient image quality and diagnostic performance of ultra-low-dose protocols (<0.2 mSv) for the detection of pulmonary nodules and even ground-glass lesions with the use of model-based iterative reconstruction techniques has been confirmed in phantom and clinical studies. [22,23] For a standard-size patient, this corresponds to a CTDIvol <0.6 mGy for a 25 cm scan [13,18] (see the worked dose example after the list below).

2.6.1. CT acquisition protocol
- ≥64-slice scanner
- Axial and longitudinal current modulation
- Iterative reconstruction: hybrid or model-based
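The CTDIvol target and the quoted effective dose are linked through the usual dose-length-product (DLP) conversion, E ≈ k × DLP with DLP = CTDIvol × scan length. A minimal sketch (the chest conversion coefficient k ≈ 0.014 mSv/(mGy·cm) is a commonly tabulated adult value and is our assumption, not a figure from this protocol):

```python
def effective_dose_msv(ctdivol_mgy, scan_length_cm, k_chest=0.014):
    """Approximate effective dose [mSv] from the dose-length product,
    using an adult chest conversion coefficient k [mSv/(mGy*cm)]."""
    dlp = ctdivol_mgy * scan_length_cm  # dose-length product [mGy*cm]
    return k_chest * dlp

# CTDIvol of 0.5 mGy over a 25 cm chest scan:
print(f"{effective_dose_msv(0.5, 25):.2f} mSv")  # ~0.18 mSv, within the <=0.2 mSv target
```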
Interpretation of LDCT
The interpretation of CT findings is based on the probability that a lesion harbors malignancy and how fast it can develop beyond the localized stage. Several academic groups, including the Fleischner society, the British thoracic society, and the American College of Chest Physicians, have proposed guidelines on the management of pulmonary nodules. They derive the risk and the need for further management from the imaging features of the nodule (size or volume, spiculation, upper lobe location, perifissural location), the lungs (emphysema), and the pretest probability of malignancy (age, family history of malignancy, obstruction). [24,25] The best predictor of malignancy of a solid nodule is its size and growth rate. [26] From a mean diameter of 6 mm, the risk starts to increase steadily. [27] The prevalence of malignancy in nodules <5 mm is extremely low, about half a percent. [28] In the NELSON trial, the risk of malignancy in subjects with nodules <100 mm^3 was similar to that in those without nodules. [29] Based on these data, the lower size threshold for nodules that require follow up was increased to 6 mm or 100 mm^3 in the Fleischner society guidelines [6] and 5 mm or 80 mm^3 in the BTS guidelines. [30] The NELSON study showed that small nodules (<100 mm^3) are not predictive for lung cancer and that nodules ≥300 mm^3 or ≥10 mm require timely attention. [29] Malignant nodules showed exponential growth, and a volume doubling time (VDT) threshold of 600 days was suggested as the most optimal for nodules sized between 100 and 300 mm^3. [31] There is little information about the behavior of subsolid nodules. Ground-glass nodules show very slow growth, beyond the 600-day VDT threshold, and are safe to follow up on an annual basis. [6] In part-solid nodules, the size of the solid component and its growth are predictive of lung cancer.
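The VDT figure used throughout this risk assessment assumes exponential growth, VDT = Δt · ln 2 / ln(V2/V1). A minimal sketch (the example volumes are illustrative):

```python
import math

def volume_doubling_time(v1_mm3, v2_mm3, dt_days):
    """Volume doubling time [days] assuming exponential nodule growth."""
    return dt_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# A nodule growing from 120 to 180 mm^3 over one year:
print(f"{volume_doubling_time(120, 180, 365):.0f} days")  # ~624 days
# VDT > 600 days: growth slow enough to be considered low risk here.
```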
The annual incidence rates of new nodules (in the ELCAP, I-ELCAP, PLuSS, and Mayo trials) are reported between 3.4% and 13%. These new nodules are necessarily fast-growing, and the available data show that the probability of malignancy in such a newly emerging nodule is 1.6% to 7.5%. Therefore, it may be necessary to choose a lower volume limit for these new nodules than for the first round of screening. [31] The models that aim to distinguish between malignant and benign nodules include the McWilliams model and the American College of Radiology Lung-RADS assessment criteria, published in 2014 and updated in 2019 (Lung-RADS 1.1). [7] The Lung-RADS model is the most cited, as it is used in virtually all U.S. screening centers. It includes 5 categories that determine subsequent management based on the type of the lesion, its size, and its behavior.
About half of the asymptomatic high-risk individuals undergoing screening by LDCT present with more than 1 nodule. At baseline, in the NELSON trial, where the management was based on the largest or most suspicious nodule, malignancy was detected mostly in the largest nodule (97%). [31] However, in the PanCan study, one-fifth of the positive individuals were diagnosed with cancer in a lesion that was not the largest. [27]
Evaluation and management of pulmonary nodules
In this study, the nodules will be assessed with the Intellispace Portal (current version 10) Lung analysis package. This package performs automated detection of nodules and automatic segmentation of their volume, with manual adjustment by the radiologist where necessary. Nodules that escape automatic detection will be segmented manually (lung window); the agreement between manual and automated segmentation is reported to be excellent. [24] In lesions where volume segmentation would be difficult or unreliable (e.g., perihilar), the effective diameter will be used (the average of 2 maximal perpendicular diameters), with the exception of broad-based subpleural lesions, where the short diameter will be used instead. In non-solid nodules, the ground-glass part will be measured by effective diameter regardless of segmentation.
The management will be based on the largest or most suspicious nodule. Nodules 70 mm^3 and larger will be recorded and their doubling time calculated if they were found in the previous scan, where available. A nodule is by definition a rounded (spherical, oval) circumscribed focus of abnormal tissue.
The proposed management protocol uses a recursive definition of nodules (with regard to their growth). It is based on the recommendations of the American College of Radiology (ACR), the Fleischner society guidelines, and the results of the NELSON trial. [32,33] The primary assessment of risk (that a lesion harbors malignancy) is based on volumetry and doubling time (Tables 1-3). For solid nodules, the volume threshold is 100 mm^3 (70 mm^3 for new nodules) and the VDT threshold is 600 days. Subsolid nodules are rare (0.7%) and the risk of malignancy or premalignancy is low (6%) unless a new solid component appears. [34,35] The data on the management of sub-solid nodules are scarce, which is reflected in the diversity of recommendations across different guidelines. [36] The update of the Lung-RADS … Secondary findings will be reported as incidental findings and categorized according to their expected clinical significance as shown in Table 4.
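To make the headline decision logic concrete, the sketch below encodes only the thresholds stated above (100 mm^3 at baseline, 70 mm^3 for new nodules, a 600-day VDT cut-off). It is deliberately simplified; the authoritative rules, including subsolid-nodule handling, are those in Tables 1-3:

```python
from typing import Optional

def solid_nodule_triage(volume_mm3: float, is_new: bool,
                        vdt_days: Optional[float] = None) -> str:
    """Simplified triage of a solid nodule; illustrative only."""
    size_threshold = 70.0 if is_new else 100.0
    if volume_mm3 < size_threshold:
        return "continue annual screening"
    if vdt_days is not None and vdt_days > 600.0:
        return "continue annual screening (slow growth)"
    return "work-up: follow-up LDCT / contrast CT / PET-CT / tissue sampling"

print(solid_nodule_triage(90.0, is_new=False))                 # below threshold
print(solid_nodule_triage(150.0, is_new=True, vdt_days=300.0)) # fast-growing new nodule
```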
"year": 2021,
"sha1": "33947a9c4d505e27dee235144927f106eefa65ac",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/md.0000000000023878",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "33947a9c4d505e27dee235144927f106eefa65ac",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Implementing a structured education program for children with diabetes: lessons learnt from an integrated process evaluation
Background There is recognition of an urgent need for clinic-based interventions for young people with type 1 diabetes mellitus that improve glycemic control and quality of life. The Child and Adolescent Structured Competencies Approach to Diabetes Education (CASCADE) is a structured educational group program, using psychological techniques, delivered primarily by diabetes nurses. Composed of four modules, it is designed for children with poor diabetic control and their parents. A mixed methods process evaluation, embedded within a cluster randomized control trial, aimed to assess the feasibility, acceptability, fidelity, and perceived impact of CASCADE. Methods 28 pediatric diabetes clinics across England participated and 362 children aged 8–16 years, with type 1 diabetes and a mean glycosylated hemoglobin (HbA1c) of 8.5 or above, took part. The process evaluation used a wide range of research methods. Results Of the 180 families in the intervention group, only 55 (30%) received the full program with 53% attending at least one module. Only 68% of possible groups were run. Staff found organizing the groups burdensome in terms of arranging suitable dates/times and satisfactory group composition. Some staff also reported difficulties in mastering the psychological techniques. Uptake, by families, was influenced by the number of groups run and by school, work and other commitments. Attendees described improved: family relationships; knowledge and understanding; confidence; motivation to manage the disease. The results of the trial showed that the intervention did not significantly improve HbA1c at 12 or 24 months. Conclusions Clinic-based structured group education delivered by staff using psychological techniques had perceived benefits for parents and young people. Staff and families considered it a valuable intervention, yet uptake was poor and the burden on staff was high. Recommendations are made to inform issues related to organization, design, and delivery in order to potentially enhance the impact of CASCADE and future programs. Current Controlled Trials ISRCTN52537669.
INTRODUCTION
Type 1 diabetes mellitus (T1DM) in children and young people is increasing worldwide.
Fewer than one in six children and young people achieve glycosylated hemoglobin (HbA1c) values in the range identified as providing the best future outcomes. 1 It has been recognized that there is an urgent need for clinic-based, pragmatic, feasible, and effective interventions that improve both glycemic control and quality of life, with a particular emphasis on structured education programs. 2 In recent years, a number of large multicenter studies have trialed a standard education intervention. [3][4][5] Findings published to date report no significant positive impact on glycemic control as measured by HbA1c and only limited impact on a wide range of secondary measures. 4 5 Nevertheless, the recent Best Practice Tariff for Paediatric Diabetes for diabetes services in the UK 6 requires the provision of structured educational programs for young people and their families and, as a consequence, there is an urgent need for high-quality evidence to inform the implementation of this recommendation.
Key messages
▪ The Child and Adolescent Structured Competencies Approach to Diabetes Education (CASCADE) structured education program is perceived by young people and parents who attend as having benefits, but practical challenges associated with attendance result in low uptake.
▪ Staff are positive about the potential of the program, but organizational aspects are unacceptably burdensome.
▪ CASCADE is potentially deliverable to families as part of routine care and could be a useful intervention. However, improvements in clinical and administrative support, staff training, program content, and service structures are required to ensure fidelity to the program and feasibility and acceptability to key stakeholders.
The CASCADE (Child and Adolescent Structured Competencies Approach to Diabetes Education) pragmatic cluster randomized controlled trial (RCT), with an integral process and economic evaluation, is the most recent study. It was undertaken by a team that included clinicians from a London-based pediatric diabetes clinic, a representative from a diabetes patient organization, and researcher teams from three universities in London. The CASCADE intervention is a structured education program designed for children and young people with T1DM aged between 8 and 16 years and their parents or carers. 7 The intervention underwent phase 1 pilot work and a non-randomized trial, in which the delivery was carried out by a psychologist. 8 The CASCADE intervention was then modified to be delivered by two members of a diabetes multidisciplinary team (MDT) who receive 2 days of training to enable them to become 'site educators'. CASCADE is a manual-based program. It is delivered in four modules over 4 months, each lasting approximately 2 hours, to groups of three to four families, with children and young people grouped according to age (8-11 or 12-16 years). Two psychological approaches, motivational interviewing and solution-focused brief therapy, both shown to have potential with children with diabetes, are central to the CASCADE intervention. 9 10 These aim to engage participants to identify and develop their own positive approaches and consequent behavior change relevant to the management of their condition. The intervention thus offers both structured education, to ensure young people (and their parents) know what they need to know, and a delivery model designed to motivate self-management through empowerment techniques (see table 1).
The intention is that delivering CASCADE to groups will provide staff with an alternative mode of working with young people in the clinic setting to improve outcomes, rather than requiring additional work.
CASCADE TRIAL SUMMARY
The trial involved young people with T1DM and family members in 28 English pediatric diabetes clinics (randomly assigned at clinic level to intervention or control) in London, South East England, and the Midlands. Clinics eligible to participate were staffed by at least one pediatrician and pediatric nurse with an interest in diabetes. Other inclusion criteria included not running a group education program at time of recruitment and not participating in a similar pediatric diabetes trial within the past 12 months. It was approved by the University College London (UCL)/UCLH Research Ethics Committee (REC) reference number 07/HO714/112. Site-specific approval was granted at each site. Three hundred and sixty-two young people were recruited to the study. Inclusion criteria included: diagnosis with a duration ≥12 months; mean 12-month HbA1c of 8.5 or above; aged 8-16 years. Clinical staff identified eligible young people from their patient list. Researchers sent letters and information sheets to these young people and their parents or carers inviting them to participate in the research and to speak to a researcher at their next clinical appointment. Recruitment was primarily carried out by members of the process evaluation team who attended clinics at which eligible young people had an appointment. Signed consent forms were collected from parents and children wishing to participate.
The primary outcome measure was venous HbA1c at 12 and 24 months. Secondary outcomes included: knowledge, skills, and responsibilities associated with diabetes management; emotional and behavioral adjustment; and quality of life. Two staff members from each intervention site clinical team participated in the 2-day CASCADE training program. These site educators then took responsibility for organizing the modules at their clinics and delivering the intervention.
The extensive and integral process evaluation was designed to enable an understanding of the implementation of CASCADE and examination of the interaction of causal mechanisms and contextual factors that may be determinants of the intervention's success or failure, as assessed by the trial. 11 Given that the trial found no evidence of benefits on venous HbA1c at 12 and 24 months and little evidence of benefits on secondary outcomes, the focus of this paper is to use the findings of the process evaluation to suggest how future structured education may be more effectively implemented. 12
PROCESS EVALUATION METHODS
The process evaluation aimed to assess the feasibility, acceptability, fidelity and perceived impact of the CASCADE intervention. It ran for the 4-year life of the trial and included the multiple methods shown in table 2. Researchers from the process evaluation teams at the Institute of Education (IOE) and the School of Pharmacy (SOP) conducted the fieldwork.
PROCESS EVALUATION DATA ANALYSIS
Qualitative data analysis was carried out by the process evaluation teams at IOE and SOP (all the authors except LB, RT, and DC). Qualitative analysis of the interview data, supported by the use of NVivo software, identified key topics and issues that emerged through familiarization with transcripts. 13 Pertinent excerpts were coded and memos written to summarize and synthesize emerging themes. Researchers refined their analysis, ensuring that themes were cross-checked with other data, first within and then between transcripts. Analysis of each training workshop observation was carried out by a researcher who was not the observer, reading through the notes made by the observer and identifying key themes and fidelity issues emerging from the data. Quantitative data were analyzed by MW using Excel and SPSS V.19 for statistical tests. For the CASCADE modules delivered in the sites, composite fidelity delivery scores were created for content and for technique from individual researcher-observer and site-educator self-rated scores.
A further composite variable was then calculated which summed the content and technique scores for each site across all four modules, allowing comparison across sites and modules.

Table 1 Outline of the CASCADE program (as set out in the manual)
The teaching plan: Session activities, objectives, time guides, and resources, including key information essential for the educator, learning objectives for the family, and brief descriptions of each activity. Each module starts with a review of, and since, the previous session, creating an opportunity for families to highlight any changes that have taken place and to congratulate young people on successes.
Module 1: Focuses on the relationship between food, insulin, and BG (eg, considering the pros and cons of matching insulin to food to attain better glycemic control).
Module 2: Reviews BG testing and factors influencing BG fluctuation (eg, identifying factors that cause BG to rise and fall and exploring hypoglycemia definitions, reviewing symptoms according to severity).
Module 3: Looks at the pros and cons of adjusting insulin (eg, a brainstorming session considers when, how, and who to contact for help managing hyperglycemia).
Module 4: Addresses aspects of living with diabetes, including managing BG levels and exercise (eg, young people and families complete a 'blueprint for success'; this marks the end of the sessions and acknowledges the steps into the future the young person has already made).
Homework tasks are given to families to consolidate learning after each module.
BG, blood glucose; CASCADE, Child and Adolescent Structured Competencies Approach to Diabetes Education.
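A minimal sketch of the composite fidelity scoring described just before Table 1, using hypothetical column names and made-up ratings (the study's actual coding sheets and rating scales are not reproduced in this paper):

```python
import pandas as pd

# One row per site and module; 'content' and 'technique' are fidelity ratings.
fidelity = pd.DataFrame({
    "site":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "module":    [1, 2, 3, 4, 1, 2, 3, 4],
    "content":   [4, 3, 4, 5, 2, 3, 3, 4],
    "technique": [3, 3, 4, 4, 2, 2, 3, 3],
})

# Per-module composite = content + technique; the site-level composite
# sums all four modules, allowing comparison across sites.
fidelity["composite"] = fidelity["content"] + fidelity["technique"]
print(fidelity.groupby("site")["composite"].sum())
```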
PROCESS EVALUATION RESULTS
The results are structured under the following themes: recruitment and training of site educators; organizing the groups; delivery of the modules; uptake and acceptability of the modules; and perceptions of impact. Response rates are reported in table 2.
Recruitment and training of site educators
The National Institute for Health and Care Excellence (NICE) requirement, 2 that structured education programs are delivered as part of routine care, was widely recognized by clinic staff and, as a consequence, it proved relatively straightforward to recruit two members of the MDT from each of the 14 intervention sites to become site educators. The majority of site educators were experienced pediatric diabetes specialist nurses (PDSNs); in approximately half of the sites one of the educators was a dietitian. The diabetes specialist nurse and psychologist who developed the intervention delivered the 2-day CASCADE training for site educators in four workshop sessions. In general, it was feasible for sites to send the required minimum of two staff to the core workshops. A few sites sent additional interested members of the MDT, though only four consultants attended some or all of the training. The training was delivered in a central London location, except for one site where, following a request, training was delivered locally. Site staff reported this change in location to be helpful. The majority of staff who completed the questionnaire following the workshops indicated they had been 'extremely' or 'very' keen to participate. Most staff thought the training was very good, motivating, and comprehensive.
The most common concern raised in staff interviews about becoming site educators and running the CASCADE program, both before and after the training, was additional workload. Other concerns included practical constraints such as finding available rooms in which to run the groups and ability to rapidly change their practice to employ the psychological approaches underpinning CASCADE. One site educator commented.
It [the training] was a lot in the few days. Teaching people theories and expecting them to suddenly change their behaviour I think is very difficult.
The two trainers, and some attendees, expressed concern about levels of diabetes knowledge among the site educators.

Organizing the groups
A total of 30 complete CASCADE groups, comprising all four modules, were run across 12 of the 14 intervention sites. A post hoc calculation, based on the number of study recruits in a site and the optimum group size of 3-4 young people, suggested 44 groups should have been run across the 14 intervention sites. Thus, 68% of possible groups ran, with only three clinics completing the maximum number of groups possible for their site.
A key reason for this limited delivery was difficulties with organizing the groups. The organization was undertaken by the site educators in all the sites. This involved: deciding which participants should be grouped together using similar ages as a key criterion; setting dates and times; inviting families to attend; and booking a room. Interviews revealed that site educators found these processes frustrating and very time-consuming. One site educator commented: I didn't notice that it saved me any time because I was constantly chasing them [families] up to be there.
One site delivered no modules because the lead site educator left her PDSN post soon after the training. Another site delivered only the first module because of a number of challenges which included: the small number of potential eligible patients on the clinic list; poor uptake of the first module by young people/ parents; practical organizational constraints.
All the sites ran the groups in addition to routine clinics, where standard care continued to be received by patients on an individual basis. Staff interview data revealed that the pressure on hospital clinic facilities was too great to make running the groups feasible during clinics. Establishing a date and time for the group sessions that was acceptable to the families was extremely challenging. To maximize attendance, some site educators tried a range of timings, including during school hours, after school, weekends, and school holidays. Communication with families about groups was via a combination of letter, telephone, and (occasionally) text messages. No sites used email or online meeting booking sites. Despite all the negotiation and careful planning by site educators, late cancellation or non-attendance by participants was reported as common.

Some didn't even bother to get back to us and some did and said they were still gonna come but still didn't come. It is frustrating and I think that's what was time consuming, which I hadn't really accounted for… (Site educator)

As a result of these difficulties, compromises were made to the intended group size and composition. Groups often had small numbers (sometimes one family only) and/or a wide age range among the young people attending. Although the intention was to run four modules with the same participants, the composition of many groups changed.
Delivery of the modules
The site educators believed they were appropriate individuals to deliver the intervention because they knew patients well, although familiarity with patients was not a requirement. Participating families appeared to support this view. All sites had continuity of at least one trained site educator, but complications in sustaining the availability of a second educator in a few sites resulted in some lack of continuity of trainer pairs. Site educators reported that the time required to organize sessions meant that they often had little or no time for planning and practising delivery of the modules. Observation data and some staff interviews suggested that this lack of practice time was particularly challenging when staff had limited experience in group work.
Researcher observation of the modules and site educator feedback forms indicated that site educators generally delivered activities as described in the manual. However, less time than recommended was spent on some of the key exercises, due to staff finding them difficult to deliver and/or the exercises not being well received by groups. One such example was the 'review since the previous session' exercise at the beginning of each module.
Also, while researcher observation and staff feedback showed fidelity of CASCADE psychological techniques was good across sessions in half the sites, it was not optimal in the remainder. Difficulties in delivering the intervention particularly occurred when sessions had groups of participants with a wide age range or group numbers were very small.
The first group that we ran had two girls and a boy and the boy was at the younger end of the teenage years and the girls were at the older, it was unfortunate because we didn't have that many patients as part of the study so it was very difficult then to get the groups sorted out so we kind of had to put them together. […] He was just a bit of a silly boy in that…I don't mean horribly, he was lovely, but just kind of played the fool a little bit whereas the girls were older and a similar age and a lot more grown up about it all. (Site educator)

Staff reported that the organization and delivery of the intervention was affected by the research context in a number of ways. First, having to restrict the education groups to a subset of recruited patients, instead of offering them to the entire clinic list, was perceived as making the organization of the groups more challenging. This meant that natural groupings of patients (by age or geographical area) often proved too difficult to achieve. Second, delays encountered in the recruitment of families to the trial in many sites (see 12 for detail on this) meant site educators often had to wait several months after their training before they could start to organize groups and deliver the intervention. Third, some site educators reported that additional trial-related tasks, such as organizing research blood samples, added to their workload and took time away from the organization of, and preparation for, groups.
Uptake and acceptability of the modules
Of the 180 young people recruited to the intervention arm, only 55 (30%) received the full education program of four modules, with just over half of the original recruits (53%) attending at least one module. Eighty-four young people (47%) failed to attend any modules. Those who attended had significantly lower mean baseline HbA1c scores than those who were offered the sessions but did not attend (9.52 vs 10.33, p<0.01). Significantly more children (8-12 years) attended at least one module compared with teenagers (13-16 years; 64% vs 44%, p<0.01). Clinics were permitted to offer sessions at a time of their choice. If out-of-school-hours sessions were not offered, the main reason given for young people not attending modules was that they did not want to miss school. For parents, taking time off work during the day was a barrier to attendance. Other reasons for non-attendance cited by children and parents included holidays and other extracurricular activities.
On most occasions a parent/carer attended with the young person. Parents and young people reported that joint attendance was a very positive aspect of the experience (see table 3). Staff also, in most instances, found it helpful to include parents.
Perceptions of impact
The majority of parents and young people who attended CASCADE groups described some positive impacts, including improved family relationships, wider knowledge and understanding of diabetes, greater confidence, and increased motivation to manage the disease (see table 4 and the young person's comment below).

I've been more happier…yeah, like around the house I've been more happier. Not so many strops…'cause my readings are better and we've been given a lot more information about the ketones and how to treat it….I found it really good. [Young person]

A number of young people and parents mentioned that the timing of the CASCADE sessions would be more appropriate and useful sooner after diagnosis; site educators also commented that this may lead to better uptake of the sessions and have greater impact.

I felt they were of little use to me as I already knew everything however this kind of session would be useful to someone who had just been diagnosed. (Young person)

They're a bit sort of more 'do as they're told' for the first 12 months, they're more likely to attend and perhaps take it on board, it gets them in the right frame of mind early. (Site educator)

Twenty-four months after the intervention, when asked in the questionnaire what effect the program had had, nearly half of the young people selected the response "The sessions made me want to try harder and I have carried on trying". However, these impacts were not reflected in the primary or secondary outcome measures, even for the subgroup of those who attended.
DISCUSSION
The CASCADE intervention aimed to train PDSNs and other members of diabetes teams to deliver a manualised, structured education program, based on behavior change methods, to groups of families. Training of these site educators took place over 2 days. Few members of the MDT, other than PDSNs, attended the training. Trainee educators expressed enthusiasm for the program but highlighted concerns including that: CASCADE would increase their workload; there would be practical constraints to setting up and running groups; and that incorporating the CASCADE psychological model into their practice would be challenging.
Following delivery of CASCADE in the sites, PDSNs and other clinical staff were positive about the program. Having PDSNs and dietitians who knew the patients as site educators worked well for both the educators and families. There were, however, feasibility issues with regard to running the program in its current form in the 'real world' of the National Health Service. These were evidenced by low uptake by families and by staff feeling unacceptably burdened by organizational aspects of the intervention. Organizing groups was, as anticipated by staff, challenging and time-consuming, and many groups did not comprise the recommended number or age range of young people. This affected group dynamics and made it difficult to run the sessions as set out in the manual. It was also difficult to keep a group together for the planned four modules. Delivery of the modules was further compromised by: the gap in time between training and delivering sessions; time spent on organizing group sessions at the expense of practising delivery of the modules; and finding some exercises consistently hard to deliver. Despite the fact that families and staff reported that they liked the program and felt that it offered benefits, the trial found no evidence of impact on venous HbA1c at 12 and 24 months and little evidence of benefits on secondary outcomes, even within the subgroup who attended. We think the reasons behind this are twofold: first, the organizational difficulties that made the intended group composition problematic; and second, the difficulties with delivery, especially the lack of fidelity to the psychological techniques. To address these issues, and to support the development of other structured education programs, we make a range of recommendations.
Recommendations
To reduce the burden on the site educators, more members of the MDT, including consultants, could attend the program training to foster greater buy-in and a team approach to facilitate sharing of the workload. To make this feasible, and to contain costs, training of teams could be conducted at local sites rather than centrally in London. Furthermore, dedicated administrative support to organize venues, appointments, groups, and effective reminder systems would increase the likelihood of improved overall uptake, and would help with grouping the young people by age, as intended. Additional support for site educators in practising and sustaining quality of delivery would have been beneficial. Possible approaches could include: those associated with the successful DAFNE program, 14 such as longer training, a greater focus in the training on improving group work skills, and an observation of CASCADE experts delivering the program; site level mentoring from CASCADE experts including feedback on site educators delivering trial runs; and face-to-face mentoring from local colleagues, such as psychologists. In addition, before undertaking structured education programs, there may be a need to improve the knowledge base of some of the current pediatric diabetes service workforce, as levels of knowledge were very variable. Raising knowledge levels may be addressed by the development of a curriculum for professionals specifically in diabetes, ranging from a core curriculum (basic knowledge that all team members would be expected to know) to an extended curriculum (covering high level application of knowledge specific to individual team members). This finding may have relevance to other medical specialisms where structured education programs are being considered.
The uptake of the education sessions was low. For families the key issue was the challenge of fitting attendance into busy day-to-day routines. The education modules were offered in sessions independent of routine clinic appointments. Our data suggest that, to improve accessibility, it could have been advantageous to make the modules an integral part of routine clinic appointments, thereby overcoming the need for families to make additional hospital visits, with the implications this has for time away from school and work. This would require those in organizational administrative roles to assist with the sustainable organizational adjustments required for extending clinic services. This finding, and the suggestion that there should be greater 'buy-in' from the wider clinic team, echo those in the broader literature on group-based programs. 15 Furthermore, in the study, participants had to have been diagnosed with diabetes for more than a year to meet the inclusion criteria for participation. Our data suggest that if the program was offered to families sooner after the initial diabetes diagnosis, this might lead to improved motivation to attend the groups. Additionally, offering this structured group education more universally might be more successful, including making the organization of groups by age more feasible, than targeting those with the poorest control of their blood glucose levels. It may be more realistic to assume that those with the very poorest control might also require the greater flexibility and intensity that individualized interventions with a psychologist would offer. A summary of the key recommendations is presented in box 1.
Strengths and limitations of the study
It is a strength of the study that the process evaluation was unusually extensive and fully integrated into the main trial. Data were collected from all key stakeholders through a range of different methods throughout the different phases of the implementation of the intervention. Triangulation of findings enabled an evaluation of the implementation, barriers, and facilitators in relation to all aspects of implementation, operation, and perceived impact to be examined. It was also a strength that, as a pragmatic RCT, this intervention was evaluated in 'real-life' and representative settings. One limitation of the study was the impact of the research context on implementation, but steps were taken in the information and reassurance provided, methods, and timing of data collection to minimize effects as much as possible. Additionally, a major hindrance to the intervention was the lower than expected number of CASCADE groups run and the poor uptake of these groups by families. This might suggest a weakness in the intervention's pilot, which was not carried out within the same clinical contexts as the main trial. As such, opportunities to address challenges in organization and delivery were missed prior to, or through carefully managed processes within, the full trial. 16 Experience from pragmatic studies of complex interventions such as CASCADE has yielded valuable new learning on the importance of particular investment in the developmental and piloting stages of complex interventions. 17

Box 1 Summary of key recommendations to improve training in, and delivery of, structured education sessions
▸ More involvement of the wider clinical team facilitated by local training;
▸ Greater mentoring of site educators by trainers;
▸ Practice sessions with feedback from trainers for site educators before going 'live', and time between training and delivery of the first session kept to a minimum;
▸ More diabetes-specific training for the pediatric diabetes service workforce to guarantee a basic level of diabetes knowledge prior to training in the program;
▸ Dedicated administrative support to assist with organizing the sessions;
▸ Education sessions to be held within clinic time;
▸ Offer the sessions to all young people on clinic lists and soon after diagnosis.
CONCLUSION
The extensive multimethod process evaluation showed that the CASCADE structured education program was deliverable; however, improvements in clinical and administrative support, staff training, program content, and service structures to improve accessibility for families were required. The suggested improvements identified in this study all have resource implications, and thus any future research requires cost-benefit considerations. These findings give valuable information on what is required not only in CASCADE but also in other similar programs to achieve their aims.
"year": 2015,
"sha1": "91d90cd9c95ca4ab2ed33ce9eaa135ed0f229835",
"oa_license": "CCBY",
"oa_url": "https://drc.bmj.com/content/bmjdrc/3/1/e000065.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "91d90cd9c95ca4ab2ed33ce9eaa135ed0f229835",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Quarkonia and heavy flavors latest PHENIX results
Heavy quarkonia are direct probes of deconfinement in the quark gluon plasma formed in heavy ion collisions. Other effects also occur, such as modification of the parton distributions in cold nuclear matter, and a multidimensional description is necessary to disentangle the various contributions. Characterization of the production processes in p+p and p- or d-induced collisions with ions is critical for the extraction of the additional effects in ion-ion collisions. The Relativistic Heavy Ion Collider has delivered a wide range of systems and energies, from p+p to Au+Au through d+Au and Cu+Au, from 7.6 to 200 GeV (N-N collision energy), allowing the PHENIX experiment to start exploring these experimentally available dimensions for open and hidden charm and beauty as a function of rapidity and transverse momentum. Recent PHENIX results on heavy flavor production measured through lepton pairs or single leptons, including J/Ψ, Ψ', Χc, ϒ, and open charm and beauty, are presented.
Introduction
Heavy quarks and quarkonia are produced in the first steps of a collision, and they are sensitive to the whole evolution of the system, in particular to the initial stage, which makes heavy quarkonia an excellent probe of the quark gluon plasma formed in heavy ion collisions [1]. As a bound pair of rare quarks, charmonia and bottomonia should be direct probes of the deconfined nature of the medium, possibly even a QGP thermometer.
As the energy of the collision increases, the expected energy density and the lifetime of the expected deconfined phase increase. But other effects make the quarkonium picture more complex. Breaking of the bound quarkonium pair, or modifications of its kinematic characteristics, could come from collisions with nucleons or with particles produced during the collision. Also, even though quarkonia were discovered several decades ago, their production is not yet fully understood (see for instance [2]). In the last 20 years, Tevatron results and, recently, polarization measurements at the LHC [3] have raised questions about our understanding of the production process.
The situation is complex, with a range of possible physics processes and strengths. But the recent start of LHC and the increasingly precise and detailed results coming from RHIC are putting more constraints on models of heavy flavor modification in nuclear collisions.
Experimental setup
The PHENIX experimental setup is described in detail in [4]. Open and hidden heavy flavor yields are deduced from measurements of electrons at mid-rapidity (-0.35<y<0.35) and muons at forward and backward rapidity (1.2<|y|<2.2). As a function of the centrality of the collision between two nuclei A and B, the modification of the yield is characterized by R AB , the ratio of invariant yields in A+B collisions to those in p+p collisions, scaled by the number of nucleon-nucleon collisions estimated in a geometrical Glauber model.
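Written out explicitly (the formula below is our rendering of the definition just given; it is not reproduced from the paper), with ⟨N coll⟩ the Glauber-model estimate of the number of nucleon-nucleon collisions:

R AB = (dN AB /dp T dy) / (⟨N coll⟩ · dN pp /dp T dy)

A value R AB = 1 would indicate that the A+B yield is a simple superposition of independent nucleon-nucleon collisions; R AB < 1 indicates suppression, and R AB > 1 enhancement.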
Quarkonia production
Quarkonia are measured using electron and muon pairs. Early PHENIX Au-Au results already suggested a J/ψ suppression at RHIC energies (√s=200 GeV), increasing with the centrality of the collision. These results, compatible with the formation of a quark gluon plasma, were comparable with the Pb-Pb ones at SPS (√s=17.3 GeV). This was confirmed by higher statistics PHENIX measurements, showing also that suppression was stronger at forward rapidity. Recent observations [5] at √s=39 GeV and 62.4 GeV showed similar suppression. This apparent stability from SPS to RHIC could hide a more complex phenomenon: the increase of energy density leading to more suppression, but also to more coalescence or recombination due to higher yields of the underlying heavy quark population [6]. The reduced suppression observed at higher collision energy [7] supports this interpretation.
For the smaller colliding system Cu+Cu, results [8] have been found to be very consistent with Au-Au for the same numbers of participating nucleons. For the d-Au system a decrease in the direction of the lighter partner (forward rapidity) is systematically observed. This is a general trend, observed for the bulk of the production [10]. Recent results from d+Au collisions with high statistical precision [11] bring a new sensitivity in the p T and y dimensions. Calculations based on the NLO EPS09 nuclear PDF (nPDF) are found to not reproduce the p T and y distributions simultaneously [11]. On the other hand a multiple scattering and energy loss model has an impressive ability [12] to reproduce the changes of distributions over wide p T ranges at mid and forward rapidity, at all collision energies.
In Figure 1 the R dAu for the J/ψ, measured in the central rapidity region, is shown to depend weakly on the centrality, while that for the ψ' [13] displays unexpectedly strong suppression for the most central collisions. Stronger suppression of the ψ' than the J/ψ has been observed at lower energies, where the difference seems to be consistent with breakup by collisions with nucleons. But at RHIC, breakup by nucleons is unable to explain the large suppression because the two quarkonia are expected [14] to cross the target nucleus too early in their development to have a different sensitivity to breakup effects. Beside their interest as a reference for heavy ion collisions, p(d)+A quarkonia results across a wide range of rapidities provide sensitivity to the time spent in the target nucleus by the quarkonia precursor. This allowed [15] a separation of the J/ψ data into regimes dominated by breakup and energy loss, respectively associated with the forward rapidity domains of the heavy or light colliding partners, in addition to the nuclear effects on the PDF.
The recent measurement of χ c in d-Au [13] is an additional step towards the study of the relationship between the characteristics of the quarkonia, in particular their binding energy, and the CNM suppression at these energies. Figure 2 [16] displays R dAu versus rapidity for the ϒ, compared to the J/ψ (top) and to a model calculation (bottom). In contrast to the J/ψ, the ϒ suppression appears stronger in the backward region. The large error bars preclude physics conclusions at this stage, but it is noteworthy that the calculation [17] suggests a stronger decrease in the backward region. This could be linked to the breakup effect but also to the EMC effect in the gluon structure functions [18]. This EMC gluon effect can be accessed mainly through ϒ measurements at these energies.
Open heavy flavor production
Open heavy flavor (HF) originates from the same production mechanism as heavy quarkonia (for instance, [19] reproduces quarkonia measurements with a calculation tuned on open flavor). The produced b and c quarks combine with light quarks, leading to excited B and D mesons, mostly decaying to ground states, which then can decay in the semileptonic channels. This HF lepton source has to be separated from the leptons coming from lower mass mesons and, in the case of electrons, from photon decays. In PHENIX this is done [20][21] using a Monte Carlo simulation of these backgrounds, and for the muon arms by using the variation of their decay rate with distance from the production point to the hadron absorber in front of the muon arm. For electrons in the central arm, the measurement of background at low p T using foils as a photon converter is also employed. Figure 3 shows the p T evolution of R AA for electrons in central collisions for three systems. High p T Au-Au displays a strong suppression. That was unexpected for heavy quarks due to the dead cone effect in heavy particle energy loss. The d+Au collision data allow estimation of CNM effects. In contrast to the Au+Au data, they mostly show enhancement at mid p T . Cu+Cu collisions are intermediate between d+Au and Au+Au. When integrated in p T , a continuous evolution from enhancement to suppression is observed from d-Au peripheral to Au-Au central.
The modification of open HF for d-Au at mid-rapidity lies in between that observed in the backward and forward regions. As shown in Figure 4, for the most central collisions, R dAu (p T ) displays [22] an increased enhancement in the backward region, and a suppression in the forward one. This behavior is different from the one observed for quarkonia [11], shown for comparison in Figure 4. The J/ψ at low p T is always suppressed, and the difference between backward and forward rapidities is smaller. The differences might be due to different dominant effects at forward and backward rapidities [15]. In the forward region the similar R dAu of J/ψ and HF could reflect the same shadowing and/or energy loss effect with no nuclear breakup at the very short target crossing time, whereas in the backward region the suppression of the J/ψ relative to HF could be due to breakup of the expanding quarkonia.
Summary
Thanks to the exploration in various colliding systems and energies made possible by RHIC, a more detailed landscape is emerging in ultrarelativistic collisions with heavy ions. Even dimensions like the time evolution of the system seem to become experimentally accessible, thanks to the diversity of probes provided by the quarkonia families and their various formation times, seen from the target nucleus.

Figure 4: For the most central collisions, heavy flavor R dAu (p T ) displays a backward enhancement, whereas the J/ψ R dAu (p T ) [11] decreases at low p T and displays a smaller forward/backward ratio.
Open HF is a reference for quarkonia production, as well as an extremely interesting probe of the energy loss of heavy quarks in the QGP through the observed Au+Au high p T suppression. The suppression pattern of the J/ψ with the centrality of nucleus-nucleus collisions remains similar from 200 to 20 GeV collision energy, at the current precision. More p+p and h+A measurements are required to go further. But the d+Au results at 200 GeV have already provided some perspective. As a function of centrality in d+Au collisions, quarkonia and open HF production are modified, with strong suppression in the forward (light partner) rapidity domain. At backward rapidity, however, there is notably different behavior: a decrease in R dAu for the ϒ, and an increase for open HF. The evolution with p T also puts constraints on models. The ψ' is more suppressed than the J/ψ, possibly due to its different binding energy. But this difference is not expected at these energies given the small nuclear crossing time scale of the charmonia precursor, which would lead to a similar breakup cross section due to nucleon collisions. The explanation is not clear yet, but may involve the small binding energy of the ψ' making it very sensitive to the small hot regions produced in d+Au collisions.
The increase of precision and kinematic range has led to stronger constraints on models but also revealed more complexity, and raises new questions. With the vertex detector recently added, allowing experimental separation of open bottom and charm contributions at all rapidities in PHENIX, the near future addition of a new forward calorimeter, and the proposal to measure data from a range of p+A collisions as well as Au+Au with these new detectors, the next generation of results will bring additional information to constrain models and advance our understanding of the mechanisms that influence heavy flavor production in p+A and Au+Au collisions.
"year": 2014,
"sha1": "2ebb2ec472c9e81c888821595167a7548d3d0955",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/509/1/012010",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "6c8c8ee58ee681973a249440e287a7c2d599b37a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
An Empirical Study on Internet Traffic Prediction Using Statistical Rolling Model
Real-world IP network traffic is susceptible to external and internal factors such as new internet service integration, traffic migration, internet applications, etc. Due to these factors, actual internet traffic is non-linear and challenging to analyze using a statistical model for future prediction. In this paper, we investigated and evaluated the performance of different statistical prediction models for real IP network traffic and showed a significant improvement in prediction using the rolling prediction technique. Initially, a set of best hyper-parameters for the corresponding prediction model is identified by analyzing the traffic characteristics and implementing a grid search algorithm based on the minimum Akaike Information Criterion (AIC). Then, we performed a comparative performance analysis among AutoRegressive Integrated Moving Average (ARIMA), Seasonal ARIMA (SARIMA), SARIMA with eXogenous factors (SARIMAX), and Holt-Winter for single-step prediction. The seasonality of our traffic has been explicitly modeled using SARIMA, which reduces the rolling prediction Mean Absolute Percentage Error (MAPE) by more than 4% compared to ARIMA (incapable of handling the seasonality). We further improved traffic prediction using SARIMAX to learn different exogenous factors extracted from the original traffic, which yielded the best rolling prediction results with a MAPE of 6.83%. Finally, we applied the exponential smoothing technique to handle the variability in traffic following the Holt-Winter model, which exhibited a better prediction than ARIMA (around 1.5% less MAPE). The rolling prediction technique reduced prediction error on real Internet Service Provider (ISP) traffic data by more than 50% compared to the standard prediction method.
I. INTRODUCTION
Internet traffic engineering deals with the technologies, principles, techniques, and tools that assist network administrators in evaluating and optimizing operational IP network performance. Traffic prediction and forecasting are among the most crucial tasks affecting network performance [1]. They help enhance the network Quality of Service (QoS) and Quality of Experience (QoE). Traffic forecasting also assists ISPs in business decisions, such as new product development and service decommissioning, advertising, pricing, and traffic migration, based on forecasting results. In addition, accurate forecasting helps service providers in capacity planning and investment optimization. Therefore, selecting an appropriate methodology for network traffic prediction is critical for the ISP business.

Currently, ISPs depend heavily on experienced network administrators, who follow an intuitive approach to forecast future traffic using market analysis data such as the possible number of customers and their usage behavior [2]. The factors they consider for their predictions can be divided into two main categories: internal and external. Internal factors are related to the ISP itself, such as the introduction of new services, traffic migration, speed upgrades, etc. In contrast, external factors come from outside, such as new internet applications, regional economic factors, seasonal effects, etc. Therefore, the intuitive method can only produce a rough estimate of future traffic, which is inadequate for making business decisions.
On the other hand, Operational Research, Statistics, and Computer Science contributions led to reliable prediction methods that replaced intuition-based ones. In particular, Time Series Forecasting (TSF), also termed as univariate forecasting, discusses scientific ways to predict chronologically ordered data, called time-series data [3]. The ultimate objective of TSF is to model a complex forecasting system, predicting future behavior based on the historical observation. The TSF models can be categorized into three main categories based on their learning techniques: statistical model, machine learning, and deep learning model. The prediction model from these categories requires historical data to learn the general trend in time series and make inferences about the future. The learning models also demand different settings of hyper-parameters based on the dataset size and complexity. The prediction model's learning capability and accuracy directly rely on these parameter configurations.
Traditional statistical forecasting models such as Auto-Regressive (AR), Auto-Regressive Moving Average (ARMA), Auto-Regressive Integrated Moving Average (ARIMA), Seasonal Auto-Regressive Integrated Moving Average (SARIMA), Seasonal Auto-Regressive Integrated Moving Average with eXogenous factors (SARIMAX), Holt-Winter, etc., have been studied extensively in different time-series domains. These classical models have been used in traffic load forecasting [4], cloud traffic prediction [5], electricity load forecasting [6], and so on. However, the statistical models are best at capturing linear and Short Range Dependencies (SRD) in time-series data but perform poorly in handling Long Range Dependencies (LRD), which results in poor time-series prediction and forecasting [7]. In addition, they are also incapable of learning the non-linear attributes of the time-series data. As a result, many variations of the classical forecasting models, such as the Fractionally Integrated Autoregressive Moving Average (FARIMA) [8], or hybrid models such as ARIMA-GARCH [9], have been proposed to improve the statistical forecasting models' performance. This research explored several state-of-the-art statistical prediction methods, namely ARIMA, SARIMA, SARIMAX, and Holt-Winter, to predict internet traffic volume. We also implemented a technique called rolling prediction to improve on the performance of standard prediction. In rolling prediction, the model uses the validation data of the most recent prediction for re-training. A comparative analysis among the different prediction models summarizes their overall performance in traffic prediction, and this research shows a significant improvement in traffic prediction using the rolling technique for our statistical prediction models. The main contributions of this work are as follows:
• A comparative performance analysis of classical forecasting models in traffic prediction.
• Extraction of new features from time series data for better prediction.
• A significant performance improvement for the traditional models by applying the rolling prediction technique.
This paper is organized as follows. Section II describes the literature review of current traffic prediction using statistical models. Section III presents the proposed methodology, introducing the rolling prediction technique for the traditional models. Section IV presents the performance results of the different prediction methods and draws a comparative picture between standard prediction and rolling prediction. Finally, section V concludes our paper and sheds light on future research directions.
II. LITERATURE REVIEW
Khashei and Bijari [10] proposed an ensemble forecasting model to improve accuracy. Their model comprises a statistical model, the Auto-Regressive Integrated Moving Average (ARIMA), and an Artificial Neural Network (ANN). They identified the limitation of the ANN model in handling linear data, which motivated them to apply a hybrid model based on a Multi-Layer Perceptron (MLP) to process the non-linear part of the time-series data, while the ARIMA model handles the linear component. This hybrid forecasting model [10] combining ARIMA and ANN has been developed to improve overall forecasting accuracy. Their model was tested on three well-known data sets and showed improved performance on all of them. The first stage of their methodology was to generate the required data using the ARIMA model; this data is then fed into the ANN model to predict the future. U. Kumar and V. Jain [11] investigated the performance of the Auto-Regressive Moving Average (ARMA) and ARIMA models for prediction. They fine-tuned the model parameters p, q, and d by experimenting with different information criteria such as AIC (Akaike Information Criterion), HIC (Hannan-Quinn Information Criterion), BIC (Bayesian Information Criterion), and FPE (Final Prediction Error). They also considered AutoCorrelation Function (ACF) and Partial AutoCorrelation Function (PACF) plot information to identify the best performing model. Different model performance evaluation metrics such as MAPE (Mean Absolute Percentage Error), MAE (Mean Absolute Error), and RMSE (Root Mean Squared Error) were considered in their work. M. Dastorani et al. [12] performed a comparative analysis among different statistical models such as AR (Auto-Regressive), MA (Moving Average), ARMA, ARIMA, and SARIMA. They decomposed the original time series to process the random component using the AR, MA, and ARMA models. A trial-and-error approach was adopted in their research to find the best-performing model, and they identified the stochastic model most appropriate for their problem. H. Liu et al. [13] compared the performance of two different hybrid models, ARIMA-ANN and ARIMA-Kalman. These models were applied to process non-stationary data, and both showed better performance. The ARIMA part of the hybrid model is used to identify the architecture of the ANN in the case of the ARIMA-ANN model, whereas it is used to initialize the Kalman measurement for the ARIMA-Kalman model. Huda M. A. El Hag and Sami M. Sharif [14] identified the weakness of the ARIMA model in long-term prediction and suggested an adjustment to the ARIMA model to solve the problem. Their proposed Adjusted ARIMA (AARIMA) model can handle an extra parameter called self-similarity compared to the traditional ARIMA model. They used four different Hurst estimators to calculate the self-similarity. Their proposed model shows better hourly internet traffic prediction in comparison with the ARIMA model.
The modification of conventional statistical models, or their combination, has been proposed earlier to improve internet traffic prediction based on publicly available datasets. In this research, we focused on improving the performance of internet traffic prediction by adopting a new training strategy instead of modifying or combining the state-of-the-art statistical models. Our experimental results show a substantial performance improvement in traffic prediction after applying the new training strategy.
III. METHODOLOGY
In this section, we first introduce the real IP traffic dataset used in our experiment in subsection III-A. The data requires some preprocessing steps explained in subsection III-B to clean and make it compatible with the prediction model. Then, we describe the feature extraction techniques to extricate the exogenous attributes from our original dataset in subsection III-C. Next, the rolling prediction method that significantly improves our model accuracy is introduced in the subsection III-D. After that, we explain the mathematical background of the prediction model and the performance metric to evaluate them in subsection III-E and III-F respectively. Finally, we summarize the configuration of our experimental environment in subsection III-G.
A. Dataset
Real internet traffic telemetry on several high-speed interfaces has been used for this experiment. The data were collected every five minutes over a recent thirty-day period. The data contain the average generated traffic per five minutes in bits per second (bps). There are 8563 data samples in our dataset, consisting of 29 complete days (288 data instances per day) and one incomplete last day (211 data instances for the 30th day).
B. Data Preprocessing
The dataset consists of time series data for the entire thirty days collected at 5-minute intervals. The original dataset was collected in JSON format, which is incompatible with the prediction models, so it was converted into CSV format before starting the model prediction.

Only the timestamp (GMT) and traffic data (bps) are taken from the JSON file, and all other information is discarded. The last day in the dataset is missing approximately its final eight hours of data and was removed from the original time series for the experiment. Ultimately, a total of 29 days of data were considered for developing our prediction models. In addition, some missing values in the dataset were filled using the mean value. Finally, the time series unit was changed from bps to Gbps (gigabits per second), as the original values are too large to feed into our statistical models.
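A minimal preprocessing sketch in Python/pandas, consistent with the steps above; the JSON field names ("timestamp", "bps") are our assumption, since the paper does not give the telemetry schema:

    import pandas as pd

    # Load the raw telemetry export; only timestamp and traffic are kept.
    raw = pd.read_json("traffic.json")
    df = raw[["timestamp", "bps"]].copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    df = df.set_index("timestamp").sort_index()

    # Keep the 29 complete days (288 five-minute samples per day)
    # and drop the incomplete 30th day.
    df = df.iloc[: 29 * 288]

    # Fill missing samples with the series mean, as described above.
    df["bps"] = df["bps"].fillna(df["bps"].mean())

    # Rescale from bps to Gbps so the values are better conditioned.
    df["gbps"] = df["bps"] / 1e9
    df.to_csv("traffic.csv")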
C. Feature Extraction
Feature engineering is an essential part of any prediction model, whether statistical or machine learning. Some features have been derived from the time series data for better prediction. We divided each day into a total of eight parts: midnight (00:00-03:00), late night (03:00-06:00), early morning (06:00-09:00), morning (09:00-12:00), afternoon (12:00-15:00), late afternoon (15:00-18:00), evening (18:00-21:00), and night (21:00-24:00). We introduce another feature distinguishing weekdays from weekends, since there is a chance of higher traffic on weekdays. These features are provided only to the SARIMAX model, as it is capable of handling exogenous attributes.
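A sketch of these calendar features and of passing them to SARIMAX as exogenous regressors; the helper names are ours, and the seasonal order shown is the one reported later in the paper:

    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    def add_calendar_features(df: pd.DataFrame) -> pd.DataFrame:
        out = df.copy()
        out["day_part"] = out.index.hour // 3          # eight 3-hour bins
        out["is_weekend"] = (out.index.dayofweek >= 5).astype(int)
        return out

    # One-hot encode the day parts before use as exogenous regressors.
    exog = pd.get_dummies(add_calendar_features(df)[["day_part", "is_weekend"]],
                          columns=["day_part"], drop_first=True).astype(int)

    model = SARIMAX(df["gbps"], exog=exog,
                    order=(13, 1, 16), seasonal_order=(1, 0, 1, 24))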
D. Rolling Prediction
Our dataset is divided into train and test sets for training and evaluating the prediction models, respectively. The prediction function takes the test-data indices as a parameter for immediate prediction, and we loop through the test-set entries to make inferences for all test instances. In rolling prediction, after each test instance is evaluated, the model is re-trained using the true observation from the test set. In contrast, the standard prediction technique estimates all predictions in the test set after training the model once. This rolling prediction technique exhibits improved performance for all prediction models (a minimal code sketch of the procedure is given after the model definitions below).

E. Time Series Forecasting Models

1) ARIMA: The ARIMA model was proposed by Box and Jenkins and is also known as the Box-Jenkins methodology. This model predicts the future value based on the past values of the time series, that is, its own lagged values and the lagged forecast white noises. The time series needs to be stationary before applying the ARIMA model, as it performs well when there is no correlation and dependency among the predictors [15]. A total of three parameters, the order of the AR term (p), the order of the MA term (q), and the number of differencings required to make the time series stationary (d), are needed to define the ARIMA(p, d, q) model. We can express the ARIMA model mathematically as follows [16]:

Φ_p(L) ∆^d y_t = θ_q(L) ε_t

• Here, y_t is the time series.
• p, d, and q are the orders of the AR, I, and MA components of the ARIMA model.
• ∆^d is the operator making y_t stationary (d-th order differencing).
• Φ_p(L) and θ_q(L) are the lag polynomials of order p and q, where L is the lag operator.
• ε_t is white noise.

2) SARIMA: SARIMA is a generalized version of the ARIMA model which can handle seasonality in the time series data. The SARIMA model requires four additional parameters to process the seasonal component in the series: the seasonal autoregressive order (P), the seasonal moving average order (Q), the seasonal difference (D), and the length of the seasonality period (S). As a result, six parameters are necessary to define the SARIMA model, where p, d, and q are the same as in the ARIMA model. We can express SARIMA using the following mathematical equation [17]:

Φ_p(L) Φ_P(L^S) ∆^d ∆_S^D y_t = θ_q(L) Θ_Q(L^S) ε_t

• Here, y_t is a time series with seasonality S.
• P, D, and Q have the same meaning as p, d, and q in the ARIMA model but apply to the seasonal lags.

3) SARIMAX: The SARIMAX model is capable of processing exogenous features of the time series. An exogenous attribute calculated at time t impacts the non-autoregressive time series value at time t. We can extend the SARIMA equation above into an equivalent SARIMAX equation as follows [18]:

Φ_p(L) Φ_P(L^S) ∆^d ∆_S^D y_t = Σ_{i=1}^{n} β_i x_t^i + θ_q(L) Θ_Q(L^S) ε_t

• Here, x_t^i is the i-th exogenous attribute at time t, and n is the total number of exogenous features.
• β_i is the coefficient for the variable x^i.

4) Holt-Winter: The Holt-Winters method, also known as Triple Exponential Smoothing, is one of the popular algorithms designed for time-series forecasting. The first studies of exponential smoothing date back to Siméon Poisson; in 1956, Robert Brown introduced its forecasting application. This forecasting model is defined by three smoothing equations, one for the level (l_t), one for the trend (b_t), and one for the seasonality (s_t), plus a forecasting equation, with smoothing parameters α, β, and γ, respectively. The Holt-Winters method has two variations, additive and multiplicative, which differ in how the seasonal component enters the model. We used the additive version of the Holt-Winter model for our experiment, and its equations can be defined as follows [19]:

l_t = α (y_t − s_{t−m}) + (1 − α)(l_{t−1} + b_{t−1})
b_t = β (l_t − l_{t−1}) + (1 − β) b_{t−1}
s_t = γ (y_t − l_{t−1} − b_{t−1}) + (1 − γ) s_{t−m}
ŷ_{t+h} = l_t + h b_t + s_{t+h−m(k+1)}, with k = ⌊(h − 1)/m⌋

• Here, m represents the seasonal period.
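As referenced above, the following is a minimal sketch of the rolling (walk-forward) procedure of subsection III-D, using the ARIMA implementation from the statsmodels library the authors cite; the order (13, 1, 16) and the eight-day test window match the paper's setup, while the function name and structure are our own illustration:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def rolling_forecast(series, order=(13, 1, 16), n_test=8 * 288):
        # Walk-forward one-step prediction: after each point is predicted,
        # the true observation is appended and the model is re-fitted.
        history = list(series[:-n_test])
        test = list(series[-n_test:])
        preds = []
        for obs in test:
            fit = ARIMA(history, order=order).fit()
            preds.append(float(fit.forecast(steps=1)[0]))
            history.append(obs)  # feed the validated observation back in
        return np.asarray(preds), np.asarray(test)

Re-fitting at every step is computationally heavy for an order this large; the standard (non-rolling) variant simply fits once on the training split and forecasts all n_test steps ahead.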
F. Evaluation Metrics
We used the Mean Absolute Percentage Error (MAPE) to estimate the performance of our traffic forecasting models. This performance metric quantifies the deviation of the predicted result from the original data: the MAPE represents the average percentage of fluctuation between the actual value and the predicted value. We can define our performance metric mathematically as follows:

MAPE = (100/n) Σ_{i=1}^{n} |o_i − p_i| / o_i

Here, p_i and o_i are the predicted and original values, respectively, and n is the total number of test instances.
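A direct implementation of this metric (our own helper, not code from the paper):

    import numpy as np

    def mape(original, predicted):
        # Mean absolute percentage error, in percent.
        o = np.asarray(original, dtype=float)
        p = np.asarray(predicted, dtype=float)
        return 100.0 * np.mean(np.abs(o - p) / np.abs(o))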
G. Software and Hardware Preliminaries
We have used Python and the statistical model library statsmodels [20] to conduct the experiments. Our computer has the configuration of an Intel(R) i3-8130U CPU @ 2.20 GHz, 8 GB memory, and a 64-bit Windows operating system. We considered 21 days of data for training our models and the last eight days for testing.

Before applying the prediction models, several time-series characteristics, such as stationarity, seasonality, trend, etc., need to be identified. For example, the ARIMA model performs better on stationary time-series data. We can check the stationarity of the time series using the Augmented Dickey-Fuller (ADF) test. According to the test results, the ADF statistic (-1.791022) is greater than the critical value (-3.440) and the p-value (0.38) is greater than 0.05, which confirms non-stationarity. To make the time series stationary, we took the first-order log difference of our data and again performed the ADF test. Since the p-value (0.00) is less than 0.05 and the ADF statistic (-19.49) is less than the critical value (-3.440), we concluded that the time series is stationary after taking the first-order difference. This experiment helped us set the difference order d to 1 in defining our ARIMA model.

Next, we decomposed our time series to identify the seasonality and trend in Fig. 1. There is a clear seasonality in our time series which repeats every 24 hours; that is, there is a daily seasonality in our dataset. After that, we plotted the ACF and PACF in Fig. 2 to figure out the hyper-parameters p and q for the ARIMA model. It is difficult to define the AR (p) and MA (q) orders from these plots, as there is a sinusoidal pattern and no clear indication of which lag is significant for our time series data. That is why a grid search technique has been applied to identify the best hyper-parameter combination of p, q, and d based on the minimum Akaike Information Criterion (AIC) value. We tested different p and q combinations from 0 to 24 with difference order 1 to find the best parameter set for our ARIMA model. The top ten combinations based on minimum AIC value are shown in Table I. Our grid search result indicates the order (13, 1, 16) is the best combination, with a minimum AIC of 3782.588307, for the ARIMA model. Similarly, the best model parameters we identified for the SARIMA model are (p, d, q) = (13, 1, 16) and (P, D, Q, S) = (1, 0, 1, 24). The best performing parameters for the SARIMAX model are also (p, d, q) = (13, 1, 16) and (P, D, Q, S) = (1, 0, 1, 24); we additionally provided the exogenous features extracted from our original traffic data, according to the feature extraction process discussed in subsection III-C, to the SARIMAX model. Finally, we fine-tuned three hyper-parameters for the Holt-Winter model: trend type, seasonality type, and seasonality period. The additive version of the Holt-Winter model is used in our experiment, since the seasonal variation is relatively constant over time.
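A compact sketch of the stationarity check and the AIC grid search described above; the variable name `train` (the 21-day training split) and the skipping of non-converging fits are our assumptions:

    import itertools
    from statsmodels.tsa.stattools import adfuller
    from statsmodels.tsa.arima.model import ARIMA

    # Augmented Dickey-Fuller test: p-value < 0.05 suggests stationarity.
    adf_stat, p_value = adfuller(train)[:2]

    # Grid search over (p, q) in 0..24 with d = 1, ranked by minimum AIC.
    scores = []
    for p, q in itertools.product(range(25), range(25)):
        try:
            fit = ARIMA(train, order=(p, 1, q)).fit()
            scores.append(((p, 1, q), fit.aic))
        except Exception:
            continue  # skip orders that fail to converge
    best_order, best_aic = min(scores, key=lambda s: s[1])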
We experimented with two different types of prediction methodology. The standard prediction method is mainly divided into two stages: train and test. The rolling prediction model instead used the validation data of the latest prediction to re-train the model after each inference. The results presented in Table II show improved performance for all models in rolling prediction compared with standard prediction: the MAPE decreased by more than 50% in rolling prediction for every model. Firstly, we applied the ARIMA model with the best parameter combination (p = 13, d = 1, and q = 16), and it resulted in an average percentage prediction error of 7.71%. Since ARIMA cannot handle the seasonality in the time-series data, we implemented another model, SARIMA, accepting seasonality information as a hyper-parameter. SARIMA provides better results, with a 7.34% average deviation from the original traffic in rolling prediction, reducing the prediction error by more than 4% compared to ARIMA. Next, we extracted some features from our original dataset according to the feature extraction process described in section III-C. The original traffic and the exogenous attributes are then trained using SARIMAX, which can handle extra features along with traffic seasonality. SARIMAX yields the best prediction result in our experiment, with the lowest MAPE of 6.83%, decreasing the error by more than 11% and 6% compared to ARIMA and SARIMA, respectively. Finally, we implemented the Holt-Winter prediction model to handle the variability in our traffic data by using the exponential smoothing technique. Holt-Winter shows a 7.59% average fluctuation between predicted and actual traffic using rolling prediction, which is 1.5% less error than ARIMA. Our best-performing model, SARIMAX, reduces the prediction error by more than 10% in comparison to Holt-Winter. In Table III, we depict the actual and predicted traffic for the last eight days using both standard prediction and rolling prediction. The comparison between actual and predicted traffic shows a significantly better fit using the rolling prediction technique.
V. CONCLUSION
In this research, we experimented with several internet traffic prediction models. We tried to improve the performance of state-of-the-art prediction models to learn the general trend in real IP traffic. Also, a comparative performance analysis among several conventional statistical models has been conducted for traffic prediction. Our experimental results show a significant improvement in traffic prediction when we feed our models with the validation data after each prediction. We considered four different prediction models, i.e., ARIMA, SARIMA, SARIMAX, and Holt-Winter, all showing better accuracy using the rolling prediction technique than standard prediction. As part of our future work, we would like to model the residual variation using Autoregressive Conditional Heteroskedasticity (ARCH) and Generalized AutoRegressive Conditional Heteroskedasticity (GARCH). Also, we plan to extend our work from single-step traffic prediction to multi-step prediction.
"year": 2022,
"sha1": "0a119d28e23fa98d4866f40f0e76e078d1baeab6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0a119d28e23fa98d4866f40f0e76e078d1baeab6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
National survey on the treatment of cholelitiasis in Spain during the initial period of the COVID-19 pandemic
Introduction
On March 11, 2020, the World Health Organization (WHO) declared that the severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) epidemic had reached pandemic status. 1 In Spain, since the first case was registered on January 31, 2020, 2 the virus has spread rapidly, with a prevalence of 5% according to the ENE-Covid 3 national seroprevalence study, reaching 23 521 deaths on April 27, 2020 4 and a fatality rate of 8.5%. 5 However, this figure was probably overestimated due to the large number of undiagnosed infected persons.
The reorganization of human and material resources to guarantee medical care for patients with COVID-19 has directly affected the surgical activity of Spanish hospitals.
Cholelithiasis is a very prevalent disease that affects 20% of the population in developed countries. 6,7 It is the leading cause of hospital admission in Europe for digestive disorders, 8 with fairly standardized international treatment recommendations. [6][7][8][9] In order to determine the impact of the COVID-19 pandemic on the management of symptomatic cholelithiasis and acute cholecystitis, a survey was created and sent to Spanish surgeons.
Methods
Ours is a descriptive study of data collected from a survey answered by Spanish surgeons about the treatment of symptomatic cholelithiasis and acute cholecystitis during the first month of the COVID-19 pandemic in Spain.
On April 14, 2020, the AEC and the Spanish Chapter of the IHPBA (CE-IHPBA) sent by email a voluntary online survey, created in Google Drive TM (https://forms.gle/ 2iHgGbhYzL2vaDVH6), to all their members at Spanish hospitals. Surgeons were requested to complete only one survey per medical center (Appendix B in Additional material), and the questionnaire was re-sent 7 days later (available for 10 days).
The completed surveys were evaluated manually to exclude surveys with multiple entries from the same individual, responses from foreign hospitals, or responses from members of the same hospital, giving priority to the first response received in that case.
The data from the surveys were compared using the McNemar test and the post-hoc test. Categorical variables were reported as numbers and percentages. Differences were considered statistically significant when the P value was <.05. For the statistical analysis, the SPSS program (version 22; Chicago, IL, USA) was used.
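The comparisons were run in SPSS; for readers reproducing them with open-source tooling, an equivalent McNemar test on a paired 2x2 table could look like the following sketch (the counts shown are placeholders, not data from this survey):

    from statsmodels.stats.contingency_tables import mcnemar

    # Paired yes/no answers, e.g., a practice kept before vs. during the
    # pandemic; placeholder counts for illustration only.
    table = [[30, 45],
             [12, 66]]
    result = mcnemar(table, exact=True)  # exact binomial version
    print(result.statistic, result.pvalue)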
Results
After excluding surveys sent from foreign medical centers (3) or from the same hospital (12), a total of 153 surveys were analyzed. Fig. 1 shows the distribution of responses by autonomous community. The characteristics of the surgeons who completed the survey are presented in Table 1. Table 2 demonstrates the scenario of the centers consulted according to the classification proposed by the Surgery-AEC-COVID-19 Working Group. 10
The usual pre-pandemic surgical practice of the surveyed hospitals is indicated in Table 2. During the pandemic, 96.7% of the hospitals had suspended elective cholecystectomies.
In the management of acute cholecystitis, only 29.4% of those surveyed admitted maintaining the same indications for urgent surgery as before the onset of the health crisis (Table 2).
When a cholecystostomy was indicated, 51% of survey participants believed that the waiting time for this procedure did not increase during the pandemic, and 8% of hospitals had not detected a decrease in the number of urgent consultations for acute cholecystitis (Table 2).
The laparoscopic approach in acute cholecystitis was preferred by 99% of hospitals, and during the pandemic stage this percentage was 95%. Some 27.5% of survey participants were of the opinion that the risk of contamination of healthcare personnel is greater during laparoscopy. The use of personal protective equipment (PPE) was limited to cases with suspected COVID-19 in 82.4% (Fig. 2).
57% of the surgeons reported having had cases of postoperative confirmation of COVID-19, and 54% of these had presented a more complicated postoperative evolution ( Fig. 3A and B).
Discussion
Tables 1 and 2 show that more than half of the responses came from medical centers with a significant patient volume, performing more than 20 cholecystectomies per month, and the majority (41.82%) have been in a high state of alert.
The impact of the health crisis on surgical services resulted in the cancellation of elective cholecystectomies in 97.6% of the hospitals. This decision was not innocuous, since the annual risk of developing complications in symptomatic cholelithiasis has been estimated at 1%-3%. 8 In the next phase of recovery from the pandemic, the national healthcare system will have to design an adequate strategy to perform a high number of cholecystectomies in the shortest possible time. The future recovery of ordinary surgical activity is a challenge where surgeons will have to face longer waiting lists, more complications derived from the delayed surgery, and the risk of perioperative infection by SARS-CoV-2.
Major outpatient surgery is a safe alternative for elective cholecystectomy 11,12 in appropriately selected cases, as it reduces the patients' exposure to in-hospital infection and helps respond to the demand for hospital beds during the pandemic. However, only 37.9% of hospitals have experience in this strategy (Table 1), and it must be implemented in the de-escalation phase. Other initiatives, such as telephone or video consultations 13 and the use of absorbable skin sutures, could help reduce the number of in-person visits. A significant drop has been observed in consultations for acute cholecystitis (Table 2). This is in line with recent publications that describe fewer surgical emergencies, but more advanced disease. [14][15][16] The confinement of the population, the general instructions to go to the hospital only in strictly necessary cases and the fear of intra-hospital infection could explain these facts.
It is controversial whether the pandemic situation should change the surgical indication for acute cholecystitis. There is a general consensus in most of the guidelines [17][18][19][20][21][22] to adopt conservative treatment in suspected or COVID-19-positive patients, for fear that surgery will aggravate the patient's respiratory condition 23 and to minimize the risks of infection of a highly transmissible viral disease.
According to the results of our survey, 57% of hospitals have had cases of postoperative SARS-CoV-2 infection, with an unfavorable postoperative evolution in 54% of the cases (Fig. 3A and B). This experience coincides with other publications [23][24][25] that document greater postoperative complications that could be attributable to this infection. We do not know what complications have developed, and this is a limitation of our study, but it will be the subject of future research.

The increased patient load caused by the pandemic and the limited availability of diagnostic tests mean that in many centers this medical treatment strategy has been transferred to the general population, especially in grade I and II cholecystitis, usually surgical, 9 where conservative treatment rose from 18% to 90% during the pandemic (Table 3). According to the literature, 6,26,27 it is a therapeutic alternative with success rates of 86%, but at the expense of a 22% recurrence of symptoms and a higher percentage of open cholecystectomies in the subsequent hospitalization. The American College of Surgeons advocates urgent cholecystectomy for patients with low surgical risk to minimize hospital stay during the pandemic. 28 In addition, the hospitalization of patients with conservative treatment recorded in our survey may be even longer. There were delays for cholecystostomies observed at 35% of the hospitals (Table 2), probably due to the overload of radiodiagnostic services and sick leaves among healthcare staff. Hence, this underlines the importance of surgeons having resources and training to perform percutaneous cholecystostomies.
Therefore, the therapeutic strategy for acute cholecystitis in the epidemiological situation in which we find ourselves must be evaluated individually, weighing the benefit of surgery against any existing alternatives, while contemplating COVID-19 status, patient surgical risk, and the resources available at each hospital. 18,29

Initially, the fear of aerosolization that could occur with the use of pneumoperitoneum led the Association of Surgeons of Great Britain and Ireland (ASGBI) to advise against the use of laparoscopy during the pandemic, but this was later rectified. 21 Although the presence of viruses (such as hepatitis B) has been documented in the pneumoperitoneum, 30 there is no current evidence of the transmission of SARS-CoV-2 during laparoscopy, 20,31 and it is ethically questionable to deny patients the demonstrated advantages of the laparoscopic approach in acute cholecystitis. 20,31 In line with the AEC document, 17 95% of surveyed participants initially maintained use of the laparoscopic approach during the pandemic (Table 3). However, 27.45% of medical centers believed that the risk of contamination of staff by SARS-CoV-2 was greater by laparoscopy (Fig. 3B). This fear may be unfounded, and there may even be a lower risk of transmission by laparoscopy, given the lower use of sharp instruments and less exposure to body fluids. Therefore, the choice of surgical approach must be made on an individual basis.
It is imperative to adopt a series of precautions to maximize the protection of the surgical team, as recommended by the AEC, SAGES, EAES and other scientific societies. 17 The use of a filtration system for laparoscopic CO2 evacuation is a widespread practice in Spanish hospitals, with the exception of 27.72% of medical centers. Moreover, as shown by the results of this survey, most hospitals (59.84%) are using systems they have designed themselves, using disinfectant liquids (sodium hypochlorite), filters connected to suction systems or to the water seal (Pleur-evac®), 33 which may be due to the lack of adequate filtration material in this first phase of the pandemic. Currently, there is no air evacuation filter system that has been validated against the coronavirus, but this pandemic most likely demonstrates the need for its future development.
Other strategies to reduce the exposure of the surgical team to infection and the surgical risk of the patient focus on minimizing the medical staff required in the operating room and on having surgical procedures performed by the surgeons with the greatest experience. 17,18,20 The purpose of this proposal is to reduce surgical time and, potentially, the risk of postoperative complications, but it has worked to the detriment of the training program for general surgery residents, who actively participate in urgent surgeries, and particularly in cholecystectomies. Most national (AEC) and international (ACS) surgical societies advise against the intervention being performed by surgeons-in-training during the pandemic. 18 However, this suggestion has only been put into practice in 52.94% of the hospitals surveyed.
In Spain, 20.2% of reported COVID-19 cases have been healthcare personnel, 5 and 26 deaths have been documented, 5 including Spanish surgeons. The 23 116 registered cases 5 constitute the highest number of infections among healthcare workers reported in Europe and are probably related to the insufficient availability of adequate PPE, 34 the lack of systematic screening of asymptomatic carriers during the onset of the pandemic, and the initial absence of separation of healthcare circuits at many hospitals, including the lack of an independent operating room for patients with COVID-19 in 24.8% of those surveyed (Table 2). Currently, surgical societies recommend the use of complete PPE in surgical interventions only if there is clinical suspicion or confirmation of SARS-CoV-2 infection, 17,18,20,22 and 82.4% follow these recommendations (Fig. 2).
However, in the current context and with the available diagnostic tests (still in the evaluation phase due to their low sensitivity), it is difficult to safely determine whether a patient is an asymptomatic carrier of the disease. For this reason, in this initial phase, we suggested the universal adoption of PPE and diagnostic tests in all urgent surgeries.
Regarding this latter problem, our survey reveals that 16.4% of hospitals do not perform any diagnostic tests before proceeding with an urgent cholecystectomy (Table 2). These data reflect the heterogeneity of available resources and the geographic variability of prevalence in Spain. 3 Routine PCR screening for SARS-CoV-2 RNA, which is performed most frequently (60% of respondents; associated with chest X-ray in 20%), usually entails a delay of 6-8 h before performing the surgery. 20% of the hospitals exclusively use chest X-rays prior to the operation, which could reflect the scarcity of screening tests. 7.8% of hospitals use preoperative chest CT scans, although mostly (90%) as an extension of an abdominal CT scan and not as a specific study (Table 2). Radiological studies are more cost-effective in symptomatic patients, and can occasionally detect disease in paucisymptomatic patients, but their sensitivity for screening has not been established. The combination of diagnostic methods does not manage to solve the difficulty of diagnosing the infection in the incubation phase and in the first days of the clinical symptoms, which is where the highest number of false-negative PCR tests and CT scans accumulate. 18,35,36 Another controversial aspect is the screening for COVID-19 among surgical services staff. The current protocol of the Ministry of Health reserves the screening test exclusively for health professionals with respiratory symptoms. 37 Healthcare workers who have been in close contact with a case are actively monitored, while still maintaining their professional activity. Adherence to this protocol may explain why surgeons are not tested in 94.1% of the hospitals (Table 2). This strategy, and the shortage of effective protection material, has probably contributed to our country becoming the international leader in the number of infected healthcare workers.
Surgeons who are asymptomatic carriers must be identified because they can be a source of infection. Periodic screening of surgeons should be implemented in the de-escalation phase for the safety of patients and of the medical professionals themselves. 18,38 This study reports information obtained exclusively from a survey and should be interpreted within the context of the limited evidence provided by this type of study. However, in the absence of scientific evidence during this first phase of the pandemic, this study provides relevant information on the patient care provided and on the application of the advice of surgical societies in patients with biliary pathology.
In conclusion, the results of our study are testimony to the elevated patient care load and strain felt in Spanish hospitals due to COVID-19. The initial phase of the pandemic has had a very significant impact, causing the suspension of elective cholecystectomies and modifying the treatment of acute cholecystitis.
The results of our survey may facilitate the development of protocols for the treatment of biliary pathology in the de-escalation phase of the pandemic. | 2020-07-19T13:05:11.114Z | 2020-07-19T00:00:00.000 | {
"year": 2021,
"sha1": "10540243fe6abeaec4cb28e79f9b6c6847380533",
"oa_license": "unspecified-oa",
"oa_url": "https://europepmc.org/articles/pmc8088215?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "376f36b76a3cdacb95c64ef17cb599bba463e95e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15656535 | pes2o/s2orc | v3-fos-license | Hyperbolic attractor in a system of coupled non-autonomous van der Pol oscillators: Numerical test for expanding and contracting cones
We present numerical verification of the hyperbolic nature of the chaotic attractor in a system of two coupled non-autonomous van der Pol oscillators (Kuznetsov, Phys. Rev. Lett., 95, 144101, 2005). At certain parameter values, in the four-dimensional phase space of the Poincaré map a toroidal domain (a direct product of a circle and a three-dimensional ball) is determined, which is mapped into itself and contains the attractor we analyze. In accordance with the computations, in this absorbing domain the conditions of hyperbolicity are valid, formulated in terms of contracting and expanding cones in the tangent spaces (the vector spaces of small state perturbations).
An orbit in the phase space of a dynamical system is called hyperbolic if there are trajectories approaching the original orbit exponentially, and others departing from it in a similar manner. Moreover, an arbitrary small perturbation of a state on the original orbit must admit a representation as a linear combination of the growing and decaying perturbations.
In dissipative systems, which contract phase-space volume, attractors may occur that consist exclusively of hyperbolic orbits. These are attractors with strong chaotic properties, like the existence of a well-defined invariant SRB measure, the possibility of description in terms of Markov partitions and symbolic dynamics, positive metric and topological entropy, etc. Such hyperbolic (or, more definitely, uniformly hyperbolic) attractors are robust, or structurally stable, which means that the type of dynamics and the structure of the phase space are insensitive to slight variations of functions and parameters in the evolution equations.
Although the basic statements of the hyperbolic theory were formulated 40 years ago, no convincing examples of physical systems with uniformly hyperbolic attractors have been introduced. In textbooks and reviews on nonlinear dynamics, such attractors are represented by artificial mathematical constructions, like the Plykin attractor and the Smale-Williams solenoid [1-8]. For realistic systems in which the chaotic dynamics is mathematically proved, like the Lorenz model [9,10], the strange attractors do not belong to the class of uniformly hyperbolic ones (not all axiomatic statements of the classic hyperbolic theory are valid for them). Some aspects of the possible existence of hyperbolic attractors in differential equations were discussed, e.g., in Refs. [11-14].
In a recent paper by one of the authors [15], an idea was advanced for the implementation of a hyperbolic attractor in a system of two coupled non-autonomous van der Pol oscillators. In the Poincaré map that determines the evolution over a period of the external driving, a chaotic attractor has been found which demonstrates some characteristic signs of hyperbolic attractors. By the nature of the transformation of the phase-space volume in the course of the evolution over a period, it is similar to the Smale-Williams solenoid. It looks robust: the Cantor-like transverse structure and the positive Lyapunov exponent are insensitive to variation of the parameters in the equations. An analogous system has been built as an electronic device and studied in experiment [16].
Obviously, it would be desirable to have a mathematical confirmation of the hyperbolic nature of the attractor. As Sinai suggested [1], one possible way to substantiate the hyperbolicity of an attractor of a Poincaré map consists in numerical verification of certain sufficient conditions formulated in terms of inclusions for expanding and contracting cones in the tangent vector space (the space of small perturbation vectors). In this paper, we discuss a method and present results of computer verification of these conditions applied to the chaotic attractor in a system of two coupled non-autonomous van der Pol oscillators.
The system proposed in Ref. [15] is represented by a set of differential equations

ẍ − [A cos(2πt/T) − x²] ẋ + ω₀² x = ε y cos ω₀t,
ÿ − [−A cos(2πt/T) − y²] ẏ + 4ω₀² y = ε x². (1)

It consists of two subsystems, the van der Pol oscillators with characteristic frequencies ω₀ and 2ω₀. Here x and u = ẋ represent the coordinate and velocity of the first oscillator, and y and v = ẏ those of the second one. In each oscillator, the parameter responsible for the birth of the limit cycle is forced to swing slowly with period T and amplitude A. As the parameter modulation is in opposite phase, the subsystems generate turn by turn, each on its own half of the period T. The coupling is characterized by the parameter ε. The first oscillator affects the second one via the quadratic term in the equation. The backward coupling is introduced by the product of the variable y and an auxiliary signal of frequency ω₀. It is assumed that the interval T contains an integer number of periods of the auxiliary signal, N₀ = ω₀T/2π, so the external driving is periodic. For a detailed study, we select a particular set of parameter values, referred to below as (2).

Qualitatively, the system (1) operates as follows. Let the first oscillator, on a stage of generation, have some phase ψ: x ∝ sin(ω₀t + ψ). The squared value x² contains the second harmonic cos(2ω₀t + 2ψ), and its phase is 2ψ. As the half-period comes to an end, the term x² acts as a priming for the excitation of the second oscillator, and the oscillations of y acquire the phase 2ψ. Half a period later, the mixture of these oscillations with the auxiliary signal stimulates excitation of the first oscillator, which accepts this phase 2ψ. Obviously, on subsequent periods the phase of the first oscillator will follow approximately the relation

ψₙ₊₁ = 2ψₙ + const (mod 2π). (3)

(Here the constant accounts for a phase shift in the course of the transfer of the excitation from one oscillator to the other and back.) The relation (3), called the Bernoulli map, is well known as one of the simplest model examples in chaos theory.

For an accurate description of the discrete-time dynamics, we turn to the Poincaré map [2-8, 17, 18]. Let a vector xₙ = {x(tₙ), u(tₙ), y(tₙ), v(tₙ)} be the state of the system at tₙ = nT. From the solution of the differential equations (1) with the initial condition xₙ, we get a new vector xₙ₊₁ at tₙ₊₁ = (n+1)T. As it is determined uniquely by xₙ, we introduce a function that maps the four-dimensional space {x, u, y, v} into itself:

xₙ₊₁ = T(xₙ).

This Poincaré map appears due to evolution determined by differential equations with smooth and bounded right-hand sides in a finite domain of the variables {x, u, y, v}. In accordance with the theorems of existence, uniqueness, continuity, and differentiability of solutions of differential equations, the map T is a diffeomorphism, a one-to-one differentiable map of class C∞ [17].
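As a minimal numerical illustration of the phase-doubling mechanism described above, the sketch below integrates equations (1) over successive driving periods and compares the oscillator phase across iterations of the Poincaré map. The parameter values (w0, T, A, eps) and the crude phase estimate are illustrative assumptions, not the paper's actual set (2).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (assumed; the paper's set (2) may differ).
# T is chosen so that it holds an integer number N0 of auxiliary-signal periods.
w0, T, A, eps = 2.0 * np.pi, 10.0, 5.0, 0.5

def rhs(t, s):
    """Right-hand side of Eqs. (1) in first-order form (u = dx/dt, v = dy/dt)."""
    x, u, y, v = s
    du = (A * np.cos(2 * np.pi * t / T) - x**2) * u - w0**2 * x + eps * y * np.cos(w0 * t)
    dv = (-A * np.cos(2 * np.pi * t / T) - y**2) * v - 4 * w0**2 * y + eps * x**2
    return [u, du, v, dv]

def poincare_map(s, n):
    """One iteration of T: integrate (1) over the n-th period of the driving."""
    sol = solve_ivp(rhs, (n * T, (n + 1) * T), s, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

s = np.array([0.1, 0.0, 0.1, 0.0])
phases = []
for n in range(30):
    phases.append(np.arctan2(s[0], s[1] / w0))  # crude phase of the first oscillator
    s = poincare_map(s, n)

# If the Bernoulli map (3) holds, psi_(n+1) - 2*psi_n (mod 2*pi) settles near a
# constant once transients have decayed.
for a, b in zip(phases[15:-1], phases[16:]):
    print(f"{(b - 2 * a) % (2 * np.pi):.3f}")
```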
From here on, we always describe the dynamics in terms of the Poincaré map. In particular, by the phase space we mean the four-dimensional space {x, u, y, v}, with x, u, y, v taken at an instant tₙ. An orbit means a discrete sequence of points in this space; an attractor is an invariant attractive set composed of such orbits, and so on.
In the course of iterations of the map xₙ₊₁ = T(xₙ), a small phase-space volume expands in the direction associated with the phase in the approximate equation (3) and contracts in the remaining three directions. Interpreting the mapping geometrically, let us imagine a solid toroid embedded in the 4-dimensional space (a direct product of a circle and a three-dimensional ball) and associate one iteration of the map with a longitudinal stretch of the toroid, contraction in the transversal directions, and insertion of the doubly folded "tube" into the original toroid. It is analogous to the construction of Smale and Williams, with the only difference that we deal with a four-dimensional rather than a three-dimensional phase space.
The mentioned toroid will be referred to as an absorbing domain U; this means that under application of the map T the images of all points from U belong to its interior, T(U) ⊂ Int U. To write down an analytic expression for the domain U, it is convenient to redefine the coordinate system. We introduce new variables {x₀, x₁, x₂, x₃} as linear combinations of the original ones; these relations are referred to below as (4). To determine the coefficients, we accumulate a large number of points {x, u, y, v} on the attractor in the Poincaré section by numerical solution of the equations (1). Then, by the least-squares method, we find the coefficients that minimize the mean-square cross-correlations of the new coordinates. Geometrically, this corresponds to directing the coordinate axes along the principal axes of the ellipsoid approximating the attractor. Additionally, we normalize x₀ and x₁ by appropriate factors to have ⟨x₀²⟩ = ⟨x₁²⟩ ≈ 1/2. The numerical values of the coefficients obtained at the parameter set (2) constitute the relations (5).

In the new coordinates, let us define the absorbing domain U by the inequality

(√(x₀² + x₁²) − r)²/d_r² + (x₂² + x₃²)/d² ≤ 1. (6)

The empirically selected constants in this expression are r = 0.94, d_r = 0.4, d = 0.15. Figure 1 gives evidence that this is indeed an absorbing domain: for initial points distributed over the border of U, we perform numerical solution of the differential equations on an interval T and plot the results in the coordinates

R₁ = (√(x₀² + x₁²) − r)/d_r, R₂ = √(x₂² + x₃²)/d. (7)

As the whole figure is placed inside the unit circle R₁² + R₂² = 1, the images of the initial points belong to the interior of U.
In Fig. 2 we show a three-dimensional projection to illustrate the mutual location of the domains U and T(U). It is analogous to the first step of the construction of the Smale-Williams attractor: take a torus ("a plastic doughnut"), stretch it to double length, contract it transversally, fold it in two, and squeeze it into the original volume. The transformed "doughnut" T(U) looks like a narrow band because of the very strong compression of the phase volume in the respective directions in the course of the evolution.
We will verify the hyperbolicity conditions required by a theorem (see e.g. [1,13]) adapted to the problem under consideration. Unlike the general formulation, it is sufficient for us to deal with a diffeomorphism of class C∞ in the Euclidean space R⁴, namely the Poincaré map T(x). In the linear approximation, the evolution of a perturbed state x + δx corresponds to the transformation of the perturbation vector δx′ = DT_x δx, where DT_x is the derivative (Jacobi) matrix at the point x; analogously, DT⁻¹_x designates the derivative matrix for the inverse mapping T⁻¹(x).

Theorem [1,13]. Suppose that a diffeomorphism T of class C∞ maps a bounded domain U ⊂ R⁴ into itself, T(U) ⊂ Int U, and A ⊂ Int U is an invariant subset for the diffeomorphism. The set A will be uniformly hyperbolic if there exists a constant γ > 1 and the following conditions hold:
1. For each x ∈ A, in the space V_x of 4D vectors δx, expanding and contracting cones S^γ_x and C^γ_x may be defined such that ‖DT_x u‖ > γ‖u‖ for all u ∈ S^γ_x and ‖DT⁻¹_x u‖ > γ‖u‖ for all u ∈ C^γ_x;
2. The invariance inclusions DT_x(S^γ_x) ⊂ S^γ_{T(x)} and DT⁻¹_x(C^γ_x) ⊂ C^γ_{T⁻¹(x)} hold.

If the formulated conditions are valid in a whole absorbing domain containing the attractor, say T^n(U), they are obviously true for the attractor A = ∩_{n=1}^∞ T^n(U).
Let us consider in some detail the procedure of computer verification of these conditions. Given a point x = {x₀, x₁, x₂, x₃} ∈ U, we perform numerical solution of the differential equations on the interval t ∈ [0, T], with the initial state corresponding to x via the coordinate relations (4), (5), and get the image

x′ = T(x). (9)

In parallel, we solve numerically the linearized equations for the vectors of small perturbations over the same period; in the original variables these are the variational equations (10), and the passage to the redefined coordinates and back is done with the relations (4). The equations (10) are solved along the orbit started at x four times, each time with an initial vector u = {δx_i} in which unity is placed in one of the rows from 0 to 3 and the other elements are zero. From the four resulting vector-columns we compose the matrix U = DT_x.
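The paper builds U = DT_x by integrating the linearized equations (10) four times alongside the orbit. As a simpler stand-in with the same output, the hedged sketch below approximates the Jacobi matrix of the Poincaré map by central finite differences; the model and parameter values are the assumptions of the earlier sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

w0, T, A, eps = 2.0 * np.pi, 10.0, 5.0, 0.5  # assumed values, as before

def rhs(t, s):
    x, u, y, v = s
    du = (A * np.cos(2 * np.pi * t / T) - x**2) * u - w0**2 * x + eps * y * np.cos(w0 * t)
    dv = (-A * np.cos(2 * np.pi * t / T) - y**2) * v - 4 * w0**2 * y + eps * x**2
    return [u, du, v, dv]

def poincare_map(s):
    # The driving is T-periodic, so integrating over [0, T] suffices here.
    return solve_ivp(rhs, (0.0, T), s, rtol=1e-11, atol=1e-13).y[:, -1]

def jacobian(s, h=1e-6):
    """Approximate U = DT_x column by column with central finite differences
    (a stand-in for integrating the variational equations (10))."""
    J = np.empty((4, 4))
    for i in range(4):
        e = np.zeros(4)
        e[i] = h
        J[:, i] = (poincare_map(s + e) - poincare_map(s - e)) / (2 * h)
    return J
```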
Starting at x, an initial perturbation vector u after one iteration of the Poincaré map yields u′ = Uu. The squared Euclidean norm of this vector is ‖u′‖² = uᵀUᵀUu, where ᵀ denotes transposition. Using the inverse matrix U⁻¹ we can write as well u = U⁻¹u′ and ‖u‖² = u′ᵀU⁻¹ᵀU⁻¹u′. The condition that u′ represents the image of a vector belonging to the expanding cone S^γ_x is given by the inequality ‖u′‖ > γ‖u‖. Let the eigenvalues Λ²₀ ≥ Λ²₁ ≥ Λ²₂ ≥ Λ²₃ of the symmetric matrix UᵀU be enumerated in decreasing order. As we have one expanding and three contracting directions, Λ²₀ > 1 and Λ²₁,₂,₃ < 1. Now, we suppose that γ is selected in such a way that Λ²₀ > γ² and Λ²₁,₂,₃ < γ². (This property is checked naturally in the course of the computations at each analyzed point of the absorbing domain: its violation would entail the incorrect operation of taking a square root of a negative number. The inequalities for the eigenvalues of the matrix UᵀₓUₓ ensure fulfillment of the condition that the sum of the subsets of the linear vector space, that is, the set of all possible linear combinations of vectors from the expanding and contracting cones, is the full 4D vector space.) Then, in the diagonal representation of the matrix UᵀU − γ²I, we have one positive and three negative elements on the diagonal; by an additional scale change along the coordinate axes it is reduced to the canonical form H′ = diag{1, −1, −1, −1}. A vector c = {1, c₁, c₂, c₃} belongs to the expanding cone S^γ_{T(x)} if cᵀH′c > 0, i.e. c₁² + c₂² + c₃² < 1. In the 3D space {c₁, c₂, c₃} this corresponds to the interior of the unit ball.
With the same transformations, the matrix U⁻¹ᵀU⁻¹ − γ⁻²I takes a form H = {h_ij}. (Note that it is symmetric: h_ij = h_ji.) A vector c = {1, c₁, c₂, c₃} represents the image of a vector belonging to the expanding cone if cᵀHc < 0; in the space {c₁, c₂, c₃} this corresponds to the interior of some ellipsoid. The inclusion DT_x(S^γ_x) ⊂ S^γ_{T(x)} means that the ellipsoid has to be placed inside the unit ball. To formulate a sufficient condition for this, we determine the center of the ellipsoid, c̄ = (c̄₁, c̄₂, c̄₃), from the equations ∑_{j=1}^{3} h_ij c̄_j = −h_i0, i = 1, 2, 3, and estimate the distance of this point from the center of the ball: ρ = √(c̄₁² + c̄₂² + c̄₃²). With a transfer of the origin to the center of the ellipsoid, the equation for its surface becomes ∑_{i,j=1}^{3} h_ij c̃_i c̃_j = C, where c̃_i = c_i − c̄_i and C is a constant determined by the value of the quadratic form at the center. Now, we consider the symmetric 3 × 3 matrix h = {h_ij}, i, j = 1, 2, 3. In the diagonal representation of this matrix, under an appropriate orthogonal coordinate transformation (c̃₁, c̃₂, c̃₃) → (ξ₁, ξ₂, ξ₃), the equation of the ellipsoid surface becomes l₁ξ₁² + l₂ξ₂² + l₃ξ₃² = C, where l₁, l₂, l₃ are the eigenvalues of h. The largest semiaxis of this ellipsoid is expressed via the minimal eigenvalue: r_max = √(C/l_min). A sufficient condition for the ellipsoid to be positioned inside the ball is given by the inequality r_max + ρ < 1. (25)
This completes the procedure of verification of the inclusion for the expanding cones at the point x.
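A schematic numpy version of this verification procedure is sketched below, under the assumption that the Jacobi matrices U1 = DT at x and U2 = DT at T(x) are available (e.g. from the previous sketch). It follows the reduction described above: bring the quadratic form of the cone at T(x) to the canonical form diag{1, -1, -1, -1}, express the image cone as an ellipsoid in the same coordinates, and return the margin r_max + rho, which must be below 1.

```python
import numpy as np

def cone_inclusion_margin(U1, U2, gamma):
    """Check DT_x(S_x) inside S_T(x) via the ellipsoid-in-ball criterion.

    U1 = DT at x, U2 = DT at T(x). Returns r_max + rho (sufficient if < 1).
    """
    A = U2.T @ U2 - gamma**2 * np.eye(4)      # quadratic form of the cone at T(x)
    lam, V = np.linalg.eigh(A)
    order = np.argsort(lam)[::-1]             # one positive, three negative expected
    lam, V = lam[order], V[:, order]
    assert lam[0] > 0 and np.all(lam[1:] < 0), "gamma outside admissible range"
    W = V / np.sqrt(np.abs(lam))              # W.T @ A @ W = diag(1, -1, -1, -1)

    U1inv = np.linalg.inv(U1)
    B = U1inv.T @ U1inv - gamma**-2 * np.eye(4)
    H = W.T @ B @ W                           # image cone: c.T @ H @ c < 0

    h, h0, h00 = H[1:, 1:], H[0, 1:], H[0, 0]
    center = -np.linalg.solve(h, h0)          # center of the ellipsoid
    rho = np.linalg.norm(center)
    kappa = h00 + h0 @ center                 # value of the form at the center
    l = np.linalg.eigvalsh(h)
    assert l.min() > 0 and kappa < 0, "image cone is not a bounded ellipsoid"
    r_max = np.sqrt(-kappa / l.min())         # largest semiaxis of the ellipsoid
    return r_max + rho
```

A returned value below 1 at every sampled point of the absorbing domain would reproduce the inclusion test described in the text.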
It may be shown that, with γ < 1, the application of the above procedure in U is equivalent to verification of the condition for the contracting cones with the parameter γ′ = 1/γ > 1 in the domain x ∈ T²(U): DT⁻¹_x(C^{γ′}_x) ⊂ C^{γ′}_{T⁻¹(x)}. It is so because the cones S^γ and C^{1/γ} are complementary sets: S^γ ∪ C^{1/γ} = V. (Here S^γ with γ < 1 corresponds to the cone of vectors which either expand, or contract, but no stronger than by the factor γ.) Hence, fulfillment of the inequality (25), checked inside U for the two parameters γ and 1/γ, implies that both conditions, for the expanding and for the contracting cones, are valid in the domain T²(U), which contains the attractor. This is sufficient to draw a conclusion on the hyperbolic nature of the attractor.
The computer verification of the required inclusions for the expanding and contracting cones was performed at the parameter values (2) in the coordinate system (4), (5). Computations of the Poincaré map and of the Jacobi matrices were carried out by means of joint numerical solution of the differential equations (1) together with the linearized equations (10) on the time interval T. We used the Runge-Kutta method of the 8th order based on the formulas of Dormand and Prince with automatic step selection (the accuracy per step was set to 10⁻¹¹) and an extrapolation method (accuracy per step 10⁻¹⁵) [19]. For the solution of sets of linear algebraic equations, matrix diagonalization, and eigenvalue problems, we used subroutines from the LAPACK library [20].
In accordance with our computations, at γ² = 1.1 the sufficient condition (25) of correct inclusion for the expanding cones, DT_x(S^γ_x) ⊂ S^γ_{T(x)}, is valid in the whole absorbing domain U. To discuss the details, let us consider the 3D hypersurface defined by the equation

(√(x₀² + x₁²) − r)²/d_r² + (x₂² + x₃²)/d² = R². (26)

At R = 1 it corresponds to the border of the domain U; at R < 1 it belongs to its interior. We can parametrize this hypersurface by three angle coordinates φ, ψ, and θ:

x₀ = (R d_r cos θ + r) sin ψ, x₁ = (R d_r cos θ + r) cos ψ, x₂ = R d sin θ cos φ, x₃ = R d sin θ sin φ. (27)

The variable ψ may be regarded as the phase of the first oscillator at the Poincaré cross-section, and φ as the phase of the second oscillator at the same instant. Numerical computations on a 3D grid with step 2π/M at M = 50 show that the value r_max + ρ = f(R, φ, ψ, θ) at fixed R depends essentially on ψ and θ, while the dependence on φ is very weak. On the plot of the function f, one global maximum can be seen, whose value varies with φ and R. At R = 1 and some φ the maximum reaches f_max ≈ 0.929441 (corresponding to a point M on the border of the domain U with coordinates x₀ = −0.102628, x₁ = −0.544957, x₂ = 0.000581, x₃ = 0.040066), but remains definitely less than 1; see Fig. 3. (For the search of the maximum in the space of the three variables at fixed R, we used the Newton method.) Panel (b) illustrates the mutual disposition of the cones DT_x(S^γ_x) and S^γ_{T(x)} at the point M. The plot shows a 3D cross-section of the 4D vector space V_{T(x)} by a hyperplane orthogonal to the expanding direction. The coordinate axes are the principal semiaxes of the ellipsoid representing the cross-section of the cone S^γ_{T(x)}; due to the scale selection along the axes, it looks like a ball. The ellipsoid representing the cross-section of DT_x(S^γ_x) looks like a narrow "needle" because of the high degree of phase-volume compression in two directions. Its disposition inside the large ball confirms the condition DT_x(S^γ_x) ⊂ S^γ_{T(x)}; the ball circumscribed around the ellipsoid is placed inside the large ball too, which expresses the sufficient condition (25). For smaller R the global maximum of r_max + ρ only decreases (Fig. 4a). Analogous computations with other values of γ indicate that the required inclusions for the cones take place at least in the interval 0.64 < γ² < 1.35 (Fig. 4b). As explained, the correctness of the condition with γ < 1 implies the condition for the contracting cones of DT⁻¹. We conclude that in T²(U) both conditions, for the expanding and the contracting cones, are true, say, at γ² = 1.1. (Restricting the domain of verification of the condition for the expanding cones to the set T²(U), one can essentially improve the estimate of the maximum allowable γ: in accordance with our computations, the inclusion conditions for the expanding and contracting cones inside T²(U) are valid even at γ² ≈ 1.5.) Hence, the analyzed attractor is uniformly hyperbolic. This assertion, although not proven in the classic mathematical style, follows with definiteness from the theorem whose conditions have been checked in the computations. Assuming the hyperbolicity established, let us now illustrate some attributes of the hyperbolic dynamics.
To start, we note that the dynamics on the attractor is chaotic. In the course of the time evolution, both oscillators generate turn by turn, passing the excitation from one to the other. Figure 5 shows typical plots of x and y obtained from numerical solution of Eqs. (1). Panel (a) presents a single sample, and panel (b) shows five superimposed samples of the same signal on successive time intervals. Panel (b) gives evidence that the process is not periodic. In fact, it is chaos, which manifests itself in the irregular displacement of the maxima and minima of the waveforms relative to the envelope on successive time intervals T.
To have a quantitative indicator of chaos, we turn to the Lyapunov exponents. With multiple iterations of the Poincaré map and Jacobi-matrix computations, we trace the evolution of four perturbation vectors by means of their subsequent multiplication by the Jacobi matrices obtained in the course of the evolution. At each iteration, Gram-Schmidt orthogonalization and normalization are performed for the set of vectors. The Lyapunov exponents are determined as the mean rates of growth or decrease of the accumulating sums of the logarithms of the norms of the vectors (after orthogonalization but before normalization) [21]. From the computations (10 samples, each of 5·10⁴ iterations of the Poincaré map) we obtained the Lyapunov exponents

L₁ = 0.6832 ± 0.0007, L₂ = −2.6022 ± 0.0036, L₃ = −4.6054 ± 0.0028, L₄ = −6.5381 ± 0.0078. (28)
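A compact sketch of this procedure follows: the QR factorization performs the Gram-Schmidt orthogonalization, and the diagonal of R holds the norms of the vectors before renormalization. The names step and jacobian are placeholders for, e.g., the Poincaré-map and Jacobi-matrix routines of the earlier sketches.

```python
import numpy as np

def lyapunov_exponents(step, jacobian, x0, n_iter=50000, n_skip=500):
    """Lyapunov exponents of a map by the QR method (repeated Gram-Schmidt).

    step(x) returns T(x); jacobian(x) returns the Jacobi matrix DT_x.
    """
    x = x0
    for _ in range(n_skip):                  # discard the transient
        x = step(x)
    Q = np.eye(len(x0))
    sums = np.zeros(len(x0))
    for _ in range(n_iter):
        Q, R = np.linalg.qr(jacobian(x) @ Q)
        sums += np.log(np.abs(np.diag(R)))   # log-norms before renormalization
        x = step(x)
    return sums / n_iter                     # exponents per iteration of the map
```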
The presence of the positive exponent L₁ indicates chaos. (It is close to ln 2 = 0.6931 because of the applicability of the approximate Bernoulli map (3).) Figure 6 shows portraits of the attractor on the plane of variables of the first oscillator. Panel (a) depicts the projection of the attractor from the 5D extended phase space onto the plane of the original variables (x, u). The attractor is shown in gray scale (the darkness reflects the relative duration of residence inside a given pixel). Black dots relate to the Poincaré cross-section, the instants tₙ = nT. Panel (b) shows the attractor in the Poincaré cross-section on the plane of the redefined coordinates (x₀, x₁) (see (4)). Note an evident visual similarity with the Smale-Williams attractor as depicted in textbooks. The transverse Cantor-like structure is illustrated separately in panels (c) and (d) by magnified fragments of the previous picture. For a quantitative characterization of the fractal structure in the Poincaré cross-section, we have estimated the correlation dimension by means of the algorithm of Grassberger and Procaccia. Using a 4-component time series xₙ = x(tₙ) obtained from numerical iterations of the Poincaré map for n = 1, …, M, M = 40000, we get D = 1.2516 ± 0.0018 (as a result of averaging over 10 samples). The dimension estimated from the Lyapunov exponents with the Kaplan-Yorke formula is D ≈ 1.263.
From the point of view of the theoretical analysis of hyperbolic attractors, one of the principal features is that intersections of local stable and unstable manifolds, if they occur, must be transversal. In computations, to determine the local manifolds with appropriate accuracy, one can use the following scheme. Let us have three points on the attractor obtained one from another by N-fold application of the Poincaré map, A, B = T^N(A), C = T^N(B), where N is a sufficiently large integer. To obtain the 1D unstable manifold at B, we consider an ensemble of initial conditions close to A, parametrized by ∆ψ, a small deflection of the angle variable of order e^(−NL₁): x₀ = r_A sin(ψ_A + ∆ψ), x₁ = r_A cos(ψ_A + ∆ψ), with the remaining coordinates as at A. After N iterations of the map T, the points take up positions along the unstable manifold Γ^u_B. To obtain the 3D stable manifold at B, we set initial conditions for the Poincaré map close to B: x₀ = (r_B + ∆r) sin ψ₀, x₁ = (r_B + ∆r) cos ψ₀, with shifts ∆x₂, ∆x₃ in the remaining coordinates. Fixing three values (∆r, ∆x₂, ∆x₃), which parametrize the manifold, we take as an initial guess ψ₀ = ψ_B = arg(x^B₁ + i x^B₀) and perform N iterations of the map. Then we get a discrepancy ψ_N − ψ_C, where ψ_C = arg(x^C₁ + i x^C₀); we correct the initial angle variable, ψ′₀ = ψ₀ + (ψ_C − ψ_N)/2^N, and repeat the procedure until the error is less than a given small value.
A graphic representation of the manifolds is not trivial because the phase space is four-dimensional. Let us use the plane of variables (x₀, x₁) relating to the first oscillator. The 1D unstable manifold is shown simply as a projection onto this plane. For the representation of the three-dimensional stable manifold we use the curve of intersection of the manifold with the two-dimensional plane {x₂ = x^B₂, x₃ = x^B₃}, projected onto the plane (x₀, x₁). Practically, a sufficient accuracy for the coordinates of points on the manifolds is reached, say, at N ∼ 10. The disposition of the local manifolds revealed by the computations is illustrated in Fig. 7. The invariant set that consists of the unstable manifolds coincides with the attractor itself. It is enclosed in the toroidal absorbing domain, going turn by turn around "the hole of the doughnut". On the other hand, the local stable manifolds lie across the "tube" that forms the surface of the toroid. In the two-dimensional diagram the stable manifolds look like "spokes of a wheel". Due to such mutual location, the stable and unstable manifolds can intersect only transversally, and no tangencies occur.
As stated in this article, in the four-dimensional phase space of the Poincaré map for the system of two coupled non-autonomous van der Pol oscillators there exists a toroidal absorbing domain containing a uniformly hyperbolic attractor. This conclusion is based on computer verification of conditions formulated in terms of appropriate inclusions of expanding and contracting cones defined in the tangent vector spaces associated with the points of the absorbing domain. Hence, our model delivers a long-awaited example of a simple, physically realistic system with a hyperbolic attractor. With this example, it will be possible to construct other models with hyperbolic chaos, exploiting the structural stability intrinsic to hyperbolic attractors. In fact, a physical experiment demonstrating an attractor of this type has already been performed on the basis of coupled electronic oscillators [16]. In applications, systems with hyperbolic chaos may be of special interest because of their robustness (structural stability). An interesting, and now substantiated, direction is the construction of chains, lattices, and networks on the basis of elements with hyperbolic chaos [22]. Models of this class may be of interest for understanding deep and fundamental questions, like the problem of turbulence.
This research was supported by RFBR grant No. 06-02-16619.

Figure 1. The domain U defined by (6) is absorbing: for initial points distributed over the border of U, the data resulting from numerical solution of the differential equations over a period T, plotted in the coordinates (7), fit inside the unit circle R₁² + R₂² = 1. | 2014-10-01T00:00:00.000Z | 2006-09-04T00:00:00.000 | {
"year": 2006,
"sha1": "47b98ee7385668bdf2be0d3df9a9bc84469d1343",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/nlin/0609004",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "47b98ee7385668bdf2be0d3df9a9bc84469d1343",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
214018312 | pes2o/s2orc | v3-fos-license | Stereo vision-based obstacle avoidance module on 3D point cloud data
INTRODUCTION
Recent developments in autonomous platforms geared towards assisting humans in their daily activities require every autonomous platform to have the capability to perceive the surrounding environment and make suitable decisions for action. One essential action is to safely navigate a cluttered environment while avoiding collision with any obstacle present within it. One challenge is avoiding collision with obstacles while navigating in an unstructured environment [1]. Since an unstructured environment admits many possible obstacle positions, orientations, and shapes, the challenge involves estimating obstacle position, orientation, and size in order to safely plan a collision-free path. One commonly used solution to these problems is 3D vision [2,3]: with 3D vision, the system does not need to recognize and learn an object to acquire its position, orientation, and size, since the raw data itself is already 3D. Many solutions exist for acquiring 3D information from the environment; one of them is computer stereo vision [4-6], used to create point cloud data [7]. As 3D-vision and obstacle-avoidance systems become progressively more complex and demanding of processing power, the demands for more detailed environment input and faster execution time become clear. On the input-processing side, 3D vision that creates a detailed 3D representation of the surrounding environment comes at the price of an enormous data size and demands more processing power. One solution to this problem is to reduce the data size while keeping its characteristics, which normally results in reduced environment detail. RANSAC (random sample consensus) based methods with SAC models [8] are among the most used methods to reduce data size without sacrificing too much environment detail, and they are robust in the presence of noise [9]. This work uses NaN, passthrough, and voxel grid filters together with RANSAC-based processing to reduce the data size. To further reduce computation, parallel computing is used [9-11]; based on the comparison in [9], this work uses CUDA (compute unified device architecture) parallel computing based on [12]. The reason for parallel computing is to minimize execution time, so that the obstacle avoidance module can share a processing unit with other functions for platform movement and automation.
Environment data must be classified into regions with similar characteristics to facilitate further processing. One approach to classifying the data is segmentation, which is required to differentiate the data and acquire usable environment information [13-16]. This work uses plane-based segmentation to differentiate between obstacles, defined as any point cloud data with a vertical plane orientation, and the horizontal walking plane itself; this step separates the walking plane from the obstacle data. The obstacle data, a representation of the environment's condition, are used to plan the collision-free path. Since the platform movement lies in the x, y plane, the obstacle data in the form of a point cloud can be converted to a grid map [17,18]. Recent research has produced algorithms that are not only able to plan a safe path but can also deal with environment changes and platform disturbances that cause the platform to stray from the planned path; one such algorithm is the timed elastic band (TEB) [19]. Much research has been done on improving the TEB method [20-24]; this work bases its TEB application on the research in [20,22]. The final goal of this work is a system module with integrated processing of 3D point cloud data, similar to [25], that outputs obstacle detection and obstacle position information to produce path planning with the ability to update the path in response to environment changes or platform deviation, while recording the environment data for future use and subsequent global path planning.
PROPOSED SYSTEM OVERVIEW
Below is a general overview of the steps taken to process the environment data and plan the obstacle avoidance action:
− Pre-processing. Raw point cloud data from a stereo camera are filtered to downsample the data using the voxel grid method and to remove unnecessary points.
− Obstacle extraction. Environment data in the form of a point cloud are separated into vertical and horizontal planes based on point orientation; the environment is interpreted as a grid to obtain information on the safe moving space and the obstacle areas, and the resulting grid is recorded for future use and global navigation.
− Path planning. Plan a path towards the goal through the safe moving area using the occupancy map, keeping clear of obstacle areas, while updating the environment grid and adapting the path when the environment data or the platform position change; publish an array of waypoints along the global path and create a new waypoint array towards the global path target.
RESEARCH METHOD
3.1. Point cloud engine
On camera initialization, the camera calibration and pre-set parameters are loaded to enable image acquisition. Data taken from the stereo 2D camera, with focal length f and baseline b, are stereo matched and triangulated to obtain depth data by

Z = f·b / (x_l − x_r), (1)

where (x_l, y_l) and (x_r, y_r) are the matching pixels in the left and right camera images, respectively. The resulting depth data are combined with the RGB data from the camera and converted into the 3D point cloud format by normalizing the pixel coordinates (u, v, Z) into 3D coordinates (X, Y, Z) using

X = (u − c_x)·Z/f, Y = (v − c_y)·Z/f, (2)

where (c_x, c_y) is the principal point; the result is then processed by the subsequent sub-systems. Point clouds resulting from (1) and (2) show planes that are parallel in the real scene as parallel planes, eliminating the need to account for camera perspective. All these processes are done at an early stage inside the camera itself.
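A minimal numpy sketch of this triangulation step, assuming a dense disparity map (x_l − x_r per pixel) and known calibration values f, b, c_x, c_y; the function name and the NaN handling are illustrative, not the camera vendor's API.

```python
import numpy as np

def disparity_to_points(disparity, rgb, f, b, cx, cy):
    """Triangulate a disparity map into an XYZ + RGB point cloud.

    disparity: HxW array of x_l - x_r values (pixels); f: focal length (pixels);
    b: stereo baseline (metres); (cx, cy): principal point.
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                      # zero/negative disparity = no match
    Z = np.where(valid, f * b / np.where(valid, disparity, 1.0), np.nan)
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    xyz = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    keep = ~np.isnan(xyz[:, 2])                # drop NaN points (the "NaN filter")
    return xyz[keep], colors[keep]
```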
3.2. Filtering
Since dense point cloud data, while providing more environment detail, leads to longer processing time, a method is needed that keeps the environment detail while reducing the point count, which leads to faster execution [26]. This system uses voxel grid filtering, which groups points with similar characteristics and replaces each group with a single point that represents the whole group; this filter was chosen for its robustness. The point cloud data is downsampled using the voxel grid method: a defined oriented bounding box, along with the minimum and maximum values of the point cloud along the x, y, z axes, is used to build a voxel grid of leaf size s_x × s_y × s_z, the number of voxels along each axis being the bounding-box extent divided by the leaf size. The defined voxel size is used to group neighbouring points around an initial point P_i with similar characteristics; using P_i as the centroid, all other points in the voxel are removed, leaving only the centroid point. A small voxel size keeps more detail of the raw point cloud but leads to longer execution time, while a bigger voxel size gives shorter execution time but loses more detail. The result of this step is shown in Figure 1. While voxel grid filtering is slower than cruder down-sampling, it represents the underlying surface more accurately without sacrificing the key points of the 3D point cloud [27]; this characteristic enables the segmentation process to segment the data properly. The data resulting from this step still contain everything within the camera's field of view. Since the system does not need ceiling information, and the presence of ceiling data in the occupancy grid map can lead to no available moving space, removal of the ceiling point cloud data is needed. The system uses a passthrough filter with a maximum vertical parameter to exclude the ceiling points [27]: without the passthrough filter, the segmented ceiling would be classified as an obstacle, since the ceiling plane lies parallel to the walking plane and the segmentation process only separates vertical from horizontal point orientations. Because the ceiling does not hinder platform movement, and including it as an obstacle would result in no obstacle-free area, the system excludes the ceiling point cloud data from the segmentation process by filtering the data with the passthrough method,

P = { p ∈ P_raw : m ≤ p_z ≤ n },

with P the resultant point cloud, p an individual point, m the minimum z-axis value, and n the maximum z-axis value.
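A hedged numpy sketch of the two filters described above (the voxel-grid centroid replacement and the vertical passthrough); the leaf size and z limit are example values taken from the results section, and the function names are ours.

```python
import numpy as np

def voxel_grid_downsample(points, leaf=0.05):
    """Replace all points falling in one leaf-sized voxel by their centroid."""
    idx = np.floor((points - points.min(axis=0)) / leaf).astype(np.int64)
    # unique voxel key per point; `inverse` maps each point to its voxel
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)      # sum points per voxel
    return centroids / counts[:, None]         # centroid = mean per voxel

def passthrough_z(points, z_min=-np.inf, z_max=1.2):
    """Keep only points whose z lies in [z_min, z_max] (e.g. drop the ceiling)."""
    keep = (points[:, 2] >= z_min) & (points[:, 2] <= z_max)
    return points[keep]
```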
3.3. Segmentation
After the pre-processing step, the point cloud data is segmented to obtain the walking plane, as the free-moving region, and the obstacle data, with the walking plane defined as a plane in the x, y axes. The plane's mathematical model is defined as

a·k + b·l + c·m + d = 0, (7)

where a, b, c, d in (7) are the parameters of the mathematical model of the plane and k, l, m are the independent variables defining the plane. To determine whether a point P = (x₀, y₀, z₀) fits the plane a·x + b·y + c·z + d = 0, the perpendicular distance of the point to the plane is calculated using

D = |a·x₀ + b·y₀ + c·z₀ + d| / √(a² + b² + c²). (8)

If this distance is below a threshold value, the point can be considered an inlier of the plane. By using (7) to define the walking plane model and isolating the walking plane point cloud data from the rest using (8), the plane inliers are separated from the other points; the system thus obtains both the free region, in the form of the inlier points, and the obstacle point cloud data itself with which to plan the obstacle avoidance path. The result of this step is shown in Figure 2.
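A minimal RANSAC plane-segmentation sketch following the model (7) and the distance test (8); the threshold and iteration count are example values, not those of the paper.

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.02, n_iter=200, rng=np.random.default_rng(0)):
    """Fit a plane a*x + b*y + c*z + d = 0 by RANSAC; split inliers from obstacles."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                        # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        dist = np.abs(points @ normal + d)     # perpendicular distance, eq. (8)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    plane = points[best_inliers]               # walking plane (free region)
    obstacles = points[~best_inliers]          # everything else
    return plane, obstacles
```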
3.4. Occupancy grid map
The segmented point cloud data are used to build an occupancy grid map, to acquire obstacle positions relative to the platform; the resulting map can be recorded for future use or for global path planning through already recorded areas. The occupancy grid map is created using the obstacle point cloud data to determine whether a grid cell is occupied by an obstacle that would cause a collision [18]: each point P_i = (x, y, z) of the obstacle cloud is assigned to the cell C_i = (x, y) of fixed size that contains its horizontal coordinates. This step identifies areas where the platform can safely move without risk of collision, without the need to single out every obstacle and calculate its centroid and position, by measuring each free area and comparing its width with the minimum width required by the platform. The result is shown in Figure 3.
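A short sketch of the cell-assignment step, assuming obstacle points in metres; the 0.1 m cell matches the grid width mentioned in the results, while the map extents are invented example values.

```python
import numpy as np

def occupancy_grid(obstacle_points, cell=0.1, x_range=(-5, 5), y_range=(0, 10)):
    """Mark each 2D cell containing at least one obstacle point as occupied."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((ny, nx), dtype=bool)
    ix = ((obstacle_points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((obstacle_points[:, 1] - y_range[0]) / cell).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[iy[ok], ix[ok]] = True
    return grid
```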
3.5. Path planning
This part of the system has two parallel processes. The first, the path planner, creates a global solution for the platform to move towards the target. The second handles disturbance forces on the platform and outputs correction commands to deal with small changes in the environment and unexpected obstacles. It is based on the timed elastic band method by Quinlan [19], with developments on the sparse model [21] and advanced trajectories [20], and an MPC approach to TEB methods [22]. An overview of this step is shown in Figure 4. This method was used because the behaviour of the platform is not fully determined at the planning level, yet local disturbances do not hinder the system's ability to reach its goal: the system deals with local disturbances by deforming the path when environment changes are detected, and despite this it still maintains a complete collision-free path towards the determined goal. The planner finds the shortest path towards the determined goal; with (10) it produces the global path result that is used by TEB to create a series of waypoints. The shortest path is taken as a series of a fixed number of waypoints, each with a tolerance radius.
These waypoints comprise an array of poses, which can be denoted as Q = {P_1, P_2, …, P_n}.
This system does not pursue the fastest trajectory; in the context of the timed elastic band, it only optimizes the locations of the intermediate waypoints, as the system places no boundary condition on the final platform state. Therefore the change in time between consecutive waypoints (∆T_i) is fixed, and the band can be denoted as

B := (Q, τ), (13)

where Q = {P_1, …, P_n} is the sequence of waypoints and τ = {∆T_i} the set of time intervals. The optimal band is calculated by minimizing the objective function over B.
With the array of waypoints P_i from the path-planning result, the system compares the current platform position and orientation with the waypoint coordinates and updates the waypoint array when the platform reaches a waypoint, or when it strays from the global path and the previous waypoint array.
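A hedged sketch of this waypoint bookkeeping; the tolerance and stray thresholds are invented example values, and the replan flag stands in for the system's path-update behaviour.

```python
import numpy as np

def update_waypoints(waypoints, pose, tol=0.2, stray=1.0):
    """Advance past reached waypoints; signal a replan if the platform strays.

    waypoints: list of (x, y); pose: current (x, y). Returns (waypoints, replan).
    """
    while waypoints and np.hypot(*np.subtract(waypoints[0], pose)) < tol:
        waypoints = waypoints[1:]              # waypoint reached: drop it
    if waypoints and np.hypot(*np.subtract(waypoints[0], pose)) > stray:
        return waypoints, True                 # too far from the band: replan
    return waypoints, False
```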
RESULTS AND ANALYSIS
The result of voxel grid filtering with a 0.05 m voxel shows a drop of 98.2% in point count, from 921,600 points to 16,664 points: only 1.8% of the data remains while maintaining characteristics similar to the raw data. This shows that the voxel grid preserves the underlying planes of the raw point cloud while reducing the data size, as shown in Figure 1. The filtering sub-system takes 86 ms average execution time on 720p raw data, a result acceptable for segmentation, as proven by the success of the segmentation step shown in Figure 2(a); pushing for faster execution leads to greater loss of detail, which results in segmentation failure. The execution time of the filtering sub-system could be improved by using parallel computing for each filter model, and a similar approach could be applied to the segmentation sub-system, which takes 260 ms average execution time; shorter execution time leads to faster map updates and faster decisions. Segmentation resulted in all point cloud data with vertical plane orientation being separated from the data with horizontal plane orientation. Compared with colour-based point cloud clustering, colour-based segmentation still groups some vertical objects together with the horizontal walking plane, while the system's segmentation, based on each point's orientation, separates all points with vertical plane orientation. Despite the system's success in segmenting the horizontal walking plane from the obstacle data, it fails to process some data where stereo matching failures exist: neighbouring pixels separated by a significant distance in the 3D environment can produce an almost horizontal gradient, and the segmentation therefore does not segment such areas. The comparison between plane-based and colour-based segmentation is shown in Figure 2. The occupancy grid map yields the position of each obstacle, and hence the safe moving area where no obstacles are present.
The system calculates a safety margin around each obstacle cell to account for platform size and odometry drift. All this information is successfully recorded, as proven by the system's ability to plan a navigation path through an already mapped area. Figure 3 shows that wherever obstacle point cloud data exist, the corresponding cells appear as black cells, with no failures for any obstacle point information. Path planning produced the shortest path with respect to platform size, avoiding collision with obstacle areas, and modified the initial path to the target to avoid an obstacle and adapt to new environment data; this is proven by moving the camera according to the planned path, which resulted in arrival at the target coordinate without any collision, shown in Figure 5(a). Figure 5(b) shows the same goal position with updated environment information and platform position: the system updates the path plan towards the goal when the platform is moved outside the planned path and new obstacle information is acquired from the environment, and updates the path again when the platform is moved back inside the planned path, Figure 5(c). The system only produces a goal position array as long as the area along the path is wider than the required platform width, terminating path planning when this condition is not met and producing a path as close to the target as possible where it is met, as shown in Figure 5(d): the farthest arrow position has an area width of more than 0.7 m, shown in Figure 5(d) as a blue circle with a diameter of 7 grid cells of 0.1 m each, while positions further along do not meet this requirement. The system therefore fulfills the goal of creating a safe path without possible collision.
CONCLUSION
The filtering sub-system performs adequately, with a total execution time of 96 ms on average. This execution time was measured on 720p raw point cloud data with a 0.05 m voxel size and a passthrough limit of 1.2 m above the ground. All sub-systems of the filtering step perform as intended. The obstacle extraction sub-system extracts all obstacle data and maps it with appropriate results compared with the real environment, although the segmentation process has a point of failure when the stereo matching result does not reproduce the real conditions; the occupancy grid map, in turn, reproduces exactly every segmented obstacle.
The path-planning sub-system produces an appropriate path and correction commands when dealing with environment and platform updates, while keeping the recorded data for future use and for global navigation. This system does not deal with global path planning over a previously mapped environment, and no superposition odometry is present. The system provides an obstacle avoidance capability based on 3D point cloud data, interpreted through equation (10) to form basic intelligence based on the TEB model, dealing with collision-free path planning and platform deviation.
Figure 5. Obstacle avoidance path modification: (a) initial path data, (b) after environment and position update, (c) after position update, (d) minimum width requirement not met. | 2020-03-19T19:54:21.691Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "ba10eaaab83615aa86967321b309debc1f4b589c",
"oa_license": "CCBYSA",
"oa_url": "http://journal.uad.ac.id/index.php/TELKOMNIKA/article/download/14829/8110",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9141952de42bf3b34bcbf7d710a19f074e8ad3e8",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
221544343 | pes2o/s2orc | v3-fos-license | Characterization of buccal cell DNA after exposure to azo compounds: a cross-sectional study [version 1; peer review: awaiting peer review]
Background: Azo compounds, containing naphthol and diazonium salts, are synthetic dyes widely used in the batik industry. Azo compounds are considered toxic when they are exposed to human tissue. The purpose of this study was to analyze buccal cell DNA exposed to azo compounds in batik workers. Methods: A cross-sectional study involving 20 male subjects divided into two groups (n=10 per group), namely azo-exposed and non-exposed (control). Inclusion criteria were batik workers of the colouring division who had been exposed to azo for at least 5 years. Buccal cells were collected using a cytobrush, and DNA was isolated from the buccal cells. DNA isolation was done with a buccal DNA kit, while the purity and concentration of the DNA were determined using a spectrophotometer and electrophoresis. Results: The azo-exposed group revealed higher-purity DNA than the control group. The purity of the DNA in the azo-exposed and control groups was 0.61±0.93 and 0.21±0.09, respectively, while the DNA concentrations were 59.02 and 19.35 ng/µL, respectively. The ratio at 260/280 nm was 1.84-1.94 (azo-exposed) and 1.85-1.92 (control). Principal component analysis using the first principal component (PC1) and second principal component (PC2) could successfully classify subjects into the control and azo-exposed groups. Conclusion: Characteristics of DNA could be used as an indication of exposure to azo compounds in workers of batik industries.
Introduction
The oral mucosa is the first defence against particles entering the body, and the oral epithelial mucosa protects the body from chemical, microbial, and physical challenges 1,2. The buccal epithelium is the thickest region of the stratified squamous epithelium. Keratinization is influenced by endogenous or exogenous factors; exogenous factors include the use of drugs, nutritional factors, and irritants such as plaque and calculus, artificial teeth, and smoking or exposure to other substances 3,4.
The use of azo synthetic dyes and their derivatives, especially those with benzene groups, is increasing in the batik industry 5,6. Azo dyes are compounds characterized by one or more azo functional groups (-N=N-) linked to benzene. They are readily reduced to hydrazines and primary amines. The benzene group in azo compounds is difficult to degrade because degradation takes a long time 7,8. Chemicals in the batik industry are known to irritate the skin and eyes and to interfere with the respiratory system 8. Azo compounds are also known to be carcinogenic and mutagenic if they persist in the environment for a long time, and they are suspected to be a source of disease 9,10.
Exposure to synthetic azo dyes, continuously inhaled by batik workers, may cause changes in the oral mucosa. Daily exposure to azo dyes needs to be analysed to assess the risk of oral cavity abnormalities, although there have been no reports of oral cavity abnormalities in batik workers due to azo exposure. Exposure to azo dyes for more than 5 years in batik artisans has been shown to significantly increase the frequency of micronuclei, karyolysis, and pyknosis in buccal mucosal epithelial cells 11-13. In addition, exposure to azo dyes significantly increases the expression of cytokeratins 5 and 19 in the buccal mucosa 14,15. These studies have not explained the changes in buccal cell DNA exposed to azo compounds; therefore, the objective of this study was to evaluate the profile of buccal cell DNA exposed to synthetic azo dyes to determine the possibility of cellular damage.
Participants
The study design was cross-sectional, comparing exposed and non-exposed subjects at the same time. We conducted the study in batik industries (exposed group) and non-batik settings (control group) in Yogyakarta, Indonesia, from May to August 2019. The procedure was approved by the Research Ethics Committee of the Faculty of Dentistry, Universitas Gadjah Mada (Ethical Clearance No. 00107/KKEP/FKG-UGM/EC/2019). Participants of the exposed group came from batik industries in Yogyakarta, Indonesia, whereas participants of the control group were students and staff at the Faculty of Dentistry, Universitas Gadjah Mada, Indonesia. For the exposed group, batik factories were identified from an online list and information about the study was sent to the factory managers (letter No. 5189/UN1/FKG/Set.KG1/PT/2019 from Universitas Gadjah Mada), who allowed the researchers to interview their workers. For the control group, information about the study was sent to students at our university asking them to participate. All participants agreed to participate by providing written informed consent.
Information collected from the participants included age, past medical and dental history, occupational history, lifestyle (smoking and alcohol consumption), and whether they wore a dental apparatus. The Oral Hygiene Index-Simplified (OHI-S) was calculated from the calculus index (CI(S)) and the debris index (DI(S)): OHI-S = CI(S) + DI(S). Interpretation: 0-1.2 is good; 1.3-3.0 is fair; and 3.1-6.0 is poor 16.
Inclusion criteria were: aged between 18 and 45 years (the age group most likely to be working), male (to provide uniformity among participants), OHI-S status of 'good', and either having worked in batik colouring for a minimum of 5 years (exposed group) or not working in batik colouring (control group).
The study size was calculated according to Notoatmodjo 17 as

n = Z₁₋α/₂ · P(1−P) / d,

where n = the number of samples, Z₁₋α/₂ = the Z value at the 95% level of significance (1.96), P = the proportion of azo-exposed subjects, around 50% (0.5), and d = the degree of deviation from the population, 5% (0.05). Based on the formula, n = 9.8 ≈ 10, so the number of subjects in each group was 10 participants.
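For transparency, the arithmetic of this sample-size formula can be checked directly (a trivial sketch; the values are those given in the text):

```python
# Sample-size check following the formula above (values from the text).
Z, P, d = 1.96, 0.5, 0.05
n = Z * P * (1 - P) / d
print(round(n, 1))  # 9.8, rounded up to 10 subjects per group
```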
Data collection
Participants were asked to rinse their mouths first to remove debris from the oral cavity. Buccal epithelial cell harvesting was carried out by the smear method using a sterile foam-tipped swab (Product Code: PW1174, Himedia, India). The swab was rotated at least 360° against the buccal mucosa and then placed in a microtube. Samples were transported to the lab in the microtube with 1x PBS. Sample collection was carried out at the batik factories for the exposed participants and at the university for the non-exposed participants.
DNA isolation
DNA isolation was done following the protocol of the HiPurA™ Buccal DNA Purification Kit (Product Code: MB531; Himedia, India). Briefly, the buccal swab sample was placed into a 2.0 ml microcentrifuge tube, 400 µl of resuspension solution was added, and the tube was centrifuged at 14,000 rpm for 5 minutes. The pellet was discarded and the supernatant was transferred to a new collection tube. 20 µl of Proteinase K solution (20 mg/ml) was added to the tube containing the supernatant, and this was vortexed for 10-15 seconds. 20 µl of RNase A solution (20 mg/ml) was added, and the tube was again vortexed for 10-15 seconds. The sample tubes were incubated for 2 minutes at room temperature (15-25°C).
The lysis reaction was done by adding 400 µl of lysis solution to the tube, which was vortexed thoroughly for a few seconds to obtain a homogeneous mixture. Samples were incubated at 55°C for 10 minutes. For the binding step, 400 µl of ethanol (96-100%) was added to the lysate, which was then mixed thoroughly by vortexing for 5-10 seconds. The lysate was added to the HiElute Miniprep Spin Column (capped), and the samples were centrifuged at 6,500 x g (10,000 rpm) for 1 minute. The flow-through liquid was discarded, and the procedure was repeated with any remaining lysate. A prewash was performed by adding 500 µl of diluted prewash solution to the column and centrifuging at 6,500 x g (10,000 rpm) for 1 minute. The flow-through liquid was discarded and the same collection tube was re-used with the column.
Subsequently, the samples were washed by adding 500 µl of diluted wash solution to the column and centrifuging at 12,000-16,000 x g (13,000-16,000 rpm) for 3 minutes to dry the column; the flow-through was then discarded and a new uncapped 2.0 ml collection tube was placed under the column. DNA elution was done by pipetting 150 µl of the elution buffer directly onto the column without spilling on the sides. The samples were incubated for 1 minute at room temperature and centrifuged at >6,500 x g (10,000 rpm) for 1 minute to elute the DNA. The eluted purified DNA was stored at 2-8°C for the short term (24-48 hours) or at -20°C for long-term storage.
Evaluation of DNA characteristics
Purity and concentration of buccal cell DNA were characterized using electrophoresis and a spectrophotometer. Agarose gel was prepared at a concentration of 2%: agarose (Biotechnology Grade, 1st Base, Singapore; 1 g) was added to 50 ml Tris/Borate/EDTA (TBE) buffer and heated in a microwave for around 2 minutes until completely dissolved. After the agarose solution had cooled to about 50°C (around 5 minutes), it was poured into a gel tray with the well comb in place and left at room temperature for around 20 minutes until it had completely solidified. Loading buffer was added to each DNA sample. The agarose gel was placed into the gel box (electrophoresis unit), and the box was filled with 1x TBE until the gel was covered. Electrophoresis was run at 100 mA, and the gel was then visualized with Florosafe DNA Stain (Genetika Science, PT. Genetika Science Indonesia) using a UV transilluminator. The concentration of buccal cell DNA was measured using a spectrophotometer, and DNA purity was analysed from the absorbance at 280 nm and the 260/280 nm ratio.
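A minimal sketch of the spectrophotometric readout implied above; the A260-to-concentration factor of 50 ng/µL for double-stranded DNA is the standard convention, while the input absorbance values are hypothetical examples, not the study's measurements.

```python
# The 50 ng/uL per A260 unit is the standard dsDNA convention; the inputs
# below are hypothetical example readings, not the study's measurements.
def dna_quantify(a260, a280, dilution=1.0):
    concentration = a260 * 50.0 * dilution  # ng/uL for double-stranded DNA
    purity_ratio = a260 / a280              # ~1.8 indicates protein-free DNA
    return concentration, purity_ratio

conc, ratio = dna_quantify(a260=0.39, a280=0.21)
print(f"{conc:.1f} ng/uL, A260/A280 = {ratio:.2f}")
```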
Data analysis
Data analysis was performed using the Shapiro-Wilk and Levene's tests to determine whether the data were normally distributed and homogeneous. The Mann-Whitney U test was used to compare the azo-exposed and control groups. Statistical analysis was performed using IBM SPSS Statistics v22. Classification of the azo-exposed and control groups was performed using chemometric principal component analysis (PCA) in Minitab version 17. P<0.05 was taken as significant.
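A hedged sketch of the same analysis pipeline in open-source form (the study itself used SPSS and Minitab); the group arrays are randomly generated placeholders scattered around the reported means, not the actual data.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Placeholder data: purity (A280), concentration (ng/uL), A260/280 ratio.
exposed = rng.normal([0.61, 59.02, 1.89], [0.09, 5.0, 0.03], size=(10, 3))
control = rng.normal([0.21, 19.35, 1.89], [0.09, 5.0, 0.03], size=(10, 3))

for col in range(3):
    print(stats.shapiro(np.r_[exposed[:, col], control[:, col]]))  # normality
    print(stats.levene(exposed[:, col], control[:, col]))          # homogeneity
    print(stats.mannwhitneyu(exposed[:, col], control[:, col]))    # group comparison

# PC1/PC2 score-plot coordinates for the classification step.
X = StandardScaler().fit_transform(np.vstack([exposed, control]))
scores = PCA(n_components=2).fit_transform(X)
```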
Results
Characteristics of study participants are described in Table 1.
The presence of high-molecular-weight DNA was evaluated by gel electrophoresis and visualized using a UV illuminator. Figure 1 shows that bands of high-molecular-weight DNA appeared only in the azo-exposed group and not in the control group. There was no significant difference between groups in the purity of the DNA at A 280 nm (p=0.076), the A 260/280 ratio (p=0.718), or the concentration of the DNA (p=0.076) (Table 2).
PCA could successfully classify participants into azo-exposed and control groups. The score plot for the first principal component (PC1) and second principal component (PC2) is shown in Figure 2. In addition, the loading plot for the evaluation of the variables contributing to the separation is shown in Figure 3. The concentration of DNA contributed the most in the PCA, as it was the variable farthest from the origin (0,0).
Discussion
In this study, exfoliative buccal epithelial cells were collected with a swab tip or cytobrush, and the purity and concentration of the DNA were then analysed. The exfoliative method is non-invasive. One of the important procedures in a DNA extraction study is the sample collection method. According to Mulot et al. 18, cytology brushes (cytobrush) are the most appropriate method and provide good-quality cell collection compared to mouthwashes, swabs, or collection from saliva.
In the present study, DNA electrophoresis revealed a band for high-molecular weight DNA in the azo-exposed group only (Figure 1). This result indicated that the concentration of the DNA from buccal epithelial cells in the azo-exposed group was higher than in controls. However, we noticed that not all samples in the azo-exposed group revealed a band. This may be because of the low concentration of DNA in the collected buccal epithelial cells. This result was supported by our spectrophotometer measurements (Table 2), showing that the DNA concentration in the control group was lower than in the azo-exposed group. DNA quality may have been affected by the collection and isolation methods. The results showed a mean OD 260/280 ratio of 1.89 in both the azo-exposed and control groups, which indicates that the bulk of the proteins was removed successfully.
Table 1. Demographics of azo-exposed batik workers and control group (non-exposed).
The standard deviation for the purity of the DNA at A280 nm and for the concentration of the DNA (Table 2) in the azo-exposed group was higher than the mean. This indicates that the purity of the DNA from azo-exposed participants varied, which may be due to azo exposure having induced DNA damage. According to Ferraz et al. 19, the azo dye Disperse Orange 1, which is used in textiles, induces a frameshift mutation and a cytotoxic effect in the human hepatoma cell line HepG2. Mutagenicity was enhanced by nitroreductase and O-acetyltransferase, enzymes important in the metabolic activation that underlies mutagenicity. This result was also supported by previous studies showing that azo dye exposure increases the number of micronuclei, karyolysis, pyknosis, and the expression of cytokeratins 5 and 19 in oral epithelial cells [11-15]. However, these results have not yet revealed the mechanism by which DNA damage occurs in oral epithelial cells due to azo exposure.
In order to classify participants into azo-exposed and control groups, principal component analysis (PCA) was used. PCA is capable of projecting the initial variable data into reduced dimensions defined by principal components (PCs). The values of the samples projected onto the PCs are known as scores and are visualized in a score plot 20,21. PCA was done in this study using three variables, namely the purity of DNA at 280 nm, the concentration of DNA, and the ratio of absorbance values at 280 and 260 nm (A 280/260 nm). Our results showed that the azo-exposed group could be separated successfully and easily differentiated from the control group using the PC1 and PC2 score plots (Figure 2). The loading plot of the PCA was examined to evaluate the variables making the most significant contribution to the separation and classification of participants as azo-exposed and controls. The loading plot explains the projection of the variables used during PCA in the same plane as the score plot 22. The absolute value of the loading of a variable explains the importance of its contribution. Therefore, the further a variable lies from the origin of the loading plot, the larger the contribution of that variable to the PCA model 23,24. The results of the loading plot indicated that all three variables made a significant contribution to the PCA model.
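A sketch of the score and loading computation underlying Figures 2 and 3, using scikit-learn rather than Minitab; the matrix X (rows = participants, columns = the three variables) holds illustrative values, not the study data:

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Columns: purity at 280 nm, DNA concentration, absorbance ratio.
X = np.array([
    [0.11, 25.4, 1.90],  # azo-exposed (illustrative)
    [0.09, 22.1, 1.88],
    [0.05,  8.3, 1.89],  # control (illustrative)
    [0.04,  7.9, 1.91],
])

X_std = StandardScaler().fit_transform(X)  # autoscale the variables
pca = PCA(n_components=2).fit(X_std)

scores = pca.transform(X_std)    # points of the PC1-PC2 score plot
loadings = pca.components_.T     # rows: variables; columns: PC1, PC2

# Contribution of each variable = its distance from the origin (0, 0)
# in the loading plot.
contribution = np.linalg.norm(loadings, axis=1)
print(scores, loadings, contribution, sep="\n")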
Conclusion
Buccal cell DNA of batik workers exposed to azo compounds had higher DNA purity, DNA concentration, and absorbance ratio at 260/280 than buccal cell DNA of controls (not exposed to azo compounds). Principal component analysis, based on the score plot, could successfully classify participants as controls or azo-exposed individuals. These DNA characteristics could be used as an indication of exposure to azo compounds in workers in batik industries. | 2020-09-02T17:36:42.519Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "571fa1557a32a5e3f0b2c3f863aaa4a2c7598776",
"oa_license": "CCBY",
"oa_url": "https://f1000research.com/articles/9-1053/v1/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "571fa1557a32a5e3f0b2c3f863aaa4a2c7598776",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267238139 | pes2o/s2orc | v3-fos-license | Theoretical Investigation of a Coumarin Fluorescent Probe for Distinguishing the Detection of Small-Molecule Biothiols
Monitoring the level of biothiols in organisms would be beneficial for health inspections. Recently, 3-(2′-nitrovinyl)-4-phenylselenyl coumarin was developed as a fluorescent probe for distinguishing the detection of the small-molecule biothiols cysteine/homocysteine (Cys/Hcy) and glutathione (GSH). By introducing 4-phenylselenium as the active site, the probe CouSeNO2/CouSNO2 was capable of detecting Cys/Hcy and GSH in dual fluorescence channels. Theoretical insights into the fluorescence sensing mechanism of the probe were provided in this work. The details of the electron excitation process in the probe and sensing products under optical excitation, as well as their fluorescent character, were analyzed using quantum mechanical methods. All these theoretical results provide insight and pave the way for the molecular design of fluorescent probes for the detection of biothiols.
Introduction
Biothiols are involved in many processes of transfer and detoxification, including cell growth, redox regulation, and so on. Small-molecule biothiols, including cysteine, homocysteine, and glutathione (Cys, Hcy, and GSH, respectively), are important sulfur compounds that can protect parts of the body due to their reducibility [1-3]. The structural differences among biothiols lead to different functions; at the same time, the biothiols are related to each other. Cys is involved in the processes of enzyme catalysis, detoxification, and protein synthesis. Hcy is a regulatory intermediate in the Met cycle and a precursor of Cys and methionine. GSH has a role in maintaining redox homeostasis in biological systems.
The concentration of biological biothiols will deviate from normal values under the influence of adverse factors, which directly affects their functions. In this situation, diseases such as growth retardation, cardiovascular disease, liver damage, and rheumatism can be caused. Therefore, monitoring the level of biothiols in organisms would be beneficial for health inspections. Nowadays, the methods for detecting biothiols are diversified and gradually improving, yet different detection methods have their own advantages and drawbacks [2,4-10].
According to comparative analysis, the detection results of high-performance liquid chromatography and mass spectrometry are relatively stable and sensitive but necessitate complicated sample handling and expensive equipment. The capillary electrophoresis detection method is economical and rapid, but has slightly inferior sensitivity. Colorimetry is easy to use but usually produces relatively large errors. Although the electrochemical analysis method has the advantages of convenience and high sensitivity, it is relatively weak in terms of selectivity. In contrast, fluorescent probes have been successfully applied in many detection fields due to their advantages of high sensitivity, low background interference, high selectivity, and good biocompatibility. Combined with confocal microscope instruments, fluorescent probes are applied to real-time and in situ imaging of biological cells and tissues without causing any damage, which provides a powerful analytical technique for disease diagnosis and is becoming a popular detection method in the biological and medical fields.
In recent years, remarkable progress has been made in the construction of biothiol fluorescent probes [4,17-19]. Many reported fluorescent probes can respond to biothiols in cells, tissues, and a variety of amino acid environments. However, due to the similar structures and reactivities of biothiols, most of the fluorescent probes reported so far cannot distinguish between the biothiols Cys, Hcy, and GSH, which hinders research on their roles in the corresponding physiological and pathological processes [20,21].
Recently, Chen et al. developed 3-(2′-nitrovinyl)-4-phenylselenyl coumarin as a fluorescent probe for distinguishing between the detection of Cys/Hcy and GSH. By introducing 4-phenylselenium as the active site, the probe CouSeNO2/CouSNO2 was capable of detecting Cys/Hcy and GSH in dual fluorescence channels [22]. For the biothiols, the first-step sensing reaction was experimentally proven to be the nucleophilic substitution of 4-phenylselenium with the thiol group. Furthermore, through two-channel fluorescent imaging, the probe CouSeNO2/CouSNO2 was successfully applied to sense exogenous and endogenous biothiols in living cells. In addition to the Michael addition, the usual sensing reaction in reported nitroolefin fluorescent probes, the nucleophilic substitution of 4-phenylselenium in the probe CouSeNO2/CouSNO2 with the thiol group of a biothiol as the first-step sensing reaction not only accelerated the reaction with biothiols but also realized the distinction between Cys/Hcy and GSH in dual fluorescence channels. Complementing the experimental results, the theoretical research in this work on the electronic structure, reaction sites, sensing mechanism, and fluorescent properties of the probe CouSeNO2/CouSNO2 could provide insights and pave the way for the molecular design of fluorescent probes for the detection of biothiols.
Results and Discussion
The stable molecular structures of the probes CouClNO2, CouSNO2, and CouSeNO2 are shown in Figure 1a-c. Because it shows no apparent spectral response to biothiols, the probe CouClNO2 is presented here only for structural comparison and is not considered in the following theoretical research.
From the surface map of the average local ionization energy (ALIE) [23] of the three probes in Figure 1d-f, it can be deduced that the C=C bond in CouClNO2 is the potential electrophilic reaction site (with an ALIE value of 0.33 a.u.); in contrast, the S(Se) atom and the C=C bond in the CouSNO2 and CouSeNO2 probes are the potential electrophilic reaction sites (with ALIE values of 0.32 a.u. and 0.30 a.u. for the S and Se atoms in the CouSNO2 and CouSeNO2 probes, respectively).
The Fukui function and the dual descriptor, important concepts in the density functional reactivity theory initially developed by Parr, are popular methods for predicting reaction sites defined under the conceptual density functional theory framework [24-26]. The dual descriptors of the CouClNO2, CouSNO2, and CouSeNO2 probes were obtained through Multiwfn 3.8(dev) analysis based on the ORCA output results and are illustrated in Figure 1g-i. The S and Se atoms in the CouSNO2 and CouSeNO2 probes (indicated by the red circles in Figure 1h,i) were indicated to be the potential electrophilic reaction sites with biothiols, in agreement with the corresponding experimental results [22]. The lower ALIE value of the Se atom compared to the S atom indicated a higher sensitivity of the CouSeNO2 probe to biothiols than the CouSNO2 probe, which was also verified in the experimental work. The large structural difference between S0 and S1 leads to large reorganization energy and Huang-Rhys factors [27,28] for some normal vibration modes, as shown in Figure 4.

To illustrate the electron excitation process from S0 to S1 within the CouSNO2 and CouSeNO2 probes, hole-electron analyses (brown and green colors, respectively, in Figure 5) were performed based on the TDDFT results. They show that the electron was mainly excited from the benzene ring part to the main planar part of the probes. The excitation energy from S0 to S1 in CouSeNO2 (3.940 eV) was slightly larger than that in CouSNO2 (3.917 eV).
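For reference, the per-mode quantities plotted in Figure 4 follow the textbook relations between the mode frequency ω_i, the displacement ΔQ_i of the mass-weighted normal coordinate between the S0 and S1 equilibrium geometries, the mode reorganization energy λ_i, and the Huang-Rhys factor S_i (a generic sketch of the standard definitions, not specific to the Dushin implementation):

\lambda_i = \tfrac{1}{2}\,\omega_i^{2}\,\Delta Q_i^{2},
\qquad
S_i = \frac{\lambda_i}{\hbar\,\omega_i} = \frac{\omega_i\,\Delta Q_i^{2}}{2\hbar},
\qquad
\lambda_{\mathrm{total}} = \sum_i \lambda_i .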
The simulated UV-Vis absorption spectrum of the CouSeNO2 probe, shown in Figure 6a, indicated that the absorption wavelength from S0 to S1 was about 473 nm, near the experimental value of 480 nm, which supports the choice of functional and basis set for the electron excitation calculations on this kind of organic molecular probe. After the reaction with Cys and Hcy, the absorption wavelengths from S0 to S1 of the sensing products Cou-Cys and Cou-Hcy were blue-shifted to about 360 nm and 357 nm, respectively, consistent with the experimental results.

Unlike the charge-transfer character of the S0 to S1 excitation in the original CouSeNO2 probe, the S0 to S1 excitation in the sensing products Cou-Cys and Cou-Hcy shows a local excitation character, and this local excitation character leads to a significant increase in fluorescence intensity at about 460 nm and 451 nm, respectively, as confirmed by both the theoretical and experimental results. A similar reaction occurred between the CouSeNO2 probe and GSH, which also changed the UV-Vis absorption spectrum and fluorescence intensity of the sensing product Cou-GSH. Without the seven- or eight-membered ring formed in the sensing products Cou-Cys and Cou-Hcy through Michael addition of the thiol group to the unsaturated C=C double bond, the UV-Vis absorption and fluorescence spectra of the sensing product Cou-GSH were red-shifted compared with the original probe CouSeNO2. The theoretical absorption and emission wavelengths between S0 and S1 were about 500 nm and 550 nm, respectively, in good agreement with the experimental values of 515 nm and 562 nm. The theoretical and experimental fluorescence-related absorption and emission wavelengths are summarized in Tables 1 and 2.
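The wavelengths quoted above interconvert with excitation energies through λ(nm) = hc/E ≈ 1239.842/E(eV); a minimal helper applied to the absorption maxima from the text:

# lambda(nm) = hc / E ~ 1239.842 / E(eV)
HC_EV_NM = 1239.842  # Planck constant x speed of light, in eV*nm

def nm_to_ev(wavelength_nm: float) -> float:
    """Excitation energy (eV) for a given wavelength (nm)."""
    return HC_EV_NM / wavelength_nm

for species, lam in [("CouSeNO2", 473), ("Cou-Cys", 360),
                     ("Cou-Hcy", 357), ("Cou-GSH", 500)]:
    print(f"{species}: {lam} nm ~ {nm_to_ev(lam):.2f} eV")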
To illustrate the electronic structures of the CouSeNO2 probe and its sensing products with biothiols in more depth, the densities of electronic states (DOSs) were calculated and are illustrated in Figure 7. The main orbital transition contributing to the electron excitation between S0 and S1 in the probes and sensing products was from the highest occupied molecular orbital (HOMO) to the lowest unoccupied molecular orbital (LUMO), as shown in Tables 1 and 2. The fluorescence of the probes and sensing products was determined by the radiative transition from S1 to S0.

The total DOS (TDOS) of the probe and sensing product molecules and the partial DOS of two individual parts of the molecules (the coumarin part and the benzene ring in the probe, and the biothiol moiety in the sensing products) are all depicted in Figure 7. An obvious charge-transfer character can be seen in the electron excitation between S0 and S1 in the CouSeNO2 probe, in which the HOMO is contributed mainly by the benzene ring and the LUMO mainly by the coumarin part. This charge-transfer character indicates that the intramolecular charge transfer (ICT) process leads to a small oscillator strength between the S0 and S1 states and a weak fluorescence intensity in the original CouSeNO2 probe. In contrast, the electron excitation between S0 and S1 in the sensing products formed through the probe's reaction with the biothiols shows a local excitation character, which leads to a correspondingly significant oscillator strength and fluorescence intensity.

Owing to the different molecular structures of the biothiols, the sensing product Cou-GSH, which lacks a seven- or eight-membered ring, shows a red shift in its maximum absorption peak and fluorescence wavelength compared with the original CouSeNO2 probe. Conversely, the Michael addition of the thiol groups (Cys and Hcy) to the unsaturated C=C double bond in the CouSeNO2 probe leads to the formation of seven- and eight-membered rings in the sensing products Cou-Cys and Cou-Hcy, producing a different electronic structure variation compared with the original probe CouSeNO2: both the maximum absorption peak and the fluorescence wavelength are blue-shifted relative to the CouSeNO2 probe. The blue and red absorption shifts of CouSeNO2 with Cys, Hcy, and GSH are clearly related to the HOMO/LUMO energy gaps of the corresponding sensing products. Thus, the different wavelengths and colors of the fluorescence from the sensing products with the biothiols (Cys, Hcy, and GSH) allow the CouSeNO2 probe to be applied successfully to the distinguishing detection of small-molecule biothiols.
Theoretical Methods
The theoretical methods used to study the fluorescent probes CouSeNO2/CouSNO2 sensing biothiols were as follows:
1. The functional and basis set combination CAM-B3LYP/def2-TZVPD was used for the structure optimization and corresponding vibrational frequency analysis of the probe and sensing product conformations with the ORCA program 5.1 [30-33]. No imaginary frequencies were found in the vibrational analysis of the stable geometric structures, which confirmed the stability of the structure optimization results. The ωB2GP-PLYP/def2-TZVPD combination was used for single-point energies to obtain free energies with high precision, according to benchmark research [34]. Similar calculated results were obtained in the gas phase and in several solvents with different polarities, which indicated that this fluorescent probe is insensitive to the solvent effect. An illustrative input at this level of theory is sketched after this list.
2. The electronic structure and fluorescent properties of the probe and its sensing products were obtained through the Multiwfn 3.8(dev) code [35] based on the DFT and TDDFT results from the ORCA program.
3. The reorganization energy and Huang-Rhys factors between the S0 and S1 states of the probe and sensing products were obtained through the Dushin program [29].
4. Most of the figures in this work were rendered by means of VMD 1.9.3 software [36].
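As referenced in point 1, an ORCA 5 input along these lines could drive the ground-state optimization, frequency check, and TDDFT excitation step at the stated level of theory. This is a hedged sketch, not the authors' actual input; the geometry file name and the number of requested roots are assumptions:

! CAM-B3LYP def2-TZVPD TightSCF Opt Freq

%tddft
  NRoots 5   # compute the lowest five excited states (S1 included)
end

# charge 0, singlet multiplicity; geometry read from an external file
* xyzfile 0 1 couseno2.xyz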
Conclusions
The electronic structure and fluorescence analysis indicated a local excitation character for the electron excitation process from S0 to S1 within the sensing products of the CouSeNO2 probe's reaction with small-molecule biothiols, including Cys/Hcy and GSH. Due to the different molecular structures of the biothiols, the sensing product Cou-GSH, which lacks the seven- or eight-membered ring, showed a red shift in its maximum absorption peak and fluorescence wavelength compared with the original CouSeNO2 probe. Conversely, the Michael addition reaction between the thiol groups (Cys and Hcy) and the unsaturated C=C double bond in the CouSeNO2 probe led to a blue shift of both the maximum absorption peak and the fluorescence wavelength relative to the CouSeNO2 probe. Thus, the different wavelengths and colors of the fluorescence from the sensing products with the biothiols (Cys, Hcy, and GSH) allowed the CouSeNO2 probe to be successfully applied in distinguishing the detection of exogenous and endogenous biothiols in living cells. This theoretical investigation of the mechanism of fluorescent probe molecular design should provide insights for building highly efficient fluorescent probes for biothiol detection in the future.
Figure 1. (a-c) The stable molecular structures of the CouClNO2, CouSNO2, and CouSeNO2 probes; (d-f) the surface map of ALIE on the CouClNO2, CouSNO2, and CouSeNO2 probes; and (g-i) the dual descriptors of the CouClNO2, CouSNO2, and CouSeNO2 probes (the red circles indicate the S and Se atoms in the CouSNO2 and CouSeNO2 probes, respectively).

From the 2D plots of dual descriptors on the main molecular planes of the CouSNO2 and CouSeNO2 probes, shown in Figure 2a,b, the absolute dual descriptor values of the S and Se atoms are clearly larger than those elsewhere in the probe molecules. This result indicates that a substitution reaction is likely to occur at the S and Se atoms when the CouSNO2 and CouSeNO2 probes encounter biothiols. The 2D localized orbital locator (LOL) on the molecular planes of the CouSNO2 and CouSeNO2 probes, shown in Figure 2c,d, also indicates that the S and Se atoms in the probe molecules are potential reaction sites. The sensing mechanism of CouSeNO2 towards biothiols is shown in Scheme 1.

The most stable geometric structures of the ground state S0 and first excited state S1 of the CouSNO2 and CouSeNO2 probes are shown in Figure 3. They indicate a similar difference between the S0 and S1 structures of the two probes, in which the benzene ring flips markedly from the ground state to the first excited state. The dihedral angle α between the benzene ring and the main molecular plane of the CouSNO2 probe varies from 59° to 108° when the molecule is excited from S0 to S1; in the CouSeNO2 probe this change in α is from 56° to 108°. This large structural difference between S0 and S1 within the CouSNO2 and CouSeNO2 probes leads to large reorganization energy and Huang-Rhys factors [27,28] for some normal vibration modes, as shown in Figure 4 (only CouSNO2 is shown, for clarity). The vibration modes with large Huang-Rhys factors correspond to the swing of the benzene ring in the probe molecule. The reorganization energy and Huang-Rhys factors between S0 and S1 of the CouSNO2 and CouSeNO2 probes were calculated through the Dushin program [29].
Figure 2. (a,b) Two-dimensional plots of dual descriptors on the molecular planes of the CouSNO2 and CouSeNO2 probes; (c,d) 2D LOL on the molecular planes of the CouSNO2 and CouSeNO2 probes.
Figure 3. (a,b) Geometric structures of the S0 and S1 states of the CouSNO2 probe; (c,d) geometric structures of the S0 and S1 states of the CouSeNO2 probe.
Figure 4. The Huang-Rhys factors of the CouSNO2 probe.
Figure 5. Hole-electron (brown and green colors, respectively) analysis for the electron excitation process from S0 to S1 within the (a) CouSNO2 and (b) CouSeNO2 probes.
Table 1. The main electron excitation processes in the probe and sensing product molecules.
a Only excited states with an oscillator strength larger than 0.1 were considered. b H stands for HOMO and L for LUMO. c The coefficient of the wave function for each excitation is given as an absolute value.
Table 2. The main electron emission processes in the probe and sensing product molecules.
a,b,c Same indications as in Table 1. | 2024-01-26T16:55:38.073Z | 2024-01-23T00:00:00.000 | {
"year": 2024,
"sha1": "090cbdf1ff738308ec2117c3ed8da270bba31f44",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/29/3/554/pdf?version=1705985733",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "92855b77d4c61569f494d2a1eba484ff02a0a2a8",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269521959 | pes2o/s2orc | v3-fos-license | Life and Wisdom in Tembang Lir ilir and Kidung Rumekso ing Wengi : A Philosophical Analysis
The traditional Javanese tembangs created by Sunan Kalijaga, especially Lir ilir and Kidung Rumekso ing Wengi, have an extraordinary depth of philosophical meaning when associated with Javanese culture. Therefore, this research aims to explore and analyze the philosophical meanings contained in the two tembangs and to find out how the spiritual messages in the tembangs remain relevant to the conditions of society in the modern era. The sources of data for this research are (1) interviews with Sunan Kalijaga's descendants, artists in Demak Regency, and Demak Regency officials, (2) historical records and scientific articles about Sunan Kalijaga, and (3) direct observation in Demak Regency. This research uses Ferdinand de Saussure's semiotic theory to find the deeper meaning of the texts of the two tembangs. For data validation, this research uses the triangulation method, in order to compare all available data and find a common thread of reliability to be presented as scientific data. In the interpretation stage, the researcher tried to explore the messages contained in the two tembangs, analyzed from the lyrics of the tembangs and the opinions of experts found in Demak Regency. The results show that both tembangs contain philosophical teachings on how to achieve wisdom and peace in every episode of human life. They also teach about the importance of establishing a good relationship with God. In addition, this study also revealed that the spiritual messages in these two tembangs are still relevant as a reflection of everyone's spiritual journey in the modern era.
INTRODUCTION
Demak is known as the first Islamic kingdom on the island of Java, replacing the superpower of the Hindu Majapahit Kingdom, which had dominated for more than 200 years (Graaf & Pigeaud, 1976, p. 2). It is estimated that the Islamic Kingdom of Demak was established and first operated in 1475 as a highly respected kingdom in Central Java (Robson, 1981, p. 279). The strong influence of the Demak Kingdom on the trend of Javanese beliefs at that time cannot be separated from a figure named Sunan Kalijaga (Houben, 2003, p. 154). He was one of the Wali Sanga who lived, preached, and was buried in Demak Regency, precisely in a village called Kadilangu.
Sunan Kalijaga is known as the wali who was the architect of the Great Mosque of Demak itself (Arif Muzayin Shofwan, 2021, p. 190), which is estimated to have been built in 1498 (Kusno, 2003, p. 59) and is one of the oldest mosques in Java (Graaf, 1963, p. 2), one that would inspire the construction of mosques in other areas (Muhaimin, 2006, p. 168). Sunan Kalijaga also played an important role in the arrangement of the city in Demak Regency; one study even holds that it was Sunan Kalijaga who succeeded in Islamizing the last king of Majapahit before the Demak Kingdom was finally established (Anderson, 1981, p. 114).
Compared to the other Wali Sanga, Sunan Kalijaga had several unique characteristics, the most famous being his combining of Javanese cultural symbols with Islam (Mujiningsih & Yetti, 2015, p. 218). This can certainly be seen from his use of Takwa clothes, or, as the Javanese call them, Jarik. Although he was a great scholar, Sunan Kalijaga did not hesitate to wear Jarik, setting himself apart from the scholars of that era, who mostly wore robes characteristic of Arab culture (Ricci, 2009, p. 16). Of course, Sunan Kalijaga did this so as not to look foreign and to leave no symbol of distinction between himself and the identity of ordinary people when preaching to the Javanese community.
Besides being famous for his ideas and clothing, Sunan Kalijaga was also famous for the Islamic artworks he created. These works of art include lakon carangan in wayang performances (Laffan, 2011, p. 8) and several tembang works such as Lir ilir (Rahmawati & Pamungkas, 2023, p. 263) and Kidung Rumekso ing Wengi (Aryanto, 2021, p. 44). Sunan Kalijaga's purpose in creating Islamic artworks, especially the tembangs Lir ilir and Kidung Rumekso ing Wengi, was to provide a medium for spreading Islam at that time, precisely in the 15th and 16th centuries (Nugraha & Ayundasari, 2021, p. 531). As is known, the craze for Javanese musical arts such as tembang at that time was quite strong, and Sunan Kalijaga recognized that moment. Therefore, Sunan Kalijaga created the tembangs Lir ilir and Kidung Rumekso ing Wengi with strong Javanese nuances, while also including very strong Islamic spiritual values. This, of course, resulted in Islam being easily accepted and absorbed by the Javanese people at that time (Anto & Anita, 2019, p. 78).
Several other studies have also revealed similar things about the positive impact of the tembangs created by Sunan Kalijaga, one of which is a study on Tembang Lir ilir (Mahmudi & Fathoni, 2023, p. 10) entitled Relevansi Pendidikan Spiritual dalam Tembang Lir ilir Karya Sunan Kalijaga Dengan Masyarakat Madani, which revealed that the verses of Lir ilir were easily absorbed by the Javanese people as a medium for teaching Islam.
This is coupled with the research (Sakdullah, 2016, p. 13) entitled Kidung Rumeksa Ing Wengi karya Sunan Kalijaga Dalam Kajian Teologis, which states that Kidung Rumekso ing Wengi contains Islamic teachings that are easily understood by Javanese people, including teachings about God, humans, and the human relationship with God.
Departing from this background and the previous studies, the interest in further examining the two tembangs Lir ilir and Kidung Rumekso ing Wengi is compelling. In addition, most previous research studies used only secondary data, without direct interview data or observations related to the research.
Therefore, the purpose of this research is to explore the philosophical meaning of Sunan Kalijaga's two tembangs, Lir ilir and Kidung Rumekso ing Wengi, using primary data. The hope is that the study will uncover a reflection of the spiritual teachings of Sunan Kalijaga to the Javanese people. Furthermore, the philosophical analysis of the two tembangs will provide an understanding of cultural identity and appreciation for the heritage of the ancestors.
To keep the discussion from becoming too broad and, consequently, shallow, the scope of this paper is focused on analyzing the lyrics, symbolism, and spiritual messages of Sunan Kalijaga in the tembangs Lir ilir and Kidung Rumekso ing Wengi.
Literature Review
The reason for choosing these two tembangs is not only the popularity of these works in contemporary society, but also that these two tembangs are recognized by the descendants of Sunan Kalijaga in Kadilangu Village as his authentic work. In addition, these two tembangs contain the values of belief and wisdom of Javanese culture in the past, which are now starting to be forgotten by the times. This paper is divided into four parts, namely: Sunan Kalijaga's tembangs in Demak then and now; the lyrics and interpretation of the tembang Lir ilir; the lyrics and interpretation of the tembang Kidung Rumekso ing Wengi; and, finally, the similarities in spiritual messages between the two tembangs.
This research is important to do because it (1) renews our appreciation of local cultural heritage and identity, which is now less popular than Western culture; (2) introduces the integration between culture and religion that was carried out by the Ulama; (3) presents the teaching of spirituality and moral values through art as an additional option for da'wah media; (4) offers a historical lesson on the traditions, character, and teachings of Sunan Kalijaga; and (5) provides opportunities for people to engage in personal reflection through the moral messages contained in the tembangs Lir ilir and Kidung Rumekso ing Wengi by Sunan Kalijaga.
METHODOLOGY
The research method used this time is qualitative, characterized by descriptive and inductive data sources (Adlini et al., 2022, p. 976; Munandar et al., 2023, p. 25). The reason for choosing this type of research lies in the nature of the qualitative method itself, which has no absolute rules (Gumilang, 2016, p. 144), demands in-depth analysis of a social phenomenon (Abduh et al., 2020, p. 1; Ni Wayan Masyuni Sujayanthi & Ni Putu Hartini, 2023), and tends to use specific (real) material objects (Fadli, 2021, p. 35).
This makes it very suitable for the study of lyrics, symbolism, and spiritual messages undertaken here.
The sources of qualitative data in this research include direct observation, interviews, related literature, and historical records (Rijali, 2019, p. 86; Komang Indra Wirawan, 2023, p. 3). In the direct observation stage, visits were made to key places related to Sunan Kalijaga, such as the Great Mosque of Demak, the Glagah Wangi Museum, and Sunan Kalijaga's tomb in Kadilangu. In the interview stage, direct meetings were held with Sunan Kalijaga's descendants, as well as the Demak community. Meanwhile, related literature and historical records were obtained through accredited journals and archives from the Demak Regency library archive office.
For the analysis method, this research uses the semiotic text analysis developed by Ferdinand de Saussure, which at the contextual stage creates a paradigm of signifier and signified (Sørensen & Thellefsen, 2022, p. 2). In general terms, the signifier is a sign that we can perceive with our external senses, while the signified is the concept or function contained in the object we observe. In practice, Ferdinand de Saussure's semiotic theory is used in analyzing and dissecting the lyrics of Lir ilir and Kidung Rumeksa Ing Wengi.
In addition to semiotic theory, this paper also uses etymological analysis and cultural context, since understanding a Javanese tembang created by Sunan Kalijaga in the 15th or 16th century certainly requires understanding the language paradigm and the general situation of the people of Java, especially Demak, at that time.
At the data validity stage, this research uses a triangulation technique in which observation data, interview data, and related literature data are synthesized with each other, after which the similarity of the arguments is analyzed (Bachri, 2010, p. 55). If the results of all the previous data are reliable, then those data are presented in this research. The limitations of this research method include the difficulty of obtaining primary data about Sunan Kalijaga other than from his descendants in Demak Regency. Data on Sunan Kalijaga at the Demak archive library service were also limited, and some could not be borrowed for analysis because they were rare collections. Likewise, regarding the artifacts of Sunan Kalijaga's legacy in the Glagah Wangi Museum and the Museum of the Great Mosque of Demak, not much information could be obtained other than explanations from the guards of the two museums.
Tembang Sunan Kalijaga in Demak Past and Present
In ancient times, Sunan Kalijaga generally held various art forms at the Great Mosque of Demak. This was done so that people who wanted to attend art performances were not far from the mosque and would eventually, gradually, get to know Islam (Brakel, 2010, p. 786). One source even says that the ticket for the performances was to recite the two sentences of the shahada, an important credo of Islam: when someone says this sentence sincerely, they have automatically and officially embraced Islam. The two sentences of the shahada read: "La ilaha illallah, Muhammadur rasulullah" (There is no God but Allah, Muhammad is the messenger of Allah). Of course, we can also conclude that Sunan Kalijaga's tembangs such as Lir ilir and Kidung Rumekso ing Wengi were spread through Sunan Kalijaga's preaching at the Great Mosque of Demak, and we can still listen to them today.
After centuries have passed, the influence of Sunan Kalijaga's tembangs still exists strongly in Demak Regency. This is proven by several statements from Sunan Kalijaga's descendants, artists, and the public. They said that Sunan Kalijaga's tembangs such as Lir ilir are still often performed and sung together in schools, especially on the island of Java (Pujiharti, 2017, p. 183), as well as during recitations. The tembang Lir ilir is even currently the soundtrack of a promotional video for Demak tourism made by the Demak Regency Tourism Office. As for Kidung Rumekso ing Wengi, according to Sunan Kalijaga's descendants, it is usually sung by farmers working in the fields, and according to Demak culturalists, the tembang is usually also regarded as a lullaby prayer to ward off jinn interference or bad things (Sidiq, 2008). The preceding information was obtained during direct observation and interviews conducted in December 2022 in Demak Regency. "Lir ilir depicts a situation where the times have changed now; ijo royo royo, a new religion has come that is identical to the color green, namely Islam. A new religion has come; even though it is difficult, reach for it. That is the meaning of the starfruit depicted, which has five ridges describing the 5 pillars of Islam. Even though it is difficult to reach, even though this heart has so many sins, so many.
Dodotiro-dodotiro kumitir bedhah ing pinggir
Your clothes are torn at the edges
Dondomono jlumatono kanggo sebo mengko sore
Sew them up and mend them for the evening
Mumpung padhang rembulane
While the moon still shines brightly
Mumpung jembar kalangane
While there's still plenty of time to spare
Sun suraka surak hiyo
Let's cheer up "Wake up" in the first line of this tembang means as an order to humans not to be lazy.When sleeping, this line also means criticism.Therefore, move, seek God, be devoted to Him, then believe.The second line reads that "the plant has blossomed".In this line, the plant itself in Javanese people can be interpreted as a symbol of awareness, devotion, piety and faith in Allah SWT (Khaelany, 2018, p. 202).Furthermore, the stanza reads that it has "turned green like a new bride".This stanza means about the nuances of one's faith that has grown.
Happiness finally arises, like that of a joyful bride.
The next stanza reads, "shepherd children climb the star fruit tree".Historically, starfruit trees were often used by Javanese people as shade in rice fields.At the same time, children also like to climb it and take its fruit.When examined semiotically, star fruit can mean like the five pillars of Islam, as a basic guide to the Islamic faith of a Muslim.
The next stanza reads, "even if it's slippery, keep climbing to wash your clothes".It should also be noted that in ancient times people also washed using starfruit (Khaelany, 2018, p. 204).It can be interpreted in this stanza as well, regarding the importance of washing or cleaning the heart, which is symbolized as clothing.By using starfruit, which means the five pillars of Islam before.Although difficult, it means the sacrifice that every human being must make (Wahyuningsih et al., 2019, p. 290).
"Your clothes that are torn at the edges, sew them up and mend them for the evening".This stanza means about the clothes symbolizing faith, which can be shaky or torn.Therefore it must be sewn, or literally, it must be justified, restored to its former glory.The word afternoon at the end means before death.While there is still a lot of free time, it is clear that the meaning of this sentence is an appeal to immediately repent and get closer to Allah SWT.In the last line, "let's cheer".As a symbol of gratitude where the human has passed the trials of the world and finally arrived in His heaven.Lyrics and Tafsir Tembang Kidung Rumekso ing Wengi Sunan Kalijaga.
Ana "Kidung" which in Javanese means tembang.It can be interpreted that Kidung Rumekso ing Wengi means a warning of the caution that must be taken into account by humans when they are walking at night.Why at night?Obviously the answer is because crimes are more prevalent at night (Azizah & Hidayat, 2021, p. 3).Especially when Sunan Kalijaga lived in the 15th-16th centuries AD, lighting was still very minimal, unlike today."Magic" in the lyrics can mean the disturbance that will occur at night, whether from humans or wild animals.The last sentence also adds, "thieves become distant", which could mean that thieves lose their prey.Because the humans themselves have taken good care of themselves.One of the ways is by walking not in quiet places and too late.
Interpretation of Stanza 2 of Kidung Rumekso ing Wengi:
While the first stanza says that "jinn and demons dare not approach", this second stanza is a little different: all diseases return to their place of origin. It can be concluded from this line that the disease or problem has truly disappeared in the end. If we continue to be careful and istiqomah on His path, then problems will not only be afraid to come closer, they will not even arise; they are truly afraid to affect the thoughts of the pious.
The next verse explains that "pests will be bewitched by the sight of love". This is in line with the experience recounted by Doctor Larry Dossey, who said that a congregation servant saw the salted fish in his shop covered in maggots; at the same time the congregation was saying prayers, and what happened was that the maggots fell off (Achmad Chodjim, 2018, p. 57). Hayyu, read in Javanese as kayu (wood), means life. If the seed is alive, it will be called a miracle tree, and the land where the tree is planted is called haunted land, among other names for sacred land. It can also be concluded that the land where a tree is planted must be holy and halal (Achmad Chodjim, 2018, p. 66). Sometimes, to honor the land, the Javanese say it is sacred.
Interpretation of Stanza 3 of Kidung Rumekso ing Wengi:
The "rhinoceros cage" of this stanza means a fetus carrying a male or female.The drying "stone and sea" symbolizes the meeting of sperm and egg.All life is saved because of the existence of Angels, Angels, Apostles who are all submissive and obedient to all God's commands.
"Prophet Sis" in the stanza before the last of the lyrics means wisdom.Given that Prophet Sis is the sixth child of Prophet Adam who is known for his good nature.The last line reads "Prophet Moses".
From this last line we can take the story of the Prophet's strong belief in Allah SWT, which enabled him to split the Red Sea. It can thus be concluded that faith is a very important thing; it must not only be spoken by the mouth but also held firmly in the heart.
Interpretation of Stanza 4 of Kidung Rumekso ing Wengi:

"Breath" is associated with Prophet Isa. Why is that? Prophet Isa is the only prophet who could bring the dead back to life, of course thanks to the help of Allah SWT. It can be concluded that the faith of a Muslim must continue to be raised, in a calm and slow way, like the breath that flows throughout the human body. "Prophet Jacob as a listener" means that we cannot immediately believe the news we hear, like Prophet Jacob, who did not believe the news that his son Joseph had been eaten by a wolf.
The lyrics "Prophet David my voice", refers to Prophet David's melodious voice, so that anyone who listens to it will be mesmerized.The melodiousness can also mean meekness, so in speaking we are obliged to sound like that."Prophet Ibrahim is described as my life".When looking at history, he was indeed burned but still stood firm as if nothing had happened.We can relate this to the faith of a human being who must also remain strong.Like the body of Prophet Ibrahim.
"Prophet Sulaiman is my magic", means that we must be easy to adapt to every environment we encounter.Like the Prophet Sulaiman who was able to control the wind and talk to animals.He was very adaptive.The Prophet Joseph is placed on the face.Obviously the reason, is that we as humans continue to take good care of the face.It can radiate from the appearance of a shady facial light and always smile.
Furthermore, the lyrics "Prophet Idris in my hair", he is a prophet who is famous for his shiddiq and patient nature.Why is hair chosen?Hair is located on top of the head which means that the nature of shiddiq and patience must be rooted on the head.Next the stanza arrives at the companions of the Prophet, the first to be mentioned is "Ali Bin Abi Talib as my skin".He is the son-in-law of the Prophet Muhammad SAW, Ali Bin Abi Talib in a hadith is said to be the gate of knowledge or the output of knowledge, it can also be likened to the skin, while the Prophet is the warehouse.From this we can conclude that Islam also prioritizes knowledge.
The last stanza depicts "Abu Bakr as blood, Umar as flesh, and Usman as bone". It can be concluded that these three elements are very closely related and protect one another. Therefore, Muslims must also be like that, continuing to protect each other in truth.
Interpretation of Stanza 5 of Kidung Rumekso ing Wengi:

Blood cells are produced by the marrow, so a lack of blood is a big problem for humans; this is the interpretation of the first line, "my marrow is Fatimah". In addition, the Mother, "Siti Aminah, is presented as the body", the support of goodness itself, namely the fetus of the Prophet Muhammad SAW. Goodness must be supported; it must be strong, not allowed simply to fall. Next is "Prophet Job is in the gut"; he is famous for his fortitude and patience, and these values must continue to flow through human life, manifested as the intestine.
"Prophet Noah is associated with the heart", wisdom can be taken from the story of him who worked hard to build a ship and help the believers.Like a heart that always beats tirelessly."Prophet Yunus is symbolized as a muscle", this is like him who was able to withstand the trials of being swallowed by a whale, not giving up and always surrendering.Furthermore, "Prophet Muhammad SAW is symbolized as an eye".Thanks to his guidance like an eye, we can arrive at the goodness that we can feel today.Lastly, Prophet Adam and Eve, who are the ancestors of mankind are symbolized as protectors.They are symbolized as the initial manifestation of human life that is pure, and must return to purity.Both tembangs feel strong in describing the journey of a flowing human life followed by many trials and challenges.
Tembang Lir ilir itself emphasizes the continuous circulation of life, painting a picture of the inevitable changes of human life. This is important for humans to understand as a way to learn to live responsibly (Dewi et al., 2019, p. 47) and simply, and to always be pious when life is no longer friendly to them.
Tembang Kidung Rumekso ing Wengi describes the spiritual journey of man through many exemplary stories of the Prophets (Lestari, 2021, p. 100), leading him to a deeper understanding of life; it especially highlights the importance of always learning and growing in every kind of knowledge so that later he can live more wisely, toward himself and everyone (Khusniyah & Indrariani, 2023, p. 21).
In general, the two tembangs above teach the importance for humans of reflecting on themselves, searching for meaning, and taking a spiritual journey through life. Although the linguistic contexts of these two tembangs are different, it can be concluded that their messages are similar, essentially reminding us to live life with high awareness and to continue to fear Him.
CONCLUSION
In studying and understanding the two tembangs Lir ilir and Kidung Rumekso ing Wengi, several things were discovered based on observation in Demak Regency, interviews with descendants of Sunan Kalijaga, artists, and the people of Demak, complemented by a review of historical records. The findings are that the tembangs Lir ilir and Kidung Rumekso ing Wengi are still well practiced in the community, especially in Demak Regency, where the observations for this research were conducted. This is evident from the fact that Lir ilir is generally sung in schools and recitations in Demak Regency, while Kidung Rumekso ing Wengi is generally sung as a lullaby prayer and as a tembang sung by farmers in the rice fields so that their crops will be fertile. The second finding is that the two tembangs share similar content although their musical nuances differ slightly: Kidung Rumekso ing Wengi is generally monophonic music sung by someone without accompaniment, while Lir ilir tends to be polyphonic music sung with the accompaniment of many instruments in its musical practice.
The finding on the philosophical value of Kidung Rumekso ing Wengi and Lir ilir is the shared depiction of spiritual messages urging every human being to keep striving tirelessly to achieve perfect piety and never to forget to emulate the previous saints. It can also be concluded that these two tembangs represent the spiritual journey of humans, as is clearly seen in every philosophical meaning of their lyrics. In addition, the philosophical value of this research will make people aware of the importance of self-reflection, and the research teaches about the history of the integration between Javanese culture and Islam that has been taking place for centuries. Of course, what Sunan Kalijaga did by combining these two things can be used as an option for da'wah media today, using art as a way to teach Islam.
Behind the results presented above, this research is certainly still not free from various shortcomings. These cannot be separated from the limitations of the researcher's analytical ability, as well as the lack of available data sources, considering that the material object is a set of tembangs that have existed for centuries. It is the researchers' hope that academics in the future can develop this research, or research on other Javanese tembangs, considering that Javanese tembangs are a form of local wisdom from our ancestors that must be preserved and appreciated as highly as possible.
Figure 2 :
Figure 2: At the Sunan Kalijaga Foundation is managed by descendants (Akbar Bagaskara, 2022) Some additional explanations regarding the interpretation of the tembang Lir ilir and Kidung Rumekso ing Wengi were obtained from the 15th descendant of Sunan Kalijaga who now serves as the caretaker of Sunan Kalijaga's tomb, Mr. R. Edy Mursalin.The following information can be seen below:
Figure 4 :
Figure 4: Notation Kidung Rumekso Ing Wengi Sunan Kalijaga (Source: Leo Sutrisno, 2009) But humans are obliged to change.Dress up, dondoni again, close up again, repair this heart, repair this self while there is still life for us.While we are still alive, we must improve ourselves.Then for people in the afternoon, in old age, if the clock is already 3 pm, at 6 o'clock we have begun to recede, of course people will die, so therefore they must immediately change before they die, while the sun has not receded.So that's a brief description if we want the meaning of what's called lir ilir, the tembang lir ilir."eliminate what is the name, to eliminate this body from various kinds for example to clean yourself from jinn interference and so on.To clean other sacred places from jinn interference and so on.So Kidung Rumekso ing Wengi is a prayer for all kinds of disorders and diseases.Medical diseases and non-medical diseases for us, then to repel plant pests, of course there are procedures.Then to repel plagues like yesterday there is covid, actually it is good to apply it, for example we hold a ceremony in one village to recite the Kidung Rumekso ing Wengi."(Interview R. Edy Mursalin / 15th descendant of Sunan Kalijaga, December 2022).
"Whereas Kidung Rumekso ing Wengi is a prayer, the last prayer as it is called.That's where Kidung Rumekso ing Wengi is a prayer for disease if in the community there is if you used to say pagebluk the term, then the plague then there are diseases in agriculture as well then to
Similarity of Spiritual Messages of Tembang Lir ilir and Kidung Rumekso ing Wengi
The tembangs Lir ilir and Kidung Rumekso ing Wengi both carry a deep meaning in each lyric, with implied spiritual messages, even though the two have very different stylistic and lyrical nuances. In general, both Lir ilir and Kidung Rumekso ing Wengi convey messages about the journey of human life that are closely related to moral and spiritual development. | 2024-05-03T15:07:41.000Z | 2024-04-29T00:00:00.000 | {
"year": 2024,
"sha1": "e6ea6dab6707415b0acc94049e1b9c4268261017",
"oa_license": "CCBYNC",
"oa_url": "https://jurnal.isi-dps.ac.id/index.php/mudra/article/download/2541/1020",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fbf5c1f8c228044ad4e427374149ad30034eb1ab",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": []
} |
35291483 | pes2o/s2orc | v3-fos-license | Nutritive Value and Digestion Kinetics of Manure Ensiled Wheat Straw Treated with Varying Levels of Urea and Corn Grains
The aim of this study was to evaluate the nutritive value of urea- and corn grain-treated wheat straw ensiled with cattle manure. Different levels of urea (0, 2 and 4%) and corn grain (2 and 4%) were used to treat wheat straw. The urea-corn grain treated wheat straw was mixed with cattle manure in the ratio of 70:30. The silages were fermented in laboratory silos for 20, 30 and 40 days. After the completion of the ensilation period, the samples of ensiled wheat straw were analyzed for pH, dry matter (DM), crude protein (CP), true protein (TP), ammonia nitrogen (NH3-N), neutral detergent fiber (NDF) and acid detergent fiber (ADF). The results showed that pH, NDF and ADF decreased at the 40-day ensilation period and at the 4% corn grain (CG) and 4% urea levels. DM, CP, TP and NH3-N increased at the 40-day ensilation period and at the 4% CG and 4% urea levels. Based on these findings, wheat straw was ensiled with manure for 40 days at the 4% level of CG and urea each. In situ digestion kinetics of the untreated and ensiled wheat straw were then determined using fistulated buffalo bulls. The dry matter digestibility (DMD) of manure ensiled wheat straw (EWS) was higher than that of untreated wheat straw (UWS), at 15.43 and 13.71, respectively. Similarly, the neutral detergent fiber digestibility of EWS was higher than that of UWS, at 57.60 and 41.43, respectively.
Livestock in Pakistan is facing a feed shortage. Currently, 121.1 million heads of animals annually require about 10.9 and 90.36 million tons of crude protein (CP) and total digestible nutrients (TDN), respectively, whilst the availability of these nutrients is only 6.7 and 69.0 million tons, causing a deficiency of 38.10% CP and 24.02% TDN (Sarwar et al., 2002). Green and dry roughages form the bulk of livestock feed in developing countries. Crop residues, generally in the form of straws and stovers, are receiving considerable attention due to the scarcity of green fodder. However, efficient utilization of these crop residues by ruminants is hardly possible because they are high in fiber and low in protein. Thus, effective and economical sources of energy and nitrogen (N) are needed to supplement low-quality roughage diets for ruminants. Oil seed meals and cereal grains are effective supplements, but they are very expensive and our farmer community cannot afford the use of these feed ingredients in ruminant diets. Chemical treatment of crop residues with various alkalis, ammonia (NH3) compounds, peroxides and other chemicals has increased digestibility and animal performance (Sarwar et al., 2004). Among various chemicals, urea is the best for chemical treatment, and molasses helps fix urea-N in fiber for maximum microbial protein production (Sarwar et al., 2004). Traditionally, animal waste is applied to farmland as a fertilizer. It can also be more valuable and economical as a feed for ruminants (Hadjitanayiotou et al., 1993), because cattle/buffalo dung contains 8-18% CP and 23-52% crude fiber on a dry matter basis. Sufficient quantities of fermentable carbohydrates and an N source before ensilation could ensure better fermentation of wheat straw. Manure and wheat straw are both deficient in fermentable carbohydrates; therefore, supplementation with urea and corn grain can improve the fermentation process. However, the scientific evidence on manure-treated wheat straw ensiled with urea and corn grain is limited. This study was carried out to evaluate the nutritive value of manure ensiled wheat straw treated with CG and urea and its influence on digestion kinetics in ruminally fistulated buffalo bulls.
Laboratory Trial
Ground wheat straw was treated with different levels of urea (0, 2 and 4%) and corn grain (2 and 4%). Cattle manure was added to the urea-corn grain treated wheat straw in the ratio of 30:70. The moisture level was maintained at 50% at the time of ensiling; a worked numerical illustration of this adjustment is sketched below. This material was ensiled in laboratory silos for 20, 30 and 40 days and stored in an incubator at 40°C. After the completion of the ensilation period, the silage samples were analyzed for pH, DM, CP, true protein (TP), ammonia nitrogen (NH3-N; AOAC, 1990), NDF and acid detergent fiber (ADF; Van Soest et al., 1990).
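As a rough arithmetic companion to the ensiling setup above, the sketch below computes how much water would have to be added to a 70:30 straw:manure mix to reach the 50% moisture target. The dry matter fractions assumed for straw and manure, and the fresh-weight basis of the ratio, are hypothetical placeholders, not values reported in this study.

```python
# Sketch: water needed to bring an ensiling mixture to a target moisture level.
# The DM fractions and the fresh-weight basis of the 70:30 ratio are assumptions
# for illustration, not values from the study.

def water_to_add(masses_kg, dm_fractions, target_moisture=0.50):
    """Return kg of water to add so the mixture's moisture reaches target_moisture."""
    total_fresh = sum(masses_kg)
    total_dm = sum(m * dm for m, dm in zip(masses_kg, dm_fractions))
    # After adding w kg of water: moisture = (total_fresh + w - total_dm) / (total_fresh + w).
    # Solving for w gives: total_fresh + w = total_dm / (1 - target_moisture).
    w = total_dm / (1.0 - target_moisture) - total_fresh
    return max(w, 0.0)

# 70 kg straw + 30 kg manure (assumed DM fractions: straw 0.90, manure 0.25)
print(water_to_add([70.0, 30.0], [0.90, 0.25]))  # -> 41.0 kg of water
```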
In Situ Trial
Two adult rumen fistulated buffalo bulls were used to evaluate the in situ digestion kinetics of untreated and ensiled wheat straw. The animals were fed the same diet as was incubated in the rumen. This was done to avoid the effect of diet on the ruminal fermentation of the feedstuffs (Clark and David, 1990). Nylon bags measuring 13 x 21 cm, with an average pore size of 50 µm, were used to determine the rate and extent of DM and NDF disappearance. For each time point, 5 g of sample were weighed into bags, in triplicate. Two bags were used to determine DM and NDF disappearance and the third bag served as a blank. The bags were closed and tied with braided nylon fishing line. To remove soluble and/or 50-µm filterable materials, the bags were soaked in a specific amount of tap water for 15 minutes just before the ruminal incubation. Weight loss due to soaking was expressed as pre-ruminal dry matter disappearance. On day 11 of each experiment, the untreated and ensiled wheat straw samples were incubated in the rumen for 1, 2, 6, 12, 24, 36, 48, and 96 hours, in reverse order, and were all removed at the same time.
After removal from the rumen, bags were washed in running tap water until the rinse was clear. The bags were then dried in a forced air oven at 55°C for 48 hours. After equilibration with air for 8 hours, the bags were weighed back and the residues were transferred to 100 ml cups and stored for later DM and NDF analysis. Digestion coefficients of DM and NDF were calculated at 48 hours of incubation. Disappearance rates of DM and NDF from all feed samples were determined by the methods described by Sarwar et al. (1991).
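For illustration, the snippet below applies the standard nylon-bag disappearance arithmetic to made-up residue weights; the additive blank-correction convention is an assumption, since the paper defers the calculation details to Sarwar et al. (1991).

```python
# Sketch of the in situ (nylon-bag) disappearance calculation. The additive
# blank correction shown is a common convention assumed here; the paper cites
# Sarwar et al. (1991) for its exact method. Residue weights are invented.

def disappearance_pct(sample_in_g, residue_g, blank_loss_g=0.0):
    """Percent of DM (or NDF) that disappeared from the bag during incubation."""
    corrected_residue = residue_g + blank_loss_g  # credit material lost from the blank bag
    return 100.0 * (sample_in_g - corrected_residue) / sample_in_g

# 5 g of sample per bag, as in the trial; residues are illustrative numbers.
for hours, residue in [(1, 4.6), (12, 3.9), (48, 2.8), (96, 2.4)]:
    print(f"{hours:>2} h: {disappearance_pct(5.0, residue):.1f}% disappeared")
```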
Statistical Analysis
The data generated in the laboratory silos were subjected to analysis of variance using a 3x2x3 factorial arrangement in a completely randomized design. Differences in means were compared using Duncan's Multiple Range test (Steel and Torrie, 1984). The in situ digestion kinetics data were analyzed by t-test.
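A minimal sketch of this analysis in Python is shown below. The data file and column names are placeholders, and since Duncan's Multiple Range test has no standard statsmodels implementation, Tukey's HSD is shown as a stand-in multiple-comparison procedure.

```python
# Sketch: 3x2x3 factorial ANOVA (urea x corn grain x ensilation period) in a
# completely randomized design, using pandas/statsmodels. File and column
# names are placeholders; Tukey's HSD stands in for Duncan's test, which
# statsmodels does not provide.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("silage_results.csv")  # assumed columns: urea, corn, days, pH

# Full factorial model with all interaction terms
model = ols("pH ~ C(urea) * C(corn) * C(days)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Pairwise comparison of ensilation periods (in place of Duncan's test)
print(pairwise_tukeyhsd(df["pH"], df["days"].astype(str)))
```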
RESULTS AND DISCUSSION
Nutritive value
pH:
The results show significant differences among all treatments. Comparison of the mean pH of manure ensiled wheat straw at different storage periods by Duncan's Multiple Range test revealed that pH significantly decreased as the length of the ensilation period increased. Similar results were found by Reddy and Reddy (1989), who observed that rice straw treated with cattle manure for 45 days had a lower pH compared with untreated rice straw. The results also indicated that 2% urea produced the maximum pH compared with the 0% and 4% levels. Similarly, corn grains produced a higher pH at the 2% level than at the 4% level.
DM:
Comparison of the mean DM of manure ensiled wheat straw at different storage periods revealed that DM was significantly higher at shorter ensilation periods and decreased significantly when the storage time increased to 40 days. In contrast, Parthasarathy and Pradhan (1982) reported that control green sorghum fodder and green sorghum fodder ensiled with wheat straw poultry litter had 28.7 and 34.5% DM, respectively. The results also indicated that 4% urea produced the maximum DM compared with the 0% and 2% levels. Similarly, corn grains produced higher DM at the 4% level than at the 2% level.
CP:
The results show significant differences among all treatments. Comparison of the mean CP revealed that CP significantly increased when the length of the ensilation period increased to 40 days and was significantly lower at 20 days of storage. The minimal loss of CP during ensiling was due to the low pH and higher lactic acid values, which are a good indication of well-preserved silage. Similar results were reported by Daniels et al. (1983), who ensiled maize with broiler litter for 6 weeks and found that CP increased. The results indicated that 4% urea produced the maximum CP compared with the 0% and 2% levels. Similarly, corn grains produced higher CP at the 4% level than at the 2% level.
Total Nitrogen
The results show significant differences among all treatments. Comparison of the mean total N of manure ensiled wheat straw at different storage periods revealed that total N significantly increased when the length of the ensilation period increased to 40 days and was significantly lower at 20 days of storage. A factor probably contributing to the low N content was the high crude fiber value. When litter was incorporated into rations for cattle and sheep, it contributed appreciable amounts of nitrogen. Rankins et al. (1993) reported that the addition of litter resulted in an overall increase in dietary nitrogen. The results indicated that 4% urea produced the maximum total N compared with the 0% and 2% levels. Similarly, corn grains produced higher total N at the 4% level than at the 2% level.
True Protein-Nitrogen
The results show significant differences among all treatments. Comparison of the mean true protein-N of manure ensiled wheat straw revealed that true protein-N significantly increased when the length of the ensilation period increased to 40 days and was significantly lower at 20 days of storage. The results indicated that 4% urea produced the maximum true protein-N compared with the 0% and 2% levels. Similarly, corn grains produced higher true protein-N at the 4% level than at the 2% level.
True Protein
Comparison of the mean TP of manure ensiled wheat straw revealed that TP significantly increased when the length of the ensilation period increased to 40 days, which may be attributed to the promotion of silage fermentation, and was significantly lower at 20 days of storage. The results indicated that 4% urea produced the maximum TP compared with the 0% and 2% levels. Similarly, corn grains produced higher TP at the 4% level than at the 2% level.
Ammonia Nitrogen
The results show significant differences among all treatments. Comparison of the mean ammonia-N of manure ensiled wheat straw revealed that ammonia-N significantly increased when the ensilation period decreased to 20 days, whereas the differences in ammonia-N between 30 and 40 days of ensiling were statistically non-significant. These results are also in close agreement with the earlier findings of Parthasarathy and Pradhan (1982). The results indicated that 4% urea produced the maximum ammonia-N compared with the 0% and 2% levels. Similarly, corn grains produced higher ammonia-N at the 4% level than at the 2% level.
Neutral Detergent Fiber
The results show significant differences among all treatments. Comparison of the mean NDF of manure ensiled wheat straw revealed that NDF significantly decreased when the ensilation period of manure and wheat straw increased to 40 days, compared with 20 or 30 days. The results indicated that the 0% and 4% urea levels produced the maximum NDF compared with the 2% level. Similarly, corn grains produced higher NDF at the 2% level than at the 4% level.
Acid Detergent Fiber
Comparison of the mean ADF of manure ensiled wheat straw at different storage periods revealed that ADF significantly decreased when the ensilation period increased to 40 days, compared with 20 and 30 days. This is in close agreement with the ADF value of 25.7% obtained by Ko et al. (2001) when they prepared silage by mixing poultry litter with whole crop corn in a ratio of 30:70. The results indicated that the 0% and 4% urea levels produced the maximum ADF compared with the 2% level. Similarly, corn grains produced higher ADF at the 2% level than at the 4% level.
Digestion Kinetics
The results of the present study showed that the dry matter digestibility (DMD) of manure ensiled wheat straw (EWS) was higher than that of untreated wheat straw (UWS), at 15.43 and 13.71, respectively. Similarly, the neutral detergent fiber digestibility of EWS was higher than that of UWS, at 57.60 and 41.43, respectively. Our results are supported by Park et al. (1995) and Prakash et al. (1996). | 2017-09-07T19:47:31.915Z | 2010-06-11T00:00:00.000 | {
"year": 2010,
"sha1": "baf98adf28231e6c84da9c9dfc370dc8e579dc16",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/jasem/article/download/55546/44022",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "baf98adf28231e6c84da9c9dfc370dc8e579dc16",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
62898961 | pes2o/s2orc | v3-fos-license | Zinc adsorption in bentonite clay: influence of pH and initial concentration
This paper evaluated the adsorption capacity of zinc by Bofe bentonite clay. The Bofe clay was subjected to a thermal treatment to optimize its adsorption capacity. The kinetic equilibrium of the process was studied in a finite bath system, and experiments were performed by varying the pH, the amount of adsorbent and the initial concentration of the metal. The Langmuir and Freundlich models were used for the analysis of adsorption equilibrium. The physicochemical characterization of the clay, before and after the adsorption process, included the techniques of scanning electron microscopy, energy-dispersive X-ray spectroscopy, X-ray diffraction and N2 physisorption. The calcined Bofe clay is able to remove zinc from synthetic wastewater. The Langmuir model provided the best fit for the sorption isotherms, with a maximum amount of metal adsorbed of 4.95 mg of metal per g of calcined clay. The adsorption was strongly influenced by the initial conditions and modified the physicochemical characteristics of the clay.
Introduction
Numerous industrial activities have contributed to a significant increase in the concentrations of metal ions in water resources due to the release of effluents. Conventionally, the removal of heavy metals takes place by chemical precipitation; although this process is relatively simple and inexpensive, it generates a large volume of sludge and is of little benefit at very low metal concentrations. In many situations the treated effluent may also contain residual concentrations of metals above the acceptable environmental limits, requiring the application of a complementary process for polishing the final effluent.
Adsorption is reported as an alternative tertiary treatment for the removal of heavy metals, since it can satisfactorily treat effluents with low metal concentrations while using low-cost adsorbents.
Alternative adsorbents such as natural clays have been assessed due to their high availability and cost-effectiveness for the removal of heavy metals. The use of clay as an adsorbent for removing heavy metals is due to its cation exchange capacity (CEC), selectivity, regenerability and abundance compared with other natural and synthetic adsorbents.
Studies such as those conducted by Jiang et al. (2010), Ghorbel-Abid et al. (2010), Vieira et al. (2010), Silva et al. (2009), Fagundes-Klen et al. (2011) and Abollino et al. (2008) have investigated the potential of clays for the removal of heavy metals.Although the results involving metal removal by clays are significant and promising, the properties of adsorbents for optimizing the conditions of the process need to be better understood.
This paper evaluated the adsorption capacity of zinc on calcined Bofe clay in a finite bath and the influence of parameters such as adsorbent amount, pH and initial concentration of the metal. The calcined Bofe clay (before and after zinc adsorption) was characterized by energy-dispersive X-ray spectroscopy, X-ray diffraction and N2 physisorption.
Adsorbent
Bofe bentonite clay from Paraíba State, Brazil, was used as the adsorbent. The clay was ground and its particles were separated by a sieving technique at 0.074 mm particle size. The natural clay was subjected to a calcination process in a muffle furnace at 500°C for 24 hours. In the adsorption tests only calcined clay was used, because this clay is more stable, allowing its use in continuous systems.
Determination of point of zero charge of the adsorbent and metal chemical speciation
The point of zero charge of the adsorbent, pHzpc, was determined by potentiometric titration, according to Stumm (1992). This model assumes that charges on the surface of the solid result from an acid-base reaction (Surface Complexation Model). The experimental procedure consisted of titrating two suspensions, each containing 10 g of clay in 100 mL of CH3COONH4 (0.1 M) as supporting electrolyte (after waiting 10 min. for stabilization of the sample), one with CH3COOH (0.3 M) and the other with NH4OH (0.25 M). This titration was performed over a wide range of acid and base concentrations. The surface charge, Q, in units of mol g-1, was obtained by Eq. 1. The pHzpc value of the solid was obtained by building a chart of the total surface charge of the solid as a function of pH; this value corresponds to the pH at which the curve crosses the x-axis (Q = 0).
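As a numerical companion to this procedure, the sketch below locates the pHzpc as the zero-crossing of a surface charge curve. Since Eq. 1 is not reproduced in the text, the charge formula shown is the standard acid-base mass-balance form assumed here, and the titration points are invented for illustration.

```python
# Sketch: locate pH_zpc as the zero-crossing of the surface charge Q vs. pH curve.
# The mass-balance form of Q is a standard convention assumed in place of the
# paper's Eq. 1; the data points below are invented.
import numpy as np

def surface_charge(c_acid, c_base, pH, mass_g):
    """Q (mol/g) = (C_A - C_B + [OH-] - [H+]) / m, a common convention."""
    h = 10.0 ** (-pH)
    oh = 10.0 ** (pH - 14.0)
    return (c_acid - c_base + oh - h) / mass_g

# Example single-point evaluation of the assumed charge formula:
print(surface_charge(c_acid=3e-4, c_base=1e-4, pH=4.0, mass_g=1.0))

# Invented (pH, Q) titration points for a clay suspension
pH = np.array([3.5, 4.5, 5.0, 5.5, 6.5])
Q = np.array([2.1e-4, 8.0e-5, 2.0e-5, -3.0e-5, -1.5e-4])  # mol/g

# pH_zpc by linear interpolation at Q = 0 (np.interp needs increasing x values)
ph_zpc = np.interp(0.0, Q[::-1], pH[::-1])
print(f"pH_zpc ~ {ph_zpc:.2f}")
```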
Speciation diagrams of the distribution of zinc at 1.5 meq L-1 as a function of pH were simulated for the Zn2+ species using the software HYDRA (Hydrochemical Equilibrium-Constant Database) in order to identify the different species in aqueous solution. These tests determine the pH range in which the adsorption process predominates.
Adsorbent characterization
The calcined and contaminated clays were characterized by energy-dispersive X-ray spectroscopy (EDS) using an Oxford 7060 system, which enables qualitative evaluation of the chemical composition. To map the adsorbed metal on the Bofe clay, scanning electron microscopy (SEM) was performed on a LEO 440i microscope at 500x magnification. The specific surface area was obtained by N2 physisorption in a Micromeritics Gemini III 2375 Surface Area Analyzer using the BET method. X-ray diffraction (XRD) for evaluation of the basal spacing of the samples was performed on a Philips X'Pert diffractometer with copper Kα radiation (λ = 1.5418 Å), scanning the diffraction angle 2θ from 4° to 50° with a step size of 0.02°.
Batch adsorption
The metal aqueous solution was prepared from the salt zinc nitrate hexahydrate, Zn(NO3)2·6H2O.
The adsorption experiments were carried out in a finite bath with a known amount of the adsorbent and 100 mL of metal solution at fixed concentrations, at pH = 4.5, defined according to the metal speciation. The pH value of the suspension was adjusted with dilute HNO3 or NH4OH. The Erlenmeyer flasks were kept at room temperature (25°C) under constant agitation (150 rpm). At predetermined intervals, aliquots were taken and the metal concentration was determined with an ANALYST-100 atomic absorption spectrophotometer. The adsorption capacity for the metal ion at each time step was calculated by Eq. 2.
q_eq = (C_0 - C_eq) V / m_s (2)

where: q_eq is the adsorption capacity for the metal ion (mg g-1); C_0 is the initial concentration of the metal ion in solution (mg L-1); C_eq is the final concentration of the metal ion after reaching equilibrium (mg L-1); V is the volume of solution (L); and m_s is the mass of adsorbent (g). The effects of contact time, pH of the adsorbate solution, adsorbent amount and adsorbate concentration on the kinetics of zinc were studied. The experimental procedure mentioned above was used, with some changes depending on the parameter studied.
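To make the mass balance in Eq. 2 concrete, here is a minimal sketch; the concentrations used are invented for illustration, not measurements from this study.

```python
# Sketch: adsorption capacity from a batch (finite bath) experiment, per Eq. 2:
#   q_eq = (C_0 - C_eq) * V / m_s
def adsorption_capacity(c0_mg_L, ceq_mg_L, volume_L, adsorbent_g):
    return (c0_mg_L - ceq_mg_L) * volume_L / adsorbent_g

# 100 mg/L initial Zn, 44 mg/L at equilibrium, 0.1 L solution, 1 g of clay
# (all values illustrative)
print(adsorption_capacity(100.0, 44.0, 0.1, 1.0))  # -> 5.6 mg/g
```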
(b) Effect of initial solution pH: 1 g of clay per 100 mL of zinc solution (100 mg L-1), interaction time 150 min., pH range of 2-10, adjusted before starting the adsorption experiments.
Equilibrium tests were performed with 1 g of clay per 100 mL of zinc solution, with the adsorbate concentration varying from 3.0 to 200 mg L-1, interaction time 150 min., pH 4.5, at 25°C. The experimental data were fitted with the Langmuir (Eq. 3) and Freundlich (Eq. 5) isotherms. The fitting was performed using the software Origin 6.0.

q_eq = q_m b C_eq / (1 + b C_eq) (3)

where: q_m is the maximum amount of ion adsorbed per unit of adsorbent mass to form a complete monolayer on the surface (mg g-1); b is a constant related to the adsorption energy, corresponding to the affinity between the adsorbent surface and the solute (L mg-1).

The essential characteristics of the Langmuir isotherm can be expressed by a dimensionless constant, the separation factor or equilibrium parameter (R_L), which indicates the curvature of the sorption isotherm: if R_L > 1, the isotherm is not favorable; if R_L = 1, it is linear; if 0 < R_L < 1, favorable; and if R_L = 0, irreversible. This value is given by Eq. 4:

R_L = 1 / (1 + b C_0) (4)

The Freundlich isotherm (Eq. 5) is given by:

q_eq = k_f C_eq^(1/n) (5)

where: k_f is a constant related to the adsorbent capacity; n is a constant related to the adsorption intensity.
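A minimal curve-fitting sketch for Eqs. 3-5 is given below; it uses scipy in place of Origin 6.0, and the equilibrium data are invented placeholders rather than this study's measurements.

```python
# Sketch: fit the Langmuir (Eq. 3) and Freundlich (Eq. 5) isotherms and compute
# the separation factor R_L (Eq. 4). scipy stands in for Origin 6.0; all data
# points are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_m, b):
    return q_m * b * c_eq / (1.0 + b * c_eq)

def freundlich(c_eq, k_f, n):
    return k_f * c_eq ** (1.0 / n)

c_eq = np.array([1.0, 5.0, 20.0, 60.0, 120.0, 180.0])  # mg/L (invented)
q_eq = np.array([0.9, 2.4, 3.6, 4.2, 4.4, 4.5])        # mg/g (invented)

(q_m, b), _ = curve_fit(langmuir, c_eq, q_eq, p0=[5.0, 0.1])
(k_f, n), _ = curve_fit(freundlich, c_eq, q_eq, p0=[1.0, 3.0])
print(f"Langmuir:   q_m = {q_m:.2f} mg/g, b = {b:.3f} L/mg")
print(f"Freundlich: k_f = {k_f:.2f}, n = {n:.2f}")

# Separation factor for several initial concentrations: 0 < R_L < 1 is favorable
for c0 in (3.0, 50.0, 200.0):
    print(f"C_0 = {c0:6.1f} mg/L -> R_L = {1.0 / (1.0 + b * c0):.3f}")
```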
Results and discussion
pHzpc and zinc speciation
The electrical charge of clay surfaces is dependent on pH. There is a particular pH value at which the amounts of positive and negative electric charges are equal; this pH value, typical of each clay, is called the point of zero charge, pHzpc. The main surface functional groups in clays that generate charges are the pH-dependent Si-OH and Al-OH groups.
Figure 1 (a) shows the pHzpc of the clays. The pHzpc values obtained for the natural and calcined clays are 6.0 and 5.3, respectively. This difference arises because there are more hydroxyl groups in the natural clay, whereas in the calcined clay dehydroxylation occurs during the calcination process. Thus, adsorption should be carried out in a pH range > pHzpc, in this case pH > 5.3 for the calcined clay.
The objective was to maximize the removal of zinc ions from an aqueous solution, considering that the ion behaves as a cation; thus it was decided that the adsorption process would be conducted in a range of pH > pHzpc, in this case pH > 5.3 for the calcined clay. However, in view of the precipitation of Zn2+ at high pH values, a study of the chemical speciation of this ion at 1.5 meq L-1 as a function of pH was made (Figure 1b). The pH for this study was defined by the outcome of the two procedures, i.e., by the pHzpc and by the chemical speciation of the ion. According to Figure 1 (b), at pH 5.0 to 9.0, and especially above pH 7.0, the fraction of Zn2+ in aqueous solution decreases and the formation of ZnOH+ and Zn(O) begins. It is known that the metal species present in deionized water are Zn2+, ZnOH+, Zn(O) and Zn(OH)2(s). Within the pH range of 1.0-5.0, the solubility of Zn(OH)2(s) is high and, therefore, Zn2+ is the main species in solution. Within the pH range of 5.0-9.0 the solubility of Zn(OH)2(s) decreases, and at pH ~10.0 the solubility of Zn(OH)2(s) is very low. At that point, the main species in the solution is Zn(OH)2(s) and the fraction of Zn2+ ions in aqueous solution decreases.
By the analysis of pHzpc, the adsorption should be performed in a pH range > pHzpc, in this case pH > 5.3. However, the chemical speciation shows that zinc begins to precipitate at pH 5.0. Thus, to ensure an adsorption process, the pH was set at 4.5 in all conditions studied in this work.
The pH of the dispersions formed by the natural and calcined Bofe clays was measured directly, with 1 g of clay dispersed in 100 mL of deionized water. The results were pH values of 7.7 and 4.3 for the natural and calcined Bofe clay, respectively. The pH of the clay results in part from the nature of the exchangeable ions present. According to the chemical composition obtained by EDS (Table 1), the exchangeable ions of Bofe clays are alkali metal and alkaline earth metal cations, which give an alkaline pH to the dispersions formed by natural clays. With calcination, there is no loss of cations, but dehydroxylation occurs, which gives an acid pH to the dispersion formed by the calcined clay.
Characterization of clay samples
The qualitative chemical analysis of the clays is listed in Table 1. All elements in the average composition have a percentage within the expected range for this clay, according to Souza Santos (1992). There is a predominance of Si and Al, the basic elements of the smectite clay group. Bofe clay can be designated as a polycationic bentonite due to the presence of Ca2+, Mg2+ and Na+ cations. The contents of these exchangeable cations are interesting for the adsorption process. This type of clay is the most frequently found in Brazil (AMORIM et al., 2006). The presence of zinc after adsorption shows that this metal was actually adsorbed by the clay. With the zinc adsorption, there was a reduction in the amount of Ca2+ and Mg2+ cations and the disappearance of Na+ cations, indicating the occurrence of ion exchange, especially with Na+. Figure 2 shows a homogeneous distribution of the metal adsorbed on the clay surface. The values obtained for the specific surface area by the BET method for the calcined and contaminated Bofe clays are presented in Table 2. The BET method was chosen because it presents the best fit of the data when compared with the Langmuir model, in addition to the similar behavior of the curves obtained with BET isotherms.
The N2 adsorption analysis includes only the external surface area of the bentonite clay. According to Yukselen et al. (2006), the N2 adsorption method is performed under dry conditions, in which the montmorillonite (bentonite) layers are tightly bound; thus, the molecules of the selected gas cannot cover the interlayer surfaces. In the calcined clay, a structural change occurs due to the thermal treatment, affecting the interaction with water of the exchangeable ions present between the clay layers, so that the calcined clay does not expand and its area measured by N2 adsorption adequately represents the clay. The specific surface area of the clay decreased after zinc adsorption because Zn2+ ions occupied the active sites of the clay that would otherwise be available for N2, blocking the area and preventing the passage of molecules. XRD analysis confirms this blockade (Figure 3).
The diffractograms obtained for the calcined and contaminated bentonite clay samples are shown in Figure 3. It is possible to observe the presence of montmorillonite and quartz, typical of this type of clay, in which smectite is the predominant clay mineral (SOUZA SANTOS, 1992). The thermal treatment caused distortions in the crystalline structure, thereby modifying the Bragg reflection patterns; calcination may be followed by cation movement within the octahedral sheet (BOJEMUELLER et al., 2001). The contaminated clay shows a distortion of the d(001) peak due to the adsorbed Zn2+, which replaces the exchangeable interlayer cations.
Adsorption experiments
(a) Effect of contact time and adsorption kinetics
Experiments were performed to evaluate the kinetics of zinc removal by the calcined clay. Figure 4 presents the adsorption kinetics of zinc at an initial concentration of 100 mg L-1 on calcined clay. The adsorption of zinc ions into the clay pores occurs rapidly in the first moments of the process, when there are many empty sites available for adsorption; over time, the number of empty sites decreases, favoring the action of repulsive forces between the zinc molecules already adsorbed, which complicates adsorption at the remaining sites (STATHI et al., 2007). The maximum adsorbed amount was 4.95 mg metal g-1 clay (Figure 4a). The reduction in the concentration of zinc ions under the conditions of this study was 56% relative to the initial concentration of 100 mg L-1 (Figure 4b).
(b) Effect of initial solution pH
Arias and Sen (2009), among other authors, observed that the solution pH is an important parameter in the adsorption of metal ions on clay. Very high pH values of metal solutions should be avoided because they may cause the precipitation of metal complexes and hinder the distinction between adsorption and precipitation as the metal removal process. The solution pH affects the charges on the adsorbent surface, but it also influences the ionization of the solute and interferes with the ions (KUBILAY et al., 2007).
Figure 5 shows the influence of the solution pH on the removal of Zn2+ ions by the calcined clay. Metal removal increased with increasing pH of the adsorbate. In the pH range between 2 and 5, the removal process probably occurred by adsorption. In the range of 5 to 8, there was a marked increase in the amount of zinc removed; however, in this pH range ZnOH+ begins to form, decreasing the fraction of Zn2+ in aqueous solution. At pH above 8.0, there was a further increase in zinc removal as a result of the chemical precipitation of the metal in the form of zinc hydroxide, as noted in the study of chemical speciation (Figure 1b). Therefore, the optimum pH for the adsorption of Zn2+ in aqueous solution at 100 mg L-1 should be below 5.0, a value consistent with the study of the chemical speciation of zinc.
(c) Effect of adsorbent amount on metal ion adsorption
The results of the kinetic experiments with varying adsorbent concentrations are presented in Figure 6. The amount of Zn2+ adsorbed per unit mass of adsorbent decreased as the adsorbent mass increased. The decrease in the amount adsorbed per unit weight of adsorbent is a common behavior, also reported by Bhattacharyya and Gupta (2008). This probably occurred because a greater amount of adsorbent reduces the unsaturation of the adsorption sites and hence the number of sites per unit mass decreases, resulting in a lower adsorption rate for a greater amount of adsorbent.
(d) Effect of adsorbate concentration and adsorption isotherm
The adsorption of the Zn2+ ion by the calcined clay is described by the Langmuir and Freundlich isotherms (Figure 7). The isotherms showed basically the same behavior and can be classified as favorable. Table 3 lists the parameters obtained through these fits. From the R2 values in Table 3, it can be noted that the Langmuir model best fits the concentrations of Zn2+ adsorbed. Similar results were found by Bhattacharyya and Gupta (2007) for the removal of heavy metals on montmorillonite and by Tito et al. (2008) for the removal of Zn2+ on bentonite clay. The value of the Freundlich constant is around 0.3, indicating that the adsorptive characteristics of the calcined and sodium-saturated Bofe clays are good for zinc adsorption (TREYBAL, 1980). The amount of metal adsorbed at equilibrium, fitted by the Langmuir model, was 4.4 mg g-1.
To evaluate the affinity between the adsorbate and the adsorbent, the dimensionless separation factor (R_L) was calculated based on the Langmuir constant b and the initial zinc concentrations presented in Table 4. The values of R_L for zinc adsorption on calcined clay ranged between 0 and 1, characterizing a favorable adsorption, mainly at the highest initial concentrations of the metal.
Conclusion
The calcined Bofe clay presented an adequate capacity for adsorption of Zn2+ in aqueous solution under the study conditions and can replace other, more expensive adsorbents due to its high availability and good adsorption properties. The characterization demonstrated the changes that occurred in the clay after zinc adsorption, thus proving that the process occurred by exchange of zinc ions with the interlayer cations. The experimental data at equilibrium were satisfactorily fitted by the Langmuir model.
Figure 4. Kinetic curve for zinc adsorption on calcined clay. (a) Adsorbed amount at equilibrium; (b) dimensionless solution concentration as a function of adsorption time.
Figure 5. Effect of solution pH on Zn2+ removal on calcined clay.
Figure 6. Effect of the adsorbent amount on adsorption capacity.
Figure 7. Adsorption isotherms adjusted to the models of Langmuir and Freundlich.
Table 1. Chemical analysis of the clays.
Table 2. Specific surface area and pore volumes of the clays by the BET method.
Table 3. Ion Zn2+ parameters obtained by fitting the models of Langmuir and Freundlich.
Table 4. Separation factor values (R_L) for Zn adsorption by calcined Bofe clay. | 2018-12-21T21:51:37.182Z | 2013-04-18T00:00:00.000 | {
"year": 2013,
"sha1": "10aa247ad4e5b1cb79239b8ca6827b26b9e3ff79",
"oa_license": "CCBY",
"oa_url": "https://periodicos.uem.br/ojs/index.php/ActaSciTechnol/article/download/13364/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4bc3e8a2805348c226a3a4c28cf3d34a88fe77f1",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Materials Science"
]
} |
235199222 | pes2o/s2orc | v3-fos-license | Diversified amino acid-mediated allosteric regulation of phosphoglycerate dehydrogenase for serine biosynthesis in land plants
The phosphorylated pathway of serine biosynthesis is initiated with 3-phosphoglycerate dehydrogenase (PGDH). The liverwort Marchantia polymorpha possesses an amino acid-sensitive MpPGDH which is inhibited by L-serine and activated by five proteinogenic amino acids, while the eudicot Arabidopsis thaliana has amino acid-sensitive AtPGDH1 and AtPGDH3 as well as amino acid-insensitive AtPGDH2. In this study, we analyzed PGDH isozymes of the representative land plants: the monocot Oryza sativa (OsPGDH1–3), basal angiosperm Amborella trichopoda (AmtriPGDH1–2), and moss Physcomitrium (Physcomitrella) patens (PpPGDH1–4). We demonstrated that OsPGDH1, AmtriPGDH1, PpPGDH1, and PpPGDH3 were amino acid-sensitive, whereas OsPGDH2, OsPGDH3, AmtriPGDH2, PpPGDH2, and PpPGDH4 were either sensitive to only some of the six effector amino acids or insensitive to all effectors. This indicates that PGDH sensitivity to effectors has been diversified among isozymes and that the land plant species examined, except for M. polymorpha, possess different isozyme types in terms of regulation. Phylogenetic analysis suggested that the different sensitivities convergently evolved in the bryophyte and angiosperm lineages. Site-directed mutagenesis of AtPGDH1 revealed that Asp538 and Asn556 residues in the ACT domain are involved in allosteric regulation by the effectors. These findings provide insight into the evolution of PGDH isozymes, highlighting the functional diversification of allosteric regulation in land plants.
Introduction
Serine is an important molecule that acts as a protein building block and precursor for various biomolecules, including nucleic acid bases, phospholipids, sphingolipids, and amino acids, such as tryptophan and cysteine. The phosphorylated pathway of serine biosynthesis is common in bacteria, animals, and plants [1][2][3] and consists of three reactions catalyzed by 3-phosphoglycerate dehydrogenase (PGDH), phosphoserine aminotransferase, and phosphoserine phosphatase ( Figure 1). The first committed enzyme, PGDH, oxidizes 3-phosphoglycerate (3-PGA), one of the products of glycolysis in all organisms and of the Calvin cycle in photosynthetic autotrophs, to form 3-phosphohydroxypyruvate.
In some bacteria, such as Mycobacterium tuberculosis and Escherichia coli, the phosphorylated pathway is regulated by negative feedback through allosteric inhibition of PGDH by L-serine [4,5]. A PGDH in M. tuberculosis (MtPGDH) is biochemically well-studied and its crystal structure has been solved [6]. In addition to the catalytic domain, MtPGDH possesses allosteric substrate binding (ASB) and aspartate kinase-chorismate mutase-tyrA (ACT) domains [7,8] at its C-terminal region, both of which are involved in regulating its enzymatic activity. MtPGDH mainly forms a homotetramer, and binding of L-serine to the ACT domain causes a conformational change in the homotetramer that leads to a decrease in the maximum velocity of enzymatic activity. Serine inhibition is enhanced in the presence of the phosphate ion, which binds to the ASB domain [6]. In contrast, the PGDH of E. coli (EcPGDH) possesses only the ACT domain [9].
In plants, serine is synthesized in two additional pathways: the glycolate and glycerate pathways [2]. The glycolate pathway, which takes place in the mitochondria and is part of the photorespiratory pathway, is the most important source of serine among the three pathways, at least in photosynthetic tissues [2]. The glycerate pathway occurs in the cytosol and may be a major source of serine during dark periods in C 3 plants and in non-photosynthetic tissues [10]. On the other hand, the phosphorylated pathway takes place in the plastid and is considered to function mainly in photosynthetic organs during dark periods and in non-photosynthetic tissues [2].
The enzymatic activity of PGDH from Pisum sativum was previously found to be inhibited by serine and activated by methionine at 10 mM [11]. The eudicot (angiosperm) Arabidopsis thaliana possesses three plastid-localized PGDH isozymes encoded by AtPGDH1, AtPGDH2, and AtPGDH3 [12,13]. All three isozymes contain ASB and ACT domains, but AtPGDH2 is not inhibited by L-serine [13,14]. AtPGDH1 is involved in the biosynthesis of tryptophan-derived metabolites including indole-3-acetic acid and in adaptation to high CO2 conditions, under which photorespiration and the serine supply via the glycolate pathway are suppressed [13]. AtPGDH3 plays a crucial role in stromal NADH supply and eventually in photosynthetic performance [15]. Our previous study revealed that AtPGDH1 and AtPGDH3 are not only feedback-inhibited by L-serine but also activated by L-alanine, L-valine, L-methionine, L-homoserine, and L-homocysteine in a cooperative manner [14] (Figure 1). We also revealed that AtPGDH1 and AtPGDH3 predominantly form homotetramers, whereas AtPGDH2 formed an equilibrium of homooctamers and homotetramers. Among the above six effector amino acids, the sulfur-containing L-homocysteine showed the lowest half maximal effective concentration (EC50). Furthermore, inhibition of these AtPGDHs by L-serine and activation by the activator amino acids affected each other, indicating that the serine supply via the phosphorylated pathway involving AtPGDH1 and AtPGDH3 is regulated by the ratio of L-serine and activator amino acids. This, in turn, regulates the balance of various metabolic pathways, including tryptophan biosynthesis and sulfur metabolism [14]. A lack of the phosphorylated pathway perturbs sulfur assimilation and sulfur homeostasis between photosynthetic and nonphotosynthetic tissues [16].
Domain swapping between the amino acid-sensitive AtPGDH1 and amino acid-insensitive AtPGDH2 was conducted to construct two chimeric enzymes possessing the N-terminal half (containing the catalytic domain) of AtPGDH2 and the C-terminal half (containing the ASB and ACT domains) of AtPGDH1, and vice versa [14]. The results suggested that L-serine inhibits AtPGDH1 enzymatic activity by binding to the ACT domain [14]. Although the binding site of the activator amino acids was not clearly identified, the results suggested that cooperative activation by the effector amino acids requires the formation of higher-order structures of the C- and N-terminal half regions via intra- and/or inter-molecular interactions [14].
We identified a single-copy gene encoding PGDH in the basal land plant (bryophyte) Marchantia polymorpha (MpPGDH) [17]. MpPGDH also possesses the ASB and ACT domains in its C-terminal region. Similar to AtPGDH1 and AtPGDH3, MpPGDH forms a homotetramer in vitro, and is inhibited by L-serine and activated by L-alanine, L-valine, L-methionine, L-homoserine, and L-homocysteine, with the lowest EC50 for L-homocysteine [17]. These findings suggest that PGDH regulation by amino acids is conserved in land plants, regardless of the presence or absence of cooperativity. In addition, the lack of an amino acid-insensitive PGDH in M. polymorpha raises the question of when isozymes such as AtPGDH2 emerged during land plant evolution. To address this issue, in this study, we identified PGDH isozymes in three representative land plant species with available genome sequences, namely, the monocot (angiosperm) Oryza sativa, the basal angiosperm Amborella trichopoda, and the moss (bryophyte) Physcomitrium (Physcomitrella) patens, and examined their regulation by effector amino acids. The findings suggest that PGDH functional diversification in terms of regulation occurred independently in the bryophyte and angiosperm lineages. Furthermore, we conducted site-directed mutagenesis of AtPGDH1 at the amino acid residues corresponding to the key residues for serine binding in MtPGDH, which identified two key residues for regulation and indicated that PGDH regulation occurs in an allosteric manner.
Genetic complementation of Escherichia coli serine auxotroph mutant
Genetic complementation of the E. coli serine auxotroph mutant by PGDH genes was performed as described previously [17,20]. Complementation vectors were constructed as follows. Regions corresponding to the mature enzymes were amplified using the primers shown in Supplementary Table S1 and were ligated into the NcoI and KpnI sites of the expression vector pTV118N (Takara Bio, Inc., Shiga, Japan) using the In-Fusion HD cloning kit. For PpPGDH1 and AmtriPGDH2, cDNA synthesis and cloning into the NcoI and KpnI sites of the pTV118N vector were performed by artificial gene synthesis (Eurofins Genomics Inc., Tokyo, Japan). The E. coli L-serine-auxotroph strain JW2880 (TG1ΔserA::KmFRT) [21] was transformed with the pTV118N expression vectors carrying the cDNAs of OsPGDHs, PpPGDHs, and AmtriPGDHs and grown on M9 media with or without 0.2 mM L-serine. The Escherichia coli strain JW2880 was provided by NBRP E. coli Strain at the National Institute of Genetics, Japan.
Preparation of recombinant enzymes
Full-length cDNA clones of OsPGDH1 and OsPGDH2 were obtained from the NIAS DNA Bank (accession codes AK120939 and AK243399, respectively). cDNA clones of PpPGDH2, PpPGDH3, and PpPGDH4 were obtained from the RIKEN BioResource Center (accession codes pdp33831, pdp82465, and pdp12194, respectively). cDNAs of OsPGDH3 and AmtriPGDH1 were obtained by artificial gene synthesis (Integrated DNA Technologies, Inc., Coralville, IA, USA). For PpPGDH1 and AmtriPGDH2, cDNA synthesis and cloning into a heterologous expression vector (as described below) were performed by artificial gene synthesis (Eurofins Genomics Inc.).
Transit peptide sequences of OsPGDH1, OsPGDH2, OsPGDH3, PpPGDH1, PpPGDH2, PpPGDH3, PpPGDH4, AmtriPGDH1, and AmtriPGDH2 were predicted by comparing their amino acid sequences with that of MtPGDH. Regions corresponding to mature enzymes were amplified using the primers provided in Supplementary Table S1, which were assembled at the SpeI and NotI sites of the expression vector pPAL7 (Bio-Rad, Hercules, CA, USA) using an In-Fusion HD cloning kit (Takara Bio, Inc.) [22].
All recombinant enzymes without the transit peptide were expressed in E. coli BL21 CodonPlus (DE3)-RIPL cells (Agilent Technologies, Santa Clara, CA, USA). Pre-cultivation was performed in Luria-Bertani (LB) liquid medium containing 100 μg/ml carbenicillin and 30 μg/ml chloramphenicol at 37°C for 12 h. Next, 2% of these cultures was used to inoculate 150 ml of LB liquid medium containing 100 μg/ml carbenicillin and 30 μg/ml chloramphenicol and grown at 20°C until the optical density at 600 nm (OD600) reached 0.5. IPTG was added at a final concentration of 0.5 mM, and the cells were further incubated for 12 h at 20°C.
Tag-free recombinant proteins were prepared by affinity purification using Profinity eXact Purification Resin (Bio-Rad). Briefly, cell pellets were obtained by centrifugation of the E. coli cultures at 9000×g, resuspended in 100 mM sodium phosphate buffer (pH 9.0), and sonicated on ice for 10 min. The crude extracts were then centrifuged for 10 min at 9000×g. The supernatants were applied to the Profinity eXact Purification Resin, followed by in-column incubation at 20°C for 1 h to cleave the eXact fusion tag from the recombinant proteins. The buffer of the eluted fractions was immediately exchanged with 100 mM sodium phosphate buffer (pH 9.0) by ultrafiltration using an Amicon Ultra-4 Centrifugal Filter Unit (MWCO 10 000; Merck-Millipore, Billerica, MA, USA). All recombinant proteins were analyzed by SDS-PAGE using a 10% polyacrylamide gel.
Spectrophotometric assays of recombinant PGDH enzymes
A spectrophotometric assay was performed as previously described [14,17]. The optimal pH for AtPGDHs ranged from 9.0 to 10.0 [14,17], whereas that for AhPGDH was 9.0 [23]. The enzyme assay was conducted at pH 9.0 in 100-μl reaction mixtures containing 0.1 M TAPS (pH 9.0), 1 mM DTT, 10 mM 3-PGA, 1 mM NAD+, 0.1 M NaCl, and approximately 3.0-4.0 μg of recombinant enzyme. The reaction mixtures were preincubated without 3-PGA at 25°C for 10 min, and the reactions were initiated by adding 3-PGA [24,25]. 3-PGA oxidation activities were determined from the increase in absorbance of NADH (340 nm), detected with a UV-2700 spectrophotometer (Shimadzu, Kyoto, Japan). Reaction mixtures without substrates were used as negative controls. Kinetic parameters for the apparent Michaelis constants (K_m^app) and apparent maximum velocities (V_max^app) were calculated by fitting specific activities to Michaelis-Menten equations under various concentrations of substrates using the Enzyme Kinetics Module in SigmaPlot 14 (Systat Software, San Jose, CA). Initial velocities were determined from the slopes of the plots of NADH formation versus incubation time within 15 s. The dose responses of the PGDHs to L-alanine, L-valine, L-methionine, L-homoserine, L-homocysteine, and L-serine were determined by fitting the percentage relative activities at various effector concentrations to the Hill equation [26,27] using Global Curve Fitting in SigmaPlot 14, as described previously [14].
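As an open-source sketch of the fitting workflow described here (performed in the paper with SigmaPlot 14), the snippet below fits the Michaelis-Menten and Hill equations with scipy; all data points are invented placeholders, not measurements from this study.

```python
# Sketch: Michaelis-Menten and Hill-equation fits with scipy, standing in for
# SigmaPlot 14's Enzyme Kinetics Module and Global Curve Fitting. Data invented.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, v_max, k_m):
    return v_max * s / (k_m + s)

def hill(x, bottom, top, ec50, n_h):
    return bottom + (top - bottom) * x**n_h / (ec50**n_h + x**n_h)

# Substrate saturation (3-PGA in mM vs. specific activity; illustrative)
s = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])
v = np.array([0.9, 1.6, 2.9, 4.0, 5.1, 6.1, 6.5, 6.8])
(v_max, k_m), _ = curve_fit(michaelis_menten, s, v, p0=[7.0, 0.5])
print(f"V_max^app = {v_max:.2f}, K_m^app = {k_m:.2f} mM")

# Dose response to an activator amino acid (% relative activity; illustrative)
conc = np.array([0.001, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0])  # mM
act = np.array([100.0, 104.0, 135.0, 180.0, 270.0, 295.0, 300.0])
(bot, top, ec50, n_h), _ = curve_fit(hill, conc, act, p0=[100.0, 300.0, 0.1, 1.5])
print(f"EC50 = {ec50:.3f} mM, Hill coefficient = {n_h:.2f}")
```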
Site-directed mutagenesis
Using the pPAL7 (Bio-Rad) vector carrying AtPGDH1, site-directed mutagenesis for AtPGDH1-Q536A, AtPGDH1-D538A, and AtPGDH1-N556A was performed by inverse PCR using overlapping primers for alanine substitution (Supplementary Table S1). After amplification, E. coli DH5α competent cells were transformed with the reaction mixtures, and clones carrying the expected mutations were selected by sequencing. Heterologous expression and purification, determination of kinetic parameters, and dose-response analysis were performed as described above.
Genetic complementation of an E. coli serine auxotrophic mutant by land plant PGDHs
We found PGDH isozymes in O. sativa (OsPGDH1-3), A. trichopoda (AmtriPGDH1-2), and P. patens (PpPGDH1-4) by homology searching. Their enzymatic activities in vivo and in vitro had not been determined. Therefore, to investigate whether OsPGDHs, AmtriPGDHs, and PpPGDHs are functional in living cells, we performed genetic complementation experiments in a serine auxotrophic mutant of E. coli, in which PGDH is disrupted [21]. The cDNA of each PGDH without its transit peptide sequence was expressed under the lac promoter in the mutant. We observed that all PGDH isozymes, except AmtriPGDH2, complemented serine auxotrophy, indicating that they participate in serine biosynthesis in vivo in E. coli (Supplementary Figure S1).
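For readers who want to reproduce the isozyme search, a minimal local-BLAST sketch is shown below. The file names are placeholders, and the choice of NCBI BLAST+ is an assumption, since the paper does not specify which homology-search tool was used.

```python
# Sketch: homology search for PGDH isozymes with local NCBI BLAST+ run from
# Python. Requires makeblastdb/blastp on the PATH; file names are placeholders
# and the tool choice is an assumption (the paper does not name its method).
import subprocess

# Build a protein database from a target proteome (e.g., O. sativa)
subprocess.run(["makeblastdb", "-in", "proteome.faa", "-dbtype", "prot"], check=True)

# Search with a known PGDH (e.g., AtPGDH1) as the query; tabular output
subprocess.run([
    "blastp",
    "-query", "AtPGDH1.faa",
    "-db", "proteome.faa",
    "-evalue", "1e-10",
    "-outfmt", "6 sseqid pident length evalue bitscore",
    "-out", "pgdh_hits.tsv",
], check=True)
```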
Kinetic parameters of recombinant PGDH isozymes
Next, we examined the biochemical properties of OsPGDHs, AmtriPGDHs, and PpPGDHs using the respective recombinant proteins expressed in E. coli. AmtriPGDH2 was also examined to determine why it did not complement the E. coli serine auxotroph mutant. SDS-PAGE analysis indicated that E. coli expressed approximately 60 kDa proteins, which was consistent with the theoretical molecular weights of PGDHs without the transit peptide (Supplementary Figure S2).
The apparent Michaelis constants (K_m^app) and maximum velocities (V_max^app) for 3-PGA and NAD⁺ were calculated at pH 9.0 (Figure 2). The K_m^app values of OsPGDHs and AmtriPGDH1 ranged from 0.18 to 1.21 mM for 3-PGA and from 0.062 to 0.245 mM for NAD⁺ (Figure 2A,B and Table 1), which were comparable to those of AtPGDHs and MpPGDH [14,17]. The V_max^app values of OsPGDHs and AmtriPGDH1 ranged from 4.34 to 7.98 mmol·min⁻¹·mg⁻¹, similar to those of AtPGDHs and MpPGDH (Figure 2A,B). Thus, OsPGDHs and AmtriPGDH1 gave specificity constants (k_cat/K_m^app) comparable to those of AtPGDHs and MpPGDH. However, those of AmtriPGDH2 were extremely low (Table 1), which may explain the failure of genetic complementation in the E. coli serine auxotrophic mutant (Supplementary Figure S1). In contrast, the K_m^app and V_max^app values of PpPGDH2-4 were higher than those of AtPGDHs, MpPGDH, OsPGDHs, and AmtriPGDHs (Figure 2C and Table 1). [Figure 2 legend: data are presented as the means and standard errors from two technical replicates, using enzymes purified from two independent batches of cells (n = 4).]
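As a rough illustration of how the specificity constants in Table 1 relate to the quantities above, the sketch below converts a specific activity into k_cat using the approximately 60 kDa mass observed by SDS-PAGE; the numbers are the upper values from the text taken at face value, for orientation only:

# Illustrative conversion of specific activity to kcat and kcat/Km(app).
MW = 60_000        # g/mol, approximate mass of the tag-free enzyme (from SDS-PAGE)
vmax_app = 7.98    # mmol min^-1 mg^-1 = mol min^-1 g^-1 (upper value in the text)
km_app_mM = 1.21   # mM for 3-PGA (upper value in the text)

kcat = vmax_app * MW / 60.0              # (mol min^-1 g^-1) * (g/mol) / 60 -> s^-1
specificity = kcat / (km_app_mM * 1e-3)  # M^-1 s^-1
print(f"kcat ~ {kcat:.0f} s^-1, kcat/Km(app) ~ {specificity:.2e} M^-1 s^-1")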
Regulation of PGDH isozymes by effector amino acids
We previously reported that AtPGDH1 and AtPGDH3 from the eudicot A. thaliana and MpPGDH from the liverwort M. polymorpha were inhibited by L-serine and activated by L-alanine, L-valine, L-methionine, L-homoserine, and L-homocysteine, whereas AtPGDH2 was not [14,17]. To clarify whether L-amino acid-mediated PGDH regulation is conserved in land plant lineages, we examined the 3-PGA-oxidation activity of PGDH isozymes from land plants O. sativa (monocot), A. trichopoda (basal angiosperm), and P. patens (moss) in response to different doses of effector amino acids (Figure 3). We also investigated AhPGDH from the cyanobacterium Aphanothece halophytica, whose biochemical properties have already been characterized (Supplementary Figure S3) [23].
The results showed that OsPGDH1 and AmtriPGDH1 were inhibited by L-serine and activated by the activator amino acids, exhibiting sigmoidal dose-response curves fitted by the Hill equation (coefficient of determination R² > 0.9) [26] (Figure 3A,D). The Hill coefficients of OsPGDH1 for all effectors were >1.5 (Table 2), indicating that these amino acids inhibited (L-serine) or activated (L-alanine, L-valine, L-methionine, L-homoserine, and L-homocysteine) the enzymatic activity in a cooperative manner. Among the effectors of OsPGDH1, the EC₅₀ of L-homocysteine was the lowest, followed by that of L-methionine (Table 2). AmtriPGDH1 was also inhibited by L-serine and activated by the other amino acids (Figure 3D). Its Hill coefficients for the effectors other than L-homoserine were >1.5 (Table 2), indicating that AmtriPGDH1 was cooperatively regulated by these amino acids. Again, L-homocysteine showed the lowest EC₅₀ value among all tested amino acids. In contrast, only some of the tested amino acids regulated OsPGDH2 and OsPGDH3 (Figure 3B,C). OsPGDH2 was activated by L-homocysteine, L-alanine, and L-methionine to a lesser extent than OsPGDH1, whereas its activity was not affected by L-valine, L-homoserine, or L-serine at any of the tested concentrations. OsPGDH3 was inhibited by L-serine to a degree similar to OsPGDH1 and was activated by L-homocysteine, L-alanine, L-methionine, and L-valine to a lesser extent than OsPGDH1. The responses of AmtriPGDH2 to the effector amino acids also differed from those of AmtriPGDH1 and OsPGDH1: the enzyme was inhibited by L-alanine and L-methionine and was not regulated by the other effectors (Figure 3E).
Among the moss PGDHs, PpPGDH1 and PpPGDH3 were amino acid-sensitive, whereas PpPGDH2 and PpPGDH4 were not affected by the effector amino acids (Figure 3F-I). The EC₅₀ values for L-homocysteine in PpPGDH1 and PpPGDH3 were the lowest among the effectors, as was also the case for AtPGDH1, AtPGDH3, OsPGDH1, and AmtriPGDH1. In contrast, cyanobacterial AhPGDH was neither inhibited by L-serine nor activated by L-homocysteine or the other activator amino acids (Figure 3J).
These results indicate that O. sativa, A. trichopoda, and P. patens each contain at least one isozyme that, like MpPGDH, is regulated by all effector amino acids, as well as other isozymes whose regulation has diversified.
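For orientation, the dose-response fits behind the EC₅₀ values and Hill coefficients quoted above can be sketched as follows; this is a minimal illustration with invented inhibition data, not the Global Curve Fitting procedure used in SigmaPlot:

import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ec50, n):
    """Relative activity (%) vs effector concentration c (four-parameter Hill)."""
    return bottom + (top - bottom) * c**n / (ec50**n + c**n)

# Invented L-serine inhibition data: concentrations (mM), relative activity (%)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
act = np.array([99.0, 97.0, 85.0, 55.0, 25.0, 12.0, 10.0])

# With this parameterization, bottom = low-dose plateau, top = high-dose plateau
popt, _ = curve_fit(hill, conc, act, p0=[100.0, 10.0, 0.3, 1.5], maxfev=10000)
bottom, top, ec50, n_hill = popt
print(f"EC50 = {ec50:.2f} mM, Hill coefficient = {n_hill:.2f}")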
Identification of key amino acid residues for regulation of AtPGDH1
To clarify the difference between amino acid-sensitive and -insensitive isozymes, we searched the amino acid sequences of the PGDH isozymes and evaluated which protein motif is responsible for the regulation. Our previous domain-swapping experiment between amino acid-sensitive AtPGDH1 and amino acid-insensitive AtPGDH2 suggested that L-serine binds the ACT domain of AtPGDH1 to inhibit its enzymatic activity, although the binding site of the activator amino acids had not been clearly identified [14]. In this study, we compared the amino acid sequences of the PGDH isozymes of the land plant species, as well as those of A. halophytica and M. tuberculosis (Supplementary Figure S4), focusing on the C-terminal regions containing the ASB and ACT domains (Figure 4). [Table 1 footnote: the values were calculated from Figure 2 and are presented with standard errors (n = 4, except OsPGDH1, n = 3).]
In the case of MtPGDH inhibition by L-serine, the Tyr461 (Y461), Asp463 (D463), and Asn481 (N481) residues in the ACT domain (Figure 4, arrowheads) formed hydrogen-bond networks with the carboxyl group of L-serine [37]. Multiple sequence alignment (Figure 4) indicated that the residues corresponding to D463 and N481 were completely conserved among all PGDHs examined. In contrast, the residue corresponding to Y461 was commonly substituted with Gln (Q) in the land plant PGDHs. Therefore, to determine whether this triad of Gln, Asp, and Asn in the land plant PGDHs is involved in PGDH regulation by the effector amino acids, we conducted site-directed mutagenesis of AtPGDH1 at Q536, D538, and N556 (which correspond to Y461, D463, and N481 in MtPGDH, respectively; Supplementary Figure S4). We constructed three alanine-substituted AtPGDH1 enzymes: AtPGDH1-Q536A, AtPGDH1-D538A, and AtPGDH1-N556A. A spectrophotometric assay using the recombinant enzymes indicated that AtPGDH1-D538A and AtPGDH1-N556A had kinetic parameters only slightly different from those of wild-type AtPGDH1, whereas AtPGDH1-Q536A showed a lower specificity constant for 3-PGA (Figure 5 and Table 3).
We next analyzed the dose responses of the alanine-substituted AtPGDH1 enzymes to the effector amino acids. AtPGDH1-D538A and AtPGDH1-N556A completely lost their cooperative inhibition by L-serine and activation by the five activator amino acids (Figure 6). In contrast, AtPGDH1-Q536A was inhibited and activated by the effector amino acids with slightly different EC₅₀ values compared to wild-type AtPGDH1 (Table 4). The Hill coefficients were greater than 2 except for that of L-methionine, indicating that Q536 is not directly involved in cooperative inhibition and activation by effector amino acids other than L-methionine. These results suggest that D538 and N556 in the ACT domain of AtPGDH1 are necessary for cooperative inhibition and activation by the effectors (Figure 6), and that the regulation occurs in an allosteric manner. Additionally, Q536 affects AtPGDH1 catalytic activity (Figure 5).
Discussion
Our biochemical analysis revealed that all land plants examined possess PGDH isozyme(s) that are inhibited by L-serine and activated by L-alanine, L-valine, L-methionine, L-homoserine, and L-homocysteine. Except for M. polymorpha, these plants also contain isozymes with diverse sensitivities to the effector amino acids. Phylogenetic analysis of PGDHs from various land plant species, including bryophytes, a lycophyte, gymnosperms, and angiosperms, indicated that angiosperm PGDHs are divided into two subclades (Figure 7, Supplementary Table S2). One subclade (hereafter sub. I) includes PGDH isozymes that are inhibited by L-serine and activated by the five L-amino acids, namely the PGDH isozymes from the eudicot A. thaliana (AtPGDH1 and AtPGDH3), the monocot O. sativa (OsPGDH1), and the basal angiosperm A. trichopoda (AmtriPGDH1) (Figure 7, shown in red letters). The other subclade (sub. II) includes isozymes that are diverse in terms of amino acid sensitivity: AtPGDH2 is insensitive to all effector amino acids, whereas OsPGDH2 and OsPGDH3 are sensitive to some effectors (Figure 7, shown in blue letters). AmtriPGDH2 was also regulated by some of the effectors (Figure 3), although its specificity constant was extremely low (Figure 2 and Table 1). We found that almost all angiosperm species examined possess both sub. I and sub. II PGDHs (Figure 7). The seven gymnosperm PGDHs examined formed a clade sister to sub. I and sub. II together (Figure 7), suggesting that the two angiosperm subclades separated after the divergence from the gymnosperm lineage.
In the bryophyte PGDH clade, the four PGDH isozymes of the moss P. patens separated into two groups, corresponding to the amino acid-sensitive type (PpPGDH1 and PpPGDH3) and the amino acid-insensitive type (PpPGDH2 and PpPGDH4). The liverwort M. polymorpha has a single PGDH isozyme, MpPGDH, belonging to the bryophyte clade. Given that MpPGDH is sensitive to all six effector amino acids [17], the phylogeny suggests that the ancestral PGDH in land plants was likely amino acid-sensitive and that PGDH isozymes at least partially free from regulation by the six effector amino acids were later acquired independently in different land plant lineages via gene duplication events during evolution. In addition, we showed that the cyanobacterium A. halophytica possesses only the amino acid-insensitive type of PGDH (Figures 3 and 7). It is widely accepted that the eukaryotic photosynthetic organelle (plastid) originated from endosymbiosis of cyanobacteria in the ancestor of Plantae [38]. Therefore, further studies of PGDHs in green plant lineages that diverged before land plants would reveal the origin of amino acid-mediated PGDH regulation.
PGDH duplication and functional diversification appear to have been necessary for the evolution of land plants to adequately control the serine supply in different tissues at different developmental stages. In A. thaliana, the three AtPGDH genes exhibit different tissue-specific expression patterns [12,13]. The loss-of-function mutant of AtPGDH1 exhibited embryonic lethality, whereas those of AtPGDH2 and AtPGDH3 showed no drastic visible phenotype [12,13], demonstrating the functional diversification of PGDH isozymes. Because serine functions as a precursor of stress-related specialized (secondary) metabolites such as glucosinolates in Brassicaceae plants [39] and glycine betaine in Poaceae plants [40], the serine supply may be involved in environmental stress responses in plants, and it is likely that some PGDH paralogs are regulated to fulfill this demand. In fact, AtPGDH1 is under the control of MYB34 and MYB51, transcription factors that regulate tryptophan-derived glucosinolate biosynthesis [13]. In Beta vulgaris, BvPGDHa was induced whereas BvPGDHb was repressed under salt stress [41]. Similarly, AtPGDH1 and AtPGDH2 were induced and AtPGDH3 was repressed by salt stress, although the metabolic functions of these genes under salt stress are yet to be identified [42]. However, as the serine content increased in A. thaliana after salt treatment [42], regulation of PGDH may also play important roles in adjusting the serine supply in response to salt stress. The transcriptional regulation by salt does not appear to be related to the diversification of sub. I and sub. II PGDHs, as sub. I contains both the salt-inducible AtPGDH1 and the salt-repressible AtPGDH3. The pattern of occurrence of amino acid-sensitive isozymes and salt-regulated homologs in land plant lineages suggests that PGDH represents a point in serine biosynthesis that is easily controllable by environmental stresses such as salinity. A recent study revealed redox regulation of AtPGDH1 associated with a redox-active Cys pair uniquely found in land plant PGDHs [43]. Our previous study using AtPGDH1-AtPGDH2 chimeric enzymes indicated that some features of the AtPGDH1 N-terminal region are necessary for the five activator amino acids to fully activate PGDH [14]. These results suggest that regulation of PGDH enzymatic activity by the effector amino acids requires not only effector binding to the ACT domain but also the transmission of the resultant structural changes from the ACT domain at the C-terminal region to the catalytic domain at the N-terminal region, or from the effector-bound PGDH monomer to the other monomers [14,17]. In this study, site-directed mutagenesis of AtPGDH1 showed that the D538 and N556 residues in the ACT domain are necessary for cooperative inhibition by L-serine and activation by the activator amino acids (Figure 6). These residues correspond to the serine-binding sites of MtPGDH from a mycobacterium, suggesting that they are involved in binding of the effector amino acids in AtPGDH1, and that binding of an effector to the ACT domain regulates catalytic activity via an allosteric effect. However, these aspartic acid (D) and asparagine (N) residues were conserved in all PGDHs we examined, including the amino acid-insensitive isozymes (Figure 4). The Q536 residue of AtPGDH1, which is conserved in land plant PGDHs, is not essential for regulation but is involved in enzyme kinetics (Figures 5 and 6). At the primary-structure level, no amino acid residue is conserved only in the amino acid-sensitive isozymes (Figure 4 and Supplementary Figure S4). [Table 3 footnote: values were calculated from Figure 5 and are presented with standard errors (n = 4). ¹The data are cited from Okamura and Hirai [14]. Table 4 footnote: values were calculated from Figure 6 and are presented with standard errors (n = 4 or 2). ¹The data are cited from Okamura and Hirai [14].]
Protein structure analysis performed in the presence of the effector amino acids would help us understand the molecular mechanism of PGDH regulation by effector amino acids.
In conclusion, PGDH enzymes functionally diversified in terms of amino acid-mediated allosteric regulation evolved convergently in the bryophyte and angiosperm lineages. Although the protein structural differences between amino acid-sensitive and -insensitive isozymes require further analysis, our findings reveal the binding sites of allosteric effectors in land plant PGDHs, which have long been unknown [11,44], and provide insight into the biological importance of the phosphorylated pathway of serine biosynthesis in land plants.
Data Availability
All data are included in the main manuscript and in the Supplementary data file. | 2021-05-27T06:19:22.172Z | 2021-05-25T00:00:00.000 | {
"year": 2021,
"sha1": "f7bfc3ce2763554cda262bb18586b85758a77ac3",
"oa_license": "CCBYNCND",
"oa_url": "https://portlandpress.com/biochemj/article-pdf/478/12/2217/915243/bcj-2021-0191.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "166cc2af42b675c4ea9b69cb4eaa97f92db1a1da",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44598043 | pes2o/s2orc | v3-fos-license | Methotrexate for refractory prurigo nodularis
Prurigo nodularis (PN) is a chronic, intolerably pruritic inflammatory skin disease that results from chronic pruritus. The term prurigo is Latin and means itching. PN was first described by J.N. Hyde in 1909. Although the disease was described more than a century ago, its pathogenesis remains unclear, and few studies have addressed the systemic treatment of prurigo nodularis. The diagnosis and therapy of chronic pruritus and prurigo nodularis require a multidisciplinary approach [1].
INTRODUCTION
Prurigo nodularis (PN) is a chronic, intolerably pruritic inflammatory skin disease that results from chronic pruritus. The term prurigo is Latin and means itching. PN was first described by J.N. Hyde in 1909. Although the disease was described more than a century ago, its pathogenesis remains unclear, and few studies have addressed the systemic treatment of prurigo nodularis. The diagnosis and therapy of chronic pruritus and prurigo nodularis require a multidisciplinary approach [1].
CASE REPORT
A 64-year-old male patient had suffered from moderate to severe atopic dermatitis since 2007 and had been treated for a long time with topical and systemic steroids as well as topical calcineurin inhibitors such as pimecrolimus and tacrolimus, which led to transient improvement in his condition. However, two years later the patient developed disseminated pruritic nodules over the trunk and extremities. The diagnosis of prurigo nodularis was established clinically and histologically.
He was treated with high-potency topical steroids and oral antihistamines without improvement, and intralesional injection of triamcinolone did not show any benefit. The patient was therefore investigated intensively: his complete blood count (CBC) was unremarkable, and his urea, electrolyte and creatinine levels were within normal limits. His liver enzymes were normal, as was his thyroid function test (TFT). Total IgE was elevated at 769, whereas specific IgE results were irrelevant. Prick tests for pollen and food allergens and a standard patch test were insignificant. Autoimmune antibodies, including ANA and pemphigus, pemphigoid and Duhring (dermatitis herpetiformis) antibodies, were investigated and found negative. The CD4/CD8 ratio was normal.
Despite topical treatment as well as phototherapy (including UVB and PUVA), the patient's condition deteriorated. He was therefore treated with multiple modalities, including high-potency topical steroids, intravenous antihistamines and cyclosporine. However, cyclosporine was discontinued because of uncontrolled hypertension and unbearable gastrointestinal symptoms, and his symptoms were also refractory to systemic therapy with omalizumab. The patient was then treated with methotrexate 15 mg subcutaneously weekly, and remarkable improvement was noted 3 months after the onset of treatment. The dose of methotrexate was subsequently reduced in a stepwise manner and then discontinued following complete resolution of the pruritus and prurigo nodules (Figs. 1 and 2).
Definition
Prurigo nodularis is a chronic, debilitating inflammatory skin disorder that may affect the entire body. The disease can occur in all age groups but primarily affects adults. The exact etiology of prurigo nodularis is still unknown. It results from chronic scratching due to many causes, such as dermatological disorders (e.g., xerosis or atopic dermatitis) and systemic diseases (e.g., hyperthyroidism, hepatic or renal dysfunction, lymphoma and iron deficiency). Emotional distress and psychological illnesses are also common contributing factors [2-4].
The lesions of prurigo nodularis vary in number and morphology. The eruption can be erythematous, brown or skin-colored. The lesions usually present as hard, dome-shaped papules or nodules that are excoriated and have a central scale or crust. The nodules are symmetrically distributed, with a predominance on the extensor surfaces of the upper and lower limbs. Prurigo nodularis can be diagnosed clinically, as it has a characteristic morphology; in addition, histological examination is a useful confirmatory diagnostic tool. Further investigations to rule out any underlying systemic cause of pruritus are crucial, such as CBC, liver function tests, creatinine and TFT [4-7].
Pathogenesis
The exact pathophysiology of PN is not fully understood. Recent studies have proposed neurogenic mechanisms, as dermal hyperplasia and epidermal hypoplasia of sensory nerve fibers have been documented; this is supported by the effectiveness of thalidomide and even gabapentin in the management of PN. In addition, higher levels of the pruritic cytokine IL-31 were recently found in the skin of patients with prurigo nodularis than in other pruritic skin diseases. Moreover, recent studies have revealed that mast cells play a crucial role in the genesis of pruritus in PN. Mast cells in PN lesions are abundant adjacent to peripheral nerves and have a distinctive morphology, such as an enlarged cell body and a dendritic shape, compared with the round or elongated shape observed in normal skin. PN mast cells also have abundant cytoplasm with a reduced number of granules, suggesting that many of the granules have been released into the surrounding tissue. Mast cells in PN have been observed to produce more nerve growth factor (NGF) in lesional skin, leading to neural hyperplasia, which in turn causes intense pruritus. Apart from neural hyperplasia, other mast cell products may contribute to pruritus in PN, including histamine, tryptase, prostaglandins and interleukins. Thus, the pathogenesis of PN seems to be regulated by immunological neuronal plasticity [7-9].
Treatment
Prurigo nodularis is often refractory to various therapeutic regimens. Optimal skin hydration through regular use of emollients is the mainstay of the treatment of pruritus, as emollients enhance the skin barrier function and prevent entry of irritants. Topical treatments of PN include antipruritic agents such as menthol and anesthetic agents such as pramoxine. Topical capsaicin and calcipotriol have been reported as effective therapies. Potent and super-potent corticosteroids can also be effective owing to their anti-inflammatory properties. Intralesional corticosteroids such as dexamethasone or triamcinolone suspensions may be effective but are impractical when lesions are numerous. Cryotherapy has been used, but depigmentation and scarring can occur. For disseminated lesions, phototherapy with UVB or PUVA can be administered. Systemic treatments used for PN include antihistamines, antidepressants such as amitriptyline or doxepin, oral steroids and naltrexone. In severe refractory cases, cyclosporine, azathioprine, methotrexate, thalidomide and immunoglobulins have been reported to be effective [10,11].
Methotrexate is a folic acid antagonist commonly used in the management of inflammatory, autoimmune and malignant disorders. The anticancer property of MTX is well described: it suppresses key enzymes in the biosynthesis of purines and pyrimidines, thereby reducing malignant cell proliferation and turnover. MTX also has an anti-inflammatory effect, which is less well understood. The most probable anti-inflammatory mechanism of MTX is an enhanced extracellular concentration of adenosine, which has potent anti-inflammatory activity. Adenosine interferes with the pro-inflammatory consequences of classical macrophage activation, leading to suppression of cytokine/chemokine production, including IL-6, IL-12, tumor necrosis factor (TNF) and interferon-γ. In addition, the recruitment and activation of neutrophils are impaired by adenosine. This may explain the efficacy of MTX in the management of atopic dermatitis and its associated PN [8,9,12].
CONCLUSION
Management of prurigo nodularis is often challenging, as the etiology of PN is unknown in the majority of cases. Conservative treatments such as topical corticosteroids, antipruritic agents and phototherapy are often ineffective. This case illustrates the efficacy of methotrexate in the management of prurigo nodularis; however, further studies should be conducted to assess the long-term effectiveness of MTX in different age groups.
Consent
The examination of the patient was conducted according to the Declaration of Helsinki principles. | 2017-09-26T17:58:49.064Z | 2017-01-09T00:00:00.000 | {
"year": 2017,
"sha1": "1b59026f5dde5f5356c22d36c8dbfdbb306bbd8f",
"oa_license": "CCBY",
"oa_url": "http://www.odermatol.com/odermatology/20171/10.Methotrexat-AlZaabiM.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1b59026f5dde5f5356c22d36c8dbfdbb306bbd8f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
121056748 | pes2o/s2orc | v3-fos-license | Quantum Control of Electron Wavepacket Dynamics in Molecules by Trains of Half-Cycle Pulses
We investigate localization dynamics of electrons in small molecular model systems driven by optimally shaped trains of half-cycle pulses (HCP). We explore the parameter space defining these HCP trains and demonstrate that the timing and strength of the first HCP "kicks" define the efficiency and direction of electron localization. As an extension, we also demonstrate that electron localization can be achieved in simple four-atomic linear molecules, opening the route towards selective charge transport along chains of atoms.
Introduction
The possibility to monitor ultrafast dynamics on the femtosecond timescale, the timescale of nuclear motion [1], evoked the desire not only to observe but also to actively control nuclear dynamics [2-7]. With the advent of attosecond physics, it has become possible to extend this approach to electronic dynamics [8], likewise inspiring attempts to actively control electronic dynamics. Control requires either ultrashort (attosecond) pulses, with spectra extending to the EUV domain, or intense (infrared) fields, which face the additional challenge of complicated strong-field dynamics.
Recently, quantum control protocols steering the dynamics of ionized electrons by either two-color laser fields [9] or elliptically polarized single-color fields [10,11] were suggested. It has been demonstrated that bound electrons can be steered in real time either by carrier-envelope-phase control of few-cycle infrared laser pulses [12-18] or by control over the delay time between two laser pulses [19,20]. Alternatively, phase- and amplitude-shaped electric fields may steer electronic and coupled nuclear-electronic dynamics [21]. We have recently demonstrated that breaking the inversion symmetry of the electric field relative to the polarization axis, e.g. by mixing even and odd harmonic colours of the fundamental frequency, opens new possibilities to steer electronic dynamics on the femto- and sub-femtosecond timescale [17,22]. By combining several harmonic orders with locked phases, a train of unidirectional half-cycle laser pulses (HCP) can be formed.
In this work, we present numerical results demonstrating efficient control of electron localization dynamics in small molecular systems by trains of HCPs. While for dissociating systems the electronic motion "freezes out" upon dissociation, for homonuclear diatomic molecules with bound states localization is only transient. We also present results for a four-atomic system of reduced dimensionality to demonstrate that the quantum control protocol can be extended towards longer, chain-like molecules.
The paper is organized as follows: Section 2 briefly describes the model systems representing different molecules. In Section 3, the numerical results for quantum control of electronic localization dynamics in bound systems are presented, together with a description of the underlying physical mechanism. Finally, Section 4 contains a brief summary and conclusions.
2. Brief description of the theoretical methods

2.1. Model systems describing bound molecular ions

We consider the coupled nuclear-electronic dynamics of prototype molecular systems representing simple diatomic and four-atomic molecular ions. The model potentials are one-dimensional soft-core potentials with a smoothing function α(R) depending on the nuclear degree of freedom R. These potentials are parameterized such that the two lowest-lying electronic states of the model system are bound states with even and odd symmetry. The Hamiltonian of these systems, with one electronic degree of freedom (−∞ < x < ∞) and one nuclear degree of freedom (0 ≤ R < ∞), is given by the sum of kinetic (T) and potential (V) terms plus the interaction with the external field E(t) (atomic units are used unless stated otherwise). The potential term has a soft-core form whose smoothing function α(R) enables control over the equilibrium internuclear distance, the energy gap between the electronic states, and the ionisation potential; the particular choice of its shape is given below. Being interested in linearly polarized fields and in dynamics developing mainly along the molecular axis, the relevant features can be extracted from a one-dimensional description. We believe that a full three-dimensional calculation might yield different absolute values for the observables of interest, here the absolute asymmetry, but would not fundamentally alter the underlying mechanisms. The time-dependent Schrödinger equation (TDSE), i ∂Ψ(x, R, t)/∂t = H Ψ(x, R, t), is solved for the coupled electronic and nuclear coordinates with the initial condition Ψ(x, R, t = 0) = ϕ_g(x; R) χ_0(R), where ϕ_g(x; R) is the electronic ground state with gerade symmetry and χ_0(R) the vibrational ground state. The calculations are performed on a two-dimensional grid using the split-operator technique [23], with 512 points in R and 1024 in x; the grid in R extends from 0.1 to 32 a.u., and the grid in x from −100 to 100 a.u. Additionally, we have performed calculations within a basis expansion of two Born-Oppenheimer (BO) states, solving the corresponding nuclear Schrödinger equation, in which M is the reduced nuclear mass, I the unit matrix, and µ_gu(R) the transition dipole between the electronic states g and u. The potential curves V_i(R) are the electronic eigenenergies, depending parametrically on R. For computational efficiency, time-consuming scans of the multi-dimensional control parameter space were performed within the BO basis expansion; the control fields obtained within the BO states then serve as input for the subsequent analysis with the full numerical solution of the TDSE.
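A minimal one-dimensional sketch of the split-operator propagation may help clarify the numerics; it is an illustration under assumed parameters (a generic soft-core potential and a Gaussian initial state), not the authors' two-dimensional code:

import numpy as np

# Grid and time step in atomic units (values chosen for illustration only).
nx, dt = 1024, 0.05
x = np.linspace(-100, 100, nx, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)        # conjugate momentum grid

V = -1.0 / np.sqrt(x**2 + 2.0)                  # generic soft-core potential (assumed)
T = 0.5 * k**2                                  # electron kinetic energy, m_e = 1

psi = np.exp(-0.5 * x**2).astype(complex)       # Gaussian initial guess
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize on the grid

def split_operator_step(psi, E_t):
    """One step of exp(-iH dt), split as V/2 - T - V/2, field in the length gauge."""
    half_v = np.exp(-0.5j * dt * (V + x * E_t))
    psi = half_v * psi
    psi = np.fft.ifft(np.exp(-1j * dt * T) * np.fft.fft(psi))
    return half_v * psi

for _ in range(2000):                           # field-free propagation example
    psi = split_operator_step(psi, E_t=0.0)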
In eq. 3, ϕ g (x; R) is the electronic ground state with gerade symmetry, and χ 0 (R) the vibrational ground state. The calculations are performed on a 2-dimensional grid, using the split-operator technique [23] with a total number of 512 points in R and 1024 in x. The grid in R is defined from 0.1 to 32 a.u., and the grid in x from −100 to 100 a.u. Additionally, we have performed calculations within the basis expansion of 2 Born-Oppenheimer (BO) states with the nuclear Schrödinger equation In above equation, M is the reduced nuclear mass, I is the unit matrix, and µ gu (R) is the transition dipole between electronic states g and u. The potential curves V i (R) are the electronic eigenenergies, parametrically depending on R. For computational efficiency, time-consuming scans in the multi-dimensional control parameter space were performed within the BO basis expansion. The control fields obtained within the BO states serve then as an input for the subsequent analysis within the full numerical solution of the TDSE. Figure 1. Formation of unidirectional HCP trains by superimposing harmonic colours nω to the fundamental frequency ω with appropriate phase φ n and amplitude E n . In general, |E n | → 0 as n → ∞. In addition, the envelope function f (t) can be optimized. An example of a HCP used in localization protocols is shown in the right panel.
For discussion and analysis, we extract the following observables: the time-dependent populations P_i(t) = |⟨ϕ_i|Ψ(t)⟩|² of states i, where the ϕ_i(x; R) are the electronic eigenstates (obtained within the BO approximation), and, for the analysis of the localization dynamics, the projections P_{l,r}(t) = |⟨Φ_{l,r}|Ψ(t)⟩|² onto the coherent superposition states Φ_{l,r} = (ϕ_g ± ϕ_u)/√2, where ϕ_u is the first excited state with ungerade symmetry. The degree of localization for diatomic molecules is quantified by the (time-dependent) asymmetry coefficient A(t) = [P_l(t) − P_r(t)]/[P_l(t) + P_r(t)], assuming values between −1 and +1, with the sign indicating the direction in which the electron is preferentially localized: localization in the direction of the force of the unipolar peak field of the HCP train (positive fields) yields localization in the left potential well, i.e. A > 0. The objective is to maximize A via the application of a genetic algorithm to optimize the control parameters of the pulse train.
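As an illustration of these observables, the following sketch computes P_l, P_r and A from a propagated wavefunction; the grid, the states and the sign convention for "left" are assumptions for demonstration:

import numpy as np

def localization_asymmetry(psi, phi_g, phi_u, dx):
    """P_l, P_r from projections on (phi_g +/- phi_u)/sqrt(2), and the
    asymmetry A = (P_l - P_r)/(P_l + P_r).

    psi, phi_g, phi_u: complex wavefunctions sampled on the same 1-D grid.
    Which sign combination corresponds to the "left" well is a convention here.
    """
    phi_l = (phi_g + phi_u) / np.sqrt(2.0)
    phi_r = (phi_g - phi_u) / np.sqrt(2.0)
    p_l = np.abs(np.vdot(phi_l, psi) * dx) ** 2   # vdot conjugates phi_l
    p_r = np.abs(np.vdot(phi_r, psi) * dx) ** 2
    return p_l, p_r, (p_l - p_r) / (p_l + p_r)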
Half-cycle pulses
The description of trains of half-cycle pulses for quantum control on the ultrashort timescale has been summarized in our previous paper [17]. Briefly, the trains of half-cycle pulses (HCP) we consider consist of a sequence of ultrashort unipolar electric-field "spikes" satisfying the requirement ∫E(t) dt = 0. The fields are designed such that they feature a strong peak field in one direction ("kicks"), E_+(t), accompanied by a low-amplitude, long-lasting offset field in the opposite direction, E_−(t). Pulse trains down to the attosecond regime can be synthesized by the superposition of harmonic colours [24], E(t) = f(t) E_0 Σ_n E_n cos(nωt + φ_n), where f(t) is the normalized envelope function, E_0 the overall field strength, and E_n the amplitude of the nth harmonic with phase φ_n. Mixing the fundamental frequency ω with several higher (even and odd) harmonics and choosing the phases φ_n properly, a unidirectional HCP train can be formed (Fig. 1). In our simulations, we chose a flat-top envelope function with a smoothed ramp-on and ramp-off; the offset field E_−(t) then follows from the zero-net-force condition above.
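The harmonic synthesis is straightforward to reproduce numerically; the sketch below uses illustrative (not optimized) amplitudes and a simple linear ramp-on as the flat-top envelope:

import numpy as np

def hcp_train(t, omega, E0, amplitudes, phases, envelope):
    """E(t) = envelope(t) * E0 * sum_n E_n cos(n omega t + phi_n), n = 1, 2, ..."""
    field = np.zeros_like(t)
    for n, (E_n, phi_n) in enumerate(zip(amplitudes, phases), start=1):
        field += E_n * np.cos(n * omega * t + phi_n)
    return envelope(t) * E0 * field

omega = 0.048                                   # a.u., roughly a 950 nm fundamental
period = 2 * np.pi / omega
t = np.linspace(0.0, 20 * period, 40000)
ramp_on = lambda t: np.clip(t / (2 * period), 0.0, 1.0)   # 2-cycle linear ramp

# Phase-locked harmonics with decaying amplitudes yield unipolar "kicks"
# separated by a weak offset field of opposite sign (illustrative values).
E = hcp_train(t, omega, E0=0.024,
              amplitudes=[1.0, 0.85, 0.65, 0.45, 0.25, 0.1],
              phases=[0.0] * 6, envelope=ramp_on)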
Numerical results
Using a genetic algorithm, the objective is to find optimally shaped HCP trains that induce the highest possible degree of electron localization in one potential well. Localization is transient for homonuclear molecules (where the two lowest eigenstates are bound), and can be permanent for non-inversion-symmetric heteronuclear or dissociative molecules, where two electronic states lie well below the internuclear barrier such that the electronic dynamics can "freeze out" [17,22]. We mainly search for pulse trains inducing electron localization in the left potential well, corresponding to the direction of the force from the kicks; however, localization in the opposite direction, along the direction of the quasi-static DC field, is possible as well, as we demonstrate below. The genetic algorithm optimizes the fundamental frequency ω, the field strength E_0, the absolute phase of the generating fundamental pulse φ_abs, and the rise time τ, corresponding to the time needed to reach the maximum value of the flat-top envelope (Fig. 1(b)); a toy version of this loop is sketched after this paragraph. The length of the pulses was kept constant and the width of the kicks was fixed to 0.2 T (or 570 as). The optimization by the genetic algorithm is performed within the BO expansion for computational efficiency; however, all results presented here have been obtained from the fully coupled nuclear-electronic Schrödinger equation using these optimized pulse trains.
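The toy genetic algorithm below searches the four control parameters with truncation selection and Gaussian mutation; the fitness function is a placeholder, since the real objective (the asymmetry obtained from the BO-basis propagation) is not reproduced here, and the parameter bounds are assumed ranges:

import numpy as np

rng = np.random.default_rng(seed=1)

# Assumed bounds: omega (a.u.), E0 (a.u.), phi_abs (rad), tau (optical cycles).
bounds = np.array([[0.040, 0.060],
                   [0.010, 0.040],
                   [0.0, 2 * np.pi],
                   [1.0, 4.0]])

def fitness(params):
    """Placeholder: in the paper this would be the asymmetry from propagation."""
    return -np.sum(((params - bounds.mean(axis=1)) / np.ptp(bounds, axis=1)) ** 2)

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 4))
for generation in range(50):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]            # keep the best half
    children = parents[rng.integers(0, len(parents), size=40)]
    sigma = 0.05 * (bounds[:, 1] - bounds[:, 0])       # mutation width per parameter
    children = children + rng.normal(0.0, 1.0, children.shape) * sigma
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = pop[np.argmax([fitness(p) for p in pop])]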
Diatomic potential
We consider first the case of a homonuclear molecule (inversion-symmetric potential): all electronic eigenstates have inversion symmetry, making (inversion-symmetry-breaking) localization possible only in the presence of an HCP. The efficiency in forming a transiently localized wavepacket depends on the energy gap between two nearby eigenstates of opposite inversion symmetry [22]. We first concentrate on the diatomic model potential for which the excitation gap to the first excited state is very small (compared to the fundamental frequency of the field) and the equilibrium internuclear distances of those two states are very similar (the nuclear mass is set to the proton mass). The smoothing function α(R) entering the nuclear-electron potential is parameterized with a = 0.6055, r_1 = 5, a_1 = 4.24, b = 0.6, c = 0.825, r_2 = 6.75, R_c = 5 and a_2 = 0.35, yielding potential curves as depicted in Fig. 2(b). The parameters of the optimal pulse train are as follows: field strength E_0 = 0.024 a.u., rise time τ of two optical cycles, wavelength 949 nm, and phase φ_abs = 0 (i.e. one HCP spike is located directly at the position where the ramp-on of the envelope reaches its maximum), see Fig. 3(a). For the flat-top pulse, the total pulse length is of minor importance; we chose a longer pulse in order to demonstrate that localization can be sustained for several vibrational periods. As we have shown recently [22], the electronic density distribution is transiently localized on the left nucleus (in the direction of the "kicks") during the ramp-on (Fig. 3(b)). The timing and strength of the first kicks are decisive for the success of the control process. After the end of the pulse train, the localization of the charge density oscillates between the two nuclei with an oscillation period inversely proportional to the energy gap between the electronic ground and excited states. With the excitation of the electronic wavepacket, a vibrational (nuclear) wavepacket starting from the ground state is also initiated (modulation on a timescale of 18 fs). Population analysis shows that the population of higher-lying electronic states, as well as ionisation, is negligible (<0.6%). A high degree of localization can be achieved: more than 80% of the population can be driven into the state Φ_l(x; R), corresponding to A ≈ 0.6.
The electron localization is due to the interplay between the kicks and the quasi-dc field: almost adiabatic electronic dynamics driven by the quasi-dc offset field with almost impulsive kicks inducing transitions. The field strength of the HCP train plays a dominant role for this non-perturbative dynamics.
We explore the quantum control landscape by studying the dependence of the localization on the pulse parameters. As is to be expected for strong-field-driven molecular dynamics, the field strength plays a crucial and non-trivial role in the efficient control of localization. Fig. 4(a) displays the time-averaged asymmetry Ā, i.e. the average of A(t) over the propagation time, as a function of the field strength E_0. As was the case for the asymmetry in H₂⁺ [17], scanning the field strength E_0 (keeping the other pulse parameters fixed) changes the value of Ā dramatically: a field strength of E_0 = 0.024 a.u. yields the highest value of Ā = 0.6. Decreasing or increasing the field strength reduces the asymmetry substantially, and may even invert the direction of localization. In contrast, the wavelength has a minor influence on Ā (panel (b)). The phase (or the timing of the first kicks) plays a crucial role, see Fig. 4(c). If we apply the same field as before but change the absolute phase by π, i.e. the kicks are shifted in between those of the best result, none of the kicks manages to transfer a substantial amount of population selectively. However, Fig. 4 indicates that localization in the direction opposite to the direction of the fields should be possible. We have therefore redefined the objective to find an optimally shaped pulse train that induces the highest possible degree of localization in the potential well opposing the direction of the force of the kicks. The optimal pulse train has the following parameters: field strength E_0 = 0.029 a.u., rise time τ of two optical cycles, wavelength 855 nm, and phase φ_abs = 0.76π. The dynamics of the system driven by this "anti-optimal" field is shown in Fig. 5. Here, too, the localization is determined by the first two kicks. However, as the field strength E_0 is higher in this case, the first kick does not invert the field-dressed population, and the system stays in the lower field-dressed state, periodically disturbed by the kicks.

Figure 5. Early dynamics of the system driven by the two HCP trains: (a) populations P_l(t), P_r(t) with the driving HCP train (b) leading to localization in the direction of the kicks. Panels (c,d) display the corresponding dynamics and field for localization in the opposite direction. The full field, together with the electron density ϱ(x, t) = ∫|Ψ(x, R, t)|² dR, is displayed in the right column for the case of localization in the direction opposing the kicks.
Localization in a model system with four nuclei
Efficient localization dynamics can also be found for chain-like molecules. For demonstration purposes, we restrict ourselves to a simplified test system with frozen nuclear coordinates, mimicking a four-atomic linear molecule and described by a one-electron soft-core Hamiltonian. The internuclear distance is fixed to R = 4 a.u., and the screening parameter α is set to 1. Figures 6 and 7 show two different scenarios where the electron is efficiently localized in the left potential well: in the first case (Fig. 6), the HCP train pushes the electron density, initially localized in the two outer potential wells, from the right well over the internuclear barrier to the left potential well, thereby transiently populating the third and fourth excited states. After the end of the pulse, the system is in a superposition of the lowest-lying eigenstates. In the second case (Fig. 7), the HCP train simply ionizes the electron out of the right potential well, thereby creating a superposition between mainly the two lowest-lying electronic states. (Figure 7 shows the same system as Fig. 6, driven by a different, more intense pulse train which ionizes electron density selectively out of the right potential well.) While this reduced system does not fully describe the charge transfer in chain-like molecules, as the vibrational motion is not included, the example suggests that controlled charge migration in larger systems is possible and proceeds under conditions similar to those we have analyzed for the diatomic molecules.
Summary and conclusion
We have presented numerical simulations examining the quantum control of electronic dynamics in small molecules using trains of unidirectional half-cycle pulses (HCPs) on the few-femtosecond scale. Such trains consist of narrow unidirectional "peaks" and a weak offset field in the opposite direction. Extending our recent work [17,22], the systems we have analysed here represent di- and four-atomic molecules.
We explore the quantum control landscape by scanning different pulse parameters governing localization in a diatomic model potential. In the diatomic model system, the almost static offset component of the field shapes field-dressed BO potential curves. Between the kicks, the system adiabatically follows the dressed states, while the HCPs induce almost impulsive couplings between the states. The timing and strength of the first kicks during the ramp-on of the field are decisive for the outcome of the localization dynamics: depending on the pulse parameters, localization in the direction of the kicks, but also in the direction opposite to the kicks, is possible.
Efficient electron localization can also be found in longer, chain-like molecules. In a simple model system representing a four-atomic molecule with frozen nuclei, we were able to demonstrate that these quantum control protocols can be readily extended towards longer chain-like molecules. Interesting applications include electron localization in molecules resembling chains of atoms, where one functional group or atom has an excited dissociative channel: by steering the electron along the chain, one atom or functional group can be detached in a controlled way. In order to examine this class of processes, electron localization and transport along a chain of atoms with bound excited states and coupled nuclear dynamics needs to be considered. Although for homonuclear diatomic molecules localization exists only in the presence of the field, this is a necessary precursor for the implementation of charge transfer along chain-like molecules. Transient polarization can induce a molecular electronic current from one end of the chain to the other. Furthermore, detachment of selected functional groups via electron localization and bond breaking can be envisioned. Transient electron localization can be probed via the asymmetry in the photoelectron distribution induced by a weak, attosecond XUV pulse [22,25]. | 2019-04-19T13:08:00.218Z | 2012-11-05T00:00:00.000 | {
"year": 2012,
"sha1": "41fb7a51c0660fc5e05e964dc3d4e13af0bc13b8",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/388/1/012033",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a9f500d24e609f22b25dbb9225b2bc4150b53633",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
4549706 | pes2o/s2orc | v3-fos-license | Comprehensive proteome profiling in Aedes albopictus to decipher Wolbachia-arbovirus interference phenomenon
Background Aedes albopictus is a vector of arboviruses that cause severe diseases in humans, such as Chikungunya, Dengue and Zika fevers. The vector competence of Ae. albopictus varies depending on the mosquito population involved and the virus transmitted. Wolbachia infection status is believed to be among the key elements that determine viral transmission efficiency. Little is known about the cellular functions mobilized in Ae. albopictus during co-infection by Wolbachia and a given arbovirus. To decipher this tripartite interaction at the molecular level, we performed a proteome analysis in Ae. albopictus C6/36 cells mono-infected by the Wolbachia wAlbB strain or Chikungunya virus (CHIKV), and bi-infected. Results We first confirmed significant inhibition of CHIKV by Wolbachia. Using two-dimensional gel electrophoresis followed by nano liquid chromatography coupled with tandem mass spectrometry, we identified 600 unique differentially expressed proteins, mostly related to glycolysis, translation and protein metabolism. Wolbachia infection had a greater impact on cellular functions than CHIKV infection, inducing either up- or down-regulation of proteins associated with metabolic processes such as glycolysis and ATP metabolism, or of structural glycoproteins and capsid proteins in the case of bi-infection with CHIKV. CHIKV infection inhibited the expression of proteins linked with the processes of transcription, translation, lipid storage and miRNA pathways. Conclusions The results of our proteome profiling have provided new insights into the molecular pathways involved in the tripartite Ae. albopictus-Wolbachia-CHIKV interaction and may help define targets for the better implementation of Wolbachia-based strategies for disease transmission control. Electronic supplementary material The online version of this article (doi:10.1186/s12864-017-3985-y) contains supplementary material, which is available to authorized users.
Background
The Asian tiger mosquito Aedes albopictus is a species native to South and East Asia with a great capacity for invasion; it has been classified by the WHO as the fourth most invasive species in the world [1]. Since the mid-twentieth century, Ae. albopictus has considerably increased its distribution and is currently present on five continents [2]. Ae. albopictus is involved in the transmission of many human-infecting arboviruses, including Chikungunya virus (CHIKV), Dengue virus (DENV), and probably Zika virus [3-5]. Historically, Ae. albopictus has been considered of secondary importance in terms of arbovirosis incidence relative to Aedes aegypti. However, this perception has changed since the implication of Ae. albopictus in the explosive epidemics of CHIKV on La Reunion Island and neighboring islands in the southern Indian Ocean [6,7], as well as in the CHIKV outbreaks in Italy [8] and successive autochthonous transmissions of both CHIKV and DENV in metropolitan France [9-12]. Efficient transmission of CHIKV has been associated with a mutation in the E1 envelope glycoprotein (Ala-226-Val) that increases viral infectivity in Ae. albopictus compared to Ae. aegypti [6,13]. Advances in large-scale analysis technologies and the availability of genome sequences allow this host tolerance to be examined at the molecular level by screening the cell factors possibly mobilized during viral cell invasion. In Ae. aegypti, differential trends of proteomic expression were seen in the midgut and salivary glands infected by CHIKV or DENV in comparison to uninfected specimens [14,15]. Using cellular models, microarray studies have shown that CHIKV enters Ae. albopictus cells by clathrin-dependent endocytosis [16], activating diverse biological processes, including protein folding and metabolic pathways [17]. Overall, the modulation of the synthesis of some classes of host proteins clearly favors virus survival, replication and transmission [18].
Ae. albopictus is naturally infected by the intracellular bacterium Wolbachia pipientis, which is maternally transmitted from mother to offspring. Two distinct Wolbachia strains (wAlbA and wAlbB) are present at variable densities in Ae. albopictus tissues [19-21], and they usually induce sterility through the phenomenon known as cytoplasmic incompatibility [22-24]. In Ae. aegypti, naturally devoid of Wolbachia, transinfected females harboring the wAlbB strain have been found to inhibit the transmission of both CHIKV and DENV [25,26]. In Ae. albopictus from La Reunion, dissemination of DENV serotype 2 to the salivary glands was considerably diminished in Wolbachia-infected individuals in comparison to Wolbachia-uninfected individuals generated by antibiotic treatment [27]. When Ae. albopictus was transinfected with the Wolbachia wMel strain derived from Drosophila melanogaster, the transmission of DENV serotype 2 was totally abolished [28]. However, the inhibitory effect of Wolbachia is not universal [29,30], and one study noted an increase in parasite infection in Anopheles [31], suggesting that variable mechanisms are involved depending on the interacting partners. Investigations into the molecular mechanisms behind Wolbachia interference have suggested that the bacterium may act by modulating the expression of insect innate immune genes, including antimicrobial peptides, or more broadly by inducing oxidative and metabolic stresses that in turn impact the behavior of the infectious agent in the host cells [32,33]. It has also been proposed that Wolbachia and viruses compete for the host cells' resources [34].
We recently showed that the wAlbB strain is able to block CHIKV infection in the Ae. albopictus C6/36 cell line relative to uninfected cells [35]. This is in line with observations in all studies using cellular models [36,37], suggesting that viral inhibition is common in such simplified systems, possibly due in part to the proximity of the interacting partners. Thus, cellular models could represent interesting systems to decipher the mechanisms involved in the tripartite interactions between Wolbachia, arboviruses and host cells. Both naturally and artificially Wolbachia-infected Aedes cell lines have shown changes in the expression of several genes involved in structural, metabolic and stress functions [38,39]. On the other hand, CHIKV was reported to activate cellular functions necessary for infection and persistence [17]. However, no molecular mechanism for the interplay between Wolbachia and CHIKV in Ae. albopictus has been proposed to date. Therefore, in this study we used proteome profiling of Ae. albopictus C6/36 cell lines to discover how Wolbachia-infected cells react when challenged with CHIKV. Two-dimensional electrophoresis (2DE) followed by nano liquid chromatography coupled with tandem mass spectrometry (nanoLC-MS/MS) revealed differentially expressed proteins likely belonging to diverse processes of glycolysis, protein metabolism, protein modification and amino acid metabolism. Overall, the innovative proteomic approach used in this descriptive work provided potential candidates involved in the tripartite mosquito-CHIKV-Wolbachia interaction. Future investigations will focus on functional studies to validate the most promising candidates implicated in the cellular processes that mediate the interplay between the microbes.
Mosquito cell line and virus
The C6/36 cells infected by the Wolbachia wAlbB strain and uninfected cells generated by removing the bacterium through tetracycline treatment [35] were cultured at 28°C in medium consisting of equal volumes of Mitsuhashi/Maramorosch (Bioconcept, Switzerland) and Schneider's insect medium (Sigma, France), supplemented with 10% (v/v) heat-inactivated fetal bovine serum (PAA, USA) and penicillin/streptomycin (50 U/50 μg/mL; Gibco, Invitrogen, France). Cells were continuously passaged in 25-cm² flasks by scraping and seeding a new flask with 1:5 of the cell suspension in 5 mL of fresh medium every 4 days. The Chikungunya virus (CHIKV) 06.21 strain was isolated in C6/36 cells from the serum of a newborn with neonatal encephalopathy during the outbreak on La Reunion Island [6]. Viral stocks were produced on C6/36 cells in 25-cm² flasks at a multiplicity of infection (MOI) of 0.01. After 3 days at 28°C, supernatants from infected cells were recovered and virus titration was done by plaque assay on Vero E6 (green monkey kidney) cells [40]. The stock virus titer was estimated at 10⁸ plaque-forming units (PFU)/mL, and aliquots were stored at −80°C until used.
Cell infection
To assess the impact of cell co-infection by Wolbachia and CHIKV, we compared four modalities of infection, cells uninfected, mono-infected with wAlbB or CHIKV, and bi-infected, each with three independent biological replicates. The day prior to infection, 5 × 10⁶ cells were transferred into a 25-cm² flask and allowed to attach for 18 h at 28°C. Infection at MOI 0.1 with CHIKV 06.21 was performed in 0.5 mL of fresh medium with 2% fetal bovine serum, using virus-free medium as control. After 1 h, 5 mL of fresh medium with 10% fetal bovine serum were added and the incubation was extended. Cells and supernatants were harvested at 24 and 120 h post-infection. For uninfected cells, the same protocol was applied, but the medium did not contain any virus particles. Trypan blue staining used for cell counting and light microscopy employed to monitor the cell monolayers did not show apparent necrotic cells over the course of the experiment (not shown). At the two time points (24 h and 120 h), cells were scraped and pelleted by centrifugation, and a fraction of these cells was conserved in a 1.5-mL tube for genomic DNA and RNA isolation. Each cell pellet was washed once in 10 mL PBS 1× pH 7.4 (Gibco, Invitrogen, France) and then resuspended in lysis buffer composed of 7 M urea (Sigma, France), 2 M thiourea (Fluka, Sigma, France), 4% CHAPS (Sigma, France), 0.5% Triton X-100 (Sigma, France) and 0.08 mM TBP (Sigma, France) in distilled water (Gibco, Invitrogen, France), and incubated on ice for 30 min with regular vortexing. Cell lysates were stored at −80°C until protein extraction.
DNA and RNA isolation
Genomic DNA isolation was performed using the DNeasy Blood and Tissue Kit (Qiagen, France) following the manufacturer's instructions. Cell pellets were resuspended in 180 μL of ATL lysis buffer and incubated for 2 h at 37°C with 2 mg/mL lysozyme (Euromedex, France). Residual co-extracted RNA was eliminated by adding 100 mg/mL RNase A for 2 min at room temperature, and the isolated DNA was then eluted in 30 μL of DNase-free water. To isolate total RNA, cell pellets were crushed in 350 μL of RLT lysis buffer from the RNeasy Mini Kit (Qiagen, France) using an RNase-free pellet pestle (Kontes, USA), following the manufacturer's recommendations. RNA was then eluted in 37 μL of RNase-free water and treated with DNase using the TURBO DNA-free Kit (Ambion, USA) in a 50-μL final volume following the manufacturer's instructions. DNA and RNA were quantified using a UV-mc² spectrophotometer and diluted to 5 ng/μL, then frozen at −20°C (DNA) or −80°C (RNA) until use.
Quantitative analysis of Wolbachia (qPCR) and CHIKV (RT-qPCR)
To monitor the relative density of Wolbachia per cell, qPCR was performed targeting the Wolbachia surface protein (wsp) gene for the bacterium and the actin gene for the host cell. Standard curves were drawn from 10-fold serial dilutions, from 1 × 10⁸ to 1 × 10¹ copies/μL, of the plasmid pQuantAlb16S containing fragments of the two targeted genes [20,41]. The amplification reaction was done in a total volume of 20 μL containing 10 ng of template DNA, 1× (10 μL) Fast SYBR Green Master Mix (Roche, Switzerland), 200 nM of each wsp primer (5'-AAGGAACCGAAGTTCATG-3' and 5'-AGTTGTGAGTAAAGTCCC-3') and 300 nM of each actin primer (5'-GCAAACGTGGTATCCTGAC-3' and 5'-GTCAGGAGAACTGGGTGCT-3'). Amplification was performed on an LC480 LightCycler (Roche, France) and consisted of 10 min at 95°C, followed by 40 cycles of 15 s at 95°C and 1 min at 65°C, and a final elongation at 72°C for 30 s. To quantify the CHIKV RNA copy number, RT-qPCR was performed on the envelope E2 gene using a standard curve of 10-fold serial dilutions of a synthetic CHIKV RNA transcript [29]. One-step RT-qPCR was performed using the EXPRESS One-Step SYBR GreenER Kit (Invitrogen, France) in a volume of 20 μL containing 10 ng of RNA template, 1× (10 μL) EXPRESS SYBR GreenER SuperMix Universal, 200 nM of the sense Chik/E2/9018/+ and anti-sense Chik/E2/9235/− primers [42], and 1× (0.5 μL) EXPRESS SuperScript Mix. Amplification was performed on an LC480 LightCycler (Roche, France) and consisted of 15 min at 50°C and 2 min at 95°C, followed by 40 cycles of 95°C for 15 s and 63°C for 1 min. All PCR reactions were done in triplicate. DNA and RNA extracted from uninfected C6/36 cells were used as negative controls.
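The copy-number calculation from the plasmid standard curve can be sketched as below; the Cq values are hypothetical, and expressing Wolbachia density as the wsp/actin copy ratio follows the cited approach:

import numpy as np

log_copies = np.arange(8, 0, -1)   # 10-fold dilutions: 1e8 .. 1e1 copies/uL
cq_std = np.array([10.1, 13.5, 16.9, 20.4, 23.8, 27.2, 30.6, 34.0])  # hypothetical

slope, intercept = np.polyfit(log_copies, cq_std, 1)
efficiency = 10 ** (-1 / slope) - 1          # ~1.0 corresponds to 100% efficiency

def copies(cq):
    """Interpolate copy number from a measured Cq using the standard curve."""
    return 10 ** ((cq - intercept) / slope)

# Relative Wolbachia density per cell: wsp copies per actin copy (hypothetical Cqs)
wsp_per_actin = copies(cq=18.2) / copies(cq=21.7)
print(f"PCR efficiency ~ {efficiency:.2f}; wsp/actin ratio ~ {wsp_per_actin:.1f}")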
Protein extraction, 2D-PAGE and densitometric gel analyses
To extract proteins, cell lysates were thawed on ice and proteins were precipitated with 10% (w/v) trichloroacetic acid (Sigma, France) at 4°C overnight. Proteins were pelleted by centrifugation at 14,000 g for 15 min at 4°C and washed three times with ice-cold acetone (VWR Chemicals, France). Isoelectric focusing (IEF) was performed using the Protean IEF System (Bio-Rad, France) according to the manufacturer's instructions. The rehydration buffer contained 8 M urea (Sigma-Aldrich) and 4% (w/v) CHAPS (Sigma). IEF was performed with 11-cm non-linear strips, pH 3-10 (Bio-Rad), using the voltage-ramp protocol recommended by the manufacturer (100 V/30 min/rapid, 250 V/30 min/linear, 1000 V/30 min/linear, 7000 V/3 h/linear, and finally 32,000 V·h (pH 3-10 IPG)). The second dimension was carried out using the Criterion Dodeca system (Bio-Rad). A minimum of four gels loaded with biological replicates was used for each condition. Criterion Any kD TGX gels (Bio-Rad) were run at 10°C in Laemmli buffer [43] at 100 V for 2 h. The 2D gels were then stained with silver nitrate as previously described [44], scanned and analyzed using the SameSpots v4.5 software (Nonlinear Dynamics Progenesis, UK). An ANOVA test on the spot volumes was used to compare the different conditions. Variations in spot volumes with p < 0.02 and fold-change >2 were considered significant.
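The selection criteria applied to the SameSpots output amount to a simple filter, sketched here with assumed column names and invented values:

import pandas as pd

# Assumed export format, not the actual SameSpots column names.
spots = pd.DataFrame({
    "spot_id": [101, 102, 103],
    "anova_p": [0.001, 0.15, 0.004],
    "fold_change": [3.2, 2.5, 1.4],
})

# Keep spots with ANOVA p < 0.02 and fold-change > 2, as stated in the text.
selected = spots[(spots["anova_p"] < 0.02) & (spots["fold_change"] > 2)]
print(selected)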
Sample preparation and nanoLC-MS/MS analysis
Protein spots were destained in 60 mM potassium ferricyanide and 200 mM sodium thiosulfate mixed 1:1 until all brown color was removed. The spots were washed through successive incubations in water until all yellow color was removed, and shrunk in acetonitrile (ACN) for 10 min. After ACN removal, gel pieces were dried at room temperature. Proteins were digested by incubating each gel slice with 10 ng/μL trypsin (T6567, Sigma-Aldrich) in 40 mM NH4HCO3, 10% ACN, with rehydration at 4°C for 10 min followed by overnight incubation at 37°C. The resulting peptides were extracted from the gel in three steps: a first incubation in 40 mM NH4HCO3, 10% ACN for 15 min at room temperature, followed by two incubations in 47.5% ACN, 5% formic acid for 15 min at room temperature. The three collected extracts were pooled with the initial digestion supernatant, dried in a SpeedVac, and resuspended in 25 μL of 0.1% formic acid before nanoLC-MS/MS analysis. Online nanoLC-MS/MS analyses were performed using an Ultimate 3000 RSLC Nano-UPHLC system (Thermo Scientific, USA) coupled to a nanospray Q-Exactive hybrid quadrupole-Orbitrap mass spectrometer (Thermo Scientific, USA). Ten microliters of each peptide extract were loaded on a 300 μm ID × 5 mm PepMap C18 precolumn (Thermo Scientific, USA) at a flow rate of 20 μL/min. After 5 min of desalting, peptides were separated online on a 75 μm ID × 25 cm C18 Acclaim PepMap® RSLC column (Thermo Scientific, USA) with a 4-40% linear gradient of solvent B (0.1% formic acid in 80% ACN) in 48 min. The separation flow rate was set at 300 nL/min. The mass spectrometer operated in positive ion mode at a 1.8 kV needle voltage. Data were acquired using Xcalibur 3.0 software in data-dependent mode. MS scans (m/z 300-2000) were recorded at a resolution of R = 70,000 (at m/z 200) with an AGC target of 1 × 10^6 ions collected within 100 ms. Dynamic exclusion was set to 30 s and the top 15 ions were selected for fragmentation in HCD mode. MS/MS scans with a target value of 1 × 10^5 ions were collected with a maximum fill time of 120 ms and a resolution of R = 35,000. Additionally, only +2 and +3 charged ions were selected for fragmentation. Other settings were as follows: no sheath and no auxiliary gas flow; heated capillary temperature, 200°C; normalized HCD collision energy of 25%; and an isolation width of 3 m/z. Peptide identification was performed by database searching against a protein sequence database (13,782 entries, release 2015_04). Two missed enzyme cleavages were allowed. Mass tolerances in MS and MS/MS were set to 10 ppm and 0.02 Da. Oxidation of methionine, acetylation of lysine and deamidation of asparagine and glutamine were searched as dynamic modifications. Carbamidomethylation of cysteine was searched as a static modification. Peptide validation was performed using the Target Decoy PSM Validator and only "high confidence" peptides, corresponding to a 1% false discovery rate at the peptide level, were retained. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the PRIDE partner repository [45] with the dataset identifier PXD005091.
Bioinformatics and statistical analysis
The continuous response variables (viral and bacterial titers) were log10-transformed. They were analyzed using a multifactorial linear model, with a normal error distribution and an identity link function, including the effects of time and MOI as ordinal variables, treatment as a discrete variable, and their interactions. All statistical analyses were performed in the R environment (version 3.1.0). The identified proteins were annotated with GO terms using Blast2GO (3.2.7); the annotations were then used to detect possible interaction networks with Cytoscape (3.3.0).
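A minimal sketch of the same linear model, transposed from R to Python with statsmodels, is given below; the column names, factor levels and simulated numbers are invented for illustration and do not come from the original dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from itertools import product

# Hypothetical long-format table of viral titers; the factor levels mirror
# the design described above, but the values are simulated.
rng = np.random.default_rng(1)
design = list(product([24, 120], [0.1, 1.0], ["uninfected", "wolbachia"], range(3)))
df = pd.DataFrame(design, columns=["time_h", "moi", "treatment", "replicate"])
df["log_titer"] = (6.0 + 0.8 * (df["time_h"] == 120)         # titer grows with time
                   - 1.5 * (df["treatment"] == "wolbachia")  # Wolbachia lowers it
                   + rng.normal(0, 0.3, len(df)))

# Multifactorial linear model with interactions (normal errors, identity
# link), the Python analogue of the R analysis described in the text.
fit = smf.ols("log_titer ~ C(time_h) * C(moi) * C(treatment)", data=df).fit()
print(fit.summary())
```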
Results and discussion
Wolbachia wAlbB affects CHIKV in cellulo
As our previous study of C6/36 cells infected with wAlbB showed that the presence of the bacterium decreased the viral titer compared to uninfected cells [35], we measured wAlbB and CHIKV densities at 24 and 120 h post infection (p.i.) using qPCR and RT-qPCR, respectively. The Wolbachia density corresponded to a wsp/actin gene ratio of about 12 (Fig. 1). The percentage of Wolbachia-infected cells ranged from 60 to 70% (not shown), as determined with a published fluorescence in situ hybridization protocol [35]. The CHIKV RNA copy number was estimated at 10^7 to 10^9 copies per ng of total RNA (Fig. 2). Both Wolbachia-infected and uninfected cells produced infectious viral particles without visible cytopathic effect (not shown). This was expected, as Aedes cells are permissive to many arboviruses, including CHIKV, which are non-pathogenic to mosquitoes [46,47]; this is why the C6/36 cell line is extensively used to propagate viruses [48].
Statistical analyses demonstrated that the density of Wolbachia was not affected by the presence of CHIKV and was only marginally affected by time (P = 0.05262) (Fig. 1). As expected, the viral titer was significantly reduced in the presence of Wolbachia (P < 2.2e-16), without reaching complete inhibition. The inhibitory effect decreased with time, being weaker at the late time point (P = 0.0007825) (Fig. 2). It has been reported that viral inhibition by Wolbachia is density-dependent [28,37]. At the two time points tested here the Wolbachia density remained stable, at around 12 bacteria per cell, and the level of CHIKV inhibition was similar to previous studies [35]. The chronic Wolbachia infection and the permissiveness to viruses make the C6/36 cell line an interesting model for exploratory functional studies. One unfavorable point of this cell line is the lack of a functional siRNA pathway [49], a primary immune response against viral infection in mosquitoes. However, it has been shown that insects can mobilize other RNA interference pathways to control viral replication. For instance, Aedes aegypti induces miRNA and specific piRNA pathways to control the replication of DENV [50-52]. Similarly, Wolbachia could have an effect on the synthesis of small RNAs [53,54]. Therefore, this cellular model seems suitable for the study of induced host-cell responses following mono- or bi-partite infection by Wolbachia and/or CHIKV, as well as of the CHIKV replication cycle.
Differential cell proteome profiles upon microbial infection
For the two time points (24 h and 120 h p.i.) and the four modalities (uninfected, mono-infected by either Wolbachia or CHIKV, and bi-infected by both microbes), three independent biological replicates were performed. Total proteins were extracted and similar amounts (approximately 150 μg, estimated on a 1D gel) were used for 2DE. For each modality and each replicate, a minimum of 4 and a maximum of 5 gels were used. Typical 2D gels with the spots obtained are illustrated in Fig. 3. The global gel analysis using the Progenesis SameSpots software enabled the detection of 906 spots at 24 h and 901 spots at 120 h p.i. The ANOVA analysis identified 58 spots at 24 h and 32 spots at 120 h p.i. that were statistically different (p < 0.02 and fold change >2) in comparison to uninfected cells. As many of the spots identified at the early time point were linked to Wolbachia infection alone, only 30 of the 58 spots were selected for mass spectrometry sequencing, whereas all 32 spots observed at the late time point were sequenced.
A protein was considered present in a spot when a minimum of two different peptides were identified by mass spectrometry (Additional file 1: Table S1). A total of 495 unique proteins were thus identified from 948 sequences at 24 h p.i., whereas 105 unique proteins were found among 168 sequences at 120 h p.i. The elevated number of identified sequences can be explained by two main reasons: (i) a high number of closely related proteins that have possibly been subjected to post-translational modifications, and (ii) protein fragmentation during the experiment, resulting in modified migration patterns. All peptide sequences and observed fold changes are described in Additional file 1: Table S1. By combining the protein level at each time point and the modality of infection, a total of four major profiles were defined: monoinfection, dominance, cumulative and interference (Table 1). In the monoinfection profile, each microbial partner tends to affect a particular protein or group of host proteins on its own. The dominance profile indicates a major impact of one microbial partner on host protein synthesis (up or down), whereas the other microbial partner shows the opposite trend. The cumulative profile means that the two microbial partners display a synergistic effect on protein synthesis. Lastly, the interference profile indicates that each microbial partner induces a specific protein pattern but the co-infection displays a totally new trend. A toy decision rule implementing these definitions is sketched below.
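The classifier below is our own simplified reading of these four definitions, not the procedure used by the authors; the threshold and the rules are illustrative assumptions.

```python
def classify_profile(d_wolb, d_chikv, d_coinf, tol=0.5):
    """Toy classifier for the four profiles defined above.

    d_* are log2 fold changes of a spot relative to uninfected cells
    in the Wolbachia-only, CHIKV-only and co-infected modalities.
    The tolerance and decision rules are illustrative simplifications.
    """
    sign = lambda d: 0 if abs(d) < tol else (1 if d > 0 else -1)
    w, v, c = sign(d_wolb), sign(d_chikv), sign(d_coinf)
    if (w != 0) ^ (v != 0):                  # only one partner has an effect
        return "monoinfection"
    if w == -v and w != 0:                   # opposite effects, one dominates
        return "dominance"
    if w == v != 0 and abs(d_coinf) > max(abs(d_wolb), abs(d_chikv)):
        return "cumulative"                  # same direction, amplified
    return "interference"                    # co-infection sets a new trend

print(classify_profile(1.5, -0.1, 1.4))      # -> monoinfection
print(classify_profile(1.2, 1.1, 3.0))       # -> cumulative
```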
The 2DE approach combined with mass spectrometry sequencing did not allow quantification of the level of protein accumulation per spot, and one spot can contain several proteins; consequently, it was not possible to identify which protein was responsible for the variation observed. In addition, the presence of the same protein in several spots simultaneously makes the analysis complex. We therefore proceeded by annotating the proteins with GO terms, which were then used to construct interaction networks for each protein profile. This procedure allowed a comparison of the functions shared by all modalities with those belonging specifically to each partner.
Wolbachia infection has a greater effect on cell functions than CHIKV
Among the 89 spots detected, 77% were specifically linked to the presence of Wolbachia, in both the monoinfection (53 spots) and dominance (16 spots) profiles (Table 1). Since the aim of this study was to characterize the impact of coinfection rather than monoinfection, we chose to sequence only 27 of the 53 spots, selected on the basis of their fold change as indicated above. The sequencing results showed that the spots linked to Wolbachia in both the monoinfection and dominance profiles contained proteins involved in many cellular functions, including processes related to metabolism for the acquisition of resources from the host, regulation of anti-oxidation, the cellular functional machinery (transcription and translation), as well as active transport and cellular structures (Fig. 4). These proteins represented a relatively high percentage of the total proteins at the two time points (64.5% at the early time point and 35% at the late one), and some of them have already been described in the literature as being upregulated by Wolbachia [39]. One example is glutathione S-transferase (A0A023EL34), involved in the regulation of the anti-oxidation process [38], which is abundant at the early time point in the presence of the bacterium. The large number of proteins mobilized in the presence of Wolbachia indicates a strong relationship between the two partners.
In contrast, the presence of CHIKV alone had only a limited effect in comparison to uninfected cells. Few differential spots were detected, containing a very low percentage (<5%) of the proteins, with a tendency to be down-regulated. The majority of the proteins detected were related to ATP transport and binding, glycolysis, the cytoskeleton and stress responses (Fig. 4). For instance, many proteins associated with ATP consumption were significantly reduced in the presence of CHIKV. Moreover, we observed a decrease in the expression of LSD2 (Lipid Storage Droplet-2; A0A023END7), suggesting that CHIKV blocks lipid storage, potentially making lipids available for incorporation into the viral envelope. This phenomenon has already been shown in Ae. aegypti mosquitoes infected by either dengue [55] or chikungunya viruses [14,15]. Another negatively impacted protein, A0A023EQG9, is a kinase for double-stranded RNA necessary for the establishment of the RISC complex in RNA interference. Knowing that the C6/36 cell line has a non-functional siRNA mechanism [49], inhibition of the miRNA pathway is consistent with a viral mechanism to escape cellular defenses.
In dominance profiles, Wolbachia exhibited different protein trends with respect to the virus, from a neutral level (W_DOM_1 and W_DOM_3) to an increased (W_DOM_2) or repressed (W_DOM_4) synthesis (Fig. 5). The W_DOM_1 profile reduced the CHIKV structural polyprotein V5UMV1 at 24 h post infection (hpi), whereas the W_DOM_4 profile specifically targeted the viral capsid protein (A0A059VQ68) at 120 hpi (Table 1). (Legend of Table 1: all profiles were normalized with respect to the uninfected modality; the observed fold changes are reported in Additional file 1: Table S1. In comparison to uninfected C6/36 cells, Up: a positive difference in protein synthesis was observed; Down: a negative difference was observed; ø: no difference was observed.) These results are in agreement with the Wolbachia blocking phenotype observed recently for CHIKV in C6/36 cells [35]. Viral blocking could therefore be explained by Wolbachia's inhibition of the cellular proteolysis machinery, thus limiting the maturation of virion-associated protein structures and reducing viral replication. Overall, this effect appeared more diverse at early stages post-infection, but of greater magnitude at later times (Table 1). At 24 hpi Wolbachia tends to sustain necessary cellular processes, such as oxidation-related processes including glutathione peroxidase activity, translation and transcription (Fig. 5). This is not the case at 120 hpi, however, when the bacterium limits processes that can be exploited by the virus, including oxidative stress responses, transport and translation. The viral dominance profile occurred at 120 hpi, when the virus had established a chronic and dense infection (Fig. 6). The ATP synthase subunit beta (A0A023ETB9), involved in active trans-membrane ion transport, appeared negatively regulated, as did glutathione peroxidase (Q16N54), albeit to a lesser extent. In contrast, some structural proteins such as actin (Q0Z987) and heat shock-related proteins (A0A023EWK8) were over-synthesized, suggesting a role in the production of virions [17]. The presence of ATP synthase subunit beta in both up- and down-regulated profiles suggests several isoforms of this protein that Wolbachia modulates through post-translational modifications.
Proteome trends during Wolbachia and CHIKV coinfection
Two different profiles emerged from the bacterial and viral coinfection. The first was a cumulative profile, in which a synergistic negative effect on protein synthesis was observed (Fig. 7). The processes affected by bi-infection were those already identified during infection by the bacterium or the virus alone [14,15,17]. These proteins all act to maintain cell integrity and were associated with either down-regulation early post infection or up-regulation late post infection.
The second pattern was an interference profile (Fig. 8). At 24 hpi, interference seemed to be directed against CHIKV and in favour of Wolbachia. Indeed, despite neutral (INT_2) or negative (INT_1) effects of Wolbachia, the cellular processes found to be up-regulated were those that may benefit the bacterium, including cell development processes, transcription, translation and various metabolic pathways. At 120 hpi the INT_3 profile showed the establishment of a balance between Wolbachia, which decreases metabolic processes, and the virus, which in turn activates them for its own benefit; this sum of effects maintains these processes at a steady-state level in the cell. The INT_4 profile was essentially related to structural proteins that were inhibited by each microbial partner alone but were not down-regulated during bi-infection. The INT_5 profile identified the ATP synthase subunit beta of Wolbachia (H0U0S7), which was inhibited by the virus. This latter profile highlights a particular pattern in which the presence of the virus inhibits bacterial proteins by blocking access to resources, thus limiting the potential of the bacterium to affect the virus. When comparing the peptides detected in this study with those already described in mono-infection models using mass spectrometry approaches, some common proteins were identified. These include enolase (A0A023ETA6), which was found to stimulate transcription of the Sendai virus genome [56] and was up-regulated in the interference profile (INT_3). While CHIKV seems to enhance enolase synthesis, as already shown by Lee et al. [17], Wolbachia tends to reduce its production; consequently, in the bi-infection status this conflicting pattern appears unfavorable to CHIKV replication. Among the proteins involved in glycolysis and metabolism, one promising candidate is the disulfide isomerase protein (A0A023EP23), which has been shown to be modulated by CHIKV depending on the infected organ [14,15] and the duration of infection [17]. In our study, this protein is modulated by Wolbachia (W_Up_1), affecting early CHIKV replication. Similarly, some chaperonins, such as the putative calreticulin-like 2 (A0A023EQL3), chaperonin 60 kDa (A0A023EV59), heat shock cognate 70 (Q1HQZ5), and alpha and beta tubulin 1 (A0A023ERN1 and A0A023ESE6), have been described as modulated during CHIKV infection [14,15,17], and we found them to be impacted by the bi-infection status. Similar observations apply to glycolysis: for instance, triosephosphate isomerase (A0A023EIM8) has been shown to be important for the energy input necessary for viral replication. Indeed, at the early time point this protein is overexpressed in Wolbachia-infected cells, inducing a favorable environment for CHIKV. In contrast, at the later time point Wolbachia seems to reduce the expression of triosephosphate isomerase while CHIKV tends to increase it (V_DOM_3 profile), suggesting the importance of this protein in the tripartite interaction.
Conclusions
This study highlights the complex processes that occur during arbovirus infection of mosquito cells in symbiosis with Wolbachia. Even though these findings were obtained using a cellular model, the observed trends pave the way for future research into the in vivo characteristics of the tripartite interaction. In our experimental conditions, the combination of 2DE and nanoLC-MS/MS revealed a balance in protein synthesis mostly in favor of Wolbachia, which may explain the simultaneous inhibition of viral replication that we observed using RT-qPCR. At early times post infection, the presence of Wolbachia greatly influences many cellular processes related to the management of anti-oxidant activity, protein production and various metabolic pathways linked to the provisioning of resources, likely impacting CHIKV replication. Under such conditions, CHIKV faces a hostile environment for replication and appears to counterbalance this negative impact by blocking some key cellular pathways, including the inhibition of transcription and translation and the locking of a miRNA pathway.
At later times post infection, the proteome is clearly altered, and CHIKV seems to have taken control of some cellular functions. Consequently, the virus appears to limit the impact of Wolbachia on its replication cycle by hoarding the majority of resources, and even by blocking Wolbachia's access to these resources. This shift partially explains the increased viral titer observed at later periods post-infection. Even if Wolbachia no longer controls some of these cellular processes, its presence limits the effect of CHIKV infection on certain cellular functions, thus modulating its replication, particularly early in the infection process. This cellular-level interference could explain the phenotypes observed in Ae. albopictus in vivo, where Wolbachia limits the transmission of dengue virus by reducing the viral titer in the salivary glands [27].
Several studies have shown that Wolbachia can modulate the expression of genes involved in immunity that affect arbovirus infection, suggesting that interference acts by pre-immunization of the host [26,28,34]. Strikingly, we did not observe significant modulation of proteins related to the immune response upon CHIKV inhibition by Wolbachia. Even though C6/36 cells lack a functional siRNA pathway, other immune response mechanisms could have been mobilized. The fact that we did not identify proteins involved in immunity might suggest that other cellular processes can lead to the antiviral profile, corroborating results obtained from other cellular models. For example, in Ae. albopictus Aa23 cells infected with either wAlbB, wMel or wMelPop, whose density varied from 2.5 to 38 bacteria per host cell, no changes were observed in innate immunity-related functions [57]. Recently, elegant work demonstrated that Wolbachia could inhibit viral replication at early stages post infection by affecting RNA translation or transcription, suggesting a likely direct effect [58]. Together, these cellular models reveal alternative, immunity-independent mechanisms of Wolbachia-based viral inhibition that need further investigation. An interesting perspective could be the extension of proteome profiling to mosquito organs.
"year": 2017,
"sha1": "914bc6fbd27f772808778f32eed5aeba6a56b413",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12864-017-3985-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "116654d485c51f37fba78862ead46544a72af429",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Renormalization group equations and the recurrence pole relations in pure quantum gravity
In the framework of dimensional regularization, we propose a generalization of the renormalization group equations to the case of perturbative quantum gravity that involves a renormalization of the metric and of the higher order Riemann curvature couplings. The case of zero cosmological constant is considered. Solving the renormalization group (RG) equations, we compute the respective beta functions and derive the recurrence relations, valid at any order in the Newton constant, that relate the higher pole terms $1/(d-4)^n$ to the single pole $1/(d-4)$ in the quantum effective action. Using the recurrence relations, we find the exact form of the higher pole counter-terms that appear in 2, 3 and 4 loops, and we make certain statements about the general structure of the higher pole counter-terms in any loop. We show that the complete set of UV divergent terms can be consistently (at any order in the Newton constant) hidden in the bare gravitational action, which includes terms of higher order in the Riemann tensor, provided the metric and the higher curvature couplings are renormalized according to the RG equations.
Introduction
The ultra-violet (UV) divergences in a quantum field theory are handled by adding the respective counter-terms. The structure of the UV divergences is most conveniently analyzed in the framework of dimensional regularization, which preserves all symmetries present in the quantum system. The divergences then appear as a set of poles when the space-time dimension d → 4. The respective terms in the quantum effective action are not independent: the higher pole terms $1/(d-4)^n$, $n > 1$, are completely determined by the terms in the single pole $1/(d-4)$. These relations, known in the literature as the pole equations, come out as a consequence of the renormalization group equations, as was demonstrated by 't Hooft [1]. Earlier, the pole equations were derived and used, quite effectively, in the case of 2d sigma-models, in which one has to renormalize the metric in the target space [2], [3].
In the case of perturbative quantum gravity the computation of the loop diagrams is notoriously difficult. The one- and two-loop results are the only ones available in the literature [4], [5], [6], although the possible structure of the counter-terms that may in principle appear in the higher loops can be analyzed by means of the covariance principle, at least at lower orders in the curvature. Their number grows rapidly with the number of loops, and at all orders in the Newton constant one deals with an infinite number of possible structures.
It was suggested in [7] (see also [8] and, for some later developments, [9]) that renormalization group methods should be equally applicable to non-renormalizable theories. This program, if successful, would make the non-renormalizable theories look quite similar to the renormalizable ones. The prescriptions available in the literature, however, are rather implicit. We do not directly rely on this previous work here, although it played an important inspirational role for our study.
In the present paper we develop a systematic approach to perturbative quantum gravity that uses the renormalization group equations. The primary goal of the paper is to derive the recurrence pole relations in the case of gravity. Our prescriptions are precise and unambiguous. It should be noted that over the last several decades a number of approaches to quantum gravity have been suggested that refer to certain versions of the renormalization group, in most cases of the Wilsonian type. In order to avoid any possible confusion we would like to stress from the very beginning that none of these approaches will be used here. The closest analogue of the approach developed in the present paper is that of the renormalization group equations of 't Hooft [1]. The key point in our construction, which, to the best of our knowledge, was missing in the earlier approaches to quantum gravity, is the necessity of considering a renormalization of the metric, much in the same way as one introduces a renormalization of the quantum fields in a renormalizable QFT. The peculiarity of this renormalization procedure for the metric is that it is not multiplicative but of a rather general, although still local, type.
The other important remark is that throughout the paper only the case of pure gravity with zero cosmological constant will be considered. The case of non-zero cosmological constant will be treated in a subsequent work.
The paper is organized as follows. In section 2 we briefly review the method of 't Hooft in the case of a renormalizable quantum field theory (QFT). In section 3 we derive the renormalization group equations in the case of pure quantum gravity. In section 3.1 we consider the renormalization of the metric and in section 3.2 the renormalization of the quantum effective action. In section 4 we solve some of the RG equations and determine the exact form of the higher loop metric beta function and the beta functions for the higher order curvature coupling constants. In section 5 we derive the pole recurrence relations and solve these relations to determine the exact form of the higher pole counter-terms in 2, 3 and 4 loops. In section 6 we focus on the General Relativity (GR) counter-terms and make some statements on their general form. In section 7 we demonstrate that, similarly to the renormalizable theories, the complete set of the UV divergences can be hidden in the bare gravitational action provided the bare metric and the bare higher curvature couplings are expressed in terms of the renormalized quantities. We conclude in section 8.
RG equations in renormalizable QFT
Before we start our analysis of quantum gravity we briefly review the derivation of the renormalization group equations in 't Hooft's method in the case of a 4d renormalizable theory [1]; see also [3], [7] for a similar review. Consider a dimensionless coupling constant λ. In d = 4 − ε space-time dimensions the bare coupling λ_B has dimension [µ^ε], where µ is a mass scale, as for instance in the case of φ⁴ theory. In dimensional regularization one develops a series of counter-terms to the classical action such that the bare coupling constant is expressed as a function of a dimensionless renormalized coupling λ_R,

$$\lambda_B=\mu^{\epsilon}\Big(\lambda_R+\sum_{k=1}^{\infty}\frac{a_k(\lambda_R)}{\epsilon^{k}}\Big)\,. \qquad (2.1)$$

The renormalized coupling λ_R is a function of the scale µ such that the equation

$$\mu\,\partial_\mu\lambda_R=-\epsilon\,\lambda_R+\beta(\lambda_R) \qquad (2.2)$$

holds. The bare coupling is supposed to be independent of µ, so that µ∂_µλ_B = 0. Differentiating equation (2.1) with respect to µ one obtains the following equation,

$$\beta(\lambda_R)+\sum_{k\ge1}\Big(a_k(\lambda_R)-\lambda_R\,a'_k(\lambda_R)\Big)\,\epsilon^{1-k}+\beta(\lambda_R)\sum_{k\ge1}a'_k(\lambda_R)\,\epsilon^{-k}=0\,, \qquad (2.3)$$

where a'_k(λ_R) ≡ ∂_{λ_R}a_k(λ_R) and the terms linear in ε have canceled out. The constant, ε^0, term in (2.3) gives us an equation that allows one to express the beta function in terms of the single pole a_1,

$$\beta(\lambda_R)=\big(\lambda_R\,\partial_{\lambda_R}-1\big)\,a_1(\lambda_R)\,. \qquad (2.4)$$

The vanishing condition for a pole 1/ε^k, k ≥ 1, in eq. (2.3) produces a recurrence relation,

$$\big(\lambda_R\,\partial_{\lambda_R}-1\big)\,a_{k+1}(\lambda_R)=\beta(\lambda_R)\,\partial_{\lambda_R}a_k(\lambda_R)\,. \qquad (2.5)$$

This relation together with the beta function (2.4) forms a set of recurrence relations that uniquely determine the higher pole residues a_k in (2.1), provided the single pole residue a_1 is given. Similar equations can be written for the renormalization of masses and of the quantum fields [1]; the renormalization of fields need not be multiplicative in general. Among other things, the pole equations play the role of consistency conditions to be satisfied in higher loop calculations. Below we generalize these equations to the case of pure quantum gravity without a cosmological constant.
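To illustrate how (2.4) and (2.5) work in practice, one may take the familiar one-loop single-pole residue of the φ⁴ coupling (the standard textbook value, quoted here purely for illustration) and run the recurrence once, keeping only the leading power of λ:

$$a_1(\lambda)=\frac{3\lambda^2}{16\pi^2}\quad\Rightarrow\quad \beta(\lambda)=(\lambda\,\partial_\lambda-1)\,a_1=\frac{3\lambda^2}{16\pi^2}\,,$$

$$(\lambda\,\partial_\lambda-1)\,a_2=\beta\,\partial_\lambda a_1=\frac{18\,\lambda^3}{(16\pi^2)^2}\quad\Rightarrow\quad a_2(\lambda)=\frac{9\,\lambda^3}{(16\pi^2)^2}+O(\lambda^4)\,,$$

so the leading double pole at two loops is fixed entirely by the one-loop residue; the O(λ⁴) piece of a_2 is determined only once the two-loop contribution to a_1 is known.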
3 Renormalization group equations in pure quantum gravity

Our starting point is the theory of the gravitational field described by the action

$$L_0=\frac{1}{G_N}\int d^dx\,\sqrt{g}\;R(g)\,, \qquad (3.1)$$

where G_N is the Newton constant (notice that we absorb the usual factor of 16π in the definition of G_N). This action does not include the cosmological constant, which is assumed to vanish. As we have mentioned above, the case of a non-vanishing cosmological constant deserves a separate study and will be reported on later. Thus, there is only one dimensionful parameter, the Newton constant G_N.
Renormalization of the metric
In dimensional regularization one considers the space-time dimension d slightly different from 4. In doing so the otherwise dimensionless quantities acquire a dimension that can be compensated by introducing a new scale µ. In what follows we prefer to keep the dimensionality of the Newton constant the same as in d = 4, i.e. [G_N] = 2. Instead, the bare metric acquires a dimensionality [g_{B,ij}] = −(d − 4), and it is the only quantity present in the classical action (3.1) that has to be renormalized. This renormalization is in fact a field renormalization that takes a general local form,

$$g_{B,ij}=\mu^{\epsilon}\Big(g_{R,ij}+\sum_{k=1}^{\infty}\frac{h_{k,ij}(g_R)}{\epsilon^{k}}\Big)\,, \qquad (3.2)$$

where ε = 4 − d, g_{R,ij} is the dimensionless renormalized metric and the h_{k,ij}(g_R) are local covariant functions of the renormalized metric. One has that

$$\mu\,\partial_\mu g_{R,ij}=-\epsilon\,g_{R,ij}+\beta_{ij}(g_R)\,, \qquad (3.3)$$

where the beta function β_ij(g_R) is a local function of g_R. Eqs. (3.2)-(3.3) are quite similar to the renormalization of the target metric in d = 2 sigma-models [2], [3]. We should, however, stress the obvious differences: the target metric in a sigma model represents an infinite set of couplings, while here we deal with a field renormalization. The bare metric g_{B,ij} is independent of the scale µ, so that differentiating both sides of equation (3.2) and using (3.3) we arrive at the equation

$$\beta+\sum_{k\ge1}\big(h_k-h'_k\cdot g\big)\,\epsilon^{1-k}+\sum_{k\ge1}\big(h'_k\cdot\beta\big)\,\epsilon^{-k}=0\,, \qquad (3.4)$$

where we skip the space-time indices and the terms linear in ε have already canceled out. Let us explain the notation used in the above expression: for a tensor h_ij(g), a local covariant function of the metric, and a tensor f_ij(x) we define

$$\big(h'\cdot f\big)_{ij}(x)\equiv\int d^dy\,\frac{\delta h_{ij}(x)}{\delta g_{kl}(y)}\,f_{kl}(y)\,. \qquad (3.5)$$

Matching the coefficients of the constant term in (3.4) one finds the expression for the beta function in terms of the single pole term,

$$\beta=h'_1\cdot g-h_1\,. \qquad (3.6)$$

For the coefficients of a higher pole 1/ε^k, k ≥ 1, one finds

$$h'_{k+1}\cdot g-h_{k+1}=h'_k\cdot\beta\,. \qquad (3.7)$$

This is a recurrence relation for the higher pole residues in terms of the lower pole residues. As always in the case of the RG equations, the complete information about the renormalization is contained in the single pole h_1. In the perturbative expansion with respect to the Newton constant each term can be expressed as a power series in G_N,

$$h_k(g)=\sum_{l\ge1}G^{\,l}\,h_{k,l}(g)\,,\qquad \beta(g)=\sum_{l\ge1}G^{\,l}\,\beta_l(g)\,, \qquad (3.8)$$

where h_{k,l} is a local polynomial in the curvature of degree l (we count two covariant derivatives acting on a curvature as having the same degree 1 as the curvature itself). Notice that h'(g)·g is the transformation of h(g) under a rescaling of the metric, g → λg, so that one finds h'_{k,l}(g)·g = (1 − l)h_{k,l}. Applying this to equations (3.6) and (3.7) one finds for the terms in the expansion (3.8),

$$\beta_l=-\,l\,h_{1,l}\,,\qquad -\,l\,h_{k+1,l}=\sum_{p+q=l}h'_{k,p}\cdot\beta_q\,. \qquad (3.9)$$

This is the first set of RG equations that we will deal with. The other one arises when the renormalization of the effective gravitational action is considered; it will be discussed in the next section. We finish this section by noting that the solution to the RG equations for the metric that we have just derived is completely determined by specifying either the single pole terms {h_{1,l}} or the beta function terms {β_l} in the decomposition (3.8). In two-dimensional sigma-models the primary element is the single pole term, which determines the beta function. In the present case it is rather the beta functions {β_l} that we have to specify first, after which all terms in the expansion (3.2), (3.8) are determined. However, in order to determine the beta function for the metric we have to look at the renormalization of the action. The metric beta function is then completely specified (up to the gauge coordinate transformations that we will discuss) by the single pole in the effective action. These issues are discussed in the next section.
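As a simple check of the grading rule just used (our own example), take h_ij = R R_ij, a curvature polynomial of degree l = 2. Under a constant rescaling g → λg the Christoffel symbols, and hence the Ricci tensor, are unchanged, while R = g^{ij}R_ij scales as λ^{−1}:

$$h_{ij}(\lambda g)=\lambda^{-1}R\,R_{ij}=\lambda^{\,1-l}\,h_{ij}(g)\,,\qquad h'\cdot g=\frac{d}{d\lambda}\,h(\lambda g)\Big|_{\lambda=1}=(1-l)\,h_{ij}=-\,h_{ij}\,,$$

in agreement with h'_{k,l}·g = (1 − l)h_{k,l} for l = 2.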
Renormalization of the action
The renormalized gravitational action, with all counter-terms added, is a function of the renormalized metric g_R. The counter-terms are divided into two classes: those that vanish on-shell, which we call L_k, and those that do not, which we call V_k. The V_k are invariants constructed from the Riemann tensor and its covariant derivatives that remain non-trivial once the Bianchi identities are used. It is not a goal of the present paper to give a classification of all possible non-equivalent curvature invariants of a given order. We assume, however, that this classification can be done and possibly already exists in the mathematical literature, although we are not aware of any relevant publications. In the classical part of the action we have to add terms that are due to the Riemann tensor only, which we call W, with appropriate coupling constants that have to be renormalized in order to absorb the UV divergences due to the V_k. It is convenient to choose a basis of integral invariants of degree (l + 1) constructed from the Riemann tensor and its covariant derivatives, {P_l, l ≥ 2}, and to expand W and V_k with respect to this basis,

$$W=\sum_{l\ge2}\lambda_l\,G^{\,l-1}P_l\,,\qquad V_k=\sum_{l}v_{k,l}\,G^{\,l-1}P_l\,, \qquad (3.10)$$

where {λ_l} is a set of dimensionless coupling constants; since for each l there is a certain number of independent invariants, both λ_l and v_{k,l} are supposed to carry an extra index enumerating the different invariants of the same degree l + 1. In d = 4 the invariant quadratic in the Riemann tensor can be expressed in terms of the invariants quadratic in the Ricci tensor and the Ricci scalar, plus the Euler topological invariant. Therefore, in first order (l = 1) the UV divergent terms vanish on-shell and one has v_{1,1} = 0. In second order (two loops), l = 2, there is only one independent invariant that can be constructed from the Riemann tensor, the cubic invariant P_2 defined in (4.11) below. The numerical value of v_{1,2} was first computed by Goroff and Sagnotti [5] and later by van de Ven [6].
Similarly to (3.10), we expand the counter-terms L_k in powers of the Newton constant G,

$$L_k=\sum_{l\ge1}G^{\,l-1}L_{k,l}\,, \qquad (3.11)$$

where L_{k,l} contains the curvature invariants of degree l + 1. The terms L_{k,l} vanish on-shell and, hence, have to contain at least one power of the Ricci tensor or the Ricci scalar.
Higher order Riemann curvature terms in gravitational action
It should be noted that by adding W to the action L_0 one does not change the graviton propagator. Indeed, expanding the metric over Minkowski spacetime, g_ij = η_ij + √G φ_ij, where φ_ij is a perturbation, one finds that the terms Λ_l P_l, where Λ_l = G^{l−1}λ_l, start with a term of (l + 1)-th order in the perturbation and, since l ≥ 2, do not contribute to the graviton propagator. These new terms lead to an additional (l + 1)-point vertex with a new coupling in the graviton Feynman diagrams. Each leg in the vertex carries two spacetime derivatives. The corresponding coupling constant is Λ_l(√G)^{l+1}. For l = 2 this is a 3-point vertex with two space-time derivatives at each leg. We recall that a (usual) 3-point vertex in General Relativity effectively has the coupling constant √G and at most one leg with two derivatives. Effectively, the expansion will go with respect to all available coupling constants: G, Λ_2, Λ_3, .... We do not give here a detailed analysis of the corresponding UV divergences. It is, however, instructive to observe certain rules by looking at some simple examples and by using dimensionality arguments. We first remark that, by dimensionality, the UV divergent term that is linear in Λ_p and contains the (l + 1)-th power of the curvature has the following form,

$$\Lambda_p\,G^{\,l-1}\int d^4x\,\sqrt{g}\;R^{\,l+1}\,,\qquad p\le l-1\,, \qquad (3.13)$$

where R stands here for any curvature, the Ricci tensor or the Riemann tensor. A one-loop diagram that may produce a UV divergent term of this type for p = l − 1 is shown in Fig. 1. It contains one Λ_p vertex and three GR 3-point vertices. For other values of p the diagram would include a graviton vertex of V_p and r = (l − 1) − p internal GR graviton lines. It is clear that in the renormalization of the coupling Λ_l only the couplings Λ_p with p ≤ l − 1 may appear.
This sets the upper limit in the sum over p in (3.13). The same restriction comes from the condition that the number of internal graviton lines r ≥ 0. A similar analysis shows that a Feynman diagram with Λ_{p_1}, Λ_{p_2}, ..., Λ_{p_n} vertices, r internal GR lines and m GR vertices, such that p_1 + ··· + p_n − n + m = l + 1, produces a UV divergent term of (l + 1)-th order in the curvature,

$$\Lambda_{p_1}\cdots\Lambda_{p_n}\,G^{\,l-1}\int d^4x\,\sqrt{g}\;R^{\,l+1}\,. \qquad (3.14)$$

The counter-term has to have dimension zero, so that one gets the condition on the p_i as above. Since r ≥ 0 one obtains a further condition on the values of p_i, eq. (3.15).
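For completeness, here is a quick dimension count (our own check, in mass units, so that [G_N] = −2 and [R] = +2) of why the counter-terms (3.13), (3.14) carry precisely the factor G^{l−1}:

$$[\,d^4x\,]=-4\,,\quad [\,R\,]=+2\;\Rightarrow\;\Big[\int d^4x\,\sqrt{g}\;R^{\,l+1}\Big]=2(l+1)-4=2l-2\,,$$

$$[\,G_N^{\,a}\,]=-2a\;\Rightarrow\;-2a+(2l-2)=0\;\Rightarrow\;a=l-1\,,$$

so each additional power of the curvature beyond R² must be accompanied by one power of G_N, matching the loop counting, while the dimensionless couplings Λ_p do not affect the count.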
Since r ≥ 0 one has a condition on the values of p i : This discussion is not a rigorous proof of validity of this bound in general. So that its status is conjectural. Although we will not need it in the most of our consideration below this bound will help to avoid certain ambiguities in the beta function equations that will be discussed in section 7.2. Thus, the lowest parameter v 1,2 (or the counter-term V 1,2 ) is independent of any λ. On the other hand, the higher order counter-terms, V k,l and L k,l , l ≥ 3 can be polynomial functions of λ p , p ≤ l − 1 provided the condition (3.15) is satisfied. We see that, when the higher curvature terms are present, the lower loops may give contributions to the curvature terms that appear at the GR loop order l . For convenience, we will still refer to l as a loop order.
RG equations for λ couplings
In d space-time dimensions the bare coupling λ^B_l has dimension [µ^{l(d−4)}]. Expressing the bare couplings in terms of the dimensionless renormalized quantities λ^R_l, one has an expansion in poles analogous to (2.1), eq. (3.16). The renormalized couplings λ^R_l satisfy the equation

$$\mu\,\partial_\mu\lambda^R_l=l\,\epsilon\,\lambda^R_l+\tilde\beta_l(\lambda^R)\,, \qquad (3.17)$$

where β̃_l is the beta function for the coupling λ_l. The renormalization group equations for the couplings λ_l, l ≥ 2, then follow by matching the poles as in section 2, eq. (3.18). Here we take into account that in the renormalization of a coupling λ_l only the couplings λ_p with p ≤ l − 1 may be involved.
Modified RG equations for metric
In general, the bare metric can be a function of the renormalized couplings λ^R, so that the terms h_k in the expansion (3.2) are, in general, functions of both the renormalized metric g_R and the renormalized couplings {λ^R_l}. The condition µ∂_µ g_B = 0 then leads to the modified RG equations,

$$\beta=h'_1\cdot g-h_1-\sum_{l\ge2}l\,\lambda_l\,\partial_{\lambda_l}h_1\,,\qquad h'_{k+1}\cdot g-h_{k+1}-\sum_{l\ge2}l\,\lambda_l\,\partial_{\lambda_l}h_{k+1}=h'_k\cdot\beta+\sum_{l\ge2}\tilde\beta_l\,\partial_{\lambda_l}h_k\,. \qquad (3.19)$$

These equations present a modification of the metric RG equations considered in section 3.1.
RG equations for quantum action
In renormalizable theories the renormalization procedure goes in a few steps. The quantum action, which is the sum of the classical action and the counter-terms, is supposed to be a function of the renormalized fields and the renormalized couplings and masses. It does not depend on the scale µ. This condition imposes certain equations on the residues of the poles 1/(d − 4)^k in the quantum action. Then, in renormalizable theories, the quantum action takes the form of the classical (bare) action provided it is expressed in terms of the bare fields, couplings and masses; the latter, in turn, are functions of the renormalized quantities. These steps can be repeated in a non-renormalizable theory. The RG equations for the quantum action in a rather general theory were previously considered in [7]. However, the construction in [7] was not accompanied, in a coherent way, by a suitable renormalization of the fields and couplings, and their equations are different from those considered in the present paper. The quantum gravitational action is a function of all renormalized quantities, the metric and the higher curvature couplings, and takes the general form (3.20). The power of µ is uniquely determined by the requirement that the action have dimension zero. Since the quantum action does not depend on the scale µ, the differential equation µ∂_µ L_Q = 0 leads to equation (3.21), where, in order to simplify the expression, we drop the subscript R on the renormalized metric g_R and on the renormalized higher curvature couplings λ^R_l; the second line in (3.21) is due to the differentiation of W. In order to simplify the formulas further we will, throughout the paper, maximally use index-free notation. Let us explain the notation used in the above expression. Each term in the action (3.20) is a functional of the renormalized metric g_R. L' stands for the metric variation, so that it is a local tensor, (L')^{ij} = δL/δg_ij(x); in particular, for the classical action one finds eq. (3.22). Each term in the action (3.20) is an integral over spacetime; therefore we systematically neglect any total derivatives that may appear under the integral. We notice that equation (3.21) is invariant under a redefinition of the metric beta function,

$$\beta_{ij}\ \to\ \beta_{ij}+\nabla_i\xi_j+\nabla_j\xi_i\,, \qquad (3.23)$$

where ξ_i is an arbitrary vector field. This is the usual ambiguity for the metric beta function. We stress that the metric beta function in equation (3.21) is the one that we studied in Section 3.1, see (3.3) and (3.20). Consider a rescaled metric λg_ij. Then, with our notation, one has for any functional of the metric dW/dλ|_{λ=1} = W'·g. In particular, this gives us a relation for any curvature polynomial T_l of degree l + 1 (any two covariant derivatives are counted as one curvature degree),

$$T'_l\cdot g=(1-l)\,T_l\,. \qquad (3.24)$$

In particular, this relation holds for T_l = P_l, the polynomials of the Riemann tensor and its covariant derivatives. In the first line of (3.21) the terms linear in ε cancel due to the relation −L_0 + L'_0 · g = 0, which is the extension of (3.24) to l = 0.
Beta functions
The vanishing of the constant, ε^0, term in (3.21) gives us the relations that determine the beta functions β_ij and β̃_l. One has eq. (4.1). In order to proceed further we use the expansion of the counter-terms in a series with respect to the Newton constant G, (3.10), (3.11). One finds eq. (4.2), where l ≥ 1 and we used (3.24). Notice that the 3rd term in (4.2) is non-trivial for l ≥ 2, and the 4th and 5th terms are non-zero for l ≥ 3. We will use the following general representation for the terms L_{1,l},

$$L_{1,l}=\int d^4x\,\sqrt{g}\;G^{ij}X^{(l)}_{ij}\,, \qquad (4.3)$$

where X^{(l)}_{ij} is a curvature polynomial of degree l (any two covariant derivatives are counted as one curvature degree). We note that, provided L_{1,l} is given, the term X^{(l)} is defined up to the transformation (4.4). Below we consider some particular values of l.
l = 1

Equation (4.2) in this case reduces to its first two terms. In one loop, the quantum effective action contains terms quadratic in the Ricci scalar and in the Ricci tensor (the square of the Riemann tensor reduces to these two invariants plus the topological Euler number, which we neglect in our study). We represent

$$L_{1,1}=\int d^4x\,\sqrt{g}\,\big(a\,R^2+b\,R_{ij}R^{ij}\big)\,. \qquad (4.6)$$

The values of a and b are available in the literature and are known to depend on the gauge. The equation for the beta function β_1 is (4.7), and a solution of this equation is (4.8). This solution is not unique: one can add to (4.8) a term of the form ∇_iξ_j + ∇_jξ_i, where ξ_i is an arbitrary vector field. This is the general ambiguity (3.23) for the beta function of the metric.
We notice that at this order the beta function (4.8) vanishes on-shell (G_ij = 0).
l = 2

In this order equation (4.2) has more terms, eq. (4.9). We recall that, by definition, L_{1,2} vanishes on-shell and hence can be presented in the form

$$L_{1,2}=\int d^4x\,\sqrt{g}\;G^{ij}X^{(2)}_{ij}\,, \qquad (4.10)$$

where X^{(2)} is quadratic in the curvature; it does not necessarily vanish on-shell, and in particular it may contain a term quadratic in the Riemann tensor (see below). On the other hand, P_2 is the invariant cubic in the Riemann tensor. In d = 4 there is only one such invariant,

$$P_2=\int d^4x\,\sqrt{g}\;R_{ij}{}^{kl}R_{kl}{}^{mn}R_{mn}{}^{ij}\,. \qquad (4.11)$$

One sees that in (4.9) the first two terms vanish on-shell while the last term does not. This means that the first two terms and the last term in the equation are independent and should vanish separately. This gives us the two-loop beta functions for the metric and for the cubic coupling λ_2, eq. (4.12), where the tensor entering β_2 is local and quadratic in the curvature or its covariant derivatives. Its general form is given in (4.13), where we did not include the term with covariant derivatives of the scalar curvature, since it has the form of a gauge transform (4.4). We also did not include the product of two Riemann tensors with two free indices, since in d = 4 it is expressed in terms of other curvature invariants, eq. (4.14). The simplest way to obtain this relation is to vary the d = 4 Euler density, see for instance [10], or, for an alternative derivation in terms of the Weyl tensor, [11]. In deriving (4.13) we also took into account the fact that a certain combination of tensors is orthogonal to the Einstein tensor G^{kl}, eq. (4.15). We see that in two loops there may appear a term (proportional to c_0) in the metric beta function that does not vanish on-shell. The term due to c_1 is linear in G_ij, and the other terms are quadratic in G_ij.
l = 3

In the cubic order (three loops) all terms in equation (4.1) contribute, eq. (4.16). In this equation the terms containing P_3 do not vanish on-shell; hence the sum of these terms has to vanish separately from the other terms. This gives us a relation for the beta function of the coupling λ_3 in front of the quartic power of the Riemann tensor, eq. (4.17). We recall that v_{1,3} is a linear function of λ_2. The rest of equation (4.16) can be resolved for the three-loop beta function for the metric, eq. (4.18), where a and b are those that appeared in the one-loop equation (4.6) and (P_2)'_{ij} is the metric variation of the invariant P_2, see for instance [12], eq. (4.19); in (4.20) we used (4.14), and O(G²) stands for terms quadratic in G_ij.
l = 4

In the quartic order (four loops) one finds eq. (4.21). As before, this equation splits into two parts: the first part contains terms that have at least one power of G_ij, and the other part contains terms that are due to the Riemann tensor only. The first part can be used to determine the metric beta function β_4, while the second part determines the beta function for the higher curvature coupling λ_4. In this equation the 3rd term is due to the Riemann tensor only and thus does not vanish on-shell, whilst the 1st, 2nd and 5th terms contain at least one power of the Einstein tensor G_ij. The 4th term has a part due to the Riemann tensor only and another part that contains at least one power of G_ij, eq. (4.22), where (P_2)'_{ij} is given by eq. (4.19), O(G²) stands for terms quadratic in the Einstein tensor, and Tr(R^n) is the trace of a product of n copies of the Riemann tensor, treated as a matrix on antisymmetric pairs of indices. The second line in (4.22) contains at least one power of the Einstein tensor and thus should be taken into account in (4.21) when one determines the four-loop beta function for the metric; here O(G) stands for terms linear in G_ij. There is more than one invariant of 5th order that can be constructed from the Riemann tensor. Here we list some such invariants.
The higher pole counter-terms
Now it is time to look at the higher poles in the RG equation (3.21). The vanishing condition for the coefficient in front of the pole 1/ε^k, k ≥ 1, gives us equation (5.1). This is a recurrence relation that can be used to determine the counter-terms L_{k+1} and V_{k+1}, provided the lower pole counter-terms L_p, V_p, p = k, k − 1, ..., are given. In order to start the recurrence procedure one has to know the single pole terms L_1 and V_1. Expanding in powers of the Newton constant as in (3.10) and (3.11), we find the recurrence relation (5.2) for the terms L_{k,l} and V_{k,l} that appear in this expansion, where β_p are the coefficients in the power series (with respect to G_N) of the metric beta function and β̃_p is the beta function for the higher curvature coupling λ_p. In the effective action the expansion goes in two directions: the power series in 1/ε and the powers of the Newton constant G_N. Interchanging the order of these two expansions one finds

$$\sum_{k\ge1}\frac{1}{\epsilon^{k}}\big(L_k+V_k\big)=\sum_{l\ge1}G^{\,l-1}\sum_{k=1}^{l}\frac{1}{\epsilon^{k}}\big(L_{k,l}+V_{k,l}\big)\,. \qquad (5.3)$$

The expansion on the right hand side is the loop expansion, which indicates that in the l-th loop the UV divergences run from the single pole k = 1 up to the highest pole k = l. We stress once again that the basic information is always contained in the single pole; the higher poles are expressed in terms of the single pole using equation (5.2).
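Schematically (our own illustration), the structure of the double expansion (5.3) can be pictured as a triangle in the (l, k) plane, where each entry stands for the combination L_{k,l} + V_{k,l}:

$$\begin{array}{c|ccc}
l\backslash k & 1 & 2 & 3\\\hline
1 & (L+V)_{1,1} & & \\
2 & (L+V)_{1,2} & (L+V)_{2,2} & \\
3 & (L+V)_{1,3} & (L+V)_{2,3} & (L+V)_{3,3}
\end{array}$$

Each column k > 1 is fixed by the column to its left through the recurrence (5.2); only the k = 1 column carries new information from the loop calculation.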
Below in this section we analyze the solutions of Eq.(5.2) for certain values of l and k .
l = 2
In this case there is no dependence on the couplings λ_p, and the RG equation (5.2) takes a rather simple form, eq. (5.4), where we take into account that V_{1,1} = 0. In this equation the right hand side vanishes on-shell, since β_1 is linear in the Einstein tensor. On the left hand side L_{2,2} also vanishes on-shell, since by assumption the L_{k,l} contain at least one power of G_ij. Since V_{2,2} is the only term in (5.4) that does not contain the Ricci tensor or the Ricci scalar, it has to vanish identically,

$$V_{2,2}=0\,. \qquad (5.5)$$

We remark that the two-loop result (5.5) was first obtained, using methods different from ours, by Chase in 1982 [13] (see also the discussion in [14]). The counter-term L_{1,1} takes the form (4.6).
Using the beta function (4.8) we find eq. (5.6). We see that L_{2,2} is at least quadratic in G_ij.
l = 3
Two values of k are possible: k = 1 and k = 2. For k = 1 the RG equation (5.2) is eq. (5.7), while for k = 2 it is eq. (5.8), where, as we have shown earlier, β̃_2 = 2v_{1,2}. In this order the counter-terms can be at most linear in λ_2, so that V_{1,3} = V^{(0)}_{1,3} + V^{(2)}_{1,3}λ_2, and the same holds for the counter-terms L_{1,3}. Here and below we use the notation f(λ) = f^{(0)} + Σ_{a≥2} f^{(a)}λ_a for a linear function of λ. Separating in each equation the terms vanishing on-shell from the terms non-vanishing on-shell, one solves these two equations and obtains the counter-terms V_{k,3} and L_{k,3}, k = 2, 3, eq. (5.9). A few things should be noticed. First of all, none of the counter-terms V_{k,3}, k = 2, 3 and L_{k,3}, k = 2, 3 depends on λ_2. Next, V_{3,3} vanishes identically, in exact parallel with the vanishing of V_{2,2} and V_{1,1}. Finally, looking a bit more carefully at L_{3,3}, we can see that it is at least quadratic in the Einstein tensor, similarly to L_{2,2} (5.6) and L_{1,1} (4.6). We will see whether some of these observations persist at higher loop order.
l = 4
In this case three values of k are possible: k = 1, k = 2 and k = 3. For the beta functions β̃ one has that β̃_2 = 2v_{1,2} is independent of λ and that β̃_3 = 3v^{(2)}_{1,3}λ_2. The metric beta functions β_1 and β_2 do not depend on λ, while β_3 is linear in λ_2 (see (4.18)). The analysis of the RG equations (5.2) goes along the same lines as for l = 3. We skip the details of the analysis and summarize the results below.
The resulting expressions for the counter-terms V_{k,4} and L_{k,4} are given in eqs. (5.10)-(5.12). The counter-terms (5.11) and (5.12) do not depend on λ_2 or λ_3. The counter-term V_{4,4} vanishes identically, similarly to V_{3,3} and V_{2,2}. A careful analysis (which we perform in the next section) demonstrates that L_{4,4} is a polynomial in G_ij that starts with a quadratic term. This is similar to what we have found for L_{2,2} and L_{3,3} and to what was known for L_{1,1} (4.6).
The GR counter-terms
As we have seen above, the counter-terms have different origins. Some of them originate from Feynman diagrams with only the usual GR vertices coming from the General Relativity action, while the other counter-terms come from diagrams where additional vertices due to the higher curvature couplings are present. In this section our goal is to isolate those counter-terms that are due to the GR vertices only; we call them the GR counter-terms. They can be obtained from the total counter-terms V_{k,l} and L_{k,l} by taking the limit of vanishing couplings {λ_p} and neglecting the derivatives of the total counter-terms with respect to λ_p, p ≥ 2. We denote the GR counter-terms vanishing on-shell by L_{k,l} and those non-vanishing on-shell by V_{k,l}. The recurrence relations for the GR counter-terms are obtained from (5.2) by the limiting procedure just described, eq. (6.1), where the beta function in (6.1) is the metric beta function in the limit of vanishing λ; for p = 1 and p = 2 it is the same as β_1 and β_2. As one can see from our analysis in the previous section, the GR counter-terms in the highest pole coincide with the total ones for k = 1, 2, 3, 4.
6.1 Some general properties of GR counter-terms in the highest pole
The GR counter-terms in the highest pole satisfy equation (6.2), where β_1 is the one-loop metric beta function (4.8). The right hand side of this equation necessarily contains at least one power of G_ij (due to β_1), and so does L_{k+1,k+1} on the left hand side. The only term that does not contain G_ij is V_{k+1,k+1}, and hence it has to vanish,

$$V_{k+1,k+1}=0\,,\quad k\ge1\,. \qquad (6.3)$$

So one has the following

Statement 1. The GR counter-terms in the highest pole k = l at any given loop order l vanish on-shell.
Taking into account (6.3), the RG equation (6.2) can be written in the simpler form (6.4). We have seen that L_{2,2} and L_{3,3} vanish quadratically in G_ij. This property can be extended to the GR counter-terms L_{k,k} for any k ≥ 2, by induction. The counter-term L_{2,2} is at least quadratic in G_ij. Let us assume that L_{k,k}, k > 2, is at least quadratic in G_ij, so that it can be represented in the form (6.5), where Y^{(k)}_{ij} has a term linear in the Einstein tensor G. Then, varying (6.5) with respect to the metric and neglecting the terms quadratic and of higher order in G_ij, one obtains (6.6) for L'_{k,k}. Substituting this into eq. (6.4), one finds that L_{k+1,k+1} can be represented in a form similar to (6.5), with a Y^{(k+1)} that necessarily has a term linear in the Einstein tensor as well. Hence, one concludes that L_{k+1,k+1} is at least quadratic in G_ij.
Thus, one has the following Statement 2. At any loop order l the GR counter-terms in the highest pole k = l vanish on-shell quadratically.
We remark here that both a and b appearing in the one-loop counter-term L_{1,1} (4.6) depend on the gauge conditions, see [15] and [16]. There may exist a certain gauge for which both a = 0 and b = 0, see [15], where some such gauge conditions were found. In this case all the GR counter-terms in the highest poles vanish identically, L_{k,k} = 0, k ≥ 1 (as does the one-loop beta function β_{1,ij}). This is so up to a topological Euler term, which we ignore here.
On the other hand, in an alternative approach that makes use of the so-called unique quantum effective action of Vilkovisky, one ends up with certain non-vanishing values of a and b which are claimed to be unique in quantum gravity [17], [18]. In any case, it makes sense to keep the consideration general and allow arbitrary non-vanishing a and b. Then our Statement 2 is non-trivial, since it restricts the possible dependence of the highest pole counter-terms on the Einstein tensor. For instance, the possibility that L_{k,k} might vanish by a power law O(G^k), with the power depending on the value of k, is ruled out by Statement 2. Among other things, our results may also serve as consistency conditions to be used as a tool to check higher loop calculations.
6.2 Some general properties of GR counter-terms in a sub-leading pole k = l − 1

Consider now the first sub-leading pole at a given loop order. From the recurrence relation (5.2) we find eq. (6.7), where we have already taken (6.3) into account. The beta function β_2 is given by Eqs. (4.12)-(4.13); it does not necessarily vanish on-shell. However, by our Statement 2, L_{k,k} vanishes quadratically in G_ij and its variation L'_{k,k} vanishes linearly. Therefore, all terms on the r.h.s. of (6.7) vanish on-shell. On the l.h.s. of (6.7), L_{k+1,k+2} vanishes at least linearly in G_ij. Thus, we see that in eq. (6.7) there is only one term, V_{k+1,k+2}, that does not vanish on-shell, and hence it has to be zero,

$$V_{k+1,k+2}=0\,,\quad k\ge1\,. \qquad (6.8)$$

Statement 3. At any loop order l the GR counter-terms in the first sub-leading pole k = l − 1 do not contain any terms that are due to the Riemann tensor only.
We note that (6.8) is valid starting with k = 1 and thus is not true for k = 0. Indeed, V_{1,2} = v_{1,2}P_2 is the single pole that appears in two loops; it is proportional to the cubic invariant P_2 (4.11). Let us consider (6.7) for k = 1, eq. (6.9). It is an expression for the second order pole that appears in three loops. We cannot claim that L_{2,3} is quadratic in G_ij. Indeed, on the r.h.s. the variation of V_{1,2} = v_{1,2}P_2 is given by (4.19), which is cubic in the Riemann tensor. Taking into account that β_1 is linear in G_ij, we conclude that L_{2,3} necessarily contains a term of the form G^{ij}(R³)_{ij}, where R stands for the Riemann tensor. Similar reasoning applies to the 3rd term on the r.h.s. of (6.9). The direct calculation gives the expression (6.10). For larger values of k ≥ 3, equation (6.7), provided one uses (6.8), reduces to eq. (6.11). Clearly, on the r.h.s. of this equation one always has a term that vanishes linearly in G_ij. This, in particular, rules out the possibility for L_{k,k+1} to vanish by a power law with a power growing with k.
6.3 Some general properties of GR counter-terms in a sub-leading pole k = l − 2
The recurrence relation (5.2) in this order leads to equation (6.12), where we have already taken into account that V_{k,k} = 0, k ≥ 1. We have shown earlier that V_{k,k+1} = 0, k ≥ 2. However, for k = 1 it is non-zero, since V_{1,2} is a non-trivial single pole. Looking at equation (6.12), we note that β_1 is linear in G_{ij} and L_{k,k} is quadratic in G_{ij}; hence the metric variation of L_{k,k} is at least linear in G_{ij}. On the other hand, L_{k,k+1} is linear in G_{ij}, and hence its metric variation may be non-vanishing when G_{ij} = 0. Putting G_{ij} = 0 on both sides of (6.12), we find (6.13). This indicates that the second sub-leading pole may not vanish on-shell. Considering the decreasing order of the pole at a fixed loop order l, this is the first time that a higher pole may not vanish on-shell. The value k = 1 is a special case: one has more non-vanishing terms in equation (6.12) in this case, see (6.14). The terms on the r.h.s. of this equation are due to a single pole (k = 1). We recall that L_{1,1} is quadratic in G_{ij} and hence its metric variation is linear in G_{ij}. Putting G_{ij} = 0 on both sides of (6.14), we find (6.15). This expression is what we called V_{2,4} in (5.10). A general form of the beta function β_{2,ij} is given by (4.12), (4.13). Imposing G_{ij} = 0, only the term with c_0 survives in (4.13). This term is proportional to g_{ij}, so that (6.15) simplifies in the limit G_{ij} = 0 to (6.16). L_{1,2} has the form (4.10), (4.13), while V_{1,2} = v_{1,2} P_2, whose metric variation is given by (4.19), (4.20). Putting everything together, we find (6.17). A similar analysis can be done for k ≥ 2, see equation (6.13). Since L_{k,k+1} is linear in G_{ij}, it can be written in the form (6.18), where the tensor Z^{ij}_{(k)} has curvature order k and is constructed from the Riemann tensor, so that it does not vanish on-shell. Then one finds (6.19). Using this equation and (4.12), (4.13) for the beta function β_2, one finds (6.20). Our analysis in this subsection can be summarized in the following
Statement 4. At any loop order l ≥ 4 the GR counter-terms in a sub-leading pole k = l − 2 do not necessarily vanish on-shell: there may appear some terms that are due to the Riemann tensor only.
The appearance of non-vanishing on-shell terms is possible in other sub-leading poles, k = l − 3, l − 4, . . . . This can be easily analyzed using our equation (6.1). We, however, do not consider this question in the present paper.
Remarks on previous works
We finish this section with some remarks concerning the compatibility of our work with previous results in the literature. The only earlier paper we are aware of that actually computed the higher pole counter-terms in quantum gravity is that of Goroff and Sagnotti [5]. In equation (3.18) of that paper they presented the result of an off-shell 2-loop calculation. It contains both the single pole terms and the double poles, i.e. in our notation they computed V_{1,2}, L_{1,2} and V_{2,2}, L_{2,2}. We are here interested in the double pole counter-terms. Unfortunately, the comparison of these earlier results with our analysis shows certain signs of disagreement. Indeed, in (3.18) of [5] the double pole includes terms cubic in the Ricci tensor, which are absent in our eq. (5.6). Even more, (3.18) contains a term which is linear in G_{ij}, namely R_{αβγδ} R^{αβγ}{}_σ R^{δσ}, although in our analysis the counter-term L_{2,2} is necessarily quadratic in G_{ij}. We believe that a possible source of the disagreement is the following. The authors of [5], using a weak field approximation over Minkowski spacetime, compute in two loops the UV divergences for the cubic vertices that contain six derivatives; the result of the calculation is then used to fix the coefficients in front of the possible cubic curvature invariants. They count nine cubic invariants, I_1, …, I_9 (see their equations (3.12 a-c)). In their calculation the authors of [5] drop the trace of the metric perturbation and its divergences. In this way they apparently cannot determine the coefficients in front of invariants that contain the Ricci scalar. They, however, state that they can determine the coefficients for the five invariants I_3, I_5, I_6, I_8 and I_9 that do not contain the Ricci scalar.
It appears that the authors of [5] were not aware of the relation (4.14). Using this relation one finds that the invariant I_8 is not independent:
I_8 = (1/4)(I_2 + I_7) + 2(−I_4 + I_5 + I_6). (6.21)
Thus, in reality, there are only eight independent cubic curvature invariants, not nine as was assumed in [5]. The invariant I_8 has to be excluded. This means that, if everything were consistent, one would have been able to fix the coefficients for four (not five!) invariants that do not contain the Ricci scalar. This would, of course, correct the numerical factors in (3.18) of [5].
The other related earlier work that discusses (but does not compute) the higher pole counter-terms in quantum gravity is [8]. It was assumed in [8] that the GR highest pole, what we call L_{l,l}, has a first order zero, i.e. vanishes on-shell linearly. This was important for the renormalization scheme suggested in [8] to actually work. As we demonstrate here, the counter-terms L_{l,l} vanish quadratically, so that the scenario suggested in [8] cannot be realized in pure quantum gravity (with zero cosmological constant).
Quantum action as a renormalized gravitational action
In this paper we have introduced two renormalization group equations: one for the metric (3.2) and the other for the quantum effective action (3.20). These two sets of equations appear to know about each other through the metric beta function β_{ij}: it is determined by the single pole terms in the effective action by means of equations (4.1), (4.2). In this section we want to show that there is a deeper relation between the two RG equations. This relation can in fact be anticipated by recalling how renormalization works in renormalizable field theories. Indeed, in a renormalizable field theory all UV divergences in the quantum effective action can be hidden in the field renormalization and the renormalization of the coupling constants, so that the quantum action takes the original classical (bare) form when expressed in terms of the bare fields and couplings. In the case of gravity the classical (bare) action should include not only the original General Relativity term G_N^{-1} L_0 but also the higher curvature terms W = Σ_{l≥2} G_N^{l−1} λ_l P_l that are needed to renormalize the UV divergent terms V_1 = Σ_{l≥2} v_{1,l} P_l. We carry out the analysis in this section in two steps. First, we demonstrate that the renormalization of the bare GR action correctly reproduces a certain class of UV divergent terms that we specify below. This is true not only for the single pole terms but also for the higher order poles. For completeness, we check the latter agreement in detail for the double pole; it is of course guaranteed by the RG structure that the higher poles agree provided the single pole is the same. Then, in the second part of this section, we make the general statement that all UV divergent terms can be hidden in the renormalization of the total gravitational action, L_gr = G_N^{-1} L_0 + W. There we focus only on the single pole terms.
Renormalization of the GR action
As we have explained above, we first concentrate on the terms that are independent of the higher curvature couplings. Therefore, in our analysis in this section we assume that such terms are not present, i.e. we consider the case v_{1,l} = 0 and λ_l = 0. Since the quantum action depends on v_{1,l} and λ_l analytically, this limit can always be arranged. Then for the corresponding part of the quantum action (3.20) one finds (7.1), where V_1(g_R) = 0 as we have just explained. Respectively, in the expression for the metric beta function one has to put v_{1,l} = 0 and λ_l = 0. We now want to show that (7.1) is in fact identical to the classical GR gravitational action expressed in terms of the bare metric according to (3.2). Thus, our claim is (7.2): all divergences that are present in (7.1) are, effectively, hidden in the classical gravitational action. Provided g_B(g_R) takes the form (3.2), one has (7.3). The latter expression can be expanded as a formal power series in ε^{-1}, see (7.4), where we use the definitions (3.5) and (3.22). The expansion (7.4) is an appropriate generalization of Faà di Bruno's formula for derivatives of a composite function; a simple form of it is expressed in terms of the Bell polynomials. Here we concentrate only on the first few terms in this formula. Comparing the expressions (7.1) and (7.4), we see that the first term in both expressions is the same, G_N^{-1} L_0. Let us then look at the second term in both expressions. In equation (7.4) this term is (7.5), where we used the expansion in powers of G_N, h_1 = Σ_{l≥1} G_N^l h_{1,l}, and in the last equality we used the relation (3.9) between the metric beta function β_l (β = Σ_{l≥1} G_N^l β_l) and h_{1,l}: h_{1,l} = l^{-1} β_l. The next point is that eq. (4.2), in the limit we consider in this section, reduces to equation (7.6), so that eq. (7.5) is precisely ε^{-1} L_1 = ε^{-1} Σ_{l≥1} G_N^{l−1} L_{1,l}. We remind once again that V_1 = 0 in the limit we consider here, so that only L_1 appears as a single pole. We have thus proved that the single pole terms in both expressions (7.1) and (7.4) are identical. Since the higher poles are determined by the single pole and the metric beta function (which is itself related to the single pole and is, thus, the same in both cases), this is sufficient to prove that all other terms (higher poles) in (7.1) and (7.4) are the same and the two expressions are identical. Below we check the equality for the double pole ε^{-2} in (7.1) and (7.4), in order to see how the equality works at higher order and to check that there are no hidden pitfalls in this simple proof.
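As a sketch of the single-pole matching just described, assuming the metric renormalization g_B = g_R + Σ_k ε^{-k} h_k of (3.2), the relation h_{1,l} = l^{-1} β_l of (3.9), and the reduced form of (7.6) (here taken to read l L_{1,l} = L_0′ · β_l), the expansion works out as follows; primes denote metric variations of L_0:

```latex
% Sketch of the single-pole matching; reconstructed, not a verbatim quotation.
\begin{align*}
G_N^{-1} L_0(g_B)
  &= G_N^{-1} L_0(g_R)
   + \epsilon^{-1} G_N^{-1} L_0' \cdot h_1
   + \epsilon^{-2} G_N^{-1}\!\left( L_0' \cdot h_2
      + \tfrac{1}{2}\, h_1 \cdot L_0'' \cdot h_1 \right) + \dots \,,\\
G_N^{-1} L_0' \cdot h_1
  &= \sum_{l \ge 1} G_N^{\,l-1}\, L_0' \cdot h_{1,l}
   = \sum_{l \ge 1} G_N^{\,l-1}\, l^{-1}\, L_0' \cdot \beta_l
   = \sum_{l \ge 1} G_N^{\,l-1}\, L_{1,l} \;=\; L_1 \,.
\end{align*}
```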
The double pole ε^{-2} in (7.1) is given by (7.7). On the other hand, in equation (7.4) the double pole is (7.8), where we used the second relation in (3.9) to express h_{2,l} in terms of h_{1,p}, p = 1, . . . , l − 1.
Comparing the expressions (7.7) and (7.8), we see that the second term, due to L_0, is the same in both expressions. Then we note that the second variation of L_0 is symmetric, so that (7.9) holds. This can be demonstrated by a direct computation: indeed, for two symmetric tensors A_{ij} and B_{ij} one finds (7.10), which is the final ingredient needed for the demonstration of the equality of (7.7) and (7.8). Some remarks are in order. The relation (7.2) may have been anticipated. Indeed, many authors have noticed earlier that any counter-term of the form G_{ij} X^{ij} that contains at least one power of the Einstein tensor (and thus vanishes on-shell) may be absorbed into the original General Relativity action by means of a redefinition of the metric, g_{ij} → g_{ij} + G_N X_{ij} (see, for instance, [19] for a relevant discussion). Since the single pole terms L_{1,l} are of this type, it is of course natural to expect that by a similar redefinition all of them (and the higher poles related to L_{1,l} by the RG equations) can be consistently hidden inside the classical action L_0. The metric renormalization (3.2) can be viewed as a consistent way of performing a redefinition of this type. However, the point of this section is still non-trivial. The related higher poles contain V_{k,l}, k ≥ 2, that do not vanish on-shell. Nevertheless, all such terms go away as soon as the metric in the classical action is redefined according to (3.2), (3.9). The trick is done by the higher order variations of the classical action L_0, which produce terms that are non-vanishing on-shell. That the entire procedure is self-consistent is guaranteed by the renormalization group equations.
Renormalization of the total gravitational action
A natural question arises: can one generalize this property to the complete set of counter-terms, i.e. to the generic case of non-vanishing {λ_l} and {v_{1,l}}? A reasonable guess is that the respective generalization of the classical action should include the higher order terms that are due to the Riemann tensor only, see (7.12), where, in addition to the renormalization of the metric, one has to include the renormalization of the higher order couplings {λ_l}. Notice that the higher order curvature terms include only the terms with the Riemann tensor. We are now going to show that this is indeed the right form of the bare gravitational action. Consider (7.12) as a function of the bare metric g_B and the bare coupling constants {λ_l^B} (3.16) and expand it in a formal power series as above. It is sufficient to look at the UV divergent terms in a single pole (the higher poles are derived from a single pole by the RG equations):
L_gr(g_B, λ^B) = L_gr(g_R, λ^R) + ε^{-1} Q_1 + . . . ,
Q_1 = Σ_{l≥1} G_N^{l−1} Q_{1,l}, Q_{1,1} = L_0′ · h_{1,1},
Q_{1,l} = L_0′ · h_{1,l} + a_{1,l} P_l + Σ_{p=2}^{l−1} λ_p P_p′ · h_{1,l−p}, l ≥ 2. (7.13)
The last term in Q_{1,l} is non-trivial for l ≥ 3. We now have to incorporate the dependence of all quantities, such as the metric, the beta functions and the terms in the quantum action, on the couplings {λ_p}. Therefore, all these quantities are assumed to be decomposed in power series with respect to λ. For a quantity A_l that appears at loop order l ≥ 2 one thus has the decomposition (7.14). In each sum the condition (3.15) is assumed to be satisfied. For any given l there is a finite number of terms in (7.14). With these definitions, we find for Q_{1,l}, l ≥ 2, the expressions (7.15), (7.16). The same algorithm works in the n-th order of equation (4.2). Using equations (7.16) and dropping the overall factor (l − p_1 − · · · − p_n) (which is non-zero due to condition (3.15)), one finds (7.17). So the single pole Q_1 in the power series expansion of the bare gravitational action (7.13) is indeed identical to the single pole L_1 + V_1 in the quantum effective action. This proves our final statement.
Statement 5. The complete set of UV divergent terms in quantum gravity can be consistently hidden in the bare gravitational action (7.12), which includes terms of higher order in the Riemann tensor, expressed in terms of the bare metric and the bare higher curvature couplings.
This completes the present analysis.
Conclusion
We have formulated a renormalization group approach to perturbative quantum gravity based on 't Hooft's method developed earlier for renormalizable theories. Our formulation includes the renormalization of the metric, of the higher Riemann curvature couplings, and the renormalization of the quantum action. The equations (3.18), (3.20) and (5.2) form the complete set of renormalization group recurrence equations that can be solved to determine the higher pole counter-terms in the quantum action. The metric and higher coupling beta functions are determined by solving eq. (4.2). These equations, and Statements 1-5 based on them, constitute the main result of this paper. The analysis in the present paper has been done in spacetime dimension d = 4; it is of interest to generalize it to other values of d.
We suspect that the approach developed in the present paper can be extended to other theories conventionally considered non-renormalizable, such as a scalar field theory with a non-renormalizable potential and interacting theories of the Horndeski type. We plan to consider these theories elsewhere.
Our approach may have many applications in quantum gravity. The recurrence equations that we derived can be used as a consistency check in higher loop calculations, which in the case of quantum gravity are very laborious and time-consuming, should any be performed in the future. Another possible application is the computation of black hole entropy in perturbative quantum gravity, along the lines developed in [20]. Finally, it would be interesting to analyze whether one can reconcile the approach developed in this paper with the renormalization ideas suggested in [21]. A related direction is to extend the present approach to the case of gravity with a non-zero cosmological constant. This will be considered in a subsequent work. | 2020-09-03T01:00:31.322Z | 2020-09-02T00:00:00.000 | {
"year": 2021,
"sha1": "12ab282ed0886dce631c59848eeb654d83e32c89",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.nuclphysb.2020.115246",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "543ce154c8067fbc6e938b02f95dfc2b0af596ae",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
1480073 | pes2o/s2orc | v3-fos-license | A Case of Secondary Hypertension Associated with the Nutcracker Phenomenon
A 25-year-old Korean woman was referred for uncontrolled hypertension. Laboratory examination revealed increased plasma renin activity and microscopic hematuria. Computed tomography demonstrated compression of the left renal vein (LRV) between the aorta and superior mesenteric artery; however, both renal arteries were intact and there was no adrenal mass. Renal vein catheterization showed external compression with a pressure gradient of up to 8 mm Hg between the LRV and the inferior vena cava. Plasma renin activity in the LRV was almost five times higher than that in the right renal vein. In this patient, renin-dependent hypertension was caused by renal congestion due to LRV obstruction.
without unilateral macroscopic or microscopic haematuria. 3) However, very rarely, NCS causes secondary hypertension due to renin hypersecretion. We report the first case of a Korean female patient who was diagnosed with renin-dependent systemic hypertension due to NCS in the absence of a renin-secreting tumor or renal artery stenosis.
Case
A 25-year-old Korean woman was referred by a general physician with a 7-month history of uncontrolled hypertension. Her past medical and family histories were otherwise non-contributory. On admission, she was 165 cm tall and weighed 53 kg. Physical examination was unremarkable except for a high blood pressure of 182/115 mm Hg. Urinalysis revealed microscopic hematuria and minimal proteinuria. Laboratory examination results were within normal limits, except for increases in plasma renin activity (>20 ng/mL/hr; normal range: supine, 0.5-1.9 ng/mL/hr; erect, 1.9-6.0 ng/mL/hr), plasma aldosterone (668.21 pg/mL; normal range: supine, 10-105 pg/mL; erect, 34-273 pg/mL) and angiotensin II (90 pg/mL; normal range: 9-47 pg/mL). Electrocardiogram showed sinus rhythm with left ventricular hypertrophy (LVH), which was confirmed as mild LVH on echocardiography. Abdominal computed tomography demonstrated compression of the LRV between the aorta and SMA, with pelvic congestion syndrome (Fig. 1). An adrenal tumor was not detected, and both renal arteries were intact. Renal scintigraphic findings using technetium-99m diethylenetriaminepentaacetic acid, with and without captopril challenge, were normal. Selective renal venography demonstrated stenosis of the LRV at the level of the aorta, with dilatation of the left ovarian vein and multiple collateral veins with contrast filling the pelvic cavity (Fig. 2). The pressure gradient between the LRV and the inferior vena cava was 8 mm Hg (normal <3 mm Hg). Plasma renin activity in the LRV was almost five times higher than that in the right renal vein (5.88 ng/mL/hr vs. 1.17 ng/mL/hr). Hypertension did not respond to a calcium channel antagonist and a beta adrenergic blocker, but the blood pressure decreased to 110/65 mm Hg after administration of the angiotensin receptor blocker candesartan (16 mg/d). The patient has not suffered from hypertension for more than 2 years with the use of this medication.
Discussion
Renin hypersecretion is the most common cause of renin-dependent secondary hypertension. In most cases, renin-dependent hypertension is caused by a renin-secreting adrenal tumor or renal artery stenosis. In this patient, LRV compression due to the nutcracker phenomenon caused renin-dependent hypertension in the absence of an adrenal tumor and renal artery stenosis. LRV entrapment syndrome, characterized by compression of the LRV between the SMA and the abdominal aorta, was first described in 1950. 2) Chait et al. 4) described the abdominal aorta and the SMA as the two arms of a 'nutcracker' that can potentially compress the LRV. This description prompted the Belgian physician De Schepper to name the phenomenon NCS. 5) The NCS is a very rare condition, and hence there are no data about its actual prevalence or incidence. According to recently published data, including case reports and small case series, this disorder occurs in the 3rd or 4th decade of life and has a predilection for women. 6) Common clinical manifestations of the NCS are hematuria, pain, pelvic varicosities, and varicocele formation. 3) The pathophysiology of NCS is not fully understood. There are some theories that explain why compression of the LRV by the SMA occurs only in a few patients, and why LRV hypertension causes hematuria and pain. One theory suggests that posterior renal ptosis with stretching of the LRV over the aorta may be a contributing factor, 7) and another suggests that abnormal branching of the SMA from the aorta contributes to the development of NCS. 8) However, very rarely, NCS causes secondary hypertension due to renin hypersecretion.
Although cases of NCS have been described regularly, most of these patients had the aforementioned symptoms and did not have systemic hypertension associated with renin hypersecretion. To the best of our knowledge, only one case of renin-dependent hypertension associated with NCS has been reported so far: a 23-year-old Japanese woman who had hypertension with elevated renin secretion due to the nutcracker phenomenon. 9) That patient underwent endovascular stent placement for the nutcracker phenomenon, although her blood pressure decreased to 100/60 mm Hg with the use of an angiotensin II receptor blocker. Although the mechanism of secondary hypertension induced by LRV compression is not obvious, a probable mechanism is described below. In animal models, elevation of renal venous pressure increases renal interstitial pressure and renin secretion. [10][11][12] Some articles have suggested that increased renal pressures (venous and interstitial) reduce glomerular filtration, affect the intrarenal blood flow, and induce the release of renin. 10) With decreased glomerular filtration, there is reduced sodium delivery to the macula densa, which stimulates renin secretion from the juxtaglomerular cells. Renoparenchymal or excretory tract disorders have been reported in patients in whom alteration of renal interstitial pressure increased the plasma renin activity. 12) Thus, LRV hypertension may have induced renin secretion in this patient.
The management options for NCS range from observation to nephrectomy, depending on the severity of symptoms. Conservative treatment has been proposed in patients with mild hematuria, while surgery such as nephrectomy, nephropexy, renocaval reimplantation or auto-transplantation is indicated in patients with massive hematuria and severe pain. Based on the available data, LRV transposition seems to be the most common surgical intervention for the NCS, and long-term results of this procedure show a high rate of improvement of symptoms. 6) Currently, external and internal stenting procedures, performed either via a minimally invasive or an endovascular approach, are promising treatment options. Since the first case was reported in 1996, 13) some patients have shown a successful outcome of vein stenting. Although large-scale clinical trials of treatment with stents have not been performed, a few case series showed good results in long-term follow-up data. Sixty-one patients with NCS were treated with endovascular stents and observed for a median period of 66 months; most of the patients experienced amelioration of their symptoms and improvement in ultrasound findings, while in 2 patients the symptoms were unchanged. 14) Based on these outcomes, we can consider an interventional or operative strategy in our patient if she wants to become pregnant or if she does not respond to the medications.
In conclusion, renal vein obstruction could be considered one of the causes of renin-dependent secondary hypertension, although renal vein obstruction in association with NCS is very rare. In this patient, laboratory examination showed increased plasma renin activity in the LRV, and blood pressure was reduced by an angiotensin II receptor blocker. Based on these findings, we attributed the renin-dependent hypertension to the nutcracker phenomenon. | 2017-11-08T22:47:25.355Z | 2014-11-01T00:00:00.000 | {
"year": 2014,
"sha1": "6c80ce6d98d839eef3426fa4a0299e6b5c063608",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc4248617?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "2d07fd6acb5313cf2044081f656c70810f7d33d1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55920858 | pes2o/s2orc | v3-fos-license | Suspension Stability and Characterization of Chitosan Nanoparticle – Coated Ketoprofen Based on Surfactants Oleic Acid and Poloxamer 188
In this research, ketoprofen was used as a drug model in the preparation of chitosan nanoparticles as a potential drug delivery system through the ionic gelation process with tripolyphosphate (TPP). The particle size analysis (PSA) revealed that the average particle size, polydispersity index (PI), and entrapment efficiency of chitosan nanoparticles prepared with oleic acid were 253.7 nm and 0.375 with drug entrapment efficiency of 73.30%. Those prepared with poloxamer 188 were 242.94 nm and 0.302 with drug entrapment efficiency of 87.89%. Scanning electron microscopy (SEM) analysis showed that the shapes of the nanoparticles, both prepared with oleic acid and poloxamer 188, were intact and spherical. Fourier transform infrared spectroscopy (FTIR) indicated several differences between the spectra of chitosanand ketoprofen-loaded chitosan nanoparticles; for example, a new peak at the wavenumber 1409/cm indicated the presence of electrostatic interaction between the carboxyl group of ketoprofen and the amino group of chitosan. The chitosan nanoparticle suspension prepared with poloxamer 188 showed smaller increases in turbidity and viscosity than that prepared with oleic acid after 34 d of storage.
Introduction
Ketoprofen belongs to the group of non-steroidal anti-inflammatory drugs (NSAIDs) used to treat inflammation, pain, and rheumatoid arthritis [1]. Ketoprofen has a short elimination half-life of about 1-3 h [2] and a low dissolution rate, which requires a higher dosage to maintain a therapeutic level in the patient's blood. Use of ketoprofen dosages greater than 300 mg, however, may cause adverse effects in the upper part of the gastrointestinal tract [3]. Therefore, a drug preparation that can improve the dissolution rate and reduce the dosage to minimize adverse effects is needed.
A nanoparticle drug delivery system holds promise for achieving this end, as such systems can deliver drugs to the right site, at the right time, and at the correct dose level [4]. The capability to pass barriers in biological systems is the method's main advantage: the active ingredient can be protected from degradation in a biological medium and delivered to the treatment site in a controlled manner [5]. Chitosan is one of the most abundant naturally occurring biopolymers, with a cationic polyelectrolyte nature; it is also non-toxic, biocompatible, and biodegradable, making it suitable as a nanoparticle drug delivery system [6].
Nanoparticle formation is affected not only by the material composition and the synthesis method; the addition of surfactant can also create a greater number of nanometer-sized particles that are more stable against agglomeration. It is therefore desirable to add a surface-active agent (surfactant) to lower the surface energy of the solution. According to Tojo et al. [7], the use of a surfactant in the preparation of nanoparticles can affect the size, PI, and structure of the nanoparticles produced. The objective of this research was to prepare tripolyphosphate-modified chitosan nanoparticle-coated ketoprofen by adding different surfactants as surface-active agents and to determine the stability of the resulting nanoparticle suspension. The surfactants used in this research were the natural surfactant oleic acid and the non-ionic surfactant poloxamer 188.
Experiment
The main materials used in this research were ketoprofen and chitosan, purchased from Kalbe Farma and Bratachem Indonesia. The specifications of the chitosan were a degree of deacetylation of 77.26%, a water content of 9.94%, and an ash content of 0.61%.
Preparation and Characterization of Chitosan Nanoparticle-Coated Ketoprofen. Ketoprofen-loaded chitosan nanoparticles were prepared by mixing 0.84 mg/mL sodium tripolyphosphate (STPP), chitosan (2.5% w/v), 0.2 mg/mL ketoprofen, and 0.8 mg/mL surfactant. Chitosan (50 mL) was added to 20 mL of STPP while homogenizing at 13,500 rpm for 10 min. Then, 20 mL of ketoprofen was added to the mixture, followed by 20 mL of surfactant, while stirring with a magnetic stirrer for 30 min at 400 rpm. Ultrasonication (20 kHz) was performed on every 25 mL of mixture for 60 min at 20% amplitude. After the ultrasonication, each mixture was centrifuged at 19,900 rpm and 4 °C for 2 h [8]. Turbidity and viscosity were measured before and after ultrasonication and after centrifugation. The resulting supernatant, which was a suspension of nanoparticles, was separated from the precipitate, and the particle size and suspension stability were then measured after 34 d of storage at room temperature and at 4 °C. The chitosan nanoparticle suspension was then spray dried to a powder. The morphology was characterized by SEM, and the functional groups were analyzed by FTIR.
Efficiency of ketoprofen entrapment in chitosan nanoparticles. Chitosan nanoparticles (25 mg) were dissolved in 50 mL of phosphate buffer (pH 7.2), stirred for 24 h, and filtered. The absorbance of the resulting filtrate was measured with a UV spectrophotometer at λmax 259.8 nm [9]. The measured absorbance was used to determine the ketoprofen concentration from a standard curve. The entrapment efficiency was calculated with the following equation (1):
Entrapment efficiency (%) = (amount of ketoprofen entrapped in the nanoparticles / total amount of ketoprofen added) × 100 (1)
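As a minimal sketch of how the entrapment efficiency can be computed from the UV absorbance via a linear standard curve (Beer-Lambert regime), assume hypothetical calibration points, a hypothetical filtrate absorbance, and a hypothetical theoretical drug load; the actual values come from the experiment described above.

```python
import numpy as np

# Standard curve: absorbance at 259.8 nm vs. known ketoprofen concentrations.
# All numerical values below are hypothetical placeholders.
std_conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # ug/mL
std_abs  = np.array([0.11, 0.22, 0.33, 0.45, 0.56])

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear fit: A = m*C + b

def conc_from_abs(absorbance):
    """Invert the standard curve to get concentration (ug/mL)."""
    return (absorbance - intercept) / slope

sample_abs = 0.40                        # hypothetical filtrate absorbance
c = conc_from_abs(sample_abs)            # ug/mL in the 50 mL filtrate
entrapped_ug = c * 50.0                  # ketoprofen recovered from 25 mg of nanoparticles

total_ug = 5000.0                        # hypothetical theoretical drug load (ug)
ee_percent = entrapped_ug / total_ug * 100.0
print(f"Entrapment efficiency: {ee_percent:.2f} %")
```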
Results and Discussion
Preparation of ketoprofen-loaded nanoparticles based on type of surfactant. Chitosan nanoparticle-coated ketoprofen was prepared by the ionic gelation method with STPP as a crosslinking agent, through homogenization at room temperature. The mechanism of chitosan nanoparticle formation is an electrostatic interaction between the positively charged amine group of chitosan and the negatively charged phosphate group of STPP [10]. This interaction creates a stable matrix, makes ketoprofen easier to entrap, and allows its release from the matrix [9]. The turbidity and viscosity analyses of the chitosan nanoparticle suspension showed that the ketoprofen-loaded chitosan nanoparticle preparation using poloxamer 188 gave lower turbidity and viscosity values than that using oleic acid (Table 1). Different turbidity values were also observed in the research conducted by Sugita et al. [11], who reported 70.6 NTU with oleic acid as the surfactant, and by Lidiniyah [12], who reported 5.42 NTU with poloxamer 188 as the surfactant. This was likely because of the decrease in particle size due to cavitation during ultrasonication, which may break particles into smaller sizes, while the larger, unbroken particles are separated during centrifugation. In the third stage, the surfactant forms organized groups of molecules called micelles, with the hydrophilic part of the surfactant oriented toward the aqueous medium and the lipophilic portion associated with the oil phase. According to Schramm et al. [13], the formation of micelles in solution is generally seen as a compromise between the tendency of the alkyl chains to avoid contact with water and the tendency of the polar parts to maintain contact with the aqueous environment. Therefore, the addition of surfactant in the synthesis of chitosan nanoparticles can stabilize the particle size so that the particles do not agglomerate. Figure 1 presents an illustration of the role of surfactants in particle size reduction during the homogenization and ultrasonication stages.
Analysis of the particle size in the suspension by PSA after centrifugation showed that the particles produced with poloxamer 188 were smaller than those produced with oleic acid (Table 2). A similar result was observed for the PI, which reflects nanoparticle size uniformity. This difference in size was assumed to be due to the different hydrophile-lipophile balance (HLB) values of the surfactants used. The HLB value greatly affects the stability of particles in a liquid medium: the higher the HLB value of the surfactant, the better it stabilizes particles in an aqueous medium. The HLB value of poloxamer 188 is 29, while that of oleic acid is only 1, so poloxamer 188, which has a long hydrophobic tail, gives higher particle stability because it can form a more compact micelle structure than oleic acid in an aqueous medium. This longer hydrophobic tail could reduce the surface tension during the cavitation process and result in more stable nanoparticles. The percentage of entrapped ketoprofen in the chitosan nanoparticle matrices was higher with poloxamer 188 than with oleic acid (87.89% and 73.30%, respectively), as entrapment efficiency correlated with particle size: the smaller the particle size, the larger the surface area, which increases the entrapment capability. This result also corresponds to a study by Sugita et al. (2010) [14], which demonstrated that the use of Tween 80 (HLB = 15) improved the encapsulation efficiency and produced nanometer-sized particles about 100-1,000 nm larger than those produced using Span 80 (HLB = 4.3).
Characteristics of the morphology and structure of chitosan nanoparticles. The SEM analysis of the ketoprofen-loaded chitosan nanoparticles prepared with poloxamer 188 and oleic acid showed intact, spherical particles, indicating that the chitosan nanoparticles were loaded with ketoprofen (Figure 2). According to Sugita et al. [14] and Wahyono et al. [9], unloaded chitosan nanoparticles are wrinkled and slightly flat in shape, whereas the ketoprofen-loaded chitosan nanoparticles had intact spherical shapes. The chitosan nanoparticles prepared with poloxamer 188 showed a smaller and more uniform particle size than those prepared with oleic acid. This result was also seen in the size percentages, amounts, and PI of the nanoparticles obtained from the PSA analysis.
FTIR analysis was used to characterize the interaction of functional groups in the nanoparticles. FTIR spectra of chitosan, ketoprofen, and ketoprofen-loaded chitosan nanoparticles prepared with oleic acid and poloxamer 188 are shown in Figure 3. In the spectrum of chitosan, there were three specific peaks at wavenumbers 3,432/cm (-OH), 1,074/cm (C-O-C) and 1,651/cm (NH2) [15]. The spectrum of chitosan differed from that of the ketoprofen-loaded chitosan nanoparticles. There was a shift in the NH2 peak of chitosan from 1,651/cm to 1,640/cm in the ketoprofen-loaded chitosan nanoparticles, accompanied by the appearance of a new peak at 1,562/cm as the result of electrostatic interaction between the amine group of chitosan and the phosphate group of TPP. The difference between the spectra of ketoprofen and the ketoprofen-loaded chitosan nanoparticles was the appearance of a new peak at 1,409/cm, due to the electrostatic interaction between the carboxylic group of ketoprofen and the amino group of chitosan, forming a carboxylate salt [9]. Additionally, a peak at 1,041/cm in the ketoprofen-loaded chitosan nanoparticles revealed the presence of the P-OH group of TPP [16]. Figure 4 shows that the use of poloxamer 188 as the surfactant slowed the increase in turbidity of the resulting ketoprofen-loaded chitosan nanoparticle suspension compared to oleic acid. The percentage increases in turbidity of the chitosan nanoparticle suspension over 34 days with poloxamer 188 at room temperature and 4 °C were 44.08% and 42.17%, respectively, while those with oleic acid were 51.40% and 51.07%, respectively. The increased turbidity of the two suspensions was probably due to agglomeration of particles in the suspension. The PSA results showed that the particle size in the chitosan nanoparticle suspension prepared with oleic acid increased to >300 nm, while that prepared with poloxamer 188 remained in the initial nanoparticle size range (200-300 nm). This may have been because the high HLB value of poloxamer 188 reduced agglomeration during storage, which could have assisted in stabilizing the particle size. However, the turbidity analyses for both chitosan nanoparticle suspensions, with poloxamer 188 and with oleic acid, stored at room temperature and at 4 °C, did not show any significant difference; therefore, it appears that the suspension could be stored at room temperature or at 4 °C. This result is similar to that of Joseph and Sharma [17], in which cytarabine-loaded chitosan nanoparticles maintained stability at room temperature and 4 °C after 34 d of storage. The viscosity stability measurements showed that the suspension viscosity of chitosan nanoparticles with poloxamer 188 increased by 59.48% during storage, while that with oleic acid increased by 61.23% (Figure 5). This was caused by particle agglomeration due to the decrease in surfactant activity, a result that supports the hypothesis of Duan et al. [18]. However, in this study, the viscosity decreased for poloxamer 188 on day 26 and for oleic acid on day 22 due to the hydrolysis of chitosan. According to El-Hefian et al. [19], chitosan stored for a long time in organic acid solution undergoes hydrolysis, which decreases the viscosity.
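The percentage increases in turbidity and viscosity quoted above follow from a simple relative-change calculation over the storage period; a minimal sketch, with the day-0 reading as a hypothetical placeholder:

```python
def percent_increase(initial, final):
    """Relative increase over the storage period, in percent."""
    return (final - initial) / initial * 100.0

# Hypothetical day-0 turbidity reading (NTU); the reported increases
# (e.g. 44.08% for poloxamer 188 at room temperature) were obtained
# from the actual day-0 and day-34 measurements.
day0 = 5.42
day34 = day0 * 1.4408
print(f"Turbidity increase: {percent_increase(day0, day34):.2f} %")  # ~44.08
```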
Conclusions
Chitosan nanoparticles prepared with poloxamer 188 were smaller and more numerous, resulting in a higher ketoprofen entrapment efficiency than those prepared with oleic acid. The addition of poloxamer 188 in the preparation of ketoprofen-loaded chitosan nanoparticles also resulted in higher stability than oleic acid over 34 d of storage. Based on the increases in turbidity, the chitosan nanoparticle suspension could be stored at room temperature or at 4 °C. SEM analysis showed that the nanoparticles obtained with poloxamer 188 and oleic acid were intact, spherical nanoparticles. FTIR analysis showed several differences among the spectra of chitosan, ketoprofen, and the ketoprofen-loaded nanoparticles.
Figure 1. Illustration of the role of surfactant in reducing the particle size.
Figure 4. Increased turbidity of ketoprofen-loaded chitosan nanoparticles with oleic acid or poloxamer 188, at room temperature and at 4 °C, over the storage period. | 2018-12-07T10:33:03.203Z | 2014-10-09T00:00:00.000 | {
"year": 2014,
"sha1": "9f265fc1f3d16495208a8817d183131939c5fef0",
"oa_license": "CCBYSA",
"oa_url": "https://doi.org/10.7454/mss.v18i3.3720",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "9f265fc1f3d16495208a8817d183131939c5fef0",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
22677020 | pes2o/s2orc | v3-fos-license | Punicalagin Induce the Production of Nitric Oxide and Inhibit Angiotensin Converting Enzyme in Endothelial Cell
Introduction: punicalagin, a hydrolysable tannin polyphenol from pomegranate, is reported to have a protective effect against many diseases due to its high antioxidant and free radical scavenging activities. Aim: this study investigated the potential antihypertensive activity of punicalagin in the human-derived EA.hy926 endothelial cell model via two mechanisms. In the first mechanism, punicalagin enhances nitric oxide production through scavenging reactive oxygen species (ROS) and activating the endothelial nitric oxide synthase enzyme (eNOS). In the second mechanism, punicalagin acts by inhibiting the angiotensin converting enzyme (ACE). Methods: The effect of different concentrations (1-100 μM) of punicalagin on EA.hy926 cells was measured using the MTT assay. ROS production was induced using Ang II, and the scavenging activity of punicalagin was assessed by flow cytometry and fluorometry. NO production was measured in order to determine the effective dose of punicalagin, followed by measuring the ability of punicalagin to induce eNOS activity and the enzyme expression by Western blotting; cellular Ca2+ concentration and ACE inhibition were also determined. Results: Punicalagin (1-60 μM) reduced the ROS production induced by added angiotensin II in EA.hy926 cells, as shown by flow cytometry and fluorometry. In addition, at the same concentrations, nitric oxide production increased in a dose-dependent manner due to increased eNOS activation. The activation of the eNOS enzyme was promoted by an increase of cellular calcium concentration at the tested concentrations. The examined punicalagin concentrations significantly inhibited ACE activity, possibly due to zinc binding. Conclusion: punicalagin clearly exhibits antihypertensive potential through a dual mechanism: induction of nitric oxide synthase, increasing nitric oxide levels, and ACE inhibition.
Introduction
Cardiovascular disease affects the circulatory system (heart, arteries, and blood vessels) and is a leading cause of death all over the world [1]. In the USA, according to the American Heart Association report in 2013, cardiovascular disease (CVD) accounted for 21.42% of deaths [2]. According to the World Health Organization (WHO), in the United Kingdom CVD was one of the main causes of death in 2011, accounting for about 19.77% of deaths, while in Saudi Arabia the death rate from CVD was 23.98% in the same year [3,4].
High blood pressure is a major risk factor for some chronic diseases such as stroke, renal disease, and cardiovascular disease [5]. The regulation of blood pressure is maintained by various physiological systems, including the kinin-nitric oxide system (KNOS) and the renin-angiotensin system.
High and unregulated production of reactive oxygen species (ROS) is another risk factor for heart disease [6,7]. In endothelial cells, enzyme systems involving NADPH oxidase, xanthine oxidase, and the mitochondrial respiratory chain are responsible for ROS production [8]. ROS production needs to be regulated, or adverse effects like oxidation may damage cell macromolecules such as proteins, lipids, and nucleic acids. Balance in ROS concentration can be achieved by the antioxidant system that exists in the human body and by taking antioxidant supplements [9]. Moreover, ROS can oxidize the nitric oxide (NO) produced by endothelial cells, leading to endothelial dysfunction and the initiation and development of cardiac disease [10].
Nitric oxide (NO) is known as a relaxing factor because it acts as a vasodilator, increases blood flow, and inhibits platelet aggregation and adhesion [11]. Calcium-dependent endothelial NO synthase (eNOS) is one of the important factors responsible for the production of NO in endothelial cells. Increased levels of NO in endothelial cells are often due to increased protein expression of the eNOS enzyme or to scavenging of the ROS produced within the cells [12,13].
In the renin-angiotensin system, angiotensin II is produced from angiotensin I by the catalytic action of angiotensin converting enzyme, and angiotensin II levels are higher than normal in patients with hypertension [14,15]. Angiotensin II acts as a vasoconstrictor, increasing blood pressure. For this reason, inhibition of ACE activity is a pharmacological target for the treatment of hypertension [16]. The ACE enzyme has a Zn2+ ion in each of its two active sites [17], and substrate binding and catalysis by ACE depend on the Zn2+ ion [18]. One of the mechanisms of ACE inhibitors is the ability to bind the Zn2+ ion [19].
Several research groups have reported significant interactions between biological systems and dietary polyphenols from vegetables and fruits, which may lead to beneficial anticancer [20,21], anti-inflammatory [22], antioxidant (ROS scavenging or metal chelating) [23], and antibacterial properties [24]. Pomegranate contains a high quantity of different polyphenols, e.g. tannins, ellagitannins, anthocyanins, catechins, and gallic and ellagic acids [25,26], including punicalagin (of the ellagitannin class). There are very few studies on punicalagin to date. Therefore, the effect and mechanism of punicalagin as an anti-hypertensive compound were investigated in the EA.hy926 cell line via two pathways.
Materials
The human endothelial-like immortalized cell line EA.hy926 was kindly donated by Dr. Bodman-Smith, University of Surrey, UK. Ethylene glycol tetraacetic acid (EGTA), hippuryl-histidyl-leucine (H-H-L) and angiotensin II were obtained from Sigma-Aldrich Chemical Co, Poole, UK.
Cell Viability and Nitric Oxide (NO) Release
The EA.hy926 cell line was seeded in DMEM with high glucose content (4.5 g/L), supplemented with 10% FBS and 5% penicillin. Different punicalagin concentrations (1-100 µM) were tested on EA.hy926 cells to measure cell viability and nitric oxide (NO) production. Briefly, EA.hy926 cells were seeded into 96-well tissue culture plates at a density of 1 × 10^4 cells/200 µl DMEM serum medium. Once the cells reached 60-70% confluence, they were treated with punicalagin for 12, 24, or 48 hours. Different incubation times were tested to determine the optimal time for nitrite production, as a marker of nitric oxide production, and cell viability. Nitrite levels in the treated cell supernatant were measured with the Griess kit according to the manufacturer's instructions, and cell viability was measured by the MTT assay at each experimental time. Cells pretreated with different concentrations of punicalagin were incubated with 20 µl of MTT dye (5 mg/ml in PBS) at 37 °C and 5% CO2 to measure cell viability after the different incubation times. At the end of each experimental time, the culture medium was aspirated and 100 µl of DMSO was added to each well for 30 seconds at room temperature to dissolve the formazan crystals. The purple colour produced was measured at 492 nm using a plate reader (Boehring CO, Marburg, Germany).
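MTT viability is conventionally expressed relative to the untreated control; a minimal sketch, assuming hypothetical blank-corrected A492 readings:

```python
import numpy as np

# Hypothetical blank-corrected A492 readings from 4 replicate wells per group.
control = np.array([0.82, 0.85, 0.80, 0.83])   # untreated cells
treated = np.array([0.95, 0.91, 0.97, 0.93])   # e.g. 20 uM punicalagin, 24 h

# Viability as a percentage of the mean control absorbance.
viability = treated.mean() / control.mean() * 100.0
print(f"Viability vs. control: {viability:.1f} %")
```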
Measurement of ROS by Flow Cytometry
The non-fluorescent dichlorofluorescein dye CM-H2DCFDA can diffuse through the cell membrane [27]; inside the cell it is oxidized by ROS to a fluorescent product, and the fluorescence intensity is proportional to the ROS content [27]. The ROS level in the EA.hy926 cell line was determined by flow cytometry using CM-H2DCFDA dye after incubation with 1, 20, 40, and 60 µM punicalagin. EA.hy926 cells (1 × 10^6 cells/ml) were exposed to the different punicalagin concentrations for 24 hours after reaching 60-70% confluence. Half of the pre-treated cells were exposed to 10 nM angiotensin II for 1 hour, while the other half were left untreated. The cells were trypsinised, washed, and re-suspended in 2 ml HBSS. Cells were then incubated with 5 µM CM-H2DCFDA (prepared in DMSO) for 30 minutes at 37 °C and 5% CO2. DCFDA fluorescence was then measured using a BD FACSCanto flow cytometer (California, USA). At least 10,000 events were acquired in the gated regions using an emission wavelength of 520 nm.
Measurement of ROS by Fluorometry
The fluorescent dye dihydroethidium (DHE) can penetrate the cell membrane and is oxidized by superoxide radicals to form 2-hydroxyethidium, a red fluorescent product [28], which interacts with DNA to enhance intracellular fluorescence [29,30]. EA.hy926 cells (1 × 10^6 cells/ml) were exposed to different punicalagin concentrations (1, 20, 40 and 60 µM) for 24 hours after reaching 60-70% confluence. After 24 hours, one group of treated cells was incubated with angiotensin II (10 nM) for 1 hour and another group was left untreated (control). All cells were then washed with HBSS buffer. Cells were covered with 3 ml HBSS buffer and incubated with 25 µM of DHE dye for 30 minutes at 37 °C and 5% CO2. The HBSS buffer was discarded and the cells were scraped from the flask with 1 ml cold methanol. The cell suspension was sonicated and filtered through a 0.22 µm membrane filter. The 2-hydroxyethidium was detected by fluorimetry (Varian, USA) with excitation and emission wavelengths of 480 nm and 580 nm, respectively.
Determination of Cellular Calcium Concentration by Fluorescence Method
Intracellular calcium concentrations were measured by fluorometry after loading the cells with fura-2/AM dye; free intracellular calcium binds the membrane-diffusible fluorescent dye fura-2/AM [31]. Briefly, EA.hy926 cells were cultured in 25 cm2 flasks at 1 × 10^6 cells/ml and exposed to different punicalagin concentrations (1, 20, 40 and 60 µM) after they reached 60-70% confluence. Following 24 hours of incubation, cells were loaded with 5 µM fura-2/AM dye and incubated for 45 minutes at 37 °C and 5% CO2. Subsequently, the dye was removed and the cells were washed and scraped from the flask with 2 ml HBSS buffer. The calcium concentration was detected by fluorometry (Varian, USA) with excitation and emission wavelengths of 340 nm and 510 nm, respectively. Ethylene glycol tetraacetic acid (EGTA) at 1 mM was used as a negative control instead of punicalagin.
Determination of Endothelial Nitric Oxide Synthase Enzyme (eNOS) Activity in the EA.hy926 Cell Line
EA.hy926 cells treated with different concentrations of punicalagin (1, 20, 40 and 60 µM) for 24 hours were lysed as follows. After this incubation, the cells were trypsinized and the resulting cell suspension was centrifuged at 1500 × g for 3 minutes and washed with 5 ml PBS. Supernatants were removed and the cell pellets were lysed by adding 300 µl of lysis buffer (Sigma Aldrich Chemical Company). The cell lysate was kept on ice for 20 minutes and then stored at -20 °C until eNOS activity was measured. EGTA (1 mM) was used as a negative control in place of punicalagin. eNOS activity was measured using a nitric oxide synthase assay colorimetric kit (Bioassay System, ENOS-100) according to the manufacturer's instructions.
Assessment of eNOS Expression by Western Blot
EA.hy926 cells were treated with 1, 20, 40 and 60 µM punicalagin, incubated for 24 hours, lysed with lysis buffer (Sigma Aldrich Chemical Company) as described in 2.2.5, and stored at -80 °C until the protein determination and western blot experiments were carried out. The protein concentration of each sample was measured by the Bradford method using the BioRad assay [32], following the manufacturer's instructions. An Invitrogen NuPAGE 4-12% Bis-Tris gel was used for protein electrophoresis. The proteins were then transferred to a polyvinylidene difluoride (PVDF) membrane. The results were visualized by chemiluminescence using Amersham film.
2.2.7. Measuring ACE Activity in EA.hy926
Angiotensin converting enzyme activity was measured in the EA.hy926 cell line after 24 hours of exposure to different concentrations of punicalagin (1, 20, 40 and 60 µM) using a modified fluorometric method [33]. Briefly, EA.hy926 cells were seeded to confluence in 25 cm2 flasks (1 × 10^6 cells/ml) and then incubated with punicalagin (1, 20, 40 and 60 µM) for 24 hours. Treated cells were washed three times with 3 ml HBSS buffer. The cells were scraped from the flask with 1 ml HBSS and frozen at -20 °C until assayed. For the assay, frozen cells were thawed and sonicated; then 20 µl samples were added to 80 µl of H-H-L (5 mM, prepared in HBSS buffer) and incubated at 37 °C for 3 hours. The incubated samples were then mixed with 1.4 ml NaOH (0.5 N) to stop the reaction. The fluorescent reagent o-phthaldialdehyde (100 µl of 10 mg/ml in methanol) was used to detect the histidyl-leucine reaction product. The reagents were incubated for 5 minutes at room temperature, followed by the addition of 250 µl of HCl (6 N). The fluorescence of the samples was measured by fluorometry (Varian, USA), using an excitation wavelength of 365 nm and an emission wavelength of 495 nm. Captopril (1 µM) was used as a positive control.
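Percent ACE inhibition in assays of this kind is typically computed relative to the uninhibited control; a minimal sketch, with all fluorescence readings hypothetical:

```python
# Percent ACE inhibition from fluorescence intensities of the
# histidyl-leucine product; all readings below are hypothetical.
def ace_inhibition(f_sample, f_control, f_blank=0.0):
    """Inhibition relative to the uninhibited control, in percent."""
    return (1 - (f_sample - f_blank) / (f_control - f_blank)) * 100.0

print(f"{ace_inhibition(f_sample=310.0, f_control=520.0, f_blank=40.0):.1f} % inhibition")
```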
Statistical Analyses
All experiments were performed at least in triplicate. For the 96-well microtiter tissue culture plates, 4 replicate wells were used per category. The data were analyzed with GraphPad Prism version 6. For significant differences between control and experimental values, the P-value between groups was determined by one-way analysis of variance followed by the Bonferroni test. The significance level was set at P ≤ 0.05.
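The described statistics (one-way ANOVA followed by Bonferroni-corrected comparisons against control) can also be reproduced outside Prism; a minimal sketch using SciPy, with hypothetical replicate values:

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements (e.g. nitrite, uM) per group.
groups = {
    "control": np.array([3.1, 3.0, 3.3, 3.2]),
    "20 uM":   np.array([4.0, 4.2, 3.9, 4.1]),
    "40 uM":   np.array([4.6, 4.8, 4.5, 4.7]),
    "60 uM":   np.array([5.1, 5.3, 5.0, 5.2]),
}

# Global one-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Bonferroni-corrected t-tests of each treatment against control.
treatments = [name for name in groups if name != "control"]
for name in treatments:
    t, p = stats.ttest_ind(groups[name], groups["control"])
    p_bonf = min(p * len(treatments), 1.0)   # Bonferroni correction
    print(f"{name} vs control: corrected p = {p_bonf:.4g}")
```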
Cell Viability and NO Production
Cell viability and nitric oxide production were measured over a time course to determine the optimal experimental exposure time. The data in Figure 1 A show that cell viability increased significantly (p ≤ 0.05) after treatment with 1, 20, 40, and 60 µM punicalagin for 12 hours, while NO production was similar to that of the untreated control (Figure 1 B). In contrast, there was a significant decrease in cell viability at 80 and 100 µM punicalagin (p < 0.0001), with a corresponding significant decrease in NO production at 12 hours (p < 0.0001). After 24 hours of incubation with 1-60 µM punicalagin, cell viability and NO production were significantly increased (p ≤ 0.05) compared with untreated cells (control), as shown in Figures 1 C and D. As for the 12-hour incubation, 80 and 100 µM punicalagin caused a significant decrease in cell viability and NO production compared with untreated cells (p ≤ 0.05).
After 48 hours of incubation, significant decreases in cell viability (p ≤ 0.05) and NO production (p < 0.001) were observed at all punicalagin concentrations (1, 20, 40, 60, 80 and 100 µM) (Figures 1 E and F). Based on the above cell viability and NO production results, punicalagin concentrations in the range 1-60 µM and a 24-hour exposure were selected for future experiments, because higher punicalagin concentrations (80-100 µM) produced a toxic effect on EA.hy926 cells. The significantly enhanced release of NO due to the activation of the eNOS enzyme is described below in section 3.3.
Measurement of ROS by Flow Cytometry and Fluorometry
Angiotensin (Ang) II has the ability to stimulate ROS production in endothelial cells through activation of a redox-sensitive signaling system. In endothelial cell lines, NADPH oxidase is considered a source of ROS, responding to Ang II by donating an electron to reduce a molecule of oxygen and produce the superoxide anion, O2•− [34]. ROS production in the EA.hy926 cell line treated with different concentrations of punicalagin for 24 hours, in the absence of Ang II stimulation, was not significantly different from ROS production in cells not treated with punicalagin. This was true using either flow cytometry (Figure 2 A) or fluorimetry (Figure 3 A) as the ROS determination method.
The ability of different concentrations of punicalagin, added to EA.hy926 cells for 24 hours, to reduce ROS production was assessed by flow cytometry after treating the cells with 10 nM Ang II for 1 hour (Figure 2 B). The results were confirmed by a second method, measuring the conversion of DHE to 2-hydroxyethidium in the presence of ROS, by fluorimetry (Figure 3 B). The fluorescence intensity in EA.hy926 cells incubated for 24 hours with punicalagin (1, 20, 40 and 60 µM) and then treated with 10 nM Ang II also decreased in a dose-dependent manner compared with cells treated with Ang II only (positive control). The fluorescence intensity was 0.83, 0.59, 0.54, 0.48 and 0.46 for the positive control and 1, 20, 40 and 60 µM punicalagin, respectively. These findings contrast with cells not stimulated with Ang II, in which no significant difference in ROS levels was found between punicalagin-treated cells and control by either the flow cytometry or the fluorescence method (Figures 2 A and 3 A, respectively). These results illustrate that punicalagin has the ability to scavenge ROS and thereby protect cellular macromolecules from ROS-mediated damage.
Figures 2 and 3 captions: A) cells treated with punicalagin alone (1-60 µM); B) punicalagin pre-treated cells incubated with 10 nM angiotensin II for 1 hour. Values are mean ± SD, n = 3; one-way ANOVA followed by Bonferroni's test (ns = non-significant, ** p < 0.001 and *** p < 0.0001 compared with control).
Figure 4 caption: protein was extracted from EA.hy926 cells after treatment with 1-60 µM punicalagin for 24 hours; EGTA (1 mM, incubated for 3 hours) was used as a negative control. Data represent mean ± SD of more than three experiments; one-way ANOVA followed by Bonferroni's test (* p < 0.05, ** p < 0.001 and *** p < 0.0001).
Determination of Cellular Calcium Concentration, eNOS Activity and Expression
Since eNOS activity is Ca2+-dependent, the cytoplasmic Ca2+ concentration was investigated in response to different punicalagin concentrations, with EGTA (a Ca2+ chelator) as the negative control. Exposure of the Ea.hy926 cell line to punicalagin (1, 20, 40 and 60 µM) for 24 hours caused a significant, dose-dependent increase in intracellular Ca2+ concentration (p < 0.001) (Figure 4A).
The effects of punicalagin at different concentrations (1, 20, 40, and 60 µM) on activation of the eNOS enzyme were determined in Ea.hy926 cells after a 24-hour exposure. As shown in Figure 4B, punicalagin caused a significant, dose-dependent increase in eNOS enzyme activity (p ≤ 0.05). EGTA (1 mM) was used as the negative control.
Western blot analysis using a specific antibody against eNOS revealed no change in eNOS protein expression in Ea.hy926 cells treated with punicalagin compared with untreated control cells (Figure 4C). The calcium concentration with 1 mM EGTA (negative control) was significantly decreased (p ≤ 0.05); accordingly, eNOS protein expression was downregulated and eNOS enzyme activity decreased. Punicalagin may increase eNOS activity via eNOS phosphorylation, which is enhanced by activation of the redox-sensitive phosphatidylinositol 3-kinase (PI3K)/protein kinase B (Akt) pathway [36,37].
Measuring ACE Activity in Ea.hy926 Cells
In this study, incubation of Ea.hy926 cells with different concentrations of punicalagin (1, 20, 40 and 60 µM) for 24 hours significantly inhibited ACE activity. Captopril was used as a positive control.
[Figure caption fragment recovered from extraction: ACE inhibition activity in Ea.hy926 cells exposed to punicalagin (1-60 µM) for 24 hours. Data represent mean ± SD of more than three experiments. Comparisons of means were made using a one-way ANOVA followed by Bonferroni's test (* = p < 0.05, ** = p < 0.001 and *** = p < 0.0001).]
Discussion
Several studies have shown effects of polyphenols on NO production that agree with our findings. The EA.hy926 cell line incubated with red wine polyphenol extract (100-600 µg/ml for 18 hours) produced a significant increase in NO production [38]. Dihydrocaffeic acid (a caffeic acid metabolite) at different concentrations (0-200 µM) significantly increased NO production in a dose-dependent manner after being added to EA.hy926 cells for 18 hours [39]. Another study examined NO production after incubating a bovine pulmonary artery endothelial cell line with 50 and 100 µM pomegranate juice for 24 and 48 hours and observed increased NO production [40]. Punicalagin and o-galloyl punicalagin extracted from Terminalia calamansanai leaves (50 µM each) produced a significant increase in NO production in a bovine aortic cell line [41]. NO production was also significantly increased in a dose-dependent manner after incubating the EA.hy926 cell line for 24 hours with different polyphenols (resveratrol, epicatechin gallate and epigallocatechin gallate) [42]. Different concentrations (0.1-10 µM) of polyphenols extracted with 70% acetone from black currant, applied to the EA.hy926 cell line for 10 minutes, produced a significant, dose-dependent increase in NO production [36].
Numerous studies have examined the effect of polyphenols as ACE inhibitors in endothelial cell lines. Black tea, green tea and rooibos tea, and epicatechin, epigallocatechin, epicatechin gallate and epigallocatechin gallate were incubated with the HUVEC cell line at different concentrations for 10 minutes. All experimental components inhibited ACE activity at all concentrations except rooibos tea, which did not contain any catechin compounds [43]. In 2009, another study on the HUVEC cell line investigated ACE inhibition after exposure to an aqueous phenolic extract of Vaccinium myrtillus (bilberry). The bilberry extract contained several polyphenols such as quercetin, stilbene, resveratrol, ferulic acid and coumaric acid; these components (0.000625-0.1 mg/ml) significantly inhibited ACE activity in a dose-dependent manner in HUVEC cells treated for 10 minutes [16]. Aviram and Dornfeld (2001) found that serum ACE activity was significantly inhibited, by 36%, in seven hypertensive patients who consumed pomegranate juice for 2 weeks. Each treated patient's serum was also incubated with pomegranate juice (50-350 µM) for 15 minutes at 37 °C, and enzyme activity was significantly inhibited in a dose-dependent manner [14]. These earlier results are consistent with the ACE-inhibitory effect of punicalagin observed here in the EA.hy926 cell line, which is a new finding. The ACE-inhibitory activity may be due to metal chelation by punicalagin, as a zinc ion is present in the active site of ACE.
As reported previously in several in vitro studies, the OH groups of phenolic compounds play an important role in scavenging ROS [44,45,46]. The Chinese medicine seabuckthorn contains different flavonoids, e.g., quercetin, isorhamnetin and kaempferol. Seabuckthorn at different concentrations (9.38-37.5 µg/ml) and 15 µg/ml quercetin were each incubated with the EA.hy926 cell line for 20 minutes before adding 100 µg/ml oxidized LDL as a ROS inducer for a 24-hour incubation; a protective effect against the superoxide anion in EA.hy926 cells was found [47]. ROS production was significantly decreased in a liver cell line (HepG2) exposed to different concentrations of quercetin and rutin (1-100 µM) for 24 hours followed by incubation with 200 µM H2O2 for 3 hours [48]. The antioxidant activities of pomegranate juice, concord grape juice and blueberry juice were examined and showed scavenging of the superoxide anion and protection of NO from destruction by ROS. Pomegranate juice, diluted 6-fold and used in a volume of only 3 µl, showed very high antioxidant activity, while the same effect required 300 µl of undiluted blueberry juice or 1000 µl of undiluted concord grape juice [40]. In this study, punicalagin (1-60 µM) significantly reduced ROS levels in the Ea.hy926 cell line after stimulation by Ang II (p < 0.001).
This study demonstrates that punicalagin from pomegranate (1-60 µM) produced a significant increase in calcium concentration in the EA.hy926 cell line after incubation for 24 hours. Polyphenols can affect intracellular Ca2+ stores, releasing Ca2+ or increasing its entry through the cell membrane [35]. This could explain the significant induction of Ca2+ concentration by the different punicalagin concentrations in the endothelial cell line (p < 0.001).
As the eNOS enzyme is calcium-dependent, several researchers have measured calcium concentrations in the EA.hy926 cell line after incubation with different polyphenols. The EA.hy926 cell line incubated for 20 hours with 100-600 µg/ml of a red wine polyphenol water extract showed a significant increase in calcium concentration, which increased the eNOS protein level and produced a significant release of NO [38]. Another study examined calcium concentrations in EA.hy926 cells after incubation with punicalagin and o-galloylpunicalagin extracted from Terminalia calamansanai leaves; exposure to 50 µM of each compound for 12 hours produced a significant increase in calcium concentration [41].
Similar to the present outcome, pomegranate juice (50 and 100 µM) was examined in a bovine pulmonary artery endothelial cell line for 24 and 48 hours; the treatment showed no significant effect on eNOS protein expression at any concentration [40]. Chen et al. (2008) examined the effect of punicalagin and o-galloylpunicalagin extracted from Terminalia calamansanai leaves on eNOS expression: eNOS protein expression in a bovine aortic cell line was not affected after incubation for 12 hours with 50 µM of either compound [41].
In contrast, Leikert et al. (2002) found that a red wine polyphenol water extract (100-600 µg/ml) increased eNOS protein levels after incubation with the EA.hy926 cell line for 20 hours [38]. Huang et al. (2004) found a significant increase in protein expression and activity of the eNOS enzyme in the EA.hy926 cell line after incubation with dihydrocaffeic acid (a caffeic acid metabolite) at different concentrations (0-200 µM) for 18 hours. The effect of resveratrol, epigallocatechin gallate and epicatechin gallate on the eNOS enzyme was investigated in the EA.hy926 cell line at different concentrations (0-100 µM) for 24 hours; the eNOS protein level was significantly increased in a dose-dependent manner [42]. Edirisinghe et al. (2011) studied a 70% acetone extract of black currant polyphenols: different extract concentrations (0.1-10 µM), applied to the EA.hy926 cell line for 10 minutes, produced a significant, concentration-dependent increase in eNOS enzyme expression [36]. Hesperetin, a flavanone present in citrus fruits, at different concentrations (0.01-10 µM) incubated for 1-30 minutes, significantly stimulated phosphorylation of Akt, AMPK, and eNOS to produce NO in a concentration- and time-dependent manner in vascular endothelial cells [49].
Conclusions
All results presented in this study support the proposition that punicalagin is a polyphenol that could play a role in reducing the risk of cardiovascular disease. Punicalagin incubated with EA.hy926 cells at different concentrations for 24 hours produced a significant inhibition of ACE enzyme activity and increased NO production. Increased NO production occurred via increased eNOS activity due to an increase in calcium concentration; no induction of eNOS enzyme expression was observed. This NO is protected from destruction by ROS through the scavenging activity of punicalagin. These findings indicate that punicalagin may be helpful for lowering blood pressure, whether through dietary intervention or through the development of new anti-hypertensive treatments from a natural product.
"year": 2016,
"sha1": "cc8a150d5eaa3d5227f85264450a0b8c23a414cc",
"oa_license": "CCBY",
"oa_url": "http://www.hrpub.org/download/20160930/UJPH7-17607638.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b11f56f5a345ced7745c2aef7c8fc52ff9f95c17",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Diagnostic value of the posterior talofibular ligament area for chronic lateral ankle instability
An injured posterior talofibular ligament (PTFL) is one of the causes of chronic lateral ankle instability (CLAI). Previous research has demonstrated that PTFL thickness (PTFLT) is associated with chronic ligament injuries. However, ligament hypertrophy is different from ligament thickness. We therefore introduced the PTFL cross-sectional area (PTFLCSA) as a diagnostic imaging parameter to assess hypertrophy of the whole PTFL, hypothesizing that the PTFLCSA is a key morphological diagnostic parameter in CLAI. PTFL data were obtained from 15 subjects with CLAI and from 16 normal individuals. T1-weighted axial ankle-MR (A-MR) images were acquired at the level of the PTFL. We measured the PTFLT and PTFLCSA at the posterior aspect of the ankle using our imaging analysis program. The PTFLT was measured as the thickness between the anterior and posterior fibers of the PTFL; the PTFLCSA was calculated as the whole cross-sectional PTFL area. The average PTFLT was 3.43 ± 0.52 mm in the healthy group and 4.89 ± 0.80 mm in the CLAI group. The mean PTFLCSA was 41.06 ± 12.18 mm² in the healthy group and 80.41 ± 19.14 mm² in the CLAI group. CLAI patients had significantly greater PTFLT (P < .001) and PTFLCSA (P < .001) than the healthy group. A receiver operating characteristic curve analysis demonstrated that the optimal cutoff score of the PTFLT was 4.19 mm, with 93.3% sensitivity, 93.7% specificity, and an area under the curve of 0.97. The most suitable cutoff value of the PTFLCSA was 61.15 mm², with 93.3% sensitivity, 100% specificity, and an area under the curve of 0.99. Although the PTFLT and PTFLCSA were both significantly associated with CLAI, the PTFLCSA was the more exact morphological measurement parameter.
Introduction
Lateral ankle ligamentous sprain is considered one of the most common injuries affecting the lateral ligaments of the ankle; according to previous research, 20 to 30% of all sports injuries are ankle injuries. Lateral ankle ligamentous sprains are frequently only partially treated. The rate of repeated ankle sprain is >40%, and recurrent ankle injury can lead to chronic lateral ankle instability (CLAI) and ankle osteoarthritis. [1][2][3][4] The lateral ankle ligaments are very important for maintaining static and dynamic ankle stability, and intact ligaments are needed for the movement and support functions of the ankle and foot. [5,6] As is well known, frequent injuries of ligaments such as the calcaneofibular ligament (CFL), deltoid ligament and anterior talofibular ligament (ATFL) lead to CLAI. [5][6][7][8] However, little attention has been paid to the relationship between CLAI and posterolateral ankle ligaments such as the posterior talofibular ligament (PTFL). The PTFL on ankle-magnetic resonance (A-MR) images is visualized between the medial surface of the lateral malleolus and the talus. [9][10][11] The posterior and anterior fibers of the PTFL originate from the medial aspect of the lateral malleolus of the fibula. The posterior fibers attach to the lateral tubercle of the posterior process of the talus, and the anterior fibers attach to the surface of the lateral talus posterior to the lateral malleolar facet. [11] An injured PTFL has been thought to be an important finding in CLAI. [11,12] A-MR images facilitate analysis of pathologic disorders of the PTFL. [13] However, many treating doctors rarely consider A-MR findings when assessing chronic morphologic changes of the PTFL in CLAI patients.
Moreover, previous studies analyzed the PTFL using only a single measurement at the approximate halfway point of the ligament. However, asymmetrical thickening and partial tears of the PTFL can occur anywhere, [14] so measurement errors can occur in some cases. In contrast to the PTFLT, measured between the anterior and posterior fibers, the cross-sectional area is not subject to these measurement errors because the PTFL cross-sectional area (PTFLCSA) captures the whole ligament. Thus, to assess hypertrophy of the PTFL, we introduced the PTFLCSA as an adjuvant new imaging diagnostic parameter. We hypothesized that the PTFLCSA is an important morphologic parameter in the diagnosis of CLAI and therefore used A-MR to compare the PTFLT and PTFLCSA between CLAI patients and a healthy group.
Patients
This retrospective study was approved by the Catholic Kwandong University Institutional Review Board (IRB no: IS19RISI0049). We reviewed CLAI patients who were diagnosed at our orthopedic clinic with ankle discomfort from May 2016 to January 2019 and who had undergone A-MR. The inclusion criteria for CLAI were: a history of recurrent ankle injuries; chronic lateral ankle discomfort and persistent pain on the anterior drawer test; lack of confidence in the ankle during exercise; failure to respond to conservative treatment; and persistent ankle pain at least 1 year after the first ligament injury. Patients were excluded if they had any of the following: previous ankle surgery such as ankle arthroscopy or a modified Broström procedure, peroneal nerve disorder, or hindfoot varus.
There were 4 (26.6%) males and 11 (73.4%) females, with a mean age of 38.60 ± 14.36 years (range, 16-59 years) (Table 1). To compare the PTFLT and PTFLCSA between the CLAI group and a normal group, we also enrolled healthy individuals: 16 subjects (8 males and 8 females) with a mean age of 38.13 ± 17.51 years (range, 16-65 years).
Image analysis
PTFLT and PTFLCSA measurements were performed by one specialist, who was blinded to the ankle classification. We acquired the T1-weighted axial MR cut at the thickest view of the PTFL and measured the PTFLT and PTFLCSA on MR images using image analysis software (INFINITT PACS system, Seoul, South Korea) (Fig. 1A and B). The PTFLT was measured as the thickness between the anterior and posterior fibers of the PTFL. The PTFLCSA was measured as the cross-sectional area of the PTFL, which was markedly hypertrophied on T1-weighted MR images.
Statistical analysis
We compared the PTFLT and PTFLCSA between the CLAI and normal healthy groups using unpaired t tests. Data are presented as mean ± standard deviation. A receiver operating characteristic (ROC) curve analysis was performed to assess diagnostic performance. P values < .05 were considered statistically significant. We used the SPSS package (IBM/SPSS for Windows ver 22.0, Chicago, IL) for presentation of the ROC curve and calculation of the area under the curve (AUC).
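A minimal sketch of this workflow is below. Because the raw measurements are not published, the sketch simulates the two groups from the paper's reported means and SDs, and it uses Youden's J to pick the cutoff; the paper does not state which cutoff criterion was used in SPSS.

```python
# Sketch of the statistical workflow: unpaired t-test between groups,
# then a ROC curve with a Youden-optimal cutoff. Measurements are
# simulated from the reported summary statistics, not patient data.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)
# PTFLCSA (mm^2): healthy 41.06 ± 12.18 (n=16), CLAI 80.41 ± 19.14 (n=15)
healthy = rng.normal(41.06, 12.18, 16)
clai = rng.normal(80.41, 19.14, 15)

t, p = stats.ttest_ind(clai, healthy)
print(f"unpaired t-test: t = {t:.2f}, p = {p:.2e}")

labels = np.r_[np.zeros(len(healthy)), np.ones(len(clai))]
values = np.r_[healthy, clai]
fpr, tpr, thresholds = roc_curve(labels, values)
auc = roc_auc_score(labels, values)
best = thresholds[np.argmax(tpr - fpr)]  # Youden's J = sens + spec - 1
print(f"AUC = {auc:.2f}, Youden-optimal cutoff ≈ {best:.1f} mm^2")
```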
Results
Demographic data such as age and gender were similar between the two groups. The mean PTFLT was 3.43 ± 0.52 mm in the healthy group and 4.89 ± 0.80 mm in the CLAI group. The average PTFLCSA was 41.06 ± 12.18 mm² in the healthy group and 80.41 ± 19.14 mm² in the CLAI group (Table 1). CLAI patients had significantly higher PTFLT (P < .001) and PTFLCSA (P < .001) than the control subjects (Fig. 2). The most suitable cutoff point of the PTFLCSA was 61.15 mm², with 93.3% sensitivity, 100% specificity, and an AUC of 0.99 (95% CI, 0.96-1.00) (Table 3, Fig. 2).
Discussion
Lateral ankle sprains are among the most common sporting injuries. Even though most of these lateral ligament injuries respond well to non-operative treatment, the development of CLAI, characterized by repetitive ankle injuries and persistent symptoms, is common. [6,9,10,15] CLAI limits physical activity and leads to severe disability due to joint pain and osteoarthritis. [4,7,[16][17][18] According to previous studies, approximately 25% of CLAI patients continue to suffer from ankle disabilities in spite of successful management. [10] Multiple imaging modalities, such as ultrasound, computed tomography, stress radiography and MRI, are available, [19] but the exact diagnosis of CLAI remains difficult due to the lack of a highly sensitive, objective morphological parameter. This implies the presence of untreated and undetected injuries of the ligamentous complex. In this study, we found the PTFL to be one of the major causes of CLAI. We demonstrated an optimal cutoff value of the PTFLCSA of 61.15 mm², with 93.3% sensitivity and 100% specificity; the best cutoff value of the PTFLT was 4.19 mm, with 93.3% sensitivity and 93.7% specificity. The lateral collateral ligament complex is divided into the ATFL, lateral talocalcaneal ligament, CFL, and PTFL, and is known to provide ankle stability against inversion of the ankle joint. Chandnani et al reported a sensitivity of 50% and a specificity of 83% for assessment of the CFL, and a sensitivity of 50% and a specificity of 100% for assessment of the ATFL, using MR arthrography. [9] Cha et al found 60% sensitivity for ATFL injuries using MRI. However, few studies have evaluated the relationship between the PTFL and CLAI. [20] Zhu et al demonstrated that the PTFL plays an important role in maintaining ankle stability, and that serious injuries of the PTFL affect posterolateral ankle stability. [21] Liu et al reported that increased ligament thickness reflects morphologic changes that occur secondary to chronic ankle ligament injuries. [22] Therefore, the PTFL is important to joint stability. The PTFL runs horizontally from a prominent tubercle on the posterior aspect of the talus, immediately lateral to the groove for the flexor hallucis longus tendon, to the malleolar fossa of the fibular lateral malleolus. It is the strongest lateral ligament and plays an important role in ankle stability; it also acts to limit posterior talar displacement and is under greatest strain in ankle dorsiflexion. [11] The current study compared a healthy group with CLAI patients. Our data indicated a PTFLT of 3.43 mm in the healthy group and 4.89 mm in CLAI patients. However, we recognized some problems with measuring the PTFLT. All previous studies evaluated the PTFLT using a single measurement between the anterior and posterior fibers, yet ligament injury can appear on MRI as a curved or wavy contour, ligament discontinuity, elongation, contour irregularities, and variable signal intensity. [23] Thus, measurement bias can occur. This study assumed that the cross-sectional area of the PTFL may predict CLAI more exactly because the PTFLCSA is not influenced by this measurement bias: it measures the whole PTFL area, in contrast to the PTFLT.
In the current study, the PTFLCSA had 93.3% sensitivity, 100% specificity, and an AUC of 0.99 (95% CI, 0.96-1.00) for predicting CLAI, whereas the PTFLT had 93.3% sensitivity, 93.7% specificity, and an AUC of 0.97 (95% CI, 0.92-1.00). These results suggest that the PTFLCSA is a more valuable predictor of CLAI than the PTFLT. There are several important limitations of this study. First, the small sample size is a weakness. Second, the lateral ankle ligament complex comprises the CFL, ATFL, and PTFL, but this study focused only on the PTFL. Third, several other methods, such as the manual anterior drawer test, stress radiography, ultrasound examination and arthroscopy, have proved effective at identifying CLAI, [24][25][26][27][28][29][30][31][32][33] but we analyzed the PTFLCSA and PTFLT on MRI only.
Despite these limitations, this is the first study to document the association of the PTFLCSA with CLAI. The PTFLCSA is a simple, reliable measurement tool with high sensitivity for evaluating CLAI.
Conclusion
The PTFLCSA was a more reliable measurement parameter for CLAI than the PTFLT. We demonstrated that the best cutoff point of the PTFLCSA was 61.15 mm², with 93.3% sensitivity and 100% specificity. When assessing patients with CLAI, physicians should carefully evaluate the PTFLCSA rather than the PTFLT.
"year": 2023,
"sha1": "a4e272a56b783d88c248d8e397e62abc067ebca0",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1097/md.0000000000032827",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "746760b7b9e45f08c65761c93855e7f89fa10dbf",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
BOARD COMPOSITION AND AUDIT FEE: EVIDENCE FROM RUSSIA
In recent years the Russian Government has undertaken serious steps to improve corporate governance practices by introducing the Corporate Code of Governance (CCG) and strengthening the role of corporate boards in monitoring top management performance. This paper investigates whether these measures have stimulated positive changes by increasing the demand for higher quality audit. We test our hypotheses using 147 non-financial companies to examine whether board composition influences audit fees in the Russian capital market. Our findings support the demand-side perspective of audit services and suggest that audit fees are associated positively with the presence of an independent chairman and with higher proportions of independent directors and State representatives on the board.
Introduction
This study examines the empirical relationship between board composition and audit fees in the Russian capital market. Board composition is defined by the existence of an independent chairman on the board, the proportion of independent directors and the proportion of State representatives on the board. Recent studies in audit pricing research have examined the impact of various corporate governance mechanisms on corporate reporting, audit quality and the level of audit fees (Gul, 2006; Abbott et al., 2003; Tsui et al., 2001; Carcello et al., 2002; Gul and Tsui, 1998). These studies examine the relationship between audit fees and corporate governance based on the agency theory notion that the quality of reported accounting numbers is affected by the separation of ownership and control (Mitra et al., 2007). Agency theory views managers and owners as separate: managers are regarded as having incentive to act in their own interests and to misreport financial results for opportunistic reasons (Jensen and Meckling, 1976). Prior research on the association between corporate governance and audit fees has focused primarily on developed capital markets (e.g. USA, UK and Australia). Little research has been conducted in countries with emerging capital markets (Lifschutz et al., 2010), and no such studies have been conducted in the context of the Russian capital market.
The motivation for this study is twofold. First, numerous studies examine the relationship between audit fees and corporate governance characteristics in settings where companies have freedom to determine the composition of boards of directors and the State has limited power to appoint representatives to the board. In Russia, the composition of a board of directors is strictly regulated by legislation and listing rules, which set a minimum quota for independent directors on boards and prohibit CEO duality. The large proportion of State shareholding also allows the State to exercise a high degree of interference in board operations by including its representatives as outside directors. These factors create the unique corporate governance environment explored here.
Second, there have been limited studies exploring the relationship between corporate governance and audit fees in emerging economies where the disclosure of audit fees is not mandatory. This study examines the association between board composition and audit fees in the context of the Russian capital market.
Weak corporate governance is perceived as one of the reasons for recent corporate scandals (Bremer and Elias, 2007). Thus, it seems important to examine the association between the recent introduction of a corporate governance code in Russia and audit fees in that country. Prior studies have argued that audit fees are determined from either a supply-side or a demand-side perspective. Studies by Carcello et al. (2002) and Abbott et al. (2003) produced evidence consistent with the demand-side perspective: that is, that governance mechanisms requiring high-quality audits to reduce agency costs lead to higher audit fees being charged. Additionally, there is some evidence from the supply-side perspective that corporate governance mechanisms mitigate agency problems in financial reporting and reduce the risk of accounting misstatements or irregularities (e.g. Gul and Tsui, 1998; Tsui et al., 2001). In our study we use board composition to examine the relationship with audit fees from a demand-side perspective. We argue that a higher proportion of independent directors and the existence of an independent chairman on the board lead to higher demand for audit work, reflected in higher audit fees. We also examine whether the presence of representatives of the State on the board leads to an increase in perceived inherent risk.
The key findings are positive associations between audit fees and the presence of an independent chairman on the board, between audit fees and the proportion of independent directors, and between audit fees and the number of State representatives on the board. Our results reveal that having an independent chairman and a higher proportion of independent directors is associated with stronger corporate governance mechanisms; this requires additional assurance from auditors and is reflected in higher audit fees. Additionally, the results show that the presence of State representatives on the board lowers the level of corporate governance, increases the perceived inherent audit risk, and leads to higher audit fees.
The remainder of the study is organized as follows. In Section 2 we discuss the corporate governance environment in Russia, review the theoretical background and develop hypotheses. In Section 3 we describe the research design, sample selection and data collection. Section 4 tests the pricing model. Section 5 contains the summary and conclusion, including limitations and suggestions for further research.
Theory and hypotheses development
Two factors influence an auditor's fee structure (Bell et al., 2001): the risk characteristics of the client, and the extent of the audit work demanded by the client to obtain greater assurance about the presentation of information in the financial statements. These factors influence the extent of the audit work and the risk premium in the quoted fee (Mitra et al., 2007). An audit firm will make a fee-increasing adjustment in situations of high liability exposure (Simunic, 1980), mostly through a higher level of audit effort rather than a pure price premium. Bell et al. (2001) conclude that audit fees increase as an engagement partner's assessment of business risk increases. They observe that an increase in audit fees arises from an increase in planned audit hours, indicative of greater audit effort (Mitra et al., 2007). These prior studies indicate that audit fees will be higher from the demand-side perspective when the scope of audit work increases due to client demand. Additionally, the demand-side perspective suggests a positive association between corporate governance characteristics and audit fees (e.g. Goodwin-Stewart and Kent, 2006; Abbott et al., 2003; Carcello et al., 2002). The audit fee charged by the audit firm will be higher when firms with strong corporate governance structures demand additional assurance to preserve their reputation and avoid potential litigation (Abbott et al., 2003; Carcello et al., 2002).
Russian corporate governance environment
Russia is one of the largest emerging market economies, the eleventh largest economy in the world by nominal value, and a world superpower in terms of reserves of mineral and energy resources (Kokoshin, 2002).
Drastic economic and political reforms at the beginning of the 1990s put Russia on a path of radical change in all spheres of life. One of the aims of those changes was to transform Russian enterprises (which were all State-controlled) into independent participants in the market economy. The revival of privately owned enterprises in Russia started in 1990 with approval of Regulations of the Council of Ministers of the USSR (Nos 590 and 601) and the Federal Law on Enterprises and Entrepreneurial Activities, which gave a legal definition of companies and entrepreneurship.
The application of corporate governance practices in Russia is regulated by the Federal Law on Joint-Stock Companies (adopted in 1995) and the Corporate Code of Governance (CCG) (introduced in 2002). Originally, the CCG had no legally binding force and served only as recommendations. One of its positive outcomes was the introduction of board committees at the firm level; however, in most cases such committees were not established until late 2003 due to a lack of proper enforcement mechanisms (Peng et al., 2003). Additional steps to improve application of the corporate governance regime were taken in 2006, when audit committees became a mandatory requirement for listed companies (Russian Federal Service on Financial Markets, 2002).
One of the distinctive features of corporate governance in Russia since the privatization reforms of the 1990s has been insider control (Yakovlev, 2004). Lately, there has been growing attention to steps taken by the Russian Government to overcome it, including strengthening the role of the board in monitoring top management performance.
Prior to 2004, the traditional structure of a Russian board of directors included representatives of the main shareholder and top executive management (Filatov et al., 2005). To change this, the Federal Law set requirements for the minimum number of directors on the board, depending on the number of voting shares [1]. All directors are elected for a one-year term at a regular shareholder meeting. The board chairman is elected by the directors and approved at a shareholder meeting by a simple majority. One of the distinctive characteristics of Russian boards of directors is the comparatively severe restrictions on managers assuming board memberships (Iwasaki, 2008): a CEO cannot serve as board chairman, and senior management cannot occupy more than one-fourth of the seats on the board.
The exact board composition at leading companies depends on the size of the company, its strategic significance to the Russian government, and the size of the State's holding of the share capital (Filatov et al., 2005). Despite the general belief that Russian corporate boards are heavily insider-dominated, nearly half of board directors come from outside the company (Iwasaki, 2008). However, not all outside directors are independent: apart from independent directors, State representatives are also included on boards as outside directors. Additionally, in Russian practice independent directors are defined broadly; in particular, they include minority shareholders' nominees (Appendix A).
Russian law does not require companies to have independent directors. However, the CCG mandates that boards of directors of joint-stock companies include at least three independent directors, accounting for no less than one-fourth of the board membership (Appendix A). Vernikov (2007) argued that this leads to a situation in which most Russian companies appoint independent directors just to satisfy listing and legal requirements or to increase the borrowing capacity of the company: without 'independent directors' they would be unable to borrow from capital markets at reasonable cost or to offer shares successfully to investors outside Russia.
Other outside directors on the board are representatives of the State. The majority of middle- and large-scale enterprises in Russia are privatized enterprises, and many still have some State ownership and State representatives on their boards (Iwasaki, 2008). This is despite the OECD recommendation that representatives of the State not be members of the board of directors, in order to avoid conflicts of interest (OECD, 2002). Prior studies have shown that the State can have a direct or indirect ownership interest in an enterprise (e.g. Filatov et al., 2005). With direct State ownership, it is common for large enterprises to have officers from the Presidential Administration or ministers and their deputies on the board of directors. For example, the board of directors of Inter RAO Unified Energy System of Russia, the largest company in the power generation and supply industry, is dominated by State representatives; in 2010, outside directors on its board included the deputy chairman of the Russian government, I.I. Sechin; the Minister of Energy of Russia, S.I. Shmatko; and the Head of the Federal Agency for State Property Management, Y.A. Petrov. At companies the State owns indirectly through other enterprises, the board usually includes representatives of the parent company as well as public officers, giving the perception of an increased proportion of State members as directors.
Audit in Russia
Historically, the auditing function in the former Soviet Union was performed by the revision system, a state-financed system of financial control put in place to ensure proper use of state resources and to prevent the misappropriation of assets at state-owned entities (Enthoven et al., 1998; McGee and Preobragenskaya, 2005). The development of Western-style auditing started only in the late 1980s, spurred on by an increase in foreign investment in the Russian economy and a growing demand for auditing in the developing private sector. However, the increase in local audit firms was not supported by the development of a regulatory base. This promoted ambiguity regarding the scope of audit services and the roles and objectives of auditing (Samsonova, 2007), and led to an increase in fraud and corruption. At this time, big audit firms entered the Russian market and brought with them Western audit practices. The promotion of international audit rules was further supported by the expansion of supranational institutions (e.g. World Bank, WTO, OECD), leading to the adoption of Western practices by numerous local audit firms (Samsonova, 2009).
Currently, auditing in Russia is governed by the Federal Law on Auditing (2008). To a large degree this aligns Russian audit practice with International Standards on Auditing and reinforces the mandatory audit of annual financial reports of entities of particular public interest, including those whose securities are traded on a stock exchange. According to the Russian Department of Finance (2007), at the end of 2006 there were more than 7,000 licensed audit firms in Russia, 40% of them in Moscow. Big 4 firms controlled 31% of the market, with the rest serviced by local Russian companies.
Financial reporting in Russia is governed by the Federal Law on Accounting (1996), which mandates that companies prepare their annual reports in accordance with Russian Accounting Standards. However, the Russian Federal Service for Financial Markets imposes an additional requirement on listed companies: to disclose their financial information according to either IFRS or US GAAP. Meanwhile, the disclosure of audit-related information is regulated only to a certain extent. For example, disclosure of audit fees is not mandatory for Russian companies, and neither IFRS nor US GAAP prescribes the disclosure of audit fees directly; thus, the additional requirement of compliance with IFRS/US GAAP does not require companies to disclose audit fee information. In other countries such disclosure is regulated by local versions of accounting standards (e.g. AASB 101 in Australia) or by federal law (e.g. the Sarbanes-Oxley Act in the USA). In Russia, however, such disclosure is voluntary. Thus, the primary interest of this study is to investigate the relationship between board composition and audit fees in the context of voluntary disclosure of audit-related information.
Board composition
Board composition in this study is proxied by the existence of an independent chairman on the board, the proportion of independent directors on the board, and the proportion of State representatives on the board. Under agency theory, the board of directors is an important and feasible element of effective corporate control. A critical function of the board of directors is to monitor managers' performance (Fama, 1980; Fama and Jensen, 1983). Monitoring safeguards the investments of shareholders and protects the interests of various stakeholders against management's self-interest.
Numerous studies have investigated the characteristics that enable boards to increase their efficiency and firm performance (e.g. Baysinger and Butler, 1985; Rechner and Dalton, 1991; Finkelstein and D'Aveni, 1994; Rediker and Seth, 1995). The results of these studies are mixed, but most list independent directors and dual leadership as important factors.
The role of independent directors on the board is to provide the objectivity necessary to properly ratify and monitor the decisions of the firm's managers. The importance of effective board composition has been discussed extensively in the literature. Fama and Jensen (1983) found that independent directors are more efficient in facilitating the governance functions of the board. Beasley (1996) showed that the proportion of independent directors on the board is significantly and negatively associated with financial statement fraud. O'Sullivan (2000) investigated the relationship between audit fees and board independence for a sample of UK listed companies and found that a greater proportion of independent directors is associated with more expensive audits.
The Cadbury Committee Report (1992) and the OECD Guidelines on Corporate Governance (2004) emphasized the role of non-executive directors, who should bring a broader view to the company's activities and greater independence and objectivity to board decisions, and the importance of an independent chairman. The role of the chairman is to monitor and evaluate the performance of the CEO and the executive directors on the board. However, this process might be impeded when the same person occupies the positions of chairman and CEO.
The importance of having an independent chairman and the threats of CEO duality were also discussed by Jensen (1993), who argued that corporate officers who report to the CEO cannot be effective in monitoring and evaluating CEO performance. Furthermore, Pi and Timme (1993) found that firms with separated functions outperform firms where CEO duality exists.
The Russian Corporate Law "On Joint Stock Companies" follows best corporate governance practice by prohibiting a CEO from holding the position of chairman. However, it is common for boards of directors in Russia to have an executive director as chairman, leading to chairman duality. The presence of independent directors on the board increases the demand for quality audit services from the external auditor (Lifschutz et al., 2010), so as to give additional assurance and confidence to shareholders. Hence, this results in higher audit fees as the scope of audit work increases. Based on the preceding discussion, we propose the following hypotheses:
H1: Audit fees are associated positively with the proportion of independent directors on the board.
H2: The presence of an independent chairman on the board has a positive association with audit fees.
Prior studies have used the presence of State representatives on the board as one of the proxies for a firm's political connections (Gul, 2006). Firms in countries with more State involvement in the economy are perceived to speed the recognition of good news and to slow the recognition of bad news in earnings, relative to firms in countries with less political involvement in the economy (Bushman and Piotroski, 2006). Politically connected firms are believed to be associated with higher inherent risk, resulting in an increased scope of audit work and higher audit fees; prior studies have found a positive association between audit fees and political connections (Gul, 2006). This leads us to the following:
H3: Audit fees are associated positively with the proportion of State representatives on the board.
Research design
3.1. Sample
Our sample is comprised of the top 147 non-finance companies listed on the Russian Trading System (RTS) stock exchange that disclose information about their audit fees. As Table 1 shows, our sample includes companies from a wide cross-section of industries. Following Simunic (1980), we use the natural log of audit fees to avoid problems of heteroscedasticity.
Experimental variables
To test hypotheses H1, H2 and H3, we examine three variables that reflect the hypothesized relationships between chairman independence, board member independence, the presence of State representatives as board members, and audit fees. The variables of interest are the proportion of independent directors on the board (INDBD), the proportion of representatives of the State (STATEBD), and the dummy variable INDC, which captures the independence of the chairman of the board:
1. Chairman independence (INDC): measures the effect of the presence of an independent chairman on audit fees. INDC is set equal to 1 if the chairman of the board is an independent director, 0 otherwise.
2. Proportion of independent directors on the board (INDBD): reflects the effect of independent directors on audit fees. INDBD is defined as the proportion of independent directors to the total number of members on the board.
3. Proportion of State representatives on the board (STATEBD): captures the effect of the presence and proportion of State representatives on the board. It is measured as the proportion of State representatives to the total number of board members.
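As a concrete illustration, the three experimental variables can be computed from a board roster as follows; the roster and its field names are hypothetical, not drawn from the study's data.

```python
# Deriving INDC, INDBD and STATEBD from a (hypothetical) board roster.
# Each director record flags independence, State representation and
# whether that director chairs the board.
board = [
    {"name": "A", "independent": True,  "state_rep": False, "chair": True},
    {"name": "B", "independent": True,  "state_rep": False, "chair": False},
    {"name": "C", "independent": False, "state_rep": True,  "chair": False},
    {"name": "D", "independent": False, "state_rep": False, "chair": False},
    {"name": "E", "independent": False, "state_rep": True,  "chair": False},
]

n = len(board)
indc = int(any(d["chair"] and d["independent"] for d in board))  # 0/1 dummy
indbd = sum(d["independent"] for d in board) / n   # share of independents
statebd = sum(d["state_rep"] for d in board) / n   # share of State reps

print(f"INDC = {indc}, INDBD = {indbd:.2f}, STATEBD = {statebd:.2f}")
# -> INDC = 1, INDBD = 0.40, STATEBD = 0.40
```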
Control variables
The other variables in the model are size, the proportion of shares owned by directors, Big 4 auditor, current ratio, market-to-book value of equity, return on equity, leverage, the proportion of foreign subsidiaries, and loss incurrence. These variables were identified from prior literature, with preference for recent research [2] (as shown in Table 2). The descriptive statistics for both dependent and independent variables are shown in Table 3. As shown in Table 5, the model is highly significant at p < 0.001. Its explanatory power, reflected by an adjusted R² of 0.49, is consistent with prior studies (Bliss et al., 2007; Wang et al., 2009). To examine potential multicollinearity in the regression model, we regressed all the explanatory variables on LnAF. The results indicate that the variance inflation factor (VIF) is below 1.6 and tolerance levels are above 0.6 for all explanatory variables, suggesting that multicollinearity is not likely to pose a serious problem in interpreting the regression results.
The coefficients for ROE, LnTA, INDC, INDBD, AUDITOR, DEBT and STATEBD are positive and significant at the 0.01, 0.05, or 0.1 levels. The coefficient for CRE is not significant, although it has a positive sign. The coefficient for LOSS is negative and significant at the 0.1 level. The coefficient for FOR is not significant, although it has a negative sign, consistent with our hypothesis. Our experimental variables follow the predicted behaviour: the coefficients for INDC, INDBD, and STATEBD are significant and positive.
The results support hypotheses H1, H2 and H3. A higher level of corporate governance within a firm leads to demand for high quality audit assurance and results in higher audit fees. These results are consistent with studies by Carcello et al. (2002) and Abbott et al. (2003), among others. Also, the results show that a high proportion of State representatives on the board is associated with high audit fees, supporting the hypothesis that the increased inherent risk of politically connected firms results in higher audit fees (Gul, 2006; Bushman and Piotroski, 2006).
Summary and conclusions
We find that audit fees are associated positively with the presence of an independent chairman and with higher proportions of independent directors and State representatives on the board. These results are consistent with the demand-side perspective of audit services, in which good corporate governance practices create demand for a higher level of audit assurance and result in higher audit fees.
The results support the view that the reforms of the Russian government have had a positive effect on the application of the corporate governance regime. Russia, as a former communist State, is often perceived as linked to high levels of corruption and immature corporate governance structures. The introduction of the CCG was a big step toward aligning Russia with effective corporate governance practices in the international community. Many politicians argue that regulations to prevent CEO duality and increase the number of independent directors on the board have had a superficial effect rather than triggering radical changes at the corporate level. However, our results suggest that these measures have increased the demand for higher quality audit and have thus stimulated positive changes in the quality of financial information disclosure.
This study raises an important question about the role of State representatives on boards of directors. Do they actually safeguard State property and ensure transparency, as their role implies? Or do they promote corruption and fraud? The positive association between the proportion of State representatives and audit fees suggests some avenues for further investigation: it shows a positive association with inherent audit risk and leads indirectly to a conclusion about the negative effects of State representatives on corporate management.
The results have implications for regulatory bodies in Russia, showing areas that require further improvement. It seems critical to ensure the appointment of independent directors who have appropriate knowledge and experience, are capable of adhering to best practices of information transparency and disclosure, and can ensure a high level of corporate governance at State-owned enterprises.
The results should be considered in light of several limitations. First, the sample is limited to 2008 data from publicly listed non-finance companies that disclose their audit fee data voluntarily. Second, the focus of this study is on board composition variables, which further limits the generalizability of the results. Future research could consider other corporate governance variables that may affect the perceived inherent riskiness of Russian companies.
[Table 2 source-column fragment recovered from extraction: Simunic et al. (1995), Gul and Tsui (1998), Gul (1999), Tsui et al. (2001).]
3.3. Regression model
This study uses the traditional audit fee model adapted from prior research by Simunic (1980) and Craswell et al. (1995):
LnAF = b0 + b1 INDC + b2 INDBD + b3 STATEBD + b4 AUDITOR + b5 LnTA + b6 CRE + b7 ROE + b8 DEBT + b9 FOR + b10 LOSS + e
where:
LnAF = natural log of audit fees
INDC = '1' if the chairman of the board is an independent director, '0' otherwise
INDBD = proportion of independent directors on the board
STATEBD = proportion of representatives of the State on the board
AUDITOR = '1' if the firm is audited by a Big 4 audit firm, '0' otherwise
LnTA = natural log of the client's total assets at year end
CRE = current ratio
ROE = EBIT divided by total equity
DEBT = total liabilities divided by total equity
FOR = proportion of foreign subsidiaries to total subsidiaries
LOSS = '1' if a loss was incurred during the year, '0' otherwise
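A sketch of how this specification could be estimated, together with the VIF diagnostic reported above, is given below; the data frame is simulated, so the coefficients are illustrative only and do not reproduce the study's estimates.

```python
# Sketch of estimating the audit fee model with statsmodels OLS and
# checking multicollinearity via variance inflation factors (VIF).
# `df` holds one row per sample company; all values are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 147  # sample size used in the study
df = pd.DataFrame({
    "INDC": rng.integers(0, 2, n),
    "INDBD": rng.uniform(0, 0.6, n),
    "STATEBD": rng.uniform(0, 0.5, n),
    "AUDITOR": rng.integers(0, 2, n),
    "LnTA": rng.normal(14, 2, n),
    "CRE": rng.uniform(0.5, 3, n),
    "ROE": rng.normal(0.1, 0.2, n),
    "DEBT": rng.uniform(0.2, 2.5, n),
    "FOR": rng.uniform(0, 1, n),
    "LOSS": rng.integers(0, 2, n),
})
# Simulated response; in the study LnAF is the log of disclosed fees.
df["LnAF"] = 10 + 0.5 * df["LnTA"] + 0.8 * df["INDC"] + rng.normal(0, 1, n)

X = sm.add_constant(df.drop(columns="LnAF"))
model = sm.OLS(df["LnAF"], X).fit()
print(model.summary())

# VIF for each regressor (the paper reports all VIFs below 1.6).
for i, col in enumerate(X.columns):
    if col != "const":
        print(f"VIF({col}) = {variance_inflation_factor(X.values, i):.2f}")
```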
[Table 1. Industry representation of companies in the sample]
We use an OLS regression model to estimate the predictive importance of the independent variables by comparing beta weights. The audit fee model is evaluated using 2008 fiscal-year data. The year 2008 is chosen because it reflects a relatively stable application of the CCG introduced in 2002 and of the 2006 changes in the listing rules of Russian stock exchanges related to the composition of boards of directors (RTS, 2006; MICEX, 2006). The data are obtained from the Osiris database and by hand collection from publicly available Russian annual reports and financial statements.
[Table 2. Control variables]
[Table 4. Pearson correlation coefficients]
Table 5 reports the results of the multivariate linear regression analysis of the audit fee models.
[Table 5. Multiple regression results]
"year": 2011,
"sha1": "78c3cfe0a850756a1452d5df8a2cca5d60669985",
"oa_license": "CCBYNC",
"oa_url": "https://virtusinterpress.org/spip.php?action=telecharger&arg=5142&hash=3072afc82ff4f36c13f467594999e30bd24318fd",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "78c3cfe0a850756a1452d5df8a2cca5d60669985",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
SARS Incubation Period
No SARS transmission was shown among contacted passengers seated in close proximity to the index patient; these results suggest that in-flight transmission of SARS is not common. These results are consistent with other studies that assessed the risk for in-flight transmission of SARS (5,6). The results also suggest that SARS-CoV is not efficiently transmitted, as reflected in its basic reproduction number R0 (range 2-4) (7). The SARS-infected patient on the indicated flights was in his first week of illness; infectivity is greatest in the second week (8). Therefore, the likelihood of SARS transmission on the indicated flights was not high. These results are further supported by the fact that all contacts were asymptomatic 13 days after their last contact with the SARS patient. No information was available on healthcare contacts. Although we did not observe any SARS transmission, we cannot rule out the possibility that it may have occurred. We had no contact information on 56% of the passengers on the indicated flights and, therefore, had to exclude them from the investigation. Obtaining complete contact information from the remaining passengers was difficult, which severely impeded the investigation. Similarly, we were unable to contact crew members and had to exclude them. Recent studies have documented SARS transmission to passengers seated more than four rows away from an index patient (5,9); thus, studying passenger proximity to the patient may not be sufficient. Because of these limitations, our final sample size was small and probably biased. Since we did not observe any evidence of in-flight transmission of SARS, we were unable to assess the importance of seat assignment proximity as a risk factor.
The study shows that the roles of public health authorities and the aviation industry should be to "harmonise the protection of public health without the need to avoid unnecessary disruption of trade and travel" in public health emergencies such as global SARS transmission (10). We recommend strengthening the collaboration between national health authorities and the airline industry. Furthermore, the International Air Transport Association should establish procedures to ensure that complete contact information is available for all passengers and that rapid notification can be accomplished in case of potential exposure to infectious diseases.
Estimating SARS Incubation Period
To the Editor: In a recent article, Meltzer described a simulation method to estimate the incubation period for patients infected with SARS who had multiple contact dates (1). In brief, he assumed a uniform distribution over all possible incubation periods derived from these contact dates and, for each patient, randomly selected one incubation period, thereby obtaining a distribution of the incubation period over all 19 patients. The process was repeated 10,000 times to obtain an overall frequency distribution of the incubation period.
Instead of using this cumbersome iterative approach, the same results can be obtained by a simple method. When a uniform distribution is assumed for all possible incubation periods, the expected frequency for a day x as the incubation period is either 0 or 1/(total number of possible days). Taking the first patient (Canada 1) in (1) as an example, the expected frequency for 1, 2, 3, …, 18 days is 0, …. The total expected frequency for each day is the sum of the expected frequencies for all patients for that day. Therefore, the frequency distribution of the incubation period is given by dividing each total expected frequency by the sum of the total expected frequencies (× 100%) and is 7.6, 22.1, 14.2, 9.0, 6.5, 11.5, 4.6, 3.7, 3.7, 6.4, 3.7, 1.7, 1.1, 1.1, 0.7, 0.7, 0.7, 0.7. This is identical to the frequency distribution shown in Figure 1 of the paper by Meltzer (1).
Tze-wai Wong* and Wilson Tam* *The Chinese University of Hong Kong, Hong Kong
In Reply: Drs. Wong and Tam (1) are correct in stating that their method of calculating mean frequencies of possible incubation periods for patients with severe acute respiratory syndrome (SARS) is simpler than the method that I presented (2). However, their method cannot replicate the confidence intervals shown in Figure 1 in my article; their suggested methodology can only replicate Figure 2 in my article, which shows the cumulative distribution of the mean frequencies of individual incubation periods. The comparative complexity of my method provides data that are essential for making public health decisions. For example, public health officials need to know incubation periods to determine appropriate periods of quarantine and isolation and how long to conduct intensive (and expensive) surveillance after the last clinical case has been reported. To reduce costs and to enhance public support, public health officials may keep quarantine and isolation periods to a minimum. They also need to know the risk for failure of such interventions attributable to patients with relatively long incubation periods. Both Figure 2 in my article and Drs. Wong and Tam's data show that approximately 95% of the mean incubation periods will be <12 days (i.e., 5% will incubate for 13 to 18 days). By summing the 95th percentiles for days 13 through 18 from my Figure 1, it can be seen that there is a probability that <30% of patients will have incubation periods >12 days (the actual probability of any given percentage incubating for >12 days can be easily calculated by using the spreadsheet that is an appendix to my article). Public health officials need to understand the degree of variability associated with any data used to make public health policies. Sole reliance on the mean incubation periods (or mean frequencies) will hide more than is shown, which increases the probability of failed public health interventions.
Martin I. Meltzer*
*Centers for Disease Control and Prevention, Atlanta, Georgia, USA | 2014-10-01T00:00:00.000Z | 2004-08-01T00:00:00.000 | {
"year": 2004,
"sha1": "ea47f2de38544b4a949c0ebd4dbefef7d7adc5db",
"oa_license": "CCBY",
"oa_url": "https://wwwnc.cdc.gov/eid/article/10/8/pdfs/04-0284.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3d5d95980b324b4f03ed9f38ece3c1072c5fd291",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
33609898 | pes2o/s2orc | v3-fos-license | Increased Extent of and Risk Factors for Pandemic (H1N1) 2009 and Seasonal Influenza among Children, Israel
During the pandemic (H1N1) 2009 outbreak in Israel, incidence rates among children were 2× higher than those of the previous 4 influenza seasons; hospitalization rates were 5× higher. Children hospitalized for pandemic (H1N1) 2009 were older and had more underlying chronic diseases than those hospitalized for seasonal influenza.
We compared the extent and pattern of pandemic (H1N1) 2009 with the previous 4 influenza seasons (2005–2009) among Israel's child population, both for community-based surveillance and for pediatric hospitalizations. We also sought a possible association between the pandemic waves and school closures. The study was approved by the Institutional Review Board Committee of Hadassah Medical Center.
The Study
Israel's Center for Disease Control seasonal influenza surveillance system operated throughout our 5-year study. The system is based primarily on 1) anonymous patient visits for influenza-like illness (ILI) to Maccabi Community Clinics, Israel's second largest health maintenance organization, insuring ≈1 of every 4 Israelis; and 2) nasopharyngeal swabs from sample ILI patients at designated sentinel clinics countrywide. ILI was defined as fever (>37.8°C) with >1 of the following: cough, coryza, sore throat, or myalgia. Swab samples were tested for influenza viruses at the Health Ministry's Central Virology Laboratory (1) by using multiplex real-time reverse transcription PCR (RT-PCR) (TaqMan chemistry quantitative RT-PCR) (2). ILI rates constituted 3 escalating waves of infection, all at times atypical for seasonal influenza (Figure 1). The first peaked in early August (week 32); Israel's schools close in July/August, but children stay together in summer frameworks during July. Israel identified its first pediatric pandemic (H1N1) 2009 cases in June 2009 (week 24) and recorded local transmission the following week (Figure 2). During weeks 28–43, the weekly percentage of positive influenza samples among children was 40%–60%. We compared hospitalization of children with laboratory-confirmed influenza infection during the pandemic with the previous 4 influenza seasons in the pediatric departments of Hadassah's 2 hospitals in Jerusalem. These departments provide primary medical care for ≈250,000 children (1 of every 10 children in Israel), as well as tertiary care for chronic diseases. We performed our study at these hospitals because respiratory specimens were routinely taken year-round for laboratory confirmation from all children with suspected influenza or respiratory virus infection during the 5-year study. Direct immunofluorescence assay was used at Hadassah in previous years for detection of influenza and other respiratory viruses, and multiplex real-time PCR (TaqMan chemistry quantitative RT-PCR) was used for detection of influenza viruses during the pandemic.
Findings from pandemic (H1N1) 2009 were retrospectively compared with those from previous influenza seasons. In previous, shorter influenza A/B seasons, fewer children were hospitalized and none were treated with antiviral agents; statistically significant differences included age, underlying chronic diseases, underlying chronic lung disease, and neonatal fever as the initial symptom (Table 2). No significant differences were found regarding history of prematurity (<33 weeks), weight percentile, pediatric intensive care unit admission, evidence of pneumonia, oxygen saturation <90%, and leukopenia. In previous seasons, 6 nosocomial influenza infections and 2 co-infections with respiratory syncytial virus were reported; none were seen for pandemic (H1N1) 2009.
Conclusions
Children, mainly those 5–10 years of age, were affected by pandemic (H1N1) 2009 markedly more than by seasonal influenza, similar to results reported from the United States, Spain, and Switzerland (3–6). During the 1918 Spanish influenza pandemic, the highest incidence rates were among older children (7). In our study, hospitalized children infected with pandemic (H1N1) 2009 were older, and findings were compatible with reports from several other countries (8,9), but unlike those from Argentina, where 60% were infants (9). The age of children who died in Israel also underlines the impact on older children, as reported elsewhere (10,11). Although pandemic (H1N1) 2009 virus may cause severe, life-threatening disease in previously healthy children of all ages (12), the children we studied had significantly more underlying chronic diseases than did children hospitalized for seasonal influenza (13). We, like others (3), found no increase in pneumonia or pediatric intensive care unit admissions caused by pandemic (H1N1) 2009. However, this finding could be because antiviral therapy was administered during the pandemic but not in previous years; 98/127 (77.2%) of children hospitalized for pandemic (H1N1) 2009 received oseltamivir (Table 2).
The nationwide pandemic (H1N1) 2009 influenza mortality rate in Israel is similar to that reported for the United Kingdom (14) but cannot be compared with previous years because laboratory data are lacking and there was no requirement to report the death of children >12 months of age. Our study is limited in that it was retrospective. During the pandemic, parents were advised not to attend the clinic for mild disease, although anxiety may have increased visits. There may have been differences in diagnoses of ILI among different Maccabi physicians. The 2 hospitals studied, which represented 10% of hospitalized children, were selected not as nationally representative but because of the feasibility of viral diagnosis since 2005. Influenza detection during the pandemic in patients hospitalized at Hadassah was based on PCR; immunofluorescent antibody assay was used for previous seasons.
Awareness that pandemic influenza may have unique clinical characteristics, risk factors, and increased incidence, mainly among children 5–18 years of age, is advocated. Because school opening in late summer 2009 triggered the wave of pandemic (H1N1) 2009 influenza (15), closing schools, or delaying their opening until vaccine is available, should be considered among mitigation strategies in future influenza pandemics, especially for more virulent viruses. | 2014-10-01T00:00:00.000Z | 2011-09-01T00:00:00.000 | {
"year": 2011,
"sha1": "fa73e21335138787074b31c1910c7170a3a452a8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3201/eid1709.102022",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fa73e21335138787074b31c1910c7170a3a452a8",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54051277 | pes2o/s2orc | v3-fos-license | On Kilian’s Randomization of Multilinear Map Encodings
Indistinguishability obfuscation constructions based on matrix branching programs generally proceed in two steps: first apply Kilian's randomization of the matrix product computation, and then encode the matrices using a multilinear map scheme. In this paper we observe that by applying Kilian's randomization after encoding, the complexity of the best attacks is significantly increased for CLT13 multilinear maps. This implies that much smaller parameters can be used, which improves the efficiency of the constructions by several orders of magnitude. As an application, we describe the first concrete implementation of multi-party non-interactive Diffie-Hellman key exchange secure against existing attacks. Key exchange was originally the most straightforward application of multilinear maps; however, it was quickly broken for the three known families of multilinear maps (GGH13, CLT13 and GGH15). Here we describe the first implementation of key exchange that is resistant against known attacks, based on CLT13 multilinear maps. For N = 4 users and a medium level of security, our implementation requires 18 GB of public parameters, and a few minutes for the derivation of a shared key.
Introduction
Multilinear maps and indistinguishability obfuscation. Since the breakthrough construction of Garg, Gentry and Halevi [GGH13a], cryptographic multilinear maps have shown amazingly powerful applications in cryptography, most notably the first plausible construction of program obfuscation [GGH+13b]. A multilinear map scheme encodes plaintext values $\{a_i\}$ into encodings $\{[a_i]\}$ such that the $a_i$'s are hidden; only a restricted class of polynomials can then be evaluated over these encoded values; eventually one can determine whether the evaluation is zero or not, using the zero-testing procedure of the multilinear map scheme.
The goal of program obfuscation is to hide secrets in arbitrary running programs. The first plausible construction of general program obfuscation was described by Garg, Gentry, Halevi, Raykova, Sahai and Waters (GGHRSW) in [GGH+13b], based on multilinear maps; the construction has opened many new research directions, because the notion of indistinguishability obfuscation (iO) has tremendous applications in cryptography [SW14]. Since the publication of the GGHRSW construction, many variants of GGHRSW have been described [MSW14, AGIS14, PST14, BGK+14, BMSZ16]. Currently there are essentially only three known candidate constructions of multilinear maps:
• GGH13. The first candidate construction of multilinear maps is based on ideal lattices [GGH13a]. Its security relies on the difficulty of the NTRU problem and the principal ideal problem (PIP) in certain number fields.
• CLT13. An analogous construction but over the integers was described in [CLT13], based on the DGHV fully homomorphic encryption scheme [DGHV10].
• GGH15. Gentry, Gorbunov and Halevi described another multilinear map scheme [GGH15], based on the Learning With Errors (LWE) problem, with encodings over matrices, and defined with respect to a directed acyclic graph.
However, the security of multilinear maps is still poorly understood. The most important attacks against multilinear maps are "zeroizing attacks", which consist in using linear algebra to recover the secrets of the scheme from encodings of zero. At Eurocrypt 2015, Cheon et al. described a devastating zeroizing attack against CLT13; when CLT13 is used to implement non-interactive multipartite Diffie-Hellman key exchange, the attack completely breaks the protocol [CHL+15]. The attack was also extended to encoding variants, where encodings of zero are not directly available [CGH+15]. The key-exchange protocol based on GGH13 was also broken by a zeroizing attack in [HJ16]. Finally, the Diffie-Hellman key exchange protocol under GGH15 was broken in [CLLT16], using an extension of the Cheon et al. zeroizing attack.
However, not all attacks against the above multilinear map schemes can be applied to indistinguishability obfuscation. While multipartite key exchange based on any of the three families of multilinear map schemes is broken, iO is not necessarily broken by zeroizing attacks, because of the particular structure that iO constructions induce on the computation of multilinear map encoded values. Namely, in iO constructions, no low-level encodings of zero are available, and the obfuscation of a matrix branching program can only produce zeroes at the last level, and moreover only when evaluated in a very specific way. However, some partial attacks against iO constructions have already been described. In [CGH+15] it was shown how to break the GGHRSW branching-program obfuscator when instantiated using CLT13, when the branching program to be obfuscated has a very simple structure (input partition). For GGH13, Miles, Sahai and Zhandry introduced "annihilation attacks" [MSZ16] that can break many obfuscation schemes based on GGH13; however, the attack does not apply to the GGHRSW construction, because in GGHRSW the matrix program is embedded in a larger matrix with random entries (diagonal padding). In [CGH17], the authors showed how to break iO constructions under GGH13, using a variant of the input partitioning attack; the attack applies against the GGHRSW construction with diagonal padding. A new tensoring technique was introduced in [CLLT17] to break iO constructions for branching programs without the input partition structure. Finally, an attack against iO over GGH15 was described in [CVW18], based on computing the rank of a certain matrix.
Obfuscating matrix branching programs. The GGHRSW construction and its variants consist of a "core component" for obfuscating matrix branching programs, and a bootstrapping procedure to obfuscate arbitrary programs based on the core component, using fully homomorphic encryption and proofs of correct computation. The core component relies on multilinear maps for evaluating a product of encoded matrices corresponding to a branching program, without revealing the underlying value of those matrices.
More precisely, the core component of the GGHRSW construction and its variants proceeds in two steps: first apply Kilian's randomization of the matrix product computation, and then encode the matrices using a multilinear map scheme. In this paper, our main observation is that for CLT13 multilinear maps, the complexity of the best attacks is significantly increased when Kilian's randomization is also applied after encoding. We note that applying Kilian's randomization "on the encoding side" was already used in GGH15 multilinear maps as an additional safeguard [GGH15, §5.1]. For CLT13 this implies that one can use much smaller parameters (noise and encoding size), which improves the efficiency of the constructions by several orders of magnitude.
More precisely, a matrix branching program BP of length n is evaluated on input $x \in \{0,1\}^\ell$ by computing:

$$C(x) = b_0 \times \prod_{i=1}^{n} B_{i,\,x_{\mathrm{inp}(i)}} \times b_{n+1} \qquad (1)$$

where $\{B_{i,b}\}_{1 \le i \le n,\, b \in \{0,1\}}$ are square matrices and $b_0$ and $b_{n+1}$ are bookend vectors; then BP(x) = 0 if C(x) = 0, and BP(x) = 1 otherwise. The function inp(i) indicates which bit of x is read at step i of the matrix product computation. To obfuscate a matrix branching program, the GGHRSW construction proceeds in two steps. First one randomizes the matrices $B_{i,b}$ as in Kilian's protocol [Kil88]: given random invertible matrices $\{R_i\}_{i=0}^{n}$, one sets

$$\tilde{B}_{i,b} = R_{i-1}\, B_{i,b}\, R_i^{-1}, \qquad \tilde{b}_0 = b_0\, R_0^{-1}, \qquad \tilde{b}_{n+1} = R_n\, b_{n+1}.$$

The randomized matrix branching program can then be evaluated by computing $\tilde{C}(x) = \tilde{b}_0 \times \prod_{i=1}^{n} \tilde{B}_{i,\,x_{\mathrm{inp}(i)}} \times \tilde{b}_{n+1}$. Namely, the successive randomization matrices $R_i$ cancel each other; therefore the matrix product computation evaluates to the same result as in (1).
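To make the cancellation concrete, the following Python sketch (using sympy; this is an illustration with arbitrary toy sizes, not the implementation from [CP19]) checks that the randomized program evaluates to the same value as the original one:

    from sympy import randMatrix

    n, w = 3, 2                                   # toy program length and width
    B  = [randMatrix(w, w, min=0, max=9) for _ in range(n)]   # B_{i,x_inp(i)} for a fixed input x
    b0 = randMatrix(1, w, min=0, max=9)           # bookend row vector b_0
    b1 = randMatrix(w, 1, min=0, max=9)           # bookend column vector b_{n+1}

    R = [randMatrix(w, w, min=-9, max=9) for _ in range(n + 1)]
    assert all(r.det() != 0 for r in R)           # invertible over Q with high probability

    Bt = [R[i] * B[i] * R[i + 1].inv() for i in range(n)]     # ~B_i = R_{i-1} B_i R_i^{-1}
    b0t, b1t = b0 * R[0].inv(), R[n] * b1

    C  = b0 * B[0] * B[1] * B[2] * b1             # C(x) as in Eq. (1)
    Ct = b0t * Bt[0] * Bt[1] * Bt[2] * b1t        # randomized evaluation
    assert C == Ct                                # the R_i cancel telescopically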
The second step in the GGHRSW construction is to encode the entries of the matrices $\tilde{B}_{i,b}$ using a multilinear map scheme. Every entry of a given matrix is encoded separately; the bookend vectors $\tilde{b}_0$ and $\tilde{b}_{n+1}$ are also encoded similarly. Therefore one defines the matrices and vectors $\hat{B}_{i,b}$, $\hat{b}_0$ and $\hat{b}_{n+1}$ of encodings. The matrix branching program from (1) can then be evaluated over the encoded matrices:

$$\hat{C}(x) = \hat{b}_0 \times \prod_{i=1}^{n} \hat{B}_{i,\,x_{\mathrm{inp}(i)}} \times \hat{b}_{n+1} \qquad (2)$$

Eventually one obtains an encoded $\hat{C}(x)$ over the universe set S = {1, ..., n+2}, and one can use the zero-testing procedure of the multilinear map scheme to check if C(x) = 0, thereby learning the output of the branching program BP(x), without revealing the values of the matrices $B_{i,b}$.
(In)efficiency of iO. However, even with some efficiency improvements (as in [AGIS14]), the main issue is that indistinguishability obfuscation is currently not feasible to implement in practice. The first obstacle is that when converting the input circuit to a matrix branching program using Barrington's theorem [Bar86], one induces an enormous cost in performance, as the length of the branching program grows exponentially with the depth of the circuit being evaluated. The second obstacle is that the multilinear map noise and parameters grow with the degree of the polynomial being computed over encoded elements, which corresponds to the length of the matrix branching program.
In this paper, we consider both issues. For the second one, we show that for CLT13 multilinear maps, when applying Kilian's randomization "on the encoding side", one can significantly reduce the noise and encoding size while keeping the same level of security; this leads to major improvements of performance. For the first issue, we craft a sequence of matrix products that only performs a multipartite DH key-exchange, rather than generating one from a circuit through Barrington's theorem, so that its degree becomes much more manageable. We can then describe the first concrete implementation of multipartite DH keyexchange based on multilinear maps that is resistant against existing attacks.
Kilian's randomization on the encoding side. As already observed in [GGH15], Kilian's randomization can also be applied over the encoding space, as an additional safeguard. Namely, starting from the encoded matrices $\hat{B}_{i,b}$ used to compute $\hat{C}(x)$ as in Equation (2), one can again choose n+1 random invertible matrices $\{\bar{R}_i\}_{i=0}^{n}$ over the encoding space and then randomize the matrices $\hat{B}_{i,b}$ with:

$$\hat{B}'_{i,b} = \bar{R}_{i-1}\, \hat{B}_{i,b}\, \bar{R}_i^{-1}, \qquad \hat{b}'_0 = \hat{b}_0\, \bar{R}_0^{-1}, \qquad \hat{b}'_{n+1} = \bar{R}_n\, \hat{b}_{n+1}.$$

Since the matrices $\bar{R}_i$ cancel each other in the matrix product computation, the evaluation proceeds exactly as in (2), with $\hat{C}(x) = \hat{b}'_0 \times \prod_{i=1}^{n} \hat{B}'_{i,\,x_{\mathrm{inp}(i)}} \times \hat{b}'_{n+1}$, and therefore the same zero-testing procedure can be applied to $\hat{C}(x)$. Note that the $\bar{R}_i$ matrices are applied on the encoding side, that is, on the encoded matrices $\hat{B}_{i,b}$, instead of the plaintext matrices $B_{i,b}$ as previously; obviously both randomizations (before and after encoding) can be applied independently.
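The same telescoping trick works modulo a composite encoding modulus. Below is a hedged Python sketch (again with sympy, whose Matrix.inv_mod computes a matrix inverse modulo an integer; the toy modulus and dimensions are placeholders, and the matrices Bh merely stand in for already-encoded matrices):

    from math import gcd
    from sympy import randMatrix

    x0 = 1009 * 1013          # toy modulus; a real CLT13 x0 has millions of bits
    n, w = 3, 2
    Bh = [randMatrix(w, w, min=0, max=x0 - 1) for _ in range(n)]
    b0 = randMatrix(1, w, min=0, max=x0 - 1)
    b1 = randMatrix(w, 1, min=0, max=x0 - 1)

    def rand_inv(w, x0):      # random matrix invertible modulo x0
        while True:
            R = randMatrix(w, w, min=0, max=x0 - 1)
            if gcd(int(R.det()) % x0, x0) == 1:
                return R

    R  = [rand_inv(w, x0) for _ in range(n + 1)]
    Bp = [(R[i] * Bh[i] * R[i + 1].inv_mod(x0)) % x0 for i in range(n)]
    b0p, b1p = (b0 * R[0].inv_mod(x0)) % x0, (R[n] * b1) % x0

    lhs = (b0 * Bh[0] * Bh[1] * Bh[2] * b1) % x0
    rhs = (b0p * Bp[0] * Bp[1] * Bp[2] * b1p) % x0
    assert lhs == rhs         # the R_i cancel modulo x0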
In this paper we focus on Kilian's randomization on the encoding side in the context of the CLT13 multilinear maps. In CLT13 the encoding space is the set of integers modulo $x_0$, where $x_0 = \prod_{j=1}^{n} p_j$; therefore the matrices $\{\bar{R}_i\}_{i=0}^{n}$ are random invertible matrices modulo $x_0$. We show that the complexity of the best attacks against CLT13 is significantly increased thanks to Kilian's randomization of the encodings. One can therefore use much smaller parameters (noise size and encoding size), which can improve the efficiency of a construction by several orders of magnitude.
More precisely, the security of CLT13 is based on the hardness of the multi-prime Approximate-GCD problem. Given $x_0 = \prod_{i=1}^{n} p_i$ for random primes $p_i$, and polynomially many integers $c_j$ such that

$$c_j \bmod p_i = r_{ij} \quad \text{for all } 1 \le i \le n, \qquad (3)$$

for small integers $r_{ij}$, the goal is to recover the secret primes $p_i$. The multi-prime Approximate-GCD problem is an extension of the single-prime problem, with a single prime p to be recovered from encodings $c_j = q_j \cdot p + r_j$ and $x_0 = q_0 \cdot p$, for small integers $r_j$. The two main approaches for solving the Approximate-GCD problem are the orthogonal lattice attacks and the GCD attacks.
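For illustration, the following Python snippet (a toy sketch assuming sympy; the parameter sizes are placeholders, real ones being orders of magnitude larger) generates integers $c_j$ of exactly the shape (3):

    import random
    from sympy import randprime
    from sympy.ntheory.modular import crt

    eta, rho, n, t = 32, 4, 3, 5      # toy sizes, chosen only for easy inspection
    p  = [randprime(2**(eta - 1), 2**eta) for _ in range(n)]
    x0 = 1
    for pi in p:
        x0 *= pi

    def sample():                     # c with c mod p_i = r_i and |r_i| < 2^rho, as in (3)
        r = [random.randrange(-2**rho + 1, 2**rho) for _ in range(n)]
        return int(crt(p, r)[0])

    c = [sample() for _ in range(t)]
    for cj in c:                      # sanity check: every residue is rho-bit small
        assert all(min(cj % pi, pi - cj % pi) < 2**rho for pi in p)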
First contribution: solving the multi-prime Approximate-GCD problem. For the single-prime Approximate-GCD problem, the classical orthogonal lattice attack has complexity $2^{\Omega(\gamma/\eta^2)}$, where γ is the size of $x_0$ and η is the size of the prime p; see [DGHV10, §5.2]. However, extending the attack to the multi-prime case as in CLT13 is actually not straightforward; in particular, we argue that the approach described in [CLT13] is incomplete and does not recover the primes $p_i$, except for small values of n; we note that solving the multi-prime case was actually considered as an open problem in [GGM16]. Our first contribution is to solve this open problem with an algorithm that proceeds in two steps. The first step is the classical orthogonal lattice attack; it recovers a basis of the lattice generated by the vectors $r_i = c \bmod p_i$, where $c = (c_1, \ldots, c_t)$. However, the vectors $r_i$ cannot be recovered directly; namely, by applying LLL or BKZ one recovers a basis of moderately short vectors, and not necessarily the $r_i$'s, which are the shortest vectors in the lattice. Therefore the approach described in [CLT13] does not work, except in low dimension. In the second step of our algorithm, using the lattice basis obtained from the first step, we show that by computing the eigenvalues of a well-chosen matrix, we can recover the primes $p_i$, as in the Cheon et al. attack [CHL+15]. The asymptotic complexity of the full attack is the same as in the single-prime case; using $\gamma = \eta \cdot n$ for the size of $x_0$ as previously, where n is the number of primes $p_i$, the complexity is $2^{\Omega(n/\eta)}$. Therefore, as in [CLT13], one must take $n = \omega(\eta \log \lambda)$ to prevent the lattice attack, where λ is the security parameter.
Second contribution: extension to the Vector Approximate-GCD problem. When working with matrix branching programs and Kilian's randomization on the encoding side, we must actually consider a vector variant of the Approximate-GCD problem, in which we have access to randomized vectors of encodings instead of scalar values as in (3). Therefore, our second contribution is to extend the orthogonal lattice attack to the Vector Approximate-GCD problem, and to show that the extended attack has complexity $2^{\Omega(m \cdot n/\eta)}$, for vectors of dimension m. This implies that the new condition on the number n of primes $p_i$ in CLT13 becomes:

$$n = \omega\left(\frac{\eta}{m} \log \lambda\right)$$

Compared to the previous condition, the number of primes n in CLT13 can therefore be divided by a factor m, for the same level of security, where m is the matrix dimension. This implies that the encoding size γ can also be divided by a factor m, which provides a significant improvement in efficiency.
Third contribution: GCD attacks against the Vector Approximate-GCD problem. The naive GCD attack against the Approximate-GCD problem with $c_1 = q_1 \cdot p + r_1$ and $x_0 = q_0 \cdot p$ consists in computing $\gcd(c_1 - r_1, x_0)$ for all possible $r_1$, and has complexity $O(2^\rho)$, where ρ is the bitsize of $r_1$. At Eurocrypt 2012, Chen and Nguyen [CN12] described an improved attack based on multipoint polynomial evaluation, with complexity $\tilde{O}(2^{\rho/2})$. The Chen-Nguyen attack was later extended by Lee and Seo at Crypto 2014 [LS14], when the $c_i$'s are multiplicatively masked by a random secret z modulo $x_0$, as is the case in the CLT13 scheme; their attack has the same complexity $\tilde{O}(2^{\rho/2})$. As previously, when working with matrix branching programs and Kilian's randomization on the encoding side, we must consider the vector variant of the Approximate-GCD problem. Our third contribution is therefore to extend the Lee-Seo attack to this vector variant; we obtain a complexity $\tilde{O}(2^{m \cdot \rho/2})$ instead of $\tilde{O}(2^{\rho/2})$, where m is the vector dimension. Assuming that this is the best possible attack, one can therefore divide the noise size ρ by a factor m. Similarly, when Kilian's randomization is applied to an m × m matrix, we show that the attack complexity becomes $\tilde{O}(2^{m^2 \cdot \rho/2})$, and therefore the noise size ρ used to encode those matrices in CLT13 can be divided by $m^2$. Combined with the previous improvement, this improves the efficiency of CLT13-based constructions by several orders of magnitude.
Fourth contribution: non-interactive DH key exchange from multilinear maps. In principle the most straightforward application of multilinear maps is non-interactive multipartite Diffie-Hellman (DH) key exchange with N users, a natural generalization of the DH protocol for 3 users based on the bilinear pairing. This was originally described for GGH13, CLT13 and GGH15, but was quickly broken for the three families of multilinear maps; in particular, key exchange based on CLT13 was broken by the Cheon et al. attack [CHL+15]. The main question is therefore: can we construct a practical N-way non-interactive key-exchange protocol from candidate multilinear map constructions?
In this paper we provide a first step in that direction. Namely, our fourth contribution is to describe the first implementation of N-way DH key exchange resistant against known attacks. Our construction is based on CLT13 multilinear maps and is secure against the Cheon et al. attack and its variants. Our construction contains many ingredients from the GGHRSW and other similar constructions. Namely, we express the session key as the result of a matrix product computation, and we embed the matrices into larger randomized matrices before encoding, together with some special "bookend" components at the start and end of the computation, as in [GGH+13b]. We use the "multiplicative bundling" technique from [GGH+13b] to prevent the adversary from combining the matrices in arbitrary ways. As explained previously, we use Kilian's randomization on the encoding side. With no additional cost, we can also use the straddling set systems from [BGK+14] to further constrain the attacker, and Kilian's randomization at the plaintext level. Finally, we use k repetitions in order to prevent the Cheon et al. attack against CLT13, when considering input partitioning attacks as in [CGH+15], and its extension with the tensoring attack [CLLT17]. We argue that the extended Cheon et al. attack has complexity $\Omega(m^{2k-1})$ in our scheme, where m is the matrix dimension and k the number of repetitions.
For N = 4 users and a medium (62-bit) level of security, our implementation requires 18 GB of public parameters, and a few minutes for the derivation of a shared key. We note that without Kilian's randomization of encodings our construction would be completely impractical, as it would require more than 100 TB of public parameters.
Related work. In [MZ18], Ma and Zhandry described a multilinear map scheme built on top of CLT13 that is provably resistant against zeroizing attack, and which can be used to directly construct a non-interactive DH key-exchange. More precisely, the authors develop a new weak multilinear map model for CLT13 to capture all known attack strategies against CLT13. The authors then construct a new multilinear map scheme on top of CLT13 that is secure in this model. The construction is based on multiplying matrices of CLT13 encodings as in iO schemes. To prevent zeroizing attacks, the same input is read multiple times, as in iO constructions. The input consistency is ensured by a clever use of "enforcing" matrices based on some permutation invariant property. Finally, the authors construct a non-interactive DH key-exchange scheme based on their new multilinear map scheme. However, the authors do not provide implementation results nor concrete parameters (except for multilinear map degree and number of public encodings), so it is difficult to assess the practicality of their construction. The authors still provide the following parameters for a 4-party DH key exchange with 80 bits of security; see Table 1. We provide our corresponding parameters for comparison (see more details in Section 7).
Scheme | MMap degree | Public encodings | Public-key size
Boneh et al. […]

The main advantage of the Ma-Zhandry construction is that it has a proof of security in a weak multilinear map model, whereas our construction has heuristic security only. It seems from Table 1 that our construction would require a smaller multilinear map degree for the same number of public encodings. We stress however that providing concrete parameters is actually a complex optimization problem (see Section 7), so Table 1 should be handled with care. In any case, the Ma-Zhandry construction can certainly benefit from our analysis, since Kilian's randomization on the encoding side can also be applied "for free" in their construction.
Source code. We provide the source code of our construction, and the source code of the attacks, in [CP19].
Preliminaries
We denote by $[a]_n$ or $a \bmod n$ the unique integer $x \in (-\frac{n}{2}, \frac{n}{2}]$ which is congruent to a modulo n. The set {1, 2, ..., n} is denoted by [n].
The CLT13 multilinear map
We briefly recall the (asymmetric) CLT13 multilinear map scheme; we refer to [CLT13] for a full description. For large secret primes $p_i$, let $x_0 = \prod_{i=1}^{n} p_i$, where n is the number of primes. We denote by η the bitsize of the $p_i$'s, and by γ the bitsize of $x_0$; therefore $\gamma \simeq n \cdot \eta$. The plaintext space of CLT13 is $\mathbb{Z}_{g_1} \times \mathbb{Z}_{g_2} \times \cdots \times \mathbb{Z}_{g_n}$ for secret prime integers $g_i$ of α bits.
The CLT13 scheme is based on CRT representations. We denote by $\mathrm{CRT}(a_1, \ldots, a_n)$ or $\mathrm{CRT}(a_i)_i$ the number $a \in \mathbb{Z}_{x_0}$ such that $a \equiv a_i \pmod{p_i}$ for all $i \in [n]$. An encoding of a vector $m = (m_1, \ldots, m_n)$ at level set S = {j} is an integer $c \in \mathbb{Z}_{x_0}$ such that $c = [\mathrm{CRT}(m_1 + g_1 r_1, \ldots, m_n + g_n r_n)/z_j]_{x_0}$ for integers $r_i$ of size ρ bits, where $z_j$ is a secret mask in $\mathbb{Z}_{x_0}$ uniformly chosen during the parameter generation procedure of the multilinear map. This gives:

$$c \equiv \frac{m_i + g_i r_i}{z_j} \pmod{p_i} \quad \text{for all } 1 \le i \le n. \qquad (4)$$

To support ℓ-level multilinearity, one uses ℓ distinct $z_j$'s. It is clear that encodings from the same level can be added via addition modulo $x_0$. Similarly, multiplication between encodings can be done by modular multiplication in $\mathbb{Z}_{x_0}$, but the encodings must be of disjoint level sets; the resulting encoding's level set is then the union of the input level sets. At the top level set S = {1, ..., ℓ}, one can zero-test an encoding c by multiplication with the zero-testing parameter

$$p_{zt} = \sum_{i=1}^{n} h_i \cdot \left( z^\star \cdot g_i^{-1} \bmod p_i \right) \cdot \frac{x_0}{p_i} \bmod x_0,$$

where $z^\star = \prod_{j=1}^{\ell} z_j$ and the $h_i$ are small integers; this gives

$$c \cdot p_{zt} \equiv \sum_{i=1}^{n} h_i \cdot \left( r_i + m_i \cdot (g_i^{-1} \bmod p_i) \right) \cdot \frac{x_0}{p_i} \pmod{x_0}, \qquad (5)$$

and therefore if $m_i = 0$ for all $1 \le i \le n$ then the result will be small compared to $x_0$. From the previous equation, the high-order bits of $c \cdot p_{zt} \bmod x_0$ only depend on the $m_i$'s; therefore from the zero-testing procedure one can extract a value that only depends on the $m_i$'s.
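The following Python sketch is a toy re-implementation of the encode and zero-test procedures above (assuming sympy), with tiny parameters, a single level, and none of CLT13's additional safeguards; the smallness threshold in is_zero is a crude placeholder:

    import random
    from sympy import randprime
    from sympy.ntheory.modular import crt

    n, eta, alpha, rho = 3, 48, 8, 4
    p  = [randprime(2**(eta - 1), 2**eta) for _ in range(n)]
    g  = [randprime(2**(alpha - 1), 2**alpha) for _ in range(n)]
    x0 = 1
    for pi in p:
        x0 *= pi
    z = random.randrange(1, x0)                   # single level, so z* = z
    h = [random.randrange(1, 2**alpha) for _ in range(n)]

    def encode(m):                                # top-level encoding of (m_1..m_n)
        r = [random.randrange(-2**rho + 1, 2**rho) for _ in range(n)]
        a = [m[i] + g[i] * r[i] for i in range(n)]
        return int(crt(p, a)[0]) * pow(z, -1, x0) % x0

    pzt = sum(h[i] * (z * pow(g[i], -1, p[i]) % p[i]) * (x0 // p[i])
              for i in range(n)) % x0             # zero-testing parameter

    def is_zero(c):                               # is c * pzt small w.r.t. x0?
        w = c * pzt % x0
        w = min(w, x0 - w)                        # centered representative
        return w < x0 >> (eta // 2)               # rough smallness threshold

    assert is_zero(encode([0] * n))
    assert not is_zero(encode([1] * n))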
The Approximate-GCD Problem and its Variant
The security of the CLT13 multilinear map scheme is based on the Approximate-GCD problem. For a specific η-bit prime integer p, we use the following distribution over γ-bit integers:

$$\mathcal{D}_{\gamma,\rho}(p) = \left\{ q \cdot p + r \;:\; q \leftarrow \mathbb{Z} \cap [0, 2^\gamma/p),\; r \leftarrow \mathbb{Z} \cap (-2^\rho, 2^\rho) \right\}.$$

We also consider a noise-free $x_0 = q_0 \cdot p$, where $q_0$ is a random (γ−η)-bit prime integer (alternatively, the product of γ/η − 1 primes of η bits each).
Definition 1 (Approximate-GCD problem with noise-free $x_0$). For a random η-bit prime integer p, given $x_0 = q_0 \cdot p$ and polynomially many samples from $\mathcal{D}_{\gamma,\rho}(p)$, output p.

We also consider the following variant, in which instead of being given elements from $\mathcal{D}_{\gamma,\rho}(p)$, we get vectors of elements multiplied by a secret random invertible matrix K modulo $x_0$.
Definition 2 (Vector Approximate-GCD problem with noise-free $x_0$). For a random η-bit prime integer p, generate $x_0 = q_0 \cdot p$ and a random invertible m × m matrix K modulo $x_0$. Given $x_0$ and polynomially many samples $\tilde{v} = v \cdot K \bmod x_0$, where each component of $v \in \mathbb{Z}^m$ is sampled from $\mathcal{D}_{\gamma,\rho}(p)$, output p.

The vector variant of the Approximate-GCD problem cannot be easier than the original problem, since any algorithm solving the vector variant can be used to solve the Approximate-GCD problem, simply by generating vectors $\tilde{v} = v \cdot K \pmod{x_0}$ for some random matrix K. However, the vector variant could be harder to solve, so that smaller parameters could be used when dealing with the Vector Approximate-GCD problem. We show in the next sections that the generalizations of the attacks to the vector variant indeed have higher complexity.
In the context of the CLT13 scheme, one actually works with multiple primes $p_i$. Therefore we consider the multi-prime variant of the Approximate-GCD problem.
Definition 3 (Multi-prime Approximate-GCD problem). For n random η-bit prime integers $p_i$, let $x_0 = \prod_{i=1}^{n} p_i$. Given $x_0$ and polynomially many integers $c_j$ such that $c_j \bmod p_i = r_{ij}$ for ρ-bit integers $r_{ij}$, output the primes $p_i$.

Finally, we consider the vector variant of the multi-prime Approximate-GCD problem.
Definition 4 (Vector multi-prime Approximate-GCD problem). For n random η-bit prime integers $p_i$, let $x_0 = \prod_{i=1}^{n} p_i$. Let K be a random invertible m × m matrix modulo $x_0$. Given $x_0$ and polynomially many vectors $\tilde{v} = v \cdot K \bmod x_0$, where each component of v is small modulo every $p_i$, output the primes $p_i$.

The two main approaches for solving the Approximate-GCD problem are the orthogonal lattice attacks and the GCD attacks. We consider the orthogonal lattice attacks in Section 3, and the GCD attacks in Section 4.
Lattice attack against the Approximate-GCD Problem
We first recall the lattice attack against the single-prime Approximate-GCD problem [DGHV10, §B.1], based on the Nguyen-Stern orthogonal lattice attack [NS01]. As mentioned in the introduction, extending the attack to the multi-prime case is actually not straightforward; in particular, we argue that the approach described in [CLT13] is incomplete and does not recover the primes $p_i$, except for small values of n. Therefore, we describe a new algorithm for solving the multi-prime Approximate-GCD problem, using a variant of the Cheon et al. attack against CLT13. We then extend the algorithm to the vector variant of the Approximate-GCD problem. Finally, we run our attacks against both the multi-prime Approximate-GCD problem and the vector variant, in order to derive concrete parameters for our construction. We provide the source code of our attacks in [CP19].
The orthogonal lattice
We first recall the definition of the orthogonal lattice, following [NS97]. Let L be a lattice in $\mathbb{Z}^m$. The orthogonal lattice $L^\perp$ is defined as the set of elements in $\mathbb{Z}^m$ which are orthogonal to all the lattice points of L, for the usual dot product. We define the lattice $\bar{L} = (L^\perp)^\perp$; it is the intersection of $\mathbb{Z}^m$ with the Q-vector space generated by L; we have $L \subseteq \bar{L}$, and the determinant of $\bar{L}$ divides the determinant of L. Moreover, $\dim(L) + \dim(L^\perp) = m$ and $\det(L^\perp) = \det(\bar{L})$.
From Minkowski's bound, we expect that a reduced basis of a "random" lattice L has short vectors of norm $\approx (\det L)^{1/\dim L}$. For a "random" lattice L, we also expect that $\det(\bar{L}) \approx \det(L)$, and therefore $\det(L^\perp) \approx \det(L)$. Moreover, for a lattice L generated by a set of d "random" vectors $b_i \in \mathbb{Z}^m$, from the Hadamard inequality we expect $\det L \approx \prod_{i=1}^{d} \|b_i\|$. In that case, we therefore expect the short vectors of $L^\perp$ to have norm $\approx (\prod_{i=1}^{d} \|b_i\|)^{1/(m-d)}$.
The classical orthogonal lattice attack against the single-prime Approximate-GCD problem

In this section we recall the lattice attack against the Approximate-GCD problem, based on the Nguyen-Stern orthogonal lattice attack [NS01]; see also the analysis in [DGHV10, §B.1]. We consider a set of t integers $x_i = p \cdot q_i + r_i$ and $x_0 = p \cdot q_0$, for $r_i \in (-2^\rho, 2^\rho) \cap \mathbb{Z}$. We consider the lattice L of vectors u that are orthogonal to x modulo $x_0$, where $x = (x_1, \ldots, x_t)$:

$$L = \{ u \in \mathbb{Z}^t \;:\; u \cdot x \equiv 0 \pmod{x_0} \}.$$

The lattice L has full rank t and determinant $x_0$. Therefore, applying lattice reduction should yield a reduced basis $(u_1, \ldots, u_t)$ with vectors of length

$$\|u_i\| \simeq 2^{\iota t} \cdot (\det L)^{1/t} = 2^{\iota t + \gamma/t}, \qquad (6)$$

where γ is the size of $x_0$, for some constant ι > 0 depending on the lattice reduction algorithm, where $2^{\iota t}$ is the Hermite factor. Now given a vector $u \in L$, we have $u \cdot x \equiv 0 \pmod{x_0}$, which implies $u \cdot r \equiv 0 \pmod{p}$, where $r = (r_1, \ldots, r_t)$. The main observation is that if u is short enough, the equality will hold over Z. More precisely, if $|u \cdot r| < p$, we get $u \cdot r = 0$ in Z. From (6), this happens under the condition:

$$\iota t + \frac{\gamma}{t} < \eta - \rho. \qquad (7)$$

In that case, the vectors $(u_1, \ldots, u_{t-1})$ from the previous lattice reduction step should be orthogonal to the vector r. One can therefore recover ±r by computing the rank-1 lattice orthogonal to those vectors. From r one can recover p by computing $p = \gcd(x_0, x_1 - r_1)$.
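The attack can be exercised with toy parameters. The following Python sketch assumes the fpylll library for LLL reduction and sympy for the kernel computation (neither is part of the implementation in [CP19]); the explicit lattice basis assumes $\gcd(x_1, x_0) = 1$:

    import random
    from math import gcd
    from fpylll import IntegerMatrix, LLL
    from sympy import Matrix, lcm, randprime

    eta, rho, t = 200, 10, 8
    p  = randprime(2**(eta - 1), 2**eta)
    x0 = randprime(2**(eta - 1), 2**eta) * p          # noise-free x0 = q0 * p
    r  = [random.randrange(-2**rho + 1, 2**rho) for _ in range(t)]
    x  = [(random.randrange(x0 // p) * p + ri) % x0 for ri in r]

    # Basis of L = {u in Z^t : <u, x> = 0 mod x0}, assuming gcd(x[0], x0) = 1.
    inv = pow(x[0], -1, x0)
    B = IntegerMatrix(t, t)
    B[0, 0] = x0
    for i in range(1, t):
        B[i, 0] = (-x[i] * inv) % x0
        B[i, i] = 1
    LLL.reduction(B)

    # Heuristically, the first t-1 reduced vectors are orthogonal to r over Z;
    # their rational kernel is spanned by +-r, and p then follows by a gcd.
    U = Matrix([[B[i, j] for j in range(t)] for i in range(t - 1)])
    k = U.nullspace()[0]
    den = lcm([e.q for e in k])                       # clear denominators
    k = [int(e * den) for e in k]
    g = 0
    for e in k:
        g = gcd(g, abs(e))
    k = [e // g for e in k]                           # primitive vector +-r
    assert p in (gcd(x[0] - k[0], x0), gcd(x[0] + k[0], x0))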
Lattice attack against multi-prime Approximate GCD
We consider the setting of CLT13: we are given a modulus $x_0 = \prod_{i=1}^{n} p_i$ and a set of integers $x_j \in \mathbb{Z}_{x_0}$ such that $x_j \bmod p_i = r_{ij}$ for $r_{ij} \in (-2^\rho, 2^\rho) \cap \mathbb{Z}$, and the goal is to recover the secret primes $p_i$.
First step: orthogonal lattice attack. As previously, we consider the integer vector x formed by the first t integers $x_j$, and we consider the lattice L of vectors u that are orthogonal to x modulo $x_0$:

$$L = \{ u \in \mathbb{Z}^t \;:\; u \cdot x \equiv 0 \pmod{x_0} \}.$$

Note that the lattice L is of full rank t since it contains $x_0 \mathbb{Z}^t$. For $1 \le i \le n$, let $r_i = x \bmod p_i$. For any $u \in \mathbb{Z}^t$, if $u \cdot r_i = 0$ in Z for all $1 \le i \le n$, then $u \cdot x \equiv 0 \pmod{x_0}$. Therefore, denoting by $L_r$ the lattice generated by the vectors $r_i$, the lattice L contains the sublattice $L_r^\perp$ of the vectors orthogonal in Z to the n vectors $r_i$. Assuming that the n vectors $r_i$ are linearly independent, we have $\dim L_r^\perp = t - n$, and we expect a reduced basis of $L_r^\perp$ to have vectors of norm $\approx (\prod_{i=1}^{n} \|r_i\|)^{1/(t-n)} \approx 2^{\rho \cdot n/(t-n)}$. Given a vector $u \in L$, we have $u \cdot x \equiv 0 \pmod{x_0}$, which implies $u \cdot r_i \equiv 0 \pmod{p_i}$ for all $1 \le i \le n$. As previously, if u is short enough, the equalities will hold over Z. More precisely, if $|u \cdot r_i| < p_i$ for all $1 \le i \le n$, we get $u \cdot r_i = 0$ in Z for all i; therefore we must have $u \in L_r^\perp$ under the condition $\|u\| < (\min p_i)/(\max \|r_i\|) \approx 2^{\eta - \rho}$. Hence, when applying lattice reduction to the lattice L, we expect to recover the vectors of the sublattice $L_r^\perp$ if there is a gap of at least $2^{\iota \cdot t}$ between the short vectors in $L_r^\perp$ and the other vectors in $L \setminus L_r^\perp$, where $2^{\iota \cdot t}$ is the Hermite factor. Since the vectors in $L \setminus L_r^\perp$ must have norm at least approximately $2^{\eta - \rho}$, this gives the condition:

$$\frac{\rho \cdot n}{t - n} + \iota \cdot t < \eta - \rho. \qquad (8)$$

In that case, applying lattice reduction to L should yield a reduced basis $(u_1, \ldots, u_t)$ whose first t − n vectors belong to the sublattice $L_r^\perp$. By computing the rank-n lattice orthogonal to those vectors, one recovers a basis $B = (b_1, \ldots, b_n)$ of the lattice $\bar{L}_r = (L_r^\perp)^\perp$, where $L_r$ is the lattice generated by the n vectors $r_i$. However, this does not necessarily reveal the original vectors $r_i$: even by applying LLL or BKZ to the basis B, we do not necessarily recover the short vectors $r_i$, except possibly in low dimension; therefore the approach described in [CLT13] only works when n is small.
However, the main observation is that since each vector $b_j$ of the basis B is a linear combination of the vectors $r_i$, it can play the same role as a zero-tested value in the CLT13 scheme. More precisely, since the vectors $b_1, \ldots, b_n$ form a basis of $\bar{L}_r$, we can write for all $1 \le j \le n$:

$$b_j = \sum_{i=1}^{n} \lambda_{ji}\, r_i$$

for unknown coefficients $\lambda_{ji} \in \mathbb{Q}$. The above equation is analogous to Equation (5) on the zero-tested value $c \cdot p_{zt}$, which is a linear combination of the $r_i$ over Z when all $m_i$'s are zero. Therefore, we can apply a variant of the Cheon et al. attack to recover the primes $p_i$, by computing the eigenvalues of a well-chosen matrix. Since we have n vectors $b_j$ instead of a single $p_{zt}$ value, we only need to work with equations of degree 2 in the $x_j$'s, instead of degree 3 as in [CHL+15].
Second step: algebraic attack. The second step of the attack is similar to the Cheon et al. attack. Recall that we receive as input $x_0 = \prod_{i=1}^{n} p_i$ and a set of integers $x_j \in \mathbb{Z}_{x_0}$ such that $x_j \bmod p_i = r_{ij}$ for $r_{ij} \in (-2^\rho, 2^\rho) \cap \mathbb{Z}$. Since we must work with an equation of degree 2 in the inputs, we consider an additional integer $y \in \mathbb{Z}_{x_0}$ with $y \bmod p_i = s_i$, where $s_i \in (-2^\rho, 2^\rho) \cap \mathbb{Z}$ for all $1 \le i \le n$.
We define the column vector $x = (x_1, \ldots, x_n)^T$. Instead of running the orthogonal lattice attack with x, we run the orthogonal lattice attack from the previous step with the column vector z of dimension t = 2n defined as follows:

$$z = \begin{pmatrix} x \\ y \cdot x \bmod x_0 \end{pmatrix}.$$

Letting $r_i = x \bmod p_i$, this gives the column vectors, for $1 \le i \le n$:

$$z \bmod p_i = \begin{pmatrix} r_i \\ s_i \cdot r_i \end{pmatrix}.$$

We denote by Z the 2n × n matrix of column vectors $z \bmod p_i$:

$$Z = \begin{pmatrix} R \\ R \cdot U \end{pmatrix},$$

where R is the n × n matrix of column vectors $r_i$, and $U := \mathrm{diag}(s_1, \ldots, s_n)$. By applying the orthogonal lattice attack of the first step to the known vector z, we obtain a basis of the lattice intersection of $\mathbb{Z}^{2n}$ with the Q-vector space generated by the n vectors $z \bmod p_i$, which corresponds to the columns of the matrix Z. Therefore we obtain two matrices $W_0$ and $W_1$ such that:

$$W_0 = R \cdot A, \qquad W_1 = R \cdot U \cdot A,$$

for some unknown matrix $A \in \mathbb{Q}^{n \times n}$. Therefore, as in the Cheon et al. attack, we compute the matrix:

$$W = W_1 \cdot W_0^{-1} = R \cdot U \cdot R^{-1},$$

and by computing the eigenvalues of W, one recovers the components $s_i$ of the diagonal matrix U, from which we recover the $p_i$'s by taking gcd's. We provide the source code of the attack in [CP19].
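A minimal numerical check of this step follows (Python with sympy; here R, U and A are generated directly rather than obtained from the lattice step, so this only illustrates the linear algebra behind the eigenvalue recovery):

    from sympy import Matrix, randMatrix

    n = 4
    s = [3, 7, 11, 19]                      # stand-ins for the residues s_i = y mod p_i
    R = randMatrix(n, n, min=-9, max=9)     # unknown matrix of column vectors r_i
    A = randMatrix(n, n, min=-9, max=9)     # unknown change of basis from the lattice step
    assert R.det() != 0 and A.det() != 0

    U  = Matrix.diag(*s)
    W0 = R * A                              # what the lattice step hands us
    W1 = R * U * A
    W  = W1 * W0.inv()                      # = R U R^{-1}, similar to diag(s_1..s_n)
    assert sorted(W.eigenvals()) == sorted(s)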
Asymptotic complexity. As previously, we derive a heuristic lower bound for the complexity of the attack. The attack requires a lattice dimension t = 2n, and moreover the vectors $r_i$ have norm $\approx 2^{2\rho}$ instead of $2^\rho$; therefore condition (8) gives $4\rho + 2\iota n < \eta$, which implies the condition $\iota < \frac{\eta}{2n}$. Achieving a Hermite factor of $2^{\iota t}$ heuristically requires $2^{\Omega(1/\iota)}$ time, by using BKZ reduction with block-size $\beta = \omega(1/\iota)$ [HPS11]. Therefore, the orthogonal lattice attack has time complexity at least $2^{\Omega(n/\eta)}$. Note that with $\gamma = \eta \cdot n$, we get the same time complexity lower bound $2^{\Omega(\gamma/\eta^2)}$ as for the single-prime Approximate-GCD problem. Finally, as shown in [CLT13], to prevent the orthogonal lattice attack, one must take:

$$n = \omega(\eta \log \lambda) \qquad (9)$$

Namely, in that case there exists a function c(λ) such that $n(\lambda) = c(\lambda)\,\eta(\lambda) \log_2 \lambda$ with $c(\lambda) \to \infty$ as $\lambda \to \infty$. With a time complexity of at least $2^{k \cdot n/\eta}$ for some k > 0, the time complexity is therefore at least $2^{k \cdot c(\lambda) \log_2 \lambda} = \lambda^{k \cdot c(\lambda)}$. This implies that the attack is not polynomial time under Condition (9).
Lattice attack against the Vector Approximate-GCD Problem
In this section we extend the previous orthogonal lattice attack to the vector variant of the Approximate-GCD problem with multiple primes $p_i$. We still consider a modulus $x_0 = \prod_{i=1}^{n} p_i$, but instead of scalar values $x_j$, we consider t row vectors $v_j$, each with m components $(v_j)_k$, such that:

$$(v_j)_k \bmod p_i = r_{ijk}$$

for all components $1 \le k \le m$ and all $1 \le i \le n$, where $r_{ijk} \in (-2^\rho, 2^\rho) \cap \mathbb{Z}$. We consider the t × m matrix V of row vectors $v_j$. We do not publish the matrix V directly; instead, we first generate a random secret m × m invertible matrix K modulo $x_0$ and publish the t × m matrix:

$$\tilde{V} = V \cdot K \bmod x_0.$$

The goal is to recover the primes $p_i$ as in the previous attack.
Actually, we cannot solve the original multi-prime Vector Approximate-GCD problem directly, since the algebraic step of the attack requires degree-2 equations in the inputs. Instead, we assume that we can additionally obtain two m × m matrices

$$\tilde{C}_0 = K^{-1} \cdot C_0 \cdot K \bmod x_0, \qquad \tilde{C}_1 = K^{-1} \cdot C_1 \cdot K \bmod x_0,$$

for some random invertible matrix K modulo $x_0$, where the components of the matrices $C_0, C_1 \in \mathbb{Z}_{x_0}^{m \times m}$ are small modulo each $p_i$. This assumption is verified in our construction of Section 5.
First step: orthogonal lattice attack. In our extended attack, we consider the lattice L of vectors u that are orthogonal to all columns of $\tilde{V}$ modulo $x_0$:

$$L = \{ u \in \mathbb{Z}^t \;:\; u \cdot \tilde{V} \equiv 0 \pmod{x_0} \}.$$

Since the matrix K is invertible, we obtain:

$$L = \{ u \in \mathbb{Z}^t \;:\; u \cdot V \equiv 0 \pmod{x_0} \}.$$

The lattice L is of full rank t since it contains $x_0 \mathbb{Z}^t$. Let $R_i = V \bmod p_i$. As previously, the lattice L contains the sublattice L′ of dimension t − m·n of the vectors orthogonal in Z to the m·n column vectors of the $R_i$ for $1 \le i \le n$. We expect a reduced basis of L′ to have vectors of norm $\approx 2^{\rho \cdot m \cdot n/(t - m \cdot n)}$. Therefore, applying lattice reduction to L should yield a reduced basis $(u_1, \ldots, u_t)$ whose first t − m·n vectors belong to the sublattice L′, under the modified condition:

$$\frac{\rho \cdot m \cdot n}{t - m \cdot n} + \iota \cdot t < \eta - \rho. \qquad (11)$$

As previously, by computing the rank-(n·m) lattice orthogonal to the vectors $(u_1, \ldots, u_{t - m \cdot n})$, we obtain a basis of the lattice intersection of $\mathbb{Z}^t$ with the Q-vector space generated by the column vectors of the $R_i$'s.
Second step: algebraic attack. The second step is similar to the second step of the attack from Section 3.3 and is described in the full version of this paper [CP18], with a lattice dimension t = 2mn.
Asymptotic complexity. As previously, we derive a heuristic lower bound for the complexity of the attack. Since the attack requires a lattice dimension t = 2mn, condition (11) with noise size 2ρ instead of ρ gives $4\rho + 2\iota mn < \eta$, which gives the new condition $\iota < \frac{\eta}{2mn}$. Therefore, the orthogonal lattice attack has time complexity at least $2^{\Omega(n \cdot m/\eta)}$. This implies that to prevent the orthogonal lattice attack, we must have:

$$n = \omega\left(\frac{\eta}{m} \log \lambda\right)$$

Compared to the original condition of [CLT13] recalled by (9), the value of n can therefore be divided by m. This implies that the encoding size $\gamma = \eta \cdot n$ can also be divided by m. We show in Section 7 that this brings a significant improvement in practice.
Practical experiments and concrete parameters
Practical experiments. We have run our two attacks from Sections 3.3 and 3.4 against the multi-prime Approximate-GCD problem and its vector variant; we provide the source code in [CP19]. We summarize the running times for various values of n in Tables 2 and 3. We see that the running time of the lattice step in the vector variant is roughly the same as in the non-vector variant when the number of primes n is divided by m in the vector variant. This confirms the asymptotic analysis of the previous section.
For the algebraic step of the non-vector problem, it is significantly more efficient to compute the matrix kernel and eigenvalues modulo some arbitrary prime integer q of size η, instead of over the rationals. However we have not found a similar optimization for the vector variant; we see in Table 3 that for larger n the cost of the algebraic step becomes prohibitive (but still polynomial time) for the vector variant. In this paper we conservatively fix our concrete parameters by considering the lattice step only. We leave as an open problem the derivation of a "practical" algebraic step for the vector variant.
LLL and BKZ practical complexity. To derive concrete parameters for our construction of Section 5, we have run more experiments with the LLL and BKZ lattice reduction algorithms applied to a lattice similar to the lattice L of the previous section. Recall that we must apply lattice reduction to the lattice

$$L = \{ u \in \mathbb{Z}^t \;:\; u \cdot \tilde{V} \equiv 0 \pmod{x_0} \}.$$

We can assume that the upper m × m block $\tilde{V}_1$ of $\tilde{V}$ is invertible modulo $x_0$, otherwise we can partially factor $x_0$. We obtain $u \in L$ if and only if the first m components of u are congruent modulo $x_0$ to minus the remaining components multiplied by $A = \tilde{V}_2 \cdot \tilde{V}_1^{-1} \bmod x_0$, where $\tilde{V}_2$ is the lower (t−m) × m block of $\tilde{V}$. Therefore, a basis of L is given by the matrix of row vectors:

$$B = \begin{pmatrix} x_0 I_m & 0 \\ -A & I_{t-m} \end{pmatrix}.$$

For simplicity, we have performed our experiments on a simpler lattice of the same shape, where the components of A are randomly generated modulo $x_0$. We expect to obtain a reduced basis $(u_1, \ldots, u_t)$ with vectors of norm

$$\|u_i\| \simeq 2^{\iota \cdot t} \cdot (\det L)^{1/t} = 2^{\iota \cdot t + \gamma m/t},$$

where $2^{\iota \cdot t}$ is the Hermite factor and γ the size of $x_0$. Experimentally, we observed that the running time (expressed in number of clock cycles) of the LLL lattice reduction algorithm in the Sage implementation is well approximated by a function $T_{LLL}(t, \gamma)$ (12). The Sage implementation also includes an implementation of BKZ 2.0 [CN11]. Experimentally we observed running times (in number of clock cycles) well approximated by a function $T_{BKZ,\beta}(t, \gamma)$ (13), where the observed constant b(β) and the Hermite factor are given in Table 4. However, we were not able to obtain experimental results for block-sizes β > 60, so for BKZ-80 and BKZ-100 we used extrapolated values, assuming that the cost of BKZ sieving with blocksize β is $\mathrm{poly}(t) \cdot 2^{0.292\beta + o(\beta)}$ (see [BDGL16]). The Hermite factors for BKZ-80 and BKZ-100 are from [CN11].
Setting concrete parameters. When applying LLL or BKZ with blocksize β to the original lattice L, we obtain an orthogonal vector u under condition (11), which gives, with t = 2nm and vectors with noise size 2ρ instead of ρ:

$$4\rho + 2\iota m n < \eta. \qquad (14)$$

Therefore we must run LLL or BKZ-β with a large enough blocksize β so that ι is small enough for condition (14) to hold. For security parameter λ, we require that $T_{lat}(t, \gamma) \ge 2^\lambda$, with t = 2nm, where the running time (in number of clock cycles) $T_{lat}(t, \gamma)$ is given by (12) or (13), for $\gamma = \eta \cdot n$. We use that condition to provide concrete parameters for our scheme in Section 7.
The Naive GCD Attack.
For simplicity we first consider the single-prime variant of the Approximate-GCD problem. More precisely, we consider $x_0 = q_0 \cdot p$ and an encoding c with $c \equiv r \pmod{p}$, where r is a small integer of size ρ bits. The naive GCD attack, which has complexity $O(2^\rho)$, consists in performing an exhaustive search over r and computing $\gcd(c - r, x_0)$ to obtain the factor p.
The Chen-Nguyen Attack
At Eurocrypt 2012, Chen and Nguyen described an improved attack based on multipoint polynomial evaluation [CN12], with complexity $\tilde{O}(2^{\rho/2})$. One starts from the equation:

$$p = \gcd\left( x_0,\; \prod_{i=0}^{2^\rho - 1} (c - i) \bmod x_0 \right). \qquad (15)$$

The main observation is that the above product modulo $x_0$ can be written as the product of $2^{\rho/2}$ evaluations of a single polynomial of degree $2^{\rho/2}$. Using a tree structure, it is possible to evaluate a polynomial of degree $2^{\rho/2}$ at $2^{\rho/2}$ points in $\tilde{O}(2^{\rho/2})$ time and memory, instead of $O(2^\rho)$. More precisely, one can define the following polynomial f(x) of degree $2^{\rho/2}$, with coefficients modulo $x_0$ (we assume for simplicity that ρ is even):

$$f(x) = \prod_{j=0}^{2^{\rho/2} - 1} (c - x - j) \bmod x_0.$$

One can then rewrite (15) as the product of $2^{\rho/2}$ evaluations of the polynomial f(x):

$$p = \gcd\left( x_0,\; \prod_{k=0}^{2^{\rho/2} - 1} f(2^{\rho/2} \cdot k) \bmod x_0 \right).$$

There are classical algorithms which can evaluate a polynomial f(x) of degree d at d points using at most $\tilde{O}(d)$ operations in the coefficient ring; see for example [Ber03]. Therefore, the Chen-Nguyen attack has time and memory complexity $\tilde{O}(2^{\rho/2})$. We provide in [CP19] an implementation of the Chen-Nguyen attack in Sage; our running time is similar to [CN12, Table 1]; see Table 5 below for practical experiments. In practice, the running time in number of clock cycles of the Chen-Nguyen attack with a γ-bit $x_0$ is well approximated by:

$$T_{CN}(\rho, \gamma) = 0.3 \cdot \rho^2 \cdot 2^{\rho/2} \cdot \gamma \cdot \log_2 \gamma \qquad (16)$$
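The decomposition can be checked with toy parameters; in the following Python sketch (assuming sympy for prime generation), f is evaluated naively at each point, whereas the actual attack uses a remainder-tree multipoint evaluation to reach the $\tilde{O}(2^{\rho/2})$ complexity:

    import random
    from math import gcd
    from sympy import randprime

    eta, rho = 64, 8                       # toy sizes (rho even)
    p  = randprime(2**(eta - 1), 2**eta)
    x0 = randprime(2**(eta - 1), 2**eta) * p
    r  = random.randrange(2**rho)
    c  = (random.randrange(x0 // p) * p + r) % x0

    half = 2**(rho // 2)

    def f(x):                              # f(x) = prod_{j < half} (c - x - j) mod x0
        out = 1
        for j in range(half):
            out = out * (c - x - j) % x0
        return out

    prod = 1
    for k in range(half):                  # (15) regrouped: product of f at half*k
        prod = prod * f(half * k) % x0
    assert gcd(prod, x0) % p == 0          # gcd yields p (or a multiple of it)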
The Lee-Seo Attack
The Chen-Nguyen attack was later extended by Lee and Seo at Crypto 2014 [LS14], when the encodings are multiplicatively masked by a random secret z modulo $x_0$, as is the case in the CLT13 scheme; their attack has the same complexity $\tilde{O}(2^{\rho/2})$. Namely, in the asymmetric CLT13 scheme recalled in Section 2.1, an encoding c at level set $\{i_0\}$ is such that:

$$c \equiv r \cdot z_{i_0}^{-1} \pmod{p},$$

for some random secret $z_{i_0}$ modulo $x_0$. Therefore, we consider the following variant of the Approximate-GCD problem. Instead of being given encodings $c_i$ with $c_i \equiv r_i \pmod{p}$ for small $r_i$'s, we are given encodings $c_i$ with:

$$c_i \equiv r_i \cdot z \pmod{p} \qquad (17)$$

for some random integer z modulo $x_0$, where the $r_i$'s are still ρ-bit integers. Since $c_1/c_2 \equiv r_1/r_2 \pmod{p}$, the naive GCD attack consists in guessing $r_1$ and $r_2$ and computing $p = \gcd([c_1/c_2 - r_1/r_2]_{x_0}, x_0)$, with complexity $O(2^{2\rho})$. The Lee-Seo attack with complexity $\tilde{O}(2^{\rho/2})$ is as follows. First, one generates two lists $L_1$ and $L_2$ of such encodings, and we look for a collision modulo p between those two lists; such a collision will appear with good probability when the size of the two lists is at least $2^{\rho/2}$. More precisely, let $c_i$ be the elements of $L_1$ and $d_j$ be the elements of $L_2$, with $c_i \equiv r_i \cdot z \pmod{p}$ and $d_j \equiv s_j \cdot z \pmod{p}$. If $r_i = s_j$ for some pair (i, j), then $c_i \equiv d_j \pmod{p}$ and therefore:

$$p = \gcd\left( x_0,\; \prod_{c_i \in L_1} \prod_{d_j \in L_2} (c_i - d_j) \bmod x_0 \right),$$

where the product is over all $c_i \in L_1$ and $d_j \in L_2$. A naive computation of this product would take time $|L_1| \cdot |L_2| = 2^\rho$; however, as in the Chen-Nguyen attack, this product can be computed in time and memory $\tilde{O}(2^{\rho/2})$. Namely, one can define the polynomial $f(x) = \prod_i (c_i - x) \bmod x_0$ of degree $|L_1| = 2^{\rho/2}$, and the previous equation can be rewritten:

$$p = \gcd\left( x_0,\; \prod_{d_j \in L_2} f(d_j) \bmod x_0 \right).$$

This corresponds to the multipoint evaluation of the degree-$2^{\rho/2}$ polynomial f(x) at the $2^{\rho/2}$ points of the list $L_2$; therefore, this can be computed in time and memory $\tilde{O}(2^{\rho/2})$.
As observed in [LS14], if only a small set of elements $c_i$ is available (much fewer than $2^{\rho/2}$), one can still generate exponentially more $c_i$'s by taking small integer linear combinations of the original $c_i$'s, and the above attack still applies, with only a slight increase in the noise ρ. We provide in [CP19] an implementation of the Lee-Seo attack in Sage. Its running time is roughly the same as that of Chen-Nguyen, except that the attack is only probabilistic; its success probability can be increased by taking slightly larger lists $L_1$ and $L_2$ to improve the collision probability.
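The following toy Python sketch (assuming sympy) illustrates the collision mechanism, with a deliberately planted collision and a naive computation of the double product (the actual attack evaluates $f(x) = \prod_i (c_i - x)$ at the points of $L_2$, as described above):

    import random
    from math import gcd
    from sympy import randprime

    eta, rho, size = 64, 6, 8
    p  = randprime(2**(eta - 1), 2**eta)
    q0 = randprime(2**(eta - 1), 2**eta)
    x0 = q0 * p
    z  = random.randrange(1, x0)

    def enc(r=None):                      # c with c mod p = r*z mod p
        if r is None:
            r = random.randrange(1, 2**rho)
        return (random.randrange(q0) * p + r * z % p) % x0, r

    L1 = [enc() for _ in range(size)]
    L2 = [enc() for _ in range(size - 1)] + [enc(L1[0][1])]   # plant one collision

    prod = 1
    for ci, _ in L1:                      # naive double product over both lists
        for dj, _ in L2:
            prod = prod * (ci - dj) % x0
    assert gcd(prod, x0) % p == 0         # the collision leaks p through the gcd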
GCD Attack against the Vector Approximate GCD Problem
We now consider the Vector Approximate-GCD problem (Definition 2). We consider a set of row vectors $v_i$ of dimension m such that, for each vector $v_i$, all components $(v_i)_j$ of $v_i$ are small modulo p:

$$(v_i)_j \bmod p = r_{ij}, \quad \text{with } r_{ij} \in (-2^\rho, 2^\rho) \cap \mathbb{Z}.$$

However, we only obtain the randomized vectors:

$$\tilde{v}_i = v_i \cdot K \bmod x_0,$$

for some random invertible matrix K modulo $x_0$. The goal is still to recover the prime p.
Our attack is similar to the Lee-Seo attack recalled previously. We only consider the first component $c_i = (\tilde{v}_i)_1$ of each vector $\tilde{v}_i$. We have:

$$c_i \equiv \sum_{j=1}^{m} r_{ij} \cdot k_j \pmod{p},$$

where $k_j = K_{j1} \bmod p$ denotes the first column of K reduced modulo p. We build the two lists $L_1$ and $L_2$ from the $c_i$'s as in the Lee-Seo attack. Since each $c_i$ is a linear combination modulo p of m random values $r_{ij}$ (where the coefficients are initially generated at random modulo p), it has m·ρ bits of entropy modulo p, instead of ρ as in the Lee-Seo attack. Therefore a collision between the two lists will occur with good probability when the lists have size at least $2^{m \cdot \rho/2}$. This implies that the attack has time and memory complexity $\tilde{O}(2^{m \cdot \rho/2})$. Note that the entropy of each $c_i$ modulo p is actually upper-bounded by the bitsize η of p. If m·ρ > η, the attack complexity becomes $\tilde{O}(2^{\eta/2})$, which corresponds to the complexity of Pollard's rho factoring algorithm. We provide in [CP19] an implementation of the attack in Sage; see Table 5 below for practical experiments.
With an attack complexity $\tilde{O}(2^{m\rho/2})$ instead of $\tilde{O}(2^{\rho/2})$, one can therefore divide the size of the noise ρ by a factor m compared to the original CLT13, which is a significant improvement. For example, it is recommended in [CLT13] to take ρ = 89 bits for λ = 80 bits of security; with a vector dimension m = 10, one can now take ρ = 9 for the same level of security. Note that we can take m·ρ/2 < λ because we only require that the running time in number of clock cycles is at least $2^\lambda$. More precisely, the running time can be approximated by $T_{CN}(m\rho, \gamma)$ for a γ-bit $x_0$, where $T_{CN}(\rho, \gamma)$ is given by (16), and we require $T_{CN}(m\rho, \gamma) \ge 2^\lambda$.
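As a sanity check of this parameter logic, the small Python helper below searches for the minimal ρ such that $T_{CN}(m\rho, \gamma) \ge 2^\lambda$, under the cost model (16) as reconstructed above (the constant and the $\rho^2$ factor should be treated as approximate, and the value of γ here is an arbitrary placeholder):

    from math import log2

    def T_CN(rho, gamma):                  # cost estimate (16), in clock cycles
        return 0.3 * rho**2 * 2**(rho / 2) * gamma * log2(gamma)

    def min_rho(lam, gamma, m):            # smallest rho with T_CN(m*rho, gamma) >= 2^lam
        rho = 1
        while log2(T_CN(m * rho, gamma)) < lam:
            rho += 1
        return rho

    print(min_rho(80, 10**7, 10))          # lambda = 80, gamma = 10^7 bits, m = 10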
With matrices. The previous GCD attack can be generalized to m × m matrices $V_i$ instead of m-dimensional vectors $v_i$. More precisely, we consider a set of matrices $V_i$ of dimension m × m with small components modulo p, that is:

$$(V_i)_{jk} \bmod p = r_{ijk}$$

for ρ-bit integers $r_{ijk}$. As previously, instead of publishing the matrices $V_i$, we publish the randomized matrices

$$\tilde{V}_i = K \cdot V_i \cdot K' \bmod x_0 \qquad (18)$$

for two random invertible m × m matrices K and K′ modulo $x_0$. In that case, each component of $\tilde{V}_i$ depends on the $m^2$ elements of the matrix $V_i$. This implies that the entropy of each component of $\tilde{V}_i$ is now $m^2 \cdot \rho$, and therefore the GCD attack has complexity $\tilde{O}(2^{m^2 \cdot \rho/2})$. Formally, using the Kronecker product, we can rewrite (18) as $\mathrm{vec}(\tilde{V}_i) = (K'^T \otimes K)\, \mathrm{vec}(V_i)$, where $\mathrm{vec}(V_i)$ denotes the column vector of dimension $m^2$ formed by stacking the columns of $V_i$ on top of one another, and similarly for $\mathrm{vec}(\tilde{V}_i)$. We can therefore apply the previous attack with vectors of dimension $m^2$ instead of m; the attack complexity is therefore $\tilde{O}(2^{m^2 \cdot \rho/2})$. This implies that we can divide the noise size ρ by a factor $m^2$ compared to [CLT13], where m is the matrix dimension. We provide in [CP19] an implementation of the attack in Sage; see Table 5 below for practical experiments.
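The vectorization identity can be verified numerically; the following Python snippet (assuming numpy) checks $\mathrm{vec}(K \cdot V \cdot K') = (K'^T \otimes K)\, \mathrm{vec}(V)$ on random integer matrices:

    import numpy as np

    m = 3
    rng = np.random.default_rng(0)
    K, V, Kp = (rng.integers(0, 10, (m, m)) for _ in range(3))

    vec = lambda M: M.flatten(order="F")   # stack the columns of M
    lhs = vec(K @ V @ Kp)                  # vec(K V K')
    rhs = np.kron(Kp.T, K) @ vec(V)        # (K'^T kron K) vec(V)
    assert np.array_equal(lhs, rhs)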
With multiple primes p_i. Instead of considering an encoding c that is small modulo a single prime p, we consider as in CLT13 a modulus x_0 = ∏_{i=1}^{n} p_i and an integer c ∈ Z_{x_0} such that c mod p_i = r_i for ρ-bit integers r_i. With good probability, we have |r_i| ≤ 2^ρ/n for some i but not all i, and Equation (15) from the Chen-Nguyen attack can then be rewritten with the product restricted to this smaller interval, so that the resulting gcd is not equal to x_0; therefore a sub-product of the p_i's is revealed. Since the number of terms in the product is divided by n, the complexity of the Chen-Nguyen attack for recovering a single p_i (or a sub-product of the p_i's) is divided by √n. By repeating the same attack n times in different intervals of the r_i's, one can recover all the p_i's; the running time of the Chen-Nguyen attack is then increased by a factor √n. Similarly, in the Lee-Seo attack with multiple primes p_i, the collision probability for recovering a single p_i is multiplied by n, and therefore the attack complexity is divided by √n for recovering a single p_i. The same applies to our variant attack against the Vector Approximate GCD problem and to the matrix variant. In the latter case, with noise size ρ_m, the running time of the attack in number of clock cycles can therefore be approximated by T_CN(ρ, γ) with ρ = m²·ρ_m. We will use that approximation to provide concrete parameters for our scheme in Section 7.
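A toy sketch of the multi-prime variant follows: x_0 is a product of n small primes, c is built by CRT with ρ-bit residues, and the Chen-Nguyen product over an interval shrunk by a factor n yields a proper sub-product of the p_i's. One residue is deliberately planted inside the interval so the seeded run succeeds; all sizes are illustrative.

```python
import math
import random
from sympy import nextprime

random.seed(3)
n, rho, eta = 4, 10, 32
ps = []
while len(ps) < n:                     # n distinct toy primes
    pi = nextprime(random.getrandbits(eta))
    if pi not in ps:
        ps.append(pi)
x0 = math.prod(ps)

rs = [random.getrandbits(rho) for _ in range(n)]
rs[0] = random.getrandbits(rho - 3)    # plant one residue inside the small interval
c = 0
for pi, ri in zip(ps, rs):             # CRT: c mod p_i = r_i
    Mi = x0 // pi
    c = (c + ri * Mi * pow(Mi, -1, pi)) % x0

B = 2 ** rho // n                      # interval shrunk by a factor n
z = 1
for r in range(B):
    z = z * (c - r) % x0
g = math.gcd(z, x0)
print(g % ps[0] == 0 and g != x0)      # a proper sub-product of the p_i's, w.h.p.
```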
Practical experiments. We provide in Table 5 the results of practical experiments against the Approximate-GCD problem and its vector variant with a single prime p. Our attack against the vector variant with dimension m and noise size ρ_v runs in roughly the same time as the Chen-Nguyen attack on the original problem with noise ρ = m·ρ_v; similarly, our attack against m×m matrices with noise ρ_m runs in roughly the same time as Chen-Nguyen with noise ρ = m²·ρ_m. This confirms the above analysis. We provide the source code in [CP19].
Non-interactive Multipartite Diffie-Hellman Key Exchange
A multipartite key exchange protocol aims to derive a shared value between N parties. This is achieved via a procedure in which the parties broadcast some values and then use some secret information, together with the values broadcast by the other parties, to set up the shared key. In a non-interactive protocol, the parties broadcast their public values only once and at the same time (equivalently, the values broadcast by each party do not depend on the values broadcast by the others). Following the notation of [BS03], such a protocol can be described by three probabilistic polynomial-time algorithms (Setup, Publish, and KeyGen), as follows.
We say that the protocol is correct if s = s_1 = s_2 = · · · = s_N, i.e., if all the parties share the same value at the end. We say that the protocol is secure if no probabilistic polynomial-time adversary can distinguish the shared value s from a random string, given the public parameters params and the broadcast values pk_1, . . . , pk_N.
Our Construction
We describe our N-party one-round key exchange protocol. We start with the Setup procedure, which is run a single time by a trusted authority to generate the public parameters. As illustrated in Table 6, Setup generates for each party v two sequences of matrices (C^(v)_{i,0})_i and (C^(v)_{i,1})_i. In the KeyGen procedure, each party v will use the product of the matrices C^(v)_{i,b} on his row v to generate the session key. The product is computed according to the secret key sk_v of Party v and the secret keys sk_u of the other parties. Therefore, in the Publish procedure, each party u will compute and publish the partial sub-products corresponding to his sk_u on the other rows v ≠ u, to be used by each party v on his row v.
Setup(1^λ, N): given a security parameter λ and the number of participants N, we set the length µ of each party's secret, the number of repetitions k, and the dimension m of the matrices, with m ≡ 0 (mod 3). We then instantiate the CLT13 multilinear map with degree of multilinearity ℓ + 2, where ℓ := µNk. Let g = ∏_{i=1}^{n} g_i be the integer defining the message space Z_g. Let ν be the number of high-order bits that can be extracted from a zero-tested value.
To ensure that all users 1 ≤ u ≤ N compute the same session key, we define A^(u)_{i,b} as a larger matrix embedding a matrix B_{i,b} that is the same for all users, with some random block padding in the diagonal and multiplicative bundling scalars α^(u)_{i,b} to prevent the adversary from switching the corresponding bits b_i between the k repetitions of the secret keys. More precisely, we first sample 2ℓ random invertible matrices. The scalars α^(u)_{i,b} must satisfy the condition that the products ∏_i α^(u)_{i,b_i} are independent of u. In addition, we sample the vectors s*, t* uniformly from Z_g^m, and for each u ∈ [N] we derive the corresponding matrices for 0 ≤ i ≤ ℓ. We then use Kilian's randomization "on the encoding side" and define the encoded matrices C^(u)_{i,b} (mod x_0). Note that thanks to Kilian's randomization "on the encoding side", the matrices A^(u)_{i,b} can be encoded with denominator z_j = 1 in (4) for all levels j; namely we obtain the same distribution in the final C^(u)_{i,b} as with random z_j's. Finally we output params, which is defined as the set containing all the matrices C^(u)_{i,b}, the bookend vectors s̃^(u) and t̃^(u), and the scalars µ, k, N, ℓ, x_0, ν and m.
Publish(params, u): Party u samples a bit string sk^(u) ∈ {0,1}^µ and, for each v ∈ [N] such that v ≠ u, Party u computes k products using matrices from the row of party v. This ensures that, using the extraction procedure of the multilinear map scheme, each user u can derive the session key from his own sk^(u) by computing on his row u the partial products corresponding to his sk^(u), combined with the published partial matrix products from the other users. More precisely, Party u computes and broadcasts the products D^(u→v)_r for each v ≠ u and r ∈ [k]. The notation u → v stands for "computed by u to be used by v". We let pk_u = {D^(u→v)_r}_{v≠u, r∈[k]}.
KeyGen(params, v, sk^(v), {pk_u}_{u≠v}): Using secret sk^(v), party v computes the products D^(v→v)_r for all r ∈ [k] using (22), and then the final product, obtaining a value z^(v). Eventually the shared key is obtained by applying a strong randomness extractor to the ν most-significant bits of z^(v). This terminates the description of our construction.
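To clarify the bookkeeping (who multiplies which matrices), here is a structural sketch with no cryptography: the matrices are plain random integers, and the repetition-major layout of the ℓ = µNk levels is an assumption made for illustration, not the paper's exact ordering. In the real scheme the per-row products agree after zero-testing thanks to the embedded B_{i,b} and the bundling scalars; the sketch only shows the data flow.

```python
import numpy as np

N, mu, k, m = 3, 2, 2, 2                  # parties, secret bits, repetitions, dim
ell = mu * N * k                          # total number of levels
rng = np.random.default_rng(0)
# C[v][i][b]: matrix at level i, bit b, on the row of party v (toy: random ints).
C = rng.integers(0, 5, (N, ell, 2, m, m))
sk = rng.integers(0, 2, (N, mu))          # each party's secret bits

def positions(u, r):
    """Contiguous levels assumed to be owned by party u in repetition r."""
    start = (r * N + u) * mu
    return range(start, start + mu)

def publish(u, v, r):
    """D^(u->v)_r: party u's contiguous sub-product on party v's row."""
    out = np.eye(m, dtype=np.int64)
    for t, i in enumerate(positions(u, r)):
        out = out @ C[v][i][sk[u][t]]
    return out

def keygen(v):
    """Party v chains the sub-products in level order: its own factors (u == v)
    plus the broadcast ones (u != v)."""
    out = np.eye(m, dtype=np.int64)
    for r in range(k):
        for u in range(N):
            out = out @ publish(u, v, r)
    return out

print(keygen(0))   # with real CLT13 encodings, zero-testing this product
                   # (between bookend vectors) would extract the session key
```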
Correctness. It is easy to verify the correctness of our construction. Namely, defining sk as the concatenation of the parties' secret keys with the k repetitions, we obtain from (22) and (23), and then from the cancellation of Kilian's randomization on the encoding side, that z^(v) corresponds to a zero-tested encoding whose value, by the condition satisfied by the α^(u)_{i,b}, is independent of v. Therefore, each party v will extract from z^(v) the same session key, as required.
Additional safeguard: straddling sets
As an additional safeguard one can use the straddling set systems from [BGK+14]. Like the multiplicative bundling scalars α^(u)_{i,b}, this prevents the adversary from switching the secret-key bits between the k repetitions. Additionally, the straddling set system prevents the adversary from mixing the matrices Ã^(u)_{i,b} belonging to different users.
Optimizations and Implementation
In this section we describe a few optimizations in order to obtain a concrete implementation of our construction from Section 5.
Encoding of elements
For the bookend vectors, the components are CLT13-encoded with random noise of size ρ_b bits. Letting α be the size of the g_i's, for simplicity we take ρ_b = α. Therefore the encoded bookend vectors have α·(2m/3) + ρ_b·m = 5αm/3 bits of entropy on each slot. For the matrices, we can use a much smaller encoding noise thanks to the analysis from Section 4.4. On a single slot, the matrices A^(u)_{i,b} have entropy α·m²/3, and when CLT13-encoded with noise ρ_m, the matrices Ã^(u)_{i,b} have entropy α·m²/3 + ρ_m·m² on each slot; the GCD attack complexity is therefore Õ(2^(m²·(ρ_m+α/3)/2)). For the parameters from Table 7 below, it suffices to take ρ_m = 2 to prevent GCD attacks.
Number of matrices per level
Instead of taking only two matrices A_{i,0}, A_{i,1} for each 1 ≤ i ≤ ℓ, we can take 2^τ matrices for each i. In that case, the secret key of each user has µ words of τ bits, where each word selects one of the 2^τ matrices; the size of the secret key is therefore µ·τ bits. For the same secret-key size, one can therefore divide the total degree by a factor τ, but the number of encoded matrices is multiplied by a factor 2^τ/τ. In order to minimize the size of the public parameters, we use τ = 3.
Other attacks
Orthogonal lattice attack on zero-tested values. There is an orthogonal lattice attack against the values obtained by subtracting two zero-tested last-level encodings from two different rows. The attack is analogous to the attack described in Section 3.3, and is prevented under the condition n = ω(ν²/(η−ν) · log λ), where ν is the number of extracted bits in the zero-tested values.
Meet-in-the-middle attack. Given the matrix products D^(u→v)_r published by each party u corresponding to his secret sk^(u), there is a meet-in-the-middle attack that can recover sk^(u). Since each sk^(u) has length µ·τ bits, the attack's complexity is O(2^(µ·τ/2)). More precisely, the attack complexity is at least M(m, γ) · 2^(µ·τ/2), where M(m, γ) is the time it takes to multiply m×m matrices with entries of size γ. We ensure M(m, γ) · 2^(µ·τ/2) ≥ 2^λ.
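As a sanity check on this constraint, one can model M(m, γ) and evaluate the bound; the cost model and the numeric values below (m, γ, µ) are assumptions for illustration, not parameters stated in this section.

```python
import math

def mitm_cost_log2(m, gamma, mu, tau):
    # Modeled cost (an assumption): one m x m product over gamma-bit entries
    # costs ~ m^3 * gamma * log2(gamma) bit operations.
    mult = m ** 3 * gamma * math.log2(gamma)
    return math.log2(mult) + mu * tau / 2

# Illustrative values only: with these, the exponent is ~63 >= lambda = 62.
print(mitm_cost_log2(m=6, gamma=750_000, mu=21, tau=3))
```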
Concrete parameters and implementation results
In this section we propose concrete parameters for our key-exchange construction with N = 4 parties. These parameters are generated so that all known attacks have running time ≥ 2^λ clock cycles. In the construction the total number of encoded matrices is 2^τ·ℓ·N with τ = 3, with a total degree ℓ = µ·k·N. Therefore, the total number of CLT13 encodings is N_CLT13 ≃ 2^τ·ℓ·N·m². The size of the secret key is τ·µ = 3µ bits. The size η of the primes p_i is adjusted so that we extract ν = λ bits. During the publish phase, each party must broadcast k·(N−1) matrices of dimension m×m with γ-bit entries. The size of those broadcast values, along with the other parameters, is shown in Table 7. The main difference with the original (insecure) key-exchange protocol from [CLT13] is that we get a much larger public-parameter size; for λ = 62 bits of security, we need 18 GB of public parameters, instead of 70 MB originally. However, our construction would be completely impractical without Kilian's randomization on the encoding side. Namely, for λ = 62 and a degree ℓ = 168, one would need primes p_i of size η ≃ (α + ρ)·ℓ ≃ 2.4·10⁴ with α = 80 and ρ = 62 as in [CLT13]. Since γ = ω(η² log λ) in [CLT13], one would need γ ≃ 4·10⁹. With N_CLT13 = 1.9·10⁵, that would require 100 TB of public parameters. Hence Kilian's randomization on the encoding side provides a reduction of the public-parameter size by a factor 10⁴.
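The size arithmetic quoted above can be reproduced directly; m = 6 is inferred here from the stated N_CLT13 and is an assumption, as is the reconstructed η estimate.

```python
tau, N, ell, m = 3, 4, 168, 6          # m = 6 is an inferred, assumed value
n_enc = 2**tau * ell * N * m**2
print(n_enc)                            # 193536, i.e. ~1.9e5 encodings

alpha, rho = 80, 62                     # CLT13-style parameters without Kilian
eta = (alpha + rho) * ell               # ~2.4e4-bit primes (reconstructed estimate)
gamma = 4e9                             # ~ omega(eta^2 * log(lambda)) bits
print(eta, n_enc * gamma / 8 / 1e12)    # ~2.4e4 bits, ~97 TB of parameters
```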
We have implemented the key-exchange protocol in SAGE [S+17] and executed it on a machine with an Intel Core i5-8600K CPU (3.60 GHz), 32 GB of RAM, and Ubuntu 18.04.2 LTS. The execution times are shown in Table 8. We could not run the Large and High instantiations (λ = 72 and λ = 82) because of the huge parameter size. While the Setup time is significant, since we need to sample all the random values and perform expensive operations like CRT and matrix inversion, the Publish and KeyGen times remain reasonable. In fact, each user just has to multiply m×m matrices µ·k·(N−1) times to publish their values and k·(µ + N) times to derive the shared key. We provide the source code of the key exchange in [CP19].
Table 8. Timings for a 4-party key-exchange.
Conclusion
We have shown that Kilian's randomization "on the encoding side" can bring orders-of-magnitude efficiency improvements for iO-based constructions when instantiated with CLT13 multilinear maps. As an application, we have described the first concrete implementation of multipartite DH key exchange secure against existing attacks. The main advantage of Kilian's randomization is that it can be applied essentially for free in any existing implementation; for example, it could easily be integrated in the 5Gen framework [LMA + 16] for experimenting with program obfuscation constructions. | 2018-11-29T09:00:09.385Z | 2019-12-08T00:00:00.000 | {
"year": 2019,
"sha1": "2d9191baa0452505b00b6dfe874b768932a86fa5",
"oa_license": "CCBYNCSA",
"oa_url": "https://orbilu.uni.lu/bitstream/10993/41687/1/1129.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "81cf3b8601ac414f9945ccb97da34b5c2344cc2b",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
235348946 | pes2o/s2orc | v3-fos-license | Improving timeliness in the neglected tropical diseases preventive chemotherapy donation supply chain through information sharing: A retrospective empirical analysis
Background Billions of doses of medicines are donated for mass drug administrations in support of the World Health Organization’s “Roadmap to Implementation,” which aims to control, eliminate, and eradicate Neglected Tropical Diseases (NTDs). The supply chain to deliver these medicines is complex, with fragmented data systems and limited visibility on performance. This study empirically evaluates the impact of an online supply chain performance measurement system, “NTDeliver,” providing understanding of the value of information sharing towards the success of global health programs. Methods Retrospective secondary data were extracted from NTDeliver, which included 1,484 shipments for four critical medicines ordered by over 100 countries between February 28, 2006 and December 31, 2018. We applied statistical regression models to analyze the impact on key performance metrics, comparing data before and after the system was implemented. Findings The results suggest information sharing has a positive association with improvement for two key performance indicators: purchase order timeliness (β = 0.941, p = 0.003) and—most importantly—delivery timeliness (β = 0.828, p = 0.027). There is a positive association with improvement for three variables when the data are publicly shared: shipment timeliness (β = 2.57, p = 0.001), arrival timeliness (β = 2.88, p = 0.003), and delivery timeliness (β = 2.82, p = 0.011). Conclusions Our findings suggest that information sharing between the NTD program partners via the NTDeliver system has a positive association with supply chain performance improvements, especially when data are shared publicly. Given the large volume of medicine and the significant number of people requiring these medicines, information sharing has the potential to provide improvements to global health programs affecting the health of tens to hundreds of millions of people.
Introduction
Public-private partnership programs provide medicines for preventive chemotherapy (PC) through mass drug administration (MDA) campaigns to more than one billion people annually. The programs are sustained by large-scale donations from major pharmaceutical companies in support of the World Health Organization's 2012 "Roadmap to Implementation," which outlined global strategies and 2020 targets to control, eliminate, and eradicate Neglected Tropical Diseases (NTDs) [1]. While significant progress was made towards these 2020 targets, the WHO has recently released a new NTD roadmap with 2030 targets, which many pharmaceutical manufacturers have committed to continuing to support [2]. As of January 2020, 15 billion doses of medicines had been donated towards these PC-NTD programs [3]. The donations from pharmaceutical companies are what make these the world's largest and most successful public health programs [4]. MDA campaigns consist of once- or twice-a-year treatment with one or more medicines at the community level; they bring together a number of stakeholders and require considerable coordination, as they typically involve treating hundreds of thousands to millions of patients in endemic regions within entire countries over the course of days or weeks [5].
While NTDeliver has a component requiring a login, the majority of the data can be found via the "Public Dashboards" options on the NTDeliver link, and these data do not require a log-in. From the "Public Dashboards" option, "Search all orders" should be selected, which will then route to this link: https://www.ntdeliver.com/report/search-all-orders?locale=en From there, the shipment data can be downloaded for the years and regions indicated in the study. Note that these data will not include the purchase order (PO) date or go signal dates; these two additional data points can be found by cross-referencing the purchase order number to the data found in the publicly accessible country pages, which are accessed via this link: https://www.ntdeliver.com/country/ The logistics involved in making these medicines available to support MDAs are both critical and complex, due to the many stakeholders and partnerships involved in meeting the targeted treatment date. Fig 1 provides an overview of the processes involved and the performance measures associated with each link in the supply chain. With the considerable resources and coordination involved within a narrow timeframe, inefficiencies caused by fragmented data systems and a lack of visibility into supply chain performance have resulted in substandard performance for on-time delivery of medicine to in-country central medical stores. Sometimes delivery performance may lag as much as 40% below the WHO target for 80% of all shipments to be delivered at least one month before the planned MDA date [6]. Delivery delays result in waste, increased program costs, and delayed or even completely missed MDAs, leaving communities susceptible to infections and disease recrudescence [7].
To improve the efficiency and performance of the NTD supply chain, the NTD Supply Chain Forum was established in 2012 [3]. The NTD Supply Chain Forum is comprised of NTD supply chain experts from the WHO, pharmaceutical companies, nongovernmental organizations, donor organizations, ministries of health, and logistics providers [3]. Subsequent to the formation of this forum, "NTDeliver," a centralized information system, was launched in 2016 by the NTD Supply Chain Forum to share data from various partners along the supply process chain as a means of facilitating performance information sharing [8]. Through NTDeliver, all stakeholders in the supply chain, and even the general public, may access performance metrics on all shipments of the four medicines that participate in data sharing through NTDeliver.
Information sharing in the context of supply chains, "the extent to which crucial and/or proprietary information is available to members of the supply chain," is an integral aspect of performance management, and the sharing of accurate and timely information has been linked to supply chain performance improvements [9,10]. Many studies have been conducted on the value of information sharing to improve supply chain performance in the private sector. Information sharing has been proven to have a range of benefits, from improved resource utilization to reduced cycle time between order and delivery [11]. These benefits stem from the increased transparency that enables risks to be anticipated and shared among supply chain partners, which strengthens coordination to achieve optimal operational performance [11][12][13]. Largely, this body of empirical literature investigates the value of information sharing in commercial supply chains focused on a dyadic relationship, primarily between two partners, a buyer and a supplier [14].
Despite the growing importance of supply chain initiatives in the global NTD agenda, there is limited research dedicated to exploring measures to improve NTD supply chain performance or the impact of information sharing. This limitation may be an indicator that performance measurement and management systems have not been widely developed and systematically implemented as part of the overall humanitarian supply chain strategy [15]. Only Korpoc et al.'s 2015 research on the impact of the NTD "first mile" processes (the segment of the NTD supply chain covering up to delivery to central medical stores) on MDA timeliness acknowledges this area of NTD supply chain performance measurement, by identifying the need for performance indicators and outlining suggested metrics [7]. Furthermore, the recent COVID-19 pandemic has raised the profile both of the criticality of publicly sharing timely data in the global health domain and of the topic of assuring robust supply chains to meet global health goals [16][17][18][19][20][21]. Thus, the timing could not be better to study information sharing and supply chain management in the wider global humanitarian health context.
This study seeks to evaluate empirically the impact of information sharing via NTDeliver on supply chain performance-improvements which ultimately contribute to achieving the global NTD targets. We examine the following two research questions: 1) what is the impact of information sharing through NTDeliver on the performance of the NTD PC medicines donation supply chain? and 2) what is the impact on the performance when country-level data are made publicly accessible? We use data obtained from the NTD Supply Chain Forum and implement regression models on these research questions. We find that information sharing has a positive association with improvement to two performance indicators of the NTD supply chain: purchase order timeliness and delivery timeliness. Furthermore, when country-level information sharing is made publicly accessible, a positive association is observed primarily on three downstream indicators: shipment timeliness, arrival timeliness, and delivery timeliness.
Study design and scope
We used retrospective data from NTDeliver that are routinely collected from and managed by supply chain partners supporting delivery of PC medicines from pharmaceutical manufacturing facilities to central medical stores. Permission was granted to use these data by the NTD Supply Chain Forum. The data are derived from vetted, existing data sources managed by NTD supply chain partners, such as:
• WHO Preventive Chemotherapy and Transmission databank
• Data provided to WHO country offices by the countries' Ministry of Health through the joint application process to request donations
• Purchase orders raised by the WHO headquarters
• Shipping documents generated by logistics service providers, in partnership with pharmaceutical donors
The data represent shipments of four medicines from four manufacturers to treat three different diseases, accounting for almost 11.5 billion doses of PC medicines to 103 recipient countries covering 1,484 total shipments from February 28, 2006, to December 31, 2018. The data are refreshed and uploaded from these various sources daily [22].
While numerous medicines are donated by numerous pharmaceutical manufacturers for the NTDs, this research's scope focuses on PC medicines donations managed by the WHO through the "joint application package" (JAP) established in 2013, which supports an integrated review and subsequent reporting on medicines usage [23]. The JAP streamlines the application for donation of multiple medicines, especially as medicines are co-administered where diseases are co-endemic [23]. These PC medicines include: diethylcarbamazine citrate, albendazole, mebendazole, and praziquantel [24]. This focus is justified by the considerable volume of medicines, the unique nature of this supply chain that includes WHO involvement, the importance of PC to achieve NTD targets, and the accessibility of data through NTDeliver. There are opportunities to improve processes across the supply chain, but the focus of this research will be on the segment of the supply chain that entails delivery from pharmaceutical manufacturing facilities to central medical stores, also referred to by partners as the "first mile" [7]. We chose to focus on the first mile due to the WHO drive to improve on-time delivery to central medical stores, the accessibility of relevant data, and the opportunity to leverage information sharing among the many partners involved in this segment. Central medical stores (CMS) (most commonly utilized in Africa, Asia, and Latin America) serve as warehouse and administrative facilities that receive, store, and manage medical supplies for national health programs and initiatives, and are generally leveraged for humanitarian stock [6,25]. Improving the on-time delivery of PC medicine to the CMS is critical to the downstream in-country distribution to program sites where they are needed [6]. While the last mile (which encompasses drug transport from central medical stores to district-level stores and then on to MDA distribution) certainly involves many challenges that contribute to MDA delays, issues with the first mile have been shown in practice and research to have a downstream effect on the last mile and ultimately on MDA timeliness [3,7]. Thus, alleviating first mile issues will be a step in the right direction to improve MDA timeliness.
Variables
The variables for this analysis are various key performance indicators (KPIs), which are actively reviewed through the NTD Supply Chain Forum and of interest to the supply chain partners, including the WHO. The most critical KPI is delivery timeliness, by which the WHO evaluates performance of this first mile of the NTD supply chain [6]. Both the independent and dependent variables were created from the data in the system, reflecting the KPIs and benchmark standards tracked by the NTD Supply Chain Forum partners. Table 1 summarizes the key variables in the analysis. Important cofactors, incorporated via control variables, were included in the analyses to explore whether the relationship of the independent and dependent variables is skewed or invalidated by other factors. The WHO region was also included as a control, since performance may vary according to the destination (shipment routes and customs clearance processes vary according to the destination) and to account for any regional-level improvement initiatives that could also explain performance improvements. Controlling for region may be considered more meaningful than controlling for country, since regional-level improvements are more likely to impact the results, as regional efforts likely affect all countries under their purview and therefore a greater scope of the data sample. Yet, there is value in controlling for country logistics factors. While the NTD Supply Chain Forum has limited visibility of country-level logistics since the focus is on the first mile, we can incorporate an external assessment of country logistics factors as a control variable. We believe the World Bank international Logistics Performance Index (LPI) benchmark tool is a relevant indicator to help control for such factors. This score measures performance of the logistics supply chain within a country and provides a qualitative evaluation by trading partners of the country's logistics performance [26].
The controls for medicine and disease were included with consideration for the fact that the medicines are produced by different manufacturers for different disease programs, which may lend itself to some variability in the supply chains. Mode of shipment was also incorporated as a control, since the shipment speed varies between air, land, and sea. The control variables are summarized below.
WHO region: Categorical variable which identifies the WHO region the ordering country is associated with; regions include Regional Office for Africa (AFRO), Regional Office for the Americas (AMRO), Regional Office for the Eastern Mediterranean (EMRO), Regional Office for Europe (EURO), Regional Office for Southeast Asia (SEARO), and Regional Office for the Western Pacific (WPRO)
International Logistics Performance Index (LPI): Continuous variable which provides the score from the benchmark tool that the World Bank manages to measure performance of the logistics supply chain within a country, providing a qualitative evaluation by trading partners of the country's logistics performance
Medicine type: Categorical variable which identifies the specific donated medicine ordered; four medicines included: albendazole (ALB), diethylcarbamazine citrate (DEC), mebendazole (MEB) and praziquantel (PZQ); each specific medicine is associated with one pharmaceutical manufacturer, but may be used to treat more than one disease
Disease treated: Categorical variable which identifies the disease(s) the medicine is used to treat; three diseases included: lymphatic filariasis (LF), schistosomiasis (SCH), and soil-transmitted helminthiases (STH)
Order size: Categorical variable which identifies the number of tablets ordered; segmented into three groups: ≥10M; <10M and ≥1M; <1M
Lastly, order size was also included as a control even though the supply chain process is the same regardless of order size; larger orders could take more time to prepare and smaller orders are often shipped by air, making this also a necessary factor to control. Details on the coding of the control variables can be found in S1 and S2 Tables. Connecting back to the full process shared in Fig 1, Fig 2 provides a diagram illustrating the process flow in a summary view and the components in/out of scope of the study, along with reference to any targeted timeline benchmarks as relevant.
Statistical analysis
A quasi-experiment design, using a "one-group pretest-posttest design without control group," was chosen, as the research was initiated a few years after the launch of NTDeliver. In addition, NTDeliver was implemented in a real-world application that did not roll out the system in a phased approach, barring any ability to conduct randomization. This design was used to leverage the historical data available on performance to understand the impact of this intervention. An ordinary least squares (OLS) regression model was used to review the relationship between implementing NTDeliver and its impact on delivery timeliness and other KPIs. Furthermore, review of the data confirms that the normality-of-the-error-distribution assumption for OLS regression is met. Q-Q plots were used to verify that most data lie on a distribution falling approximately on a straight line. While go signal timeliness did not show a straight trend, the central limit theorem enables the normality assumption to be met in the case of a "sufficiently large sample," for which the literature generally notes a sample of >50 would be "robust to violation of the normality assumption" [27]. This variable had over 200 samples, thus meeting the central limit theorem condition. While both medicine and disease type are included as control variables, these variables are not found to be significantly correlated, since some medicines treat more than one disease and the mapping is therefore not 1:1; hence, multicollinearity is not a concern. Control variables and robustness checks were applied to strengthen the validity of the results. We considered p values less than 0.05 to be statistically significant.
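As a concrete illustration of the diagnostic described above, the following minimal Python sketch draws a Q-Q plot of regression residuals; the residuals here are synthetic stand-ins, since the study's shipment-level data are not reproduced in this text.

```python
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
resid = rng.normal(size=250)          # stand-in for the fitted model's residuals
sm.qqplot(resid, line="45")           # points near the line support normality
plt.title("Q-Q plot of regression residuals")
plt.show()
```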
The shipment data extracted from the system cover orders made through December 31, 2018 and were therefore segmented into two groups to address the first research question: 1) shipments with POs raised prior to the implementation of NTDeliver (February 28, 2006-August 31, 2016) for the "pre" NTDeliver group; 2) shipments with POs raised after the implementation of NTDeliver (September 1, 2016-December 31, 2018) for the "post" NTDeliver group. Only the data in the "post" group were used to answer the second research question regarding the impact on shipment performance of making country-level data publicly accessible. The data in the "post" group were split into two groups, with consideration of February 1, 2018, as the implementation date of this publicly accessible data: 1) "Post 1" = 0 for shipments with a PO date prior to February 1, 2018 but after August 31, 2016; 2) "Post 2" = 1 for shipments with a PO date equal to or later than February 1, 2018 but earlier than January 1, 2019.
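A hedged sketch of the pre/post OLS specification follows. The column names mirror the study's KPIs and controls, but the data frame is synthetic and purely illustrative; the actual model in the paper is fit on the NTDeliver shipment records described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400
df = pd.DataFrame({
    "delivery_timeliness": rng.normal(0, 2, n),          # synthetic KPI values
    "post_ntdeliver": rng.integers(0, 2, n),             # 1 = PO raised post-launch
    "who_region": rng.choice(["AFRO", "SEARO", "WPRO"], n),
    "lpi": rng.uniform(1.5, 4.0, n),
    "medicine": rng.choice(["ALB", "DEC", "MEB", "PZQ"], n),
    "order_size": rng.choice([">=10M", "1M-10M", "<1M"], n),
})
model = smf.ols(
    "delivery_timeliness ~ post_ntdeliver + C(who_region) + lpi"
    " + C(medicine) + C(order_size)",
    data=df,
).fit()
print(model.params["post_ntdeliver"], model.pvalues["post_ntdeliver"])
```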
Results
We first study the impact of information sharing through NTDeliver on shipment performance within the NTD PC medicines supply chain. The data collected included 1,484 total shipments, with 1,068 shipments classified in the "pre" NTDeliver group and 416 in the "post" group. Pairwise deletion was used in cases where data were missing, which accounts for the differing number of observations between variables. Table 2 summarizes the regression results illustrating the bivariate associations between the implementation of the NTDeliver system and NTD supply chain KPIs. Complete regression results are included in Table A3 in the S3 Table. Notably, delivery timeliness, the KPI considered most important to measure supply chain performance, shows a positive and significant association with a p-value of 0.027. PO timeliness also demonstrates positive and significant results supporting the hypothesized direction. Conversely, the results for go signal timeliness appear to support the null hypothesis. Go signal timeliness is highly significant (p < 0.001), but even though the coefficient value is positive, this actually indicates a negative relationship with information sharing due to the calculation method for the variable (calculated as the difference between requested and actual go signal date). The coefficient suggests an increase in the difference between the go signal request and approval of about 40 days. Because there is no perceivable standard for establishing this request date and it is defined per request of the pharmaceutical manufacturer and any supporting partners, additional analysis provides further insight into how the go signal timeliness calculation may have changed after NTDeliver was implemented.
These results indicate that the difference between the PO date and the go signal request date has significantly decreased, since the coefficient is negative, and therefore point to changes in how the request date was defined (Table 3).
Next, we study the impact of making country-level data publicly accessible and particularly promoting this access to country program managers, for whom training sessions were provided to publicize the release of these data and offer education on effective usage. The hypothesis is that extending information sharing may have a positive impact on the downstream processes that are actively displayed in the public country pages, which provide country managers with shipment, arrival, and delivery statuses. Table 4 provides the analysis results to answer the second research question regarding the impact of making country-level data publicly accessible.
The results show three variables with a significant, positive association: shipment timeliness, arrival timeliness, and delivery timeliness, consistent with the hypothesis that these variables would be impacted since they are actively displayed in these publicly accessible pages. In the main regression results, shipment timeliness and arrival timeliness did not show any significance from the information sharing. In these results, shipment timeliness is significant at a p-value of 0.001 and with a substantial coefficient of 2.57. Arrival timeliness is significant with a p-value of 0.003 and a coefficient of 2.88. Lastly, the important delivery timeliness performance indicator is also significant with a p-value of 0.011 and a coefficient of 2.82.
We also conducted further analyses to check the robustness of the results (see S4 Table). First, we re-ran the regression accounting for a lag in impact from information sharing. The analysis revealed that all dependent variables, except arrival timeliness, remain significant and generally consistent with the main results when accounting for a six-month theoretical "lag time to benefit," considered as the time between implementing the intervention and observing improved outcomes [28]. Additionally, we conducted a "double pretest" to test the validity of the "pre" group as comparison. This "double pretest" was used as a validity check to ensure that merely "history" and/or "maturation," rather than the independent variable, is not the reason for the differences between the pretest and posttest results [29]. The pretest group was divided and compared for significant differences in performance to assure that any differences in performance within the pretest group are minimal and/or less than the difference between the pretest and posttest groups. The groups were divided with roughly the same number of shipments in each group, with one group comprised of shipments with POs raised between 2006 and 2013 and the other with POs raised between 2014 and August 2016. The results from this analysis indicated that only PO timeliness had a positive, significant difference between these two pretest groups. No other dependent variables have an observable, significant difference in performance.
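The double-pretest check can be sketched in the same style; again the data below are synthetic, and the 2014 split mirrors the grouping described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300                                   # synthetic pre-implementation shipments
pre = pd.DataFrame({
    "delivery_timeliness": rng.normal(0, 2, n),
    "po_year": rng.integers(2006, 2017, n),
    "lpi": rng.uniform(1.5, 4.0, n),
})
pre["era"] = (pre["po_year"] >= 2014).astype(int)   # 2006-2013 vs 2014-2016
check = smf.ols("delivery_timeliness ~ era + lpi", data=pre).fit()
print(check.pvalues["era"])   # non-significance supports the pre/post design
```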
Discussion
Lack of coordination and limited transparency are two top issues in humanitarian supply chains [30]. Although existing literature on for-profit supply chains suggests addressing the issues through information sharing, it is unclear whether and how information sharing can improve performance in the non-profit humanitarian context [10][11][12][13]. In this paper, we examined the potential impact of information sharing on humanitarian supply chain performance. Our analysis is the first to undertake an empirical study evaluating performance and information sharing both in the context of the NTD supply chain and in the broader humanitarian space. While most existing literature focuses on information sharing in for-profit supply chains, which is typically focused on sharing information between the buyer and supplier, this paper contributes to the existing literature and addresses the gap by investigating the impact of sharing information publicly for non-profit supply chains. The results of our study demonstrate the value of investment in supply chain performance measurement and information sharing towards the success of global health partnerships, and such initiatives may be implemented in the broader context of humanitarian programs. We find that information sharing is positively associated with the timeliness of several key stages in the NTD supply chain, i.e., PO timeliness, arrival timeliness, shipment timeliness, and the key success measure for the NTD supply chain: delivery timeliness. Delivery timeliness, arguably the most critical KPI to measure first mile performance, appears to have a positive and significant association with information sharing. The analysis showed that in the post-implementation phase, on average, completed POs were submitted one month earlier. In addition, go signals were issued expeditiously after POs were issued. Improvements in these two metrics possibly contributed to the observed results of earlier delivery of medicines to the CMS, suggesting that information sharing as a result of NTDeliver may be a factor.
Furthermore, information sharing appears to be possibly more impactful when information is released publicly and particularly promoted to country program managers, compared to when it is shared only with the supply chain partners (i.e., WHO and pharmaceutical donors). A significant positive association was evident for three variables upon this information extension: shipment, arrival, and delivery timeliness. Neither shipment nor arrival timeliness met the threshold for significance in the first set of results, but both variables meet this threshold in these additional results. Delivery timeliness continues to meet the threshold for significance in these results, but also with a coefficient substantially larger than in the first set of results. While there is clearly an impact from the implementation of NTDeliver without the country pages, the addition of the country pages appeared to extend the impact to downstream processes after PO creation and the go signal. As previously noted, this result was hypothesized since the country pages specifically provide access to status from the point of shipment. Thus, it is logical that we did not see significant improvements to upstream KPIs such as PO timeliness and go signal timeliness in these results, since they are not featured in the country pages. These results suggest that it may be more effective to extend information sharing to stakeholders responsible for performance across the entire supply chain.
The robustness checks also showed that even if the information sharing effect took time to make an impact, the performance still improved and can be associated more confidently with the implementation of NTDeliver. Furthermore, although there is no "control group" in our research design, we conducted a "double pretest" to investigate whether other observed time-varying variables may contribute to the significant results. Comparison with performance before the information sharing was implemented suggests that the supply chain performance does not simply improve over time, with the exception of one performance indicator. Only PO timeliness indicates a significant positive change over time during the period before NTDeliver was implemented. The reason for this change in PO timeliness may be attributed to a process change coinciding with the timing of the second pretest group defined in this analysis, which included medicines ordered from 2014 onward: the JAP was implemented by the WHO in 2013 to standardize processes to support an integrated application submission and review for donated medicines [23]. This JAP process most positively impacts the PO process, as it promoted more coordination between the various levels of WHO offices to assure timely applications and order fulfillment for donated medicines [23].
This research has some limitations that may naturally inform future research. The quasi-experimental design used lacks a control group and random assignment, since NTDeliver was implemented for all PC medicine donations managed through the NTD Supply Chain Forum [31]. Although we used a "double pretest" robustness check to verify our results, future research could study the impact of information sharing in a controlled setting with randomization incorporated in the design. Also, modifications to the design could entail incorporating qualitative research using interviews and/or surveys to investigate whether stakeholder behavior changes may be drivers for observed performance changes. Furthermore, there is a growing desire for financial donors to understand the impact of investments from an outcomes perspective, especially in the interest of funding effective health innovations that offer value for money [32]. While our research results certainly help to validate the positive association between information sharing and supply chain performance, further research on how the supply chain performance improvements in this first mile result in a reduction in delayed and/or missed MDAs would provide more perspective on linking the delivery timeliness improvements to the number of additional individuals reached. Also, incorporating any data pertaining to the last mile aspect of the NTD supply chain in NTDeliver might help to improve those downstream processes, which remains a priority, given the last mile's impact on efficient and effective distribution and planning for MDAs [33]. Lastly, with respect to the current global health climate, the COVID-19 pandemic had a significant impact on NTD programs, with the WHO recommending postponing MDAs to respect public health measures that advocate for physical distancing to slow the spread of the virus [32]. Further research is needed to gain insight into new challenges from these disruptions, to understand the impact by region, and to establish how information sharing may help to mitigate such disruption and support managing uncertainty for global health campaign supply chain planning during a pandemic.
Our results have practical implications for NTD supply chain management practices. As the deadline approaches for achieving the 2030 targets set out in the new WHO roadmap and the relevant NTD goals in target 3.3 of the Sustainable Development Goals, there is a high degree of confidence that these results affirm that investment in supply chain information sharing is critical to ensuring success. In fact, the new WHO roadmap dedicates an entire section to "Access and logistics," in which supply chain management priorities for improvement are outlined under the umbrella concept that "effective supply chain management is vital to ensuring access to quality-assured NTD medicines and other products" [34]. Given the relationship between first mile supply chain performance and timeliness of MDAs, investment in supply chain information sharing is worthwhile to support improvements to NTD program management [7].
Furthermore, the findings imply additional benefits of information sharing when extending it to a broader audience, particularly program managers. Incorporating visibility into upstream data, such as attaching country applications or tracking the regional office approval date, may improve these processes as well. Such upstream processes, for example the fairly significant time taken by the regions to review applications, have been noted by the WHO HQ as potentially impacting delivery timeliness, and further incorporation of these processes in NTDeliver may benefit the end-to-end NTD supply chain [7].
Given the significant volume of medicines and the number of people requiring these medicines, the research implications have the potential to impact global health programs affecting the health of tens to hundreds of millions of people. The research supports that, even in the absence of financial remuneration, information sharing contributes measurable supply chain improvements and supports investing further in performance measurement in humanitarian supply chains. Beyond the NTD space, data transparency is generally viewed as a challenge, with country governments citing national sovereignty and privacy in refusing to release data for public consumption [35]. Positive results from extending the information sharing argue in favor of the value and benefits of information sharing in the global health space. As the profile and importance of the supply chain continue to rise in humanitarian programs, especially those in the healthcare space, there will be an opportunity to invest further in such performance measurement tools to bring more evidence-based approaches to decision making. This has significant potential to promote accountability and coordination, resulting in goals achieved and improved health outcomes. As the global health aid landscape becomes more focused on driving measurable performance and impact from investments, these findings support investing in supply chain systems and commitment to data transparency.
Supporting information S1 | 2021-06-06T13:21:48.867Z | 2021-05-12T00:00:00.000 | {
"year": 2021,
"sha1": "6d3f01d67b432c5a878f7bbb2ffe9ca0850a8ca6",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0009523&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3d31787e3404013dcbbcfb3b66de07a80c8f1711",
"s2fieldsofstudy": [
"Business",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine",
"Business"
]
} |
219060275 | pes2o/s2orc | v3-fos-license | The Covid-19, Epidemiology, Clinic and Prevention
In December 2019, a cluster of pneumonia cases, due to a newly identified β-coronavirus (SARS-CoV-2, causing the disease Covid-19), occurred in Wuhan, China. This was a zoonotic coronavirus outbreak that, allowing human-to-human transmission, raised global health concerns [1]. On 26 February 2020, the rate of new cases began to decline in China, but the tendency changed outside China, where new cases occurred, such as in Italy, South Korea and Iran; and for the first time the number of new cases outside China surmounted those reported in China [2]. After China, Italy had the second-largest Covid-19 case-fatality burden [3]. Unfortunately, the infection spread also to all other European countries. Covid-19 is also spreading in the US, with a high concentration of cases in New York City and a higher fatality rate. Other countries such as Iran, Turkey, Canada, South Korea, Brazil and Israel have also unfortunately experienced a large spread of the infection. African countries are at particular risk because of the density of the communities and insufficient diagnostic and therapeutic capacities [4]. According to the European Centre for Disease Prevention and Control (ECDC), since December 31, 2019 and as of April 3, 2020, >1,000,000 cases of Covid-19 have been reported, including 51,515 deaths, and the number is increasing every day.
DIAGNOSIS
Two risk factors for Covid-19 are having traveled to an area with community infection in the 14 preceding days, or having had close contact with infected people.
Imaging
In symptomatic patients, the absence of pleural effusions and asymmetric peripheral ground-glass opacities represent some of the imaging features that can be observed on radiographs and CT [14]. As shown by a Chinese study that compared CT and polymerase chain reaction (PCR), CT imaging is more sensitive and faster than PCR, but less specific [15].
Viral Testing
The first diagnosis of Covid-19 can be made based on the symptoms. Subsequently, CT imaging or reverse transcription PCR (RT-PCR) performed on infected secretions provides definitive confirmation [15,16]. Different RNA testing protocols exist for SARS-CoV-2 [17] and can be performed on respiratory or blood samples [18].
COMPLICATIONS
ITU patients had raised C-reactive protein (CRP), prothrombin and D-dimer levels on admission relative to non-ITU patients. A raised troponin [high-sensitivity troponin I (hs-cTnI)] was detected in about 10-20% of patients, possibly suggestive of virus-associated myocarditis. Systemic manifestations are frequent in Covid-19 infection, as in other viral diseases [13,19]. Complications include acute respiratory distress syndrome (20-30%) and secondary infection (10-20%). The overall mortality rate is 5-15% of hospitalized patients, with a preponderance of older males (aged > 60 years) with comorbidities (obesity, diabetes, hypertension, cardiovascular diseases, or COPD) [13]. Furthermore, in coronavirus infection the coagulopathy is associated with high mortality, and high D-dimers are a particularly important marker of this coagulopathy [20]. Disease progression is associated with a decrease in lymphocytes; a higher total lymphocyte count predicts a better outcome, and in the early stage of novel-coronavirus (2019-nCoV) infection the immune response may be a vital factor directing disease progression [21]. A study shows that distinct host inflammatory cytokine profiles are present in patients with SARS-CoV-2 infection, revealing an association between Covid-19 pathogenesis and excessive release of cytokines (in bronchoalveolar lavage fluid and peripheral blood mononuclear cells) such as CCL2/MCP-1, CXCL10/IP-10, CCL3/MIP-1A, and CCL4/MIP-1B [22]. According to different data, severe patients have mild or severe cytokine storms, which are also an important cause of death. Therefore, to save patients in critical condition, treatment of the cytokine storm is important. Interleukin-6 (IL-6) has a key role in cytokine release syndrome (CRS); a novel strategy to treat severe patients could be to block the IL-6 signal transduction pathway [23].
THERAPIES
For the moment, no antiviral medications are approved for Covid-19, and studies testing existing medications are ongoing. Oxygen therapy, intravenous fluids, and breathing support are necessary depending on the severity [24]. The use of steroids may worsen outcomes and is controversial [25]. However, steroids are used in the ITU setting in patients with Acute Respiratory Distress Syndrome (ARDS).
Virally Targeted Agents
In a large spectrum of RNA viruses, viral RNA synthesis is blocked by nucleoside analogues. Favipiravir (T-705) is a guanine analogue approved for the treatment of influenza, able to inhibit the viral RNA-dependent RNA polymerase of several viruses (e.g. influenza, Ebola), and its activity against SARS-CoV-2 was reported in a recent study (see randomized trials evaluating the effectiveness of favipiravir plus interferon-α or baloxavir marboxil) [26]. Remdesivir (GS-5734) is a nucleotide analogue that inhibits viral RNA-dependent RNA polymerases. It has shown activity against several RNA viruses, including SARS and MERS, in cell cultures and animal models, and it has been investigated in a clinical trial for Ebola. Intravenous remdesivir (200 mg on day 1 and 100 mg once daily for 9 days) has been evaluated by two phase III trials started in early February 2020 in patients with SARS-CoV-2 (NCT04252664 and NCT04257656) [26]. Protease inhibitors such as disulfiram, lopinavir and ritonavir have shown some activity against SARS [26] and MERS, and clinical trials designed to evaluate these compounds in Covid-19 patients are currently underway (for example, ChiCTR2000029539). Recently, one of these trials did not show any benefit beyond standard care from lopinavir-ritonavir treatment among hospitalized adult patients with severe Covid-19 [27].
Host-targeted Agents
Pegylated interferon alfa-2a and -2b, which can boost the innate immune responses against HBV and HCV, may play a role also against SARS-CoV-2 (see clinical trial ChiCTR2000029387). Many other compounds are under investigation for their potential activity against SARS-CoV-2. Chloroquine, which has been used classically to treat malaria and autoimmune disorders (i.e. rheumatoid arthritis and systemic lupus erythematosus), has shown in vitro antiviral activity also against SARS-CoV-2 and is now under evaluation in an open-label trial (ChiCTR2000029609) [26]. Tocilizumab is a humanized monoclonal antibody against the IL-6 receptor (IL-6R); it prevents the binding of IL-6, which can trigger the cytokine storm in patients with severe Covid-19 [23]. Artificial intelligence has suggested AP2-associated protein kinase 1 (AAK1)-disrupting drugs as potential inhibitors of viral entry into the target cells. For these reasons, baricitinib, approved for rheumatoid arthritis [4], and ruxolitinib, an approved anti-inflammatory JAK1/2/TYK2 inhibitor, are under clinical evaluation for Covid-19 (in combination with mesenchymal stem cell infusion) [28].
Moreover, attempts to impair the binding of SARS-CoV-2 with its receptor on targeted cells, ACE2, and to block other spike proteins are under investigation. However, there is no indication to stop antihypertensive drugs in these patients [29]. Different degrees of coagulopathy have been described among patients who died from severe Covid-19, and high D-dimer levels emerged as a poor prognostic factor. Therefore low-molecular-weight heparins (LMWH), such as enoxaparin, are proposed by several scientific societies in the treatment of Covid-19 patients [30]. A small French study reported that Covid-19 patients treated with hydroxychloroquine have a significantly lower viral load or even complete viral clearance in subsequent nasopharyngeal samples, especially if co-administered with azithromycin [31]. However, there are several limitations to these data, and discordant results have more recently been described for this association [32]. Ivermectin has also been suggested to inhibit Covid-19 replication [33].
Passive Antibody Therapy
Convalescent plasma from recovered individuals represents a historical method to transfer neutralizing antibodies against this virus into affected and ill patients. For this reason, it has also been suggested for SARS and, lastly, for Covid-19 [34]. Further forms of passive immunization (e.g. using manufactured monoclonal antibodies) are under investigation [34].
PREVENTION
To prevent the diffusion of the infection, some measures have been recommended: people should stay at home, avoid gatherings, frequently wash hands with water and soap (for at least 20 seconds), and avoid touching the face, nose, eyes and mouth with unclean hands [35]. Social activities have been reduced by closing schools, reducing travel and public events, and adopting distancing strategies on all occasions, including at least a six-foot distance (2 meters) between people [36]. The use of masks was initially recommended by the World Health Organization (WHO) only in people with respiratory symptoms or taking care of patients with suspected infection [35]. Wearing a mask is now recommended for the entire population worldwide [37].
Personal Protective Equipment
The primary objective is to minimise the risk of diffusion of the virus, so precautions must be taken by healthcare personnel caring for people with Covid-19, particularly those performing aerosol-generating procedures (e.g. intubation or manual ventilation). The CDC recommends placing patients in an airborne infection isolation room (AIIR), in addition to standard precautions [38].
Vaccine
Different agencies have undertaken research to develop a vaccine, which is not available at the moment. Three vaccination strategies are under investigation. The first strategy aims to build a whole-virus vaccine with inactivated or dead virus, producing an immune response to an induced infection with Covid-19. The second strategy is to produce a vaccine with subunits of the virus, sensitizing the immune system. SARS-CoV-2 and SARS-CoV use the ACE2 receptor to enter human cells [39]. The spike (S) protein mediates the virus's attachment to the ACE2 receptor and is therefore the focus of this research. The third strategy is the use of a novel technique that creates nucleic acid vaccines (DNA or RNA) [40]. The first clinical trial, which started in March 2020 in Seattle involving four volunteers, uses a vaccine containing a harmless copy of part of the virus's genetic code [41].
CONCLUSION
The novel coronavirus 2019, which started as an outbreak in China in December 2019, has rapidly spread all over the world, such that on March 11, 2020, the WHO declared the disease a pandemic. Given their fragile health systems, many countries may have serious difficulty affording the primary healthcare requirements of the current Covid-19 epidemic. The emergency the world faces today demands that we develop urgent and effective measures to protect people at high risk of transmission. WHO has accelerated research in diagnostics, vaccines and therapeutics for this novel coronavirus [42]. | 2020-04-30T09:09:04.226Z | 2020-04-01T00:00:00.000 | {
"year": 2020,
"sha1": "69a0ca5942b6a2c5163f6a2d7aa200e43511a7ed",
"oa_license": "CCBYNC",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7521034",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "e19322a4d71be6467f026c5c3e9aacddeee3bcf8",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
195066364 | pes2o/s2orc | v3-fos-license | In-Plane Behavior of Auxetic Non-Woven Fabric Based on Rotating Square Unit Geometry under Tensile Load
This paper reports the auxetic behavior of modified conventional non-woven fabric. The auxetic behavior of the fabric was achieved by forming rotating square unit geometry with a highly ordered pattern of slits by laser cutting. Two commercial needle-punched non-woven fabric, used as lining and reinforcement fabric for the footwear industry, were investigated. The influence of two rotating square unit sizes was analyzed for each fabric. The original and modified fabric samples were subjected to quasi-static tensile load using the Tinius Olsen testing machine to observe the in-plane mechanical properties and deformation behavior of the tested samples. The tests were recorded with a full high-definition (HD) digital camera, and a video recognition technique was applied to determine the Poisson's ratio evolution during testing. The results show that the modified samples exhibit a much lower breaking force due to the induced slits, which in turn limits the application of such modified fabric to low tensile loads. The samples with smaller rotating cell sizes exhibit the highest negative Poisson's ratio during tensile loading through the entire longitudinal strain range until rupture. Non-woven fabric with equal distribution and orientation of fibers in both directions offer a better auxetic response with a smaller out-of-plane rotation of rotating unit cells. The out-of-plane rotation of unit cells in non-homogenous samples is higher in the machine direction.
Introduction
Textiles are natural or synthetic polymer materials that are manufactured in the form of fibers, yarns or fabric and are used for clothing, interior design and many different technical applications. Conventional textile materials have a positive Poisson's ratio (ranging from 0.0 to 0.5). Auxetic materials exhibit a negative Poisson's ratio, which means that, under tension, they elongate both in the direction of loading and in the transversal direction, and vice versa under compression loading (Figure 1). In the last few decades, research on auxetic textile materials has been focused on developing some enhanced properties such as the ability to form dome-shaped structures (synclastic curvature) when subjected to a bending load, indentation resistance, good vibration damping and shock absorption, the ability to manage porosity and air permeability (under pressure), enhanced acoustic properties, lower stiffness, low density, higher formability, better compatibility with the body, size fitting, etc. [4][5][6][7]. Such materials have their potential as technical textiles in the medicine, automobile, marine and aerospace, architecture, civil engineering, and footwear industries, etc., for filtration where the pore opening size can be controlled by tension, vibration damping, shock absorbency, smart bandages, dental floss with in-built drug release, compression hosiery, seat cushion material, fastening devices, reinforcements in advanced composites, padding material for better forefoot pressure relief in high-heeled shoes, as well as apparel textiles (maternity dresses, bra cups, leggings, children's wear during the period of growth, support garments, etc.) [4][5][6][7][8][9][10][11][12][13][14].
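For reference, the Poisson's ratio discussed throughout this paper is conventionally defined as ν = −ε_trans/ε_long, i.e., the negative ratio of the transversal to the longitudinal strain, so that a material which widens when stretched (ε_trans > 0 at ε_long > 0) has a negative ν.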
The auxetic properties of textiles can be reached at fiber, yarn and fabric levels by changing their structure. Comprehensive reviews of auxetic fibers, yarns and fabric are given in the literature [12,[15][16][17][18]. There are two possible ways to induce auxetic properties in two-dimensional (2D) fabric: • by using auxetic fibers/yarns in the conventional process of weaving, knitting, braiding and non-woven production, or • by inducing special (auxetic) structures (geometry) in the conventional process of weaving, knitting, and non-woven production using conventional fibers/yarns.
The latter way to induce auxetic fabric offers low cost and continuous usage of conventional manufacturing equipment. Some auxetic 2D geometries have already been developed using traditional textile technologies: • re-entrant geometry (hexagonal geometry in the case of knitted and woven fabric [5,6,18], double arrowhead [19] and rhombus-shaped geometry [16] in the case of warp knitted fabric); • rotating square unit geometry for knitted fabric [5]; • foldable geometry for knitted and woven fabric [4,5,12,20].
Auxetic re-entrant geometry has also been developed by combining three-dimensional (3D) printing with traditional weft knitting technology to form a multi-material system with enhanced mechanical (auxetic behavior) and porous properties [21].
The majority of research is aimed at the development and prediction of the in-plane auxetic behavior of knitted and woven auxetic fabric, while there is a substantial lack in development of the auxetic non-woven fabric, as well as a lack of predicting the out-of-plane behavior, which is completely ignored, despite the fact that auxetic materials do exist in 3D form [15]. The auxetic composite's out-of-plane behavior was analyzed in [22], where it was shown that the out-of-plane auxeticity of the composite is dependent on the in-plane properties of the honeycomb. Verma et al. [23] reported that a heat compression protocol used on needle-punched commercial non-woven fabric induces an out-of-plane auxetic response. More precisely, the tested heat-compressed needle-punched non-woven fabric have shown an increase in thickness when stretched, especially at strains lower than 30%. The observed Poisson's ratio at 5% strain was −7.2 and −6.6 for the two tested samples, respectively. Bhullar et al. [24] developed non-woven fabric, where auxetic geometry was tailored using laser micromachining on a polycaprolactone microfiber and a polycaprolactone sheet.
The auxetic behavior can be also achieved by the material's internal structure geometry, which changes as a mechanism under applied loads. Grima et al. [25][26][27] proposed rotating rigid unit cells in the form of squares, triangles and rectangles, connected together at selected vertices by hinges (Figure 2). This paper reports on a study of using needle-punched technology and laser cutting (in order to form the geometry of rotating squares) for fabricating auxetic non-woven fabric. The comparison analysis between non-auxetic and auxetic non-woven fabric behavior under quasi-static tensile load and the determination of Poisson's ratio are demonstrated and discussed.
Materials and Methods
The simplest way to induce in-plane auxetic behavior in conventional needle-punched non-woven fabric is to form rotating unit cells with a highly ordered pattern of slits by using laser cutting. The needle-punched non-woven fabric investigated in this study was a commercial Silon fabric (obtained from Konus-Konex, Slov. Konjice, Slovenia), which is a synthetic leather used as heel grip, insole and lining material (in the footwear industry), as well as lining material in the manufacturing of belts. The idea of using auxetic material for the inner parts of shoes lies in the possibility of designing shoes which will be able to enlarge their size in the case of swollen feet. In this case, it is not only the outer fabric that should have auxetic properties, but also the inner parts of the shoe. Two needle-punched non-woven fabric, referred to as SL-1 and SL-2 from here on, were investigated. The basic structural characteristics of the tested fabric are given in Table 1. As mentioned above, Silon fabric is a non-woven fabric made by a conventional procedure of web forming on a card line, which is then reinforced on the basis of needle-punching technology and finally finished using splitting and buffing techniques. The geometry of the applied rotating square unit cells of the same pattern is given in Table 2. The influence of the two rotating square unit cell sizes, i.e., 1.25 cm × 1.25 cm and 0.625 cm × 0.625 cm, connected at selected vertices by 2-mm long hinges, was investigated for each tested fabric.
It should be mentioned that the two different geometries of the rotating unit cells involved in this study (bigger and smaller) are not scaled versions. The thickness of the slits and the size of the hinges remained the same for the small unit cells to avoid premature failure in the case of too weak hinges.
Fifteen test samples of each needle-punched non-woven fabric were cut in the machine direction and fifteen in the cross-machine direction, with overall dimensions of 50 ± 0.5 mm × 250 ± 0.5 mm (width × length). The machine direction means the direction of fabric forming, i.e., the length of the fabric roll, while the cross-machine direction means the width of the fabric roll. Five plus five samples of each needle-punched non-woven fabric were then modified by inducing the pattern of slits by laser cutting to form rotating unit cells of two sizes. The samples were then subjected to conditioning for 48 hours before testing. Quasi-static tensile measurements were performed using the Tinius Olsen testing machine H10KT (Tinius Olsen Ltd., Redhill, United Kingdom) with flat-faced clamps and a 1000-N load cell, following the ISO EN 9073 standard. The conditions at tensile testing were as follows: gauge length of 150 mm, constant rate of extension of 100 mm/min, and standard atmosphere. The maximum breaking force and elongation at break were recorded for all samples, and then the average values were calculated, expressed in N/5 cm and %, respectively.
To determine whether there were any differences among the samples in breaking force and elongation at break regarding the type of material, the direction in which the material was taken from the fabric roll and the type of geometry, and to test the null hypotheses (there are no differences between the groups regarding the above-mentioned factors), an analysis of variance (ANOVA) was performed using the IBM SPSS 22 statistical software package (IBM Corporation, New York, NY, United States). The selected significance level for this procedure was 0.05 (or a 95% confidence level).
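As an illustration only, the significance test described above can be reproduced with a one-way ANOVA in Python; the breaking-force values below are placeholders rather than the measured data, and the actual study considered three factors (material, direction, geometry) rather than the single factor shown here.

from scipy import stats

# Placeholder breaking forces (N/5 cm) for two geometry groups; the real
# study grouped samples by material, direction and rotating unit size.
force_unit_125 = [28.1, 30.4, 27.6, 29.9, 28.8]
force_unit_0625 = [14.2, 13.5, 15.1, 13.9, 14.6]

f_stat, p_value = stats.f_oneway(force_unit_125, force_unit_0625)
if p_value < 0.05:  # 95% confidence level, as selected in the study
    print("Reject the null hypothesis: group means differ significantly.")
else:
    print("No statistically significant difference between the groups.")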
The Poisson's ratio of all tested samples was determined as an engineering value by using a video image recognition methodology. A video analysis software package based on the Accord.NET framework for scientific computing was developed for that purpose. The width of the samples was measured over time, which was determined from the video frame rate. The sample location was determined for each video image (frame) by using a template-matching object tracker following the movable clamp of the testing machine. Image filtering based on a Canny edge detector [28] was used to segment the samples from the background. The width of the sample was then measured in pixels from the segmented image and converted to the transversal strain. The time-dependent evolution of the Poisson's ratio was finally computed as the ratio of the measured transversal and longitudinal strain.
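A minimal sketch of such a measurement pipeline is given below. The authors' software was built on the Accord.NET framework, so the OpenCV calls, the Canny thresholds and the use of the machine's constant extension rate (standing in for the template-matching clamp tracker) are all assumptions made here for illustration.

import cv2
import numpy as np

def specimen_width_px(frame_gray):
    # Segment the specimen from the background with a Canny edge detector,
    # then take the horizontal extent of the edge pixels on the mid-height
    # scan line as the specimen width in pixels.
    edges = cv2.Canny(frame_gray, 50, 150)  # thresholds are assumed
    cols = np.flatnonzero(edges[edges.shape[0] // 2])
    return float(cols[-1] - cols[0]) if cols.size >= 2 else None

def poisson_ratio_series(video_path, gauge_mm=150.0, rate_mm_min=100.0):
    # Longitudinal strain from the constant rate of extension (100 mm/min
    # in the paper) and the frame timestamps; transversal strain from the
    # measured width change relative to the first usable frame.
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w0, nus, frame_idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        width = specimen_width_px(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        if width is not None:
            if w0 is None:
                w0 = width
            eps_long = (rate_mm_min / 60.0) * (frame_idx / fps) / gauge_mm
            if eps_long > 0:
                nus.append(-((width - w0) / w0) / eps_long)
        frame_idx += 1
    cap.release()
    return nus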
Fabric Structure Analysis
Figure 3 shows the tensile strength relationships of both original (non-auxetic) non-woven fabric in the machine (MD) and cross-machine directions (CMD), while the maximum values are listed in Table 3. The SL-1 fabric is obviously more homogenous, since it exhibits comparable properties in the machine and cross-machine directions, while the properties of sample SL-2 are quite different. The SL-1 fabric has approximately the same breaking force in the machine and cross-machine directions, while the elongation at break is higher in the cross-machine direction. This implies that fiber orientation is a little lower in this direction. The fibers in the SL-2 fabric are obviously more heterogeneously distributed, since the tensile strength in the machine direction is over 2 times higher than in the cross-machine direction, while the elongation at break is almost 3.5 times larger in the cross-machine direction in comparison to the machine direction.
Auxetic Behaviour Analysis
The comparison between two different auxetic geometries regarding the behavior of auxetic non-woven fabric (ANF) under tensile load is presented in the form of tensile strength relationships for the machine and cross-machine directions in Figure 4. The results show that auxetic samples with a larger unit cell size (1.25 cm) break at higher force and elongation in comparison to the auxetic samples with a smaller unit cell size (0.625 cm). Again, sample SL-1 exhibits comparable properties in the machine and cross-machine directions, while sample SL-2 shows much higher values of breaking strength and elongation in the machine direction in comparison to the cross-machine direction.
Table 3 shows the average values with standard deviations of breaking strength and elongation at break for the non-auxetic (NF) and auxetic non-woven (ANF) tested fabric in the machine and cross-machine directions.
Breaking Strength and Elongation Analysis
The results of the ANOVA analysis (Tables 4 and 5) show that there are statistically significant differences between the groups of samples regarding the type of material (SL-1 and SL-2), the direction of material taken from the roll (MD and CMD) and the type of geometry (samples with a rotating unit size of 1.25 cm and samples with a rotating unit size of 0.625 cm); the value of the significance level is lower than 0.001, so all these factors have a statistically significant effect on the breaking force and elongation at break of the tested samples. The results show a large reduction in breaking force due to the laser cutting used to introduce the auxetic geometry into the fabric. The reduction in tensile strength for samples with a rotating unit size of 1.25 cm is approx. 94 and 92% in comparison to the original tensile strength in the machine direction for samples SL-1 and SL-2, respectively. The lower breaking force can be attributed to the massive reduction in the specimen cross-section area due to the induced cuts. The reduction in tensile strength is slightly larger for samples with a rotating unit size of 0.625 cm (96 and 98% for the SL-1 and SL-2 fabric, respectively). The average reduction in the original tensile strength in the cross-machine direction is approx. 95% for both samples. The analysis of elongation at break due to the introduction of auxetic geometry shows different behavior depending on the testing direction and the homogeneity of fiber orientation in both directions. On inducing auxetic geometry, the elongation at break increases in the machine direction, while it decreases in the cross-machine direction. The SL-1 and SL-2 fabric show the same qualitative behavior in the machine direction: on inducing auxetic geometry, the breaking elongation is increased by approx. 70 or 11% for samples with a rotating unit size of 1.25 and 0.625 cm, respectively. Here, the SL-2 fabric, which is less homogenous, shows a much higher increase in elongation in comparison with the SL-1 fabric. In the cross-machine direction, the SL-1 and SL-2 fabric again show similar (albeit opposite) behavior: on inducing auxetic geometry, the elongation at break is reduced by approx. 41 or 52% for samples with a rotating unit size of 1.25 and 0.625 cm, respectively. Here, the SL-2 fabric, which is less homogenous, again shows a much higher decrease in elongation at break in comparison with the SL-1 fabric.
Poisson's Ratio Evaluation
A clear auxetic behavior of the samples can be observed from Figures 5-8, which represent the relationship between the Poisson's ratio and longitudinal strain for the individual samples and the corresponding fabric deformation at different longitudinal strains during tensile testing in the machine direction. All relationships show a positive Poisson's ratio at low longitudinal strains, which is a consequence of the initial alignment of the samples, clamped into the upper and lower clamps without preloading. By increasing the longitudinal strain, the square unit cells start to rotate in-plane around the hinges, thus inducing the overall in-plane auxetic behavior. Some out-of-plane unit cell rotation was observed at high longitudinal strains, causing a decrease in the Poisson's ratio.
A comparison of auxetic behavior between the two different geometries of the SL-1 fabric (SL-1-1.25 and SL-1-0.625; see the experimental average relationships in Figures 5 and 6) shows an obvious difference: the geometry with a rotating cell unit size of 0.625 cm exhibits a larger negative Poisson's ratio (NPR) across the entire range of the longitudinal strain. This means that SL-1-0.625 samples expand more in the lateral direction. It was observed during testing that the rotation of SL-1-0.625 unit cells occurred only in-plane, contributing to a larger lateral extension, while SL-1-1.25 unit cells also started to rotate out-of-plane at strains larger than 15%. This phenomenon reduces lateral extension and eventually even leads to a positive Poisson's ratio (see the range of the longitudinal strains between 20 and 40% and the fabric deformation at 28% of the longitudinal strain in Figure 5). The highest average NPR is achieved at approx. 10% (−0.6) and 14% (−0.8) of the longitudinal strain for the geometry with a 1.25 and 0.625 cm rotating unit size, respectively, regardless of the testing direction. It is also worth mentioning that samples with a rotating unit size of 0.625 cm exhibit auxetic behavior (negative Poisson's ratio) until the rupture of the samples. From the images in Figures 5 and 6, it can be observed that near the breaking point, the slits are deformed in such a way that they form an empty unit cell of the same size as the rotating unit cell. A large increase in stiffness can be noted in the region of 20-30% of the longitudinal strain in Figure 4, i.e., a higher force is needed for deformation. From Figures 7b and 8b, it can be observed that the squares rotate up to 20-30% of the longitudinal strain. Here, it seems that the deformation is mainly caused by the structural properties of the samples, i.e., the auxetic patterns of cuts, whereas above 30% of the longitudinal strain (where the rotating units reach a rotation angle of 45°), the rotation mechanism cannot support further deformation. Therefore, the deformation is mainly influenced by the mechanical properties of the connections between the rotating units, i.e., the mechanical properties of the material itself. Here, a much higher force is needed for deformation, until the connections between the rotating units start to break and a maximum breaking force is detected (see Figures 5 and 6). The comparison of auxetic behavior between the two different geometries of SL-2 (SL-2-1.25 and SL-2-0.625; see the experimental average relationships in Figures 7 and 8) also shows an obvious difference: the geometry with a rotating cell size of 0.625 cm exhibits a much higher NPR across the complete range of the longitudinal strain. The highest average NPR is achieved at approx. 7% (−0.5) and 14% (−0.6) of the longitudinal strain for the geometry with a rotating unit size of 1.25 and 0.625 cm, respectively, regardless of the testing direction.
Both geometries also show differences in the NPR in the two directions due to the different fabric homogeneity in the machine and cross-machine direction. In the case of SL-2-0.625, the difference becomes obvious above 30% of the longitudinal strain, where the NPR in the cross-machine direction is higher in comparison with the machine direction, while in the case of SL-2-1.25, the difference is already obvious above 15% of the longitudinal strain. Here, the Poisson's ratio is even positive in the machine direction, while the average Poisson's ratio for the cross-machine direction is still negative or near zero. During the tensile testing of SL-2-1.25, it was observed that the auxetic units also started to rotate in the out-of-plane direction (normal to the sample surface) at approximately 38% of the longitudinal strain, thus reducing the lateral extension of the sample in the machine direction (see Figure 7). The out-of-plane rotation of cell units was not observed in SL-2-0.625 (see Figure 8). From the results, it is obvious that fiber orientation in the fabric and the geometry of the induced auxetic structure both have an important influence on the auxetic behavior of non-woven fabric.
Conclusions
The main conclusions from the auxetic behavior analysis of non-woven fabric with two different rotating unit geometries are the following: • laser cutting, which was used to induce auxetic geometry into non-woven fabric, causes a significant reduction in breaking force; therefore, their application is restricted to low tensile loads; • tested non-woven samples with induced rotating unit cell geometry with a rotating unit size of 0.625 cm exhibit a higher negative Poisson's ratio (up to −1.0) during tensile loading through the entire longitudinal strain range until rupture; • non-woven fabric with equal distribution and orientation of fibers offer a better auxetic response with a smaller out-of-plane rotation of unit cells; • the out-of-plane rotation of unit cells in non-homogenous fabric is higher in the machine direction.
This study has shown that auxetic behavior could be induced in conventional textile materials (non-woven fabric) by forming rotating unit cells with a highly ordered pattern of slits by using laser cutting. In this way, the textile materials can be transformed into more value-added metamaterials with a negative Poisson's ratio. However, there is still a need to further explore the possibilities regarding different loading conditions, geometries in the form of oriented patterns of slits, as well as quasi-random patterns, which do not contain elements of symmetry.
Author Contributions: P.D.D. participated in the writing of the original draft, and the conceptualization of the research and execution of tensile testing with sample preparation. N.N. collaborated in tensile testing recording | 2019-06-20T13:11:20.337Z | 2019-06-01T00:00:00.000 | {
"year": 2019,
"sha1": "d95b2fa4696f9f669662f396d28ad71833c1fe55",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/11/6/1040/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d95b2fa4696f9f669662f396d28ad71833c1fe55",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
118335810 | pes2o/s2orc | v3-fos-license | The mass of the tau neutrinos
We have shown previously that the mass of the muon neutrino can be determined from the energy released in the decay of the π± mesons, and that the mass of the electron neutrino can be determined from the energy released in the decay of the neutron. We will now show how the mass of the tau neutrino can be determined from the decay of the D_s± mesons.
Introduction
As we have shown with the standing wave model of the stable mesons and baryons [1], it follows from the decay of the π± mesons that the mass of the muon neutrinos ν_µ and ν̄_µ must be m(ν_µ) = m(ν̄_µ) = 47.5 meV/c². We have also found in [1] that it follows from the decay of the neutron that the mass of the antielectron neutrino ν̄_e should be m(ν̄_e) = 0.55 meV/c². We have to correct this value, whose calculation was based on the assumption that the neutron, whose mass is ≈ 2K±, is the superposition of a K⁺ and a K⁻ meson. However, our investigation of the spin of the mesons and baryons [2] has shown that such a superposition does not produce spin s = 1/2, as is required. On the other hand, the superposition of two K⁰ mesons has spin s = 1/2; actually the neutron must be the superposition of a K⁰ and a K̄⁰ meson, because of conservation of strangeness in the strong interaction that created the neutron. According to the standing wave model, the superposition of a K⁰ and a K̄⁰ consists of neutrinos only and their oscillation energy. The neutrinos are arranged in a cubic lattice, each cell containing ν_µ, ν̄_µ, ν_e, ν̄_e neutrinos. The total number of the neutrinos in the neutron is 4N, with N = 2.854·10⁹, twice as many neutrinos as if the neutron were a superposition of K⁺ and K⁻. That means that, when the neutron decays via n → p + e⁻ + ν̄_e, N antielectron neutrinos ν̄_e share the difference ∆ of the energy in the rest mass of the neutron and the energy in the rest mass of the proton, ∆ = 1.29332 MeV. After the rest mass of the electron, also emitted in the decay of the neutron, is subtracted from ∆, and ∆ − m(e)c² = 0.782321 MeV is divided by N, not by N/2 as in [1], we find that the mass of the antielectron neutrino must be m(ν̄_e) = 0.275 meV/c². (1) From the decay of the antineutron, n̄ → p̄ + e⁺ + ν_e, it follows in principle that the mass of the electron neutrino should be m(ν_e) = 0.275 meV/c², or that m(ν_e) = m(ν̄_e).
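Written out explicitly, the division behind Eq. (1) reads m(ν̄_e)c² = (∆ − m(e)c²)/N = 0.782321 MeV / 2.854·10⁹ ≈ 2.74·10⁻⁴ eV, which reproduces the value in Eq. (1) to within rounding of the inputs.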
The mass of ν_τ
We will now determine the mass of the τ neutrino m(ν_τ) from the decay of the D_s± mesons in a way analogous to how we have determined the mass of ν_µ. The D_s± mesons decay via e.g. D_s⁺ → τ⁺ + ν_τ, (2)
where τ⁺ is the positive τ meson or lepton. The decay of D_s⁻ has the conjugate particles on the right-hand side of (2). At 6.4%, this mode of decay is, by a small margin, the most frequent of the leptonic modes of decay of D_s±. The subsequent decay of τ± is given by e.g. τ⁺ → π⁺ + ν̄_τ, π⁺ → µ⁺ + ν_µ, (3)
with the antitau neutrino ν̄_τ. This mode of decay of τ± is likewise only one of the very many modes of decay of τ±.
If it is true, as we have postulated in [1], that the particles consist of the particles into which they decay, then it follows from Eqs. (2,3) that the D_s± mesons consist of ν_τ and ν̄_τ neutrinos, plus the ν_µ, ν̄_µ, ν_e, ν̄_e neutrinos in the π± mesons in Eq. (3). The cells of the lattice of the D_s± mesons contain 6 types of neutrinos, not 4 types as in the π± mesons. The cubic lattice used in the standing wave model can, however, be retained if we consider, instead of a simple cubic lattice such as the NaCl lattice, a body-centered cubic lattice in which a particle different from the particles in the corners of the simple cubic cell sits at the center of each cell of the lattice. This seems to accommodate only 5 neutrino types, whereas we have found that there must be 6 neutrino types in the cells of the D_s± mesons. However, because of conservation of lepton numbers during the creation of the D_s± mesons, it is necessary that a number of antitau neutrinos equal to the number of tau neutrinos is present in the lattice. The antitau neutrinos can easily be accommodated in a lattice of body-centered cells in which the center particles are alternately tau neutrinos and antitau neutrinos. As explained in [1], there must be N = 2.854·10⁹ neutrinos in the π± lattice, and consequently there are N/4 simple cubic cells in π±. The number of cells in the π± lattice and the lattice of the neutron seem to be the same, because the radii of the π± mesons and the proton are, within the accuracy of the measurements, the same, and we assume that the sizes of the proton and neutron are the same. It appears that the superposition of a K⁰ and a K̄⁰ meson creating a neutron does not change the size of the lattice of these particles, or the number of their cubic cells. We will therefore assume that the superposition of a proton, an antineutron and a π⁰ meson creating the D_s± mesons, with m(D_s±) = 0.978·(m(p) + m(n) + m(π⁰)), does not change the number of the cells in the lattice either. In other words, we assume that the size of the proton or neutron is the same as the size of the D_s± mesons. If there are N/4 body-centered cells in the D_s± meson, then there must be N/8 tau neutrinos and N/8 antitau neutrinos in the lattice of D_s±. The energy ∆ released in the decay D_s⁺ → τ⁺ + ν_τ (Eq. (2)) is given by ∆ = [m(D_s±) − m(τ±)]c² ≈ 191.5 MeV. (4) If this energy originates from the rest mass of all ν_τ neutrinos, respectively from all ν̄_τ neutrinos, in the decay of D_s±, then it follows, with the number of ν_τ or ν̄_τ neutrinos being N/8, that m(ν_τ) = m(ν̄_τ) = 536.8 meV/c² ≈ 0.54 eV/c². (5) Since in the decay of D_s⁺ only a single tau neutrino is emitted, one must wonder why the energy ∆ released in the decay should be equal to the sum of the rest masses of all tau neutrinos in D_s⁺. The first indication that ∆ is not the energy carried by a single tau neutrino in the D_s⁺ meson comes from the magnitude of ∆, which amounts to practically 10% of the energy in the rest mass of D_s⁺. This is incompatible with the basic tenet that there must be, according to Fourier analysis, a continuum of frequencies in a body created in a high-energy collision of 10⁻²³ sec duration, which does not make it possible that a single neutrino out of 10⁹ neutrinos has a rest mass plus an oscillation energy amounting to 10% of the rest mass of D_s±.
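As a numerical check of Eqs. (4) and (5): taking the measured masses m(D_s±) ≈ 1968.3 MeV/c² and m(τ±) ≈ 1776.8 MeV/c² (values assumed here, since they are not restated in the text) gives ∆ ≈ 191.5 MeV, and dividing by N/8 = 2.854·10⁹/8 ≈ 3.57·10⁸ gives m(ν_τ)c² ≈ 0.537 eV, in agreement with Eq. (5).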
That the source of ∆ is the rest masses of all ν_τ, respectively ν̄_τ, neutrinos can be inferred from the disappearance of one of the two neutrino types in the secondary decays following the primary decay of D_s±. To be specific, the ν_τ in the decay of D_s⁺ (Eq. (2)) appears neither in the decay of τ⁺ (Eq. (3)), nor in the subsequent decay of π⁺, which is also in (3), nor in the decay of the µ⁺ meson which follows. With the primary decay of D_s⁺ (Eq. (2)), the tau neutrinos seem to have been eliminated, which will certainly be the case when the energy of the rest masses of all ν_τ has been consumed by ∆. This process is analogous to the disappearance of one type of muon neutrino after the decay of the π± mesons, as discussed in [1], where we have shown that the oscillation energy of all neutrinos in π± is conserved in the decay, and where therefore the energy in the emitted neutrino can come only from the sum of the rest masses of all neutrinos of the type emitted in the decay.
Finally, we want to show that the body-centered lattice of the D_s± meson also leads to the correct spin s(D_s±) = 0. We have shown in [2] and [3], e.g. in the context of the spin of the π± mesons, s(π±) = 0, that the spin of the electric charge carried by the π± mesons is canceled by the sum of the spin vectors of all neutrinos in the lattice. Of all the spin vectors of the O(10⁹) neutrinos in the cubic lattice of the π± mesons, only the spin vector of the central neutrino remains, which then cancels the spin vector of the electric charge. The same applies to the body-centered lattice of the D_s± mesons. Of the spin vectors of all neutrinos in the D_s± lattice, only the spin vector of the central neutrino, in this case a ν_τ or ν̄_τ neutrino, remains, which cancels the spin vector of the electric charge of D_s±. Consequently s(D_s±) = 0, as it must be.
Conclusions
Making use of the decay of the D_s± mesons, D_s± → τ± + ν_τ(ν̄_τ), we can show that the mass of the tau neutrino must be m(ν_τ) ≈ 0.54 eV/c². We also find that m(ν_τ) = m(ν̄_τ). | 2019-04-14T03:13:18.818Z | 2003-09-03T00:00:00.000 | {
"year": 2003,
"sha1": "cc5b774075d0d57afecc5da2b7f53cab0c00ea26",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "cc5b774075d0d57afecc5da2b7f53cab0c00ea26",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
209101553 | pes2o/s2orc | v3-fos-license | Infinite Energy Creates Better Life
As the world moves towards sustainability, renewable energy development is driven in most countries as an alternative to the use of non-renewable and polluting fossil fuels. One such alternative is solar energy, which is the versatile technology used to harness the sun's energy and make it usable. It refers to capturing the energy from the sun, collecting it elsewhere (i.e. on the Earth) and subsequently converting it into electricity. Due to the non-constant nature of solar energy, collector and storage unit components are required to have a functional solar energy generator. The collector simply collects the radiation that falls on it and converts a fraction of it to other forms of energy (electricity or heat). The storage unit is required to store excess energy during the periods of maximum productivity and release it when productivity drops. The proposed study is aimed at exploring the feasibility of solar energy as an infinite energy to be stored and used as needed. The merit of this study lies in the fact that solar energy creates a better life (through cost reduction, resource availability, eco-friendliness and sustainability) should it be harvested, stored and utilized considerably in a holistic way. Moreover, it may enhance the independence of electricity supply, particularly in remote areas where access to the power supply is limited. It is anticipated that the study will not only create a further interdisciplinary research platform for fundamental studies on solar energy harvesting, storage and utilization, but also research that leads to technology transfer to industry, as solar energy is lauded as a renewable and inexhaustible fuel source that is environmentally friendly due to its pollution-free, and often noise-free, nature.
Introduction
The low-carbon economic growth approach, as a model of green economic development, has become a main focus of many states' policies, particularly in anticipating the depletion of non-renewable natural resources. As non-renewable natural resources, in particular fossil-based oil, near their limit, alternative extraction methods such as fracking might cause environmental damage and worsen the negative impacts of climate change.
Peer-reviewed studies of climate change caused by global warming agree that it will negatively affect the world at large. Although the present state of knowledge regarding regional and
Literature Review
Many countries today strategically pursue development and the diversification of economic natural resources to overcome the current circumstances. One focus is solar energy. Solar energy is a renewable energy source that can generate electricity, provide hot water, heat and cool a house, and provide lighting for buildings. The sun has produced energy for billions of years. Sunshine is an infinite resource, and investment in the technology that harnesses its energy as a general-purpose technology for development and export could be a priority of a progressive government [4]. Solar energy is the solar radiation that reaches the earth, and it has several major advantages compared with other sources, as it is distributed, though unequally, to every location on the globe. The resource is abundant, to the extent that many countries already harvest it [5].
Solar energy offers a solution to the problem of scarce resources, as the sun still has 6.5 billion years of life according to NASA. Solar energy can potentially play a very important role in providing most of the heating, cooling and electricity needs of the world [6]. Indeed, in rather little time, solar technology in several countries has evolved to compete with conventional power generation sources.
Today solar sources provide around 10% of the energy used worldwide, but in developing countries their share is still of the order of 40%. This contribution could start growing again, thanks to progress in solar technology and the pressure of recurrent energy and environmental crises related to fossil fuels and nuclear power [2]. In just a few decades, this will be a major part of a sustainable energy system for the world. In a manner of speaking, solar energy will never stop shining.
The current study entails the exploration of alternative energy solutions, specifically solar energy, to benefit most people in the world. Solar harvesting and storage can potentially contribute an alternative solution for providing energy to remote areas, anticipating the problems of non-renewable energy, pollution and high cost. In addition, solar is a sustainable energy and can lead to a green economy.
Methodology
This study is a collection and analysis of data and information about solar energy harvesting, storage and utilization, run by Universitas Prof. Dr. Moestopo (Beragama) in collaboration with MRS-INA, based on selected local and international literature. The study includes a causal analysis of innovation information contained in various journals and reference books.
Results and discussion
On April 4, 1931, The New York Times ran the headline "Use of solar energy is near a solution", quoting the German solar energy scientist Dr. Bruno Lange; this was a precursor to the situation 80 years later, when electricity from renewable energy is supplied to millions of people around the world. Today, the whole world is aware of the limitations of fossil fuels and their effects on the environment as a cause of global warming. For this reason, the world states its readiness to accelerate the transition to a low-carbon economy.
The Government of Indonesia itself, through the website of the Directorate General of New and Renewable Energy and Energy Conservation (EBTKE), launched the Low Carbon Economic Development Strategy on the occasion of the third annual meeting of the Low Carbon Emission Development Strategies Forum (LEDS Forum) in Yogyakarta, attended by more than 250 participants consisting of government officials, experts, and representatives from international institutions, NGOs and businesses, identifying policies and activities that can increase economic growth, create jobs and address other priorities in various countries in Asia through a low-carbon economic approach to leading green economic development.
Solar power is energy from the sun's radiation converted into heat or electricity. It is always renewable, requires no fuel, is clean and environmentally friendly, and is well suited to tropical regions such as Indonesia.
Solar energy creates clean, renewable power from the sun and benefits the environment. Generating electricity with solar power instead of fossil fuels can dramatically reduce greenhouse gas emissions, particularly carbon dioxide (CO2). Greenhouse gases, which are produced when fossil fuels are burned, lead to rising global temperatures and climate change. Climate change already contributes to serious environmental and public health issues in some parts of the world, including extreme weather events and rising sea levels. By going solar, we can reduce demand for fossil fuels, limit greenhouse gas emissions, and shrink our carbon footprint.
Moreover, solar energy produces very few air pollutants, reducing emissions of nitrous oxides, sulphur dioxide and particulate matter, all of which can cause health problems. In other words, solar power results in fewer cases of chronic bronchitis, respiratory and cardiovascular problems, and fewer lost workdays related to health issues.
Individual off-grid photovoltaics, i.e. solar applications as stand-alone power sources for uses such as CCTV and communication stations, can also be implemented in the public sector. For areas such as plains, forests and islands, which do not receive power owing to their remote location, solar power is certainly a blessing. Remote and rural areas can now take advantage of power to initiate different development projects. Consequently, educational and medical facilities can also be expanded in these areas through the introduction of solar power.
Street lighting and general lighting systems for roads, parks and traffic lights can be upgraded using more advanced technology that uses energy-efficient LED lights with independent electricity sources from solar power, as well as solar panel roads. The U.S. Department of Energy predicted in 2009 that by the year 2030, solar power would produce almost 0.5 percent of the grid electricity generated by renewable sources. In comparison, the department said that, in 2007, solar power provided only about 0.3 percent of the electricity from renewable sources. Overall, renewable sources, including solar power, were expected to account for about 15 percent of the electricity produced for the grid in 2030 [7].
Overall, the sun gives off far more energy than we will ever need. The only limitation is our ability to convert it to electricity in a cost-effective and environmentally friendly way for the benefit of human welfare. Solar power is also scalable. This means that it can be deployed at industrial scale or used to power a single household. When it is used on a small scale, extra electricity can be stored in a battery or fed back into the electricity grid.
Storage in batteries will reduce electricity costs, increase efficiency and support the green economy, and can also be considered a business growth opportunity.
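To make the scalability and storage point concrete, a back-of-the-envelope sizing of a photovoltaic array and battery bank for a single off-grid household is sketched below; every figure (daily load, sun hours, losses, autonomy) is an assumed illustrative value, not data from this study.

# Rough sizing of a household PV array and battery bank (all inputs assumed).
daily_load_kwh = 5.0        # assumed daily household consumption
peak_sun_hours = 4.5        # assumed site-equivalent full-sun hours per day
system_efficiency = 0.75    # assumed wiring, inverter and charging losses
autonomy_days = 2           # days of battery backup for overcast weather
depth_of_discharge = 0.8    # usable fraction of a lithium battery

pv_size_kw = daily_load_kwh / (peak_sun_hours * system_efficiency)
battery_kwh = daily_load_kwh * autonomy_days / depth_of_discharge

print(f"PV array: {pv_size_kw:.1f} kWp, battery bank: {battery_kwh:.1f} kWh")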
The advantage of batteries as storage lies in their ability to store large amounts of solar energy cheaply and efficiently. A study in the Journal of Sustainable Finance & Investment predicts that a combination of battery storage and renewable energy can make fossil fuels obsolete.
The breakthrough in battery technology came from Professor John Goodenough of the Cockrell School of Engineering at the University of Texas at Austin, who helped create lithium-ion batteries. Goodenough's lithium battery technology is said to have an energy density three times higher than that of batteries currently on the market. That means: higher energy density, extra long-lasting power, longer life, and reduced overall costs [8].
The lithium-ion battery has become the basis of the huge market for cellular phones and laptop computers, and this mobile communication market continues to grow at a rapid rate, supported by demand all over the world [9]. Even so, intensive efforts are still under way to further improve battery technology. The main target of this effort is not only the automobile industry, by achieving higher energy density, but also the energy storage market supplementing environmentally friendly power sources such as solar energy and wind turbines [10].
At present, batteries are produced to fulfil the increasing demand as more and more mobile electronic and electric devices, ranging from mobile phones to electric vehicles, enter our lives [11]. A national lithium-ion battery consortium has been running since 2016 to develop lithium-ion battery modules for solar street lamps [12,13].
Solar energy can simplify our daily life in several ways. Solar battery chargers, calculators and watches use natural sunlight to power themselves. Indoor lighting through LEDs (light-emitting diodes), which are highly energy efficient, can be connected to a battery charged by a solar generation system. Outdoor lighting and heating water for domestic, commercial or industrial use, water pumps, air pumps, water heaters, generators, refrigerators, air conditioners, fans, gas stoves, and many other appliances can all benefit from solar-generated power.
Consumers are not interested in the intricacies of battery technology; the proliferation of battery technologies stems from the widespread use of electricity-consuming devices. Older chemistries such as nickel-cadmium have given way to lithium because of the higher energy density it offers compared with nickel-based batteries. This higher energy density, and hence capacity, has enabled technology vendors to create handheld computers with telephony capability, the devices we now call smartphones. A bright future in science and technology relating to advanced batteries can only be expected through continuing basic and applied research on lithium materials and their uses [10].
Purchasers of batteries are not only consumers but also international agencies, governments and their agencies, and companies. This market will continue to expand as decision makers in these institutions become convinced of the reliability and cost-effectiveness of photovoltaics compared with the alternatives in an increasing number of applications [14].
The main applications include systems for lighting, refrigeration, radio and TV, water pumping and general village electrification. They also include all applications where the individual is both purchaser and user, e.g. in a remote home or farm, or in caravans, boats and the leisure industry.
In June 2001, the European Council of Gothenburg (Sweden) added an environmental dimension to the Lisbon Agenda, constituting the European plan for sustainable development. Under this new framework, public policies in Europe should adopt a long-term vision to deal with issues such as ratification of the Kyoto Protocol and promotion of renewable energies. In this regard, the European Union (EU) has set a binding target for renewable energy at 20% of the EU's total energy needs by 2020 [15].
Conclusions
The study found that most advanced countries have made solar energy part of their domestic policy towards a sustainable, green economy because of its low-pollution nature. The results also confirm that, with a proper system of solar harvesting and storage, quality of life can improve through energy that is available, easily accessible, cost-effective, abundant, sustainable and green.
Further, solar energy can be regarded as an abundant, near-infinite energy resource on the verge of a massive boom. Together with wind energy, it directly challenges the incumbent dominant forms of traded energy, fossil and nuclear. We are already seeing the beginnings of divestment from fossil-fuelled energy by influential investors as fears grow about stranded assets. Humanity needs alternative energy resources, and solar energy makes this possible. We need batteries for the storage and conversion of solar energy, a near-infinite resource, to make our lives better, reduce electricity costs, increase efficiency and support a clean environment.
The current study yields several findings: 1) solar energy usage improves quality of life; 2) solar energy has a strong positive impact on cost-effectiveness; 3) as a near-infinite energy source, stored solar power can serve remote areas; and 4) it is environmentally friendly because it is non-polluting. As a practical implication, many countries expect innovative, better solar-energy storage systems to cover energy needs widely; the right storage system can bring a better life. | 2019-11-14T17:06:54.453Z | 2019-11-12T00:00:00.000 | {
"year": 2019,
"sha1": "b81f81eac191b35933f93f72159814c9099e70e0",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/553/1/012064",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a52bf37587cb8aa82fc95e901fc893e36c5a2a55",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Environmental Science"
]
} |
38108152 | pes2o/s2orc | v3-fos-license | Suppurative mediastinitis secondary to Burkholderia cepacia in a patient with cystic fibrosis.
Burkholderia cepacia is an important opportunistic pathogen among patients with cystic fibrosis (CF); it is associated with deterioration of lung function, poor outcome following lung transplantation and increased mortality. Fever, an elevated white blood cell count, weight loss and an often fatal deterioration in pulmonary function characterize a particular clinical course, termed "Cepacia syndrome". The present case report describes a 40-year-old man with CF who developed Cepacia syndrome complicated by suppurative mediastinitis, from which B cepacia was isolated. Despite optimal medical and surgical therapy, this patient succumbed to his illness. Those caring for patients with CF should be aware of this potentially catastrophic complication of B cepacia infection, especially in the setting of Cepacia syndrome.
Cystic fibrosis (CF) is the most common fatal genetic disorder among Caucasian people. Burkholderia cepacia is an important opportunistic pathogen among CF patients. B cepacia comprises a diverse group of bacteria with heterogeneous outcomes in CF patients. B cepacia infection has been associated with deterioration of lung function, poor outcome following lung transplantation and increased mortality in this patient population (1)(2)(3)(4)(5). In CF patients, B cepacia typically proceeds along one of three clinical pathways: colonization; infection with an acceleration in the rate of decline in pulmonary function; or acute deterioration associated with necrotizing bronchopneumonia, sepsis and, frequently, acute respiratory failure and death (1)(2)(3). The latter clinical presentation is often referred to as 'Cepacia syndrome'.
Cepacia syndrome, as originally described by Isles et al (3), is characterized by sustained fever, an elevated white blood cell count (WBC), weight loss and an often fatal deterioration in pulmonary function. B cepacia bacteremia and radiographic changes consistent with bronchopneumonia are also recognized as part of this relatively uncommon acute clinical presentation (6)(7)(8).
We present the case of a 40-year-old man with CF who developed Cepacia syndrome complicated by suppurative mediastinitis, from which B cepacia was isolated.
CASE PRESENTATION
A 40-year-old man with known CF presented to the emergency department (ED) complaining of productive cough, shortness of breath and pleuritic chest pain that was made worse by lying supine. The patient also had fever, chills and night sweats for approximately two weeks. One month before presenting to the ED, the patient was seen by his family doctor for what was thought to be a mild pulmonary exacerbation; he was subsequently treated with oral telithromycin. The patient completed a 14-day course but, due to persistent respiratory symptoms, was restarted on a second course of the same antibiotic by his family doctor.
In the ED, a physical examination showed that the patient had a respiratory rate of 24 breaths/min, oxygen saturation of 96% by pulse oximetry on room air and a temperature of 37.5°C. The patient had a relatively normal body habitus (body mass index of 22.9 kg/m²). Chest auscultation revealed diffuse, bilateral coarse inspiratory crackles. A laboratory examination showed a WBC of 8.7×10⁹/L; a chest radiograph showed chronic changes consistent with diffuse bronchiectasis related to CF and a right paratracheal soft tissue density (Figure 1). The patient was admitted to hospital and started on intravenous antibiotics, including tobramycin, piperacillin/tazobactam and meropenem. Antibiotic choices were based on the most recent sputum culture and sensitivity results.
The patient was diagnosed with CF at five years of age. As an adult, he suffered from progressive chronic obstructive lung disease, pancreatic insufficiency, chronic sinusitis and recurrent pulmonary exacerbations. He was regularly followed by the adult CF clinic at the authors' institution. Over the past few years, he typically experienced four to five pulmonary exacerbations per year that required in-hospital treatment for two to three weeks. He had not been febrile with any prior acute respiratory illness. He was known to be colonized with multidrug-resistant B cepacia since as early as 1994, which was routinely cultured from sputum when clinically stable and during pulmonary exacerbations.
On admission, spirometry showed the patient had a forced expiratory volume in 1 s (FEV₁) of 1.25 L (33% predicted), a forced vital capacity (FVC) of 2.39 L (52% predicted) and an FEV₁/FVC ratio of 52%. The most recent pulmonary function tests, performed while he was clinically stable six months earlier, revealed an FEV₁ of 1.45 L (38% predicted), an FVC of 2.99 L (65% predicted) and an FEV₁/FVC ratio of 48%. He had shown a gradual deterioration of pulmonary function over the previous years; at 32 years of age, he had an FEV₁ of 2.21 L (55% predicted), an FVC of 3.50 L (73% predicted) and an FEV₁/FVC ratio of 63%.
A sputum culture taken on the day of admission to hospital had a heavy growth of B cepacia that was sensitive only to meropenem in the antibiotic panel tested. The patient remained clinically stable, but showed no improvements one week after his admission, at which time his temperature spiked to 39.2°C. Repeat blood cultures and chest radiographs were obtained. Blood cultures grew B cepacia that was sensitive to meropenem and had intermediate sensitivity to ceftazidime. The infectious disease consultant recommended discontinuing piperacillin/tazobactam and adding ceftazidime to the antibiotic regimen.
The chest radiographs showed marked enlargement of soft tissue density in the right paratracheal region and development of airspace changes in the right upper lobe (Figure 2). An enhanced computed tomography (CT) scan of the thorax showed compression of the superior vena cava and bowing of the trachea by a large paratracheal mass measuring 4.5 cm × 5 cm, with heterogeneous fluid-like attenuation (Figure 3). Consistent with CF, extensive changes were also observed in both lungs (predominantly in the upper lobes), including cystic changes, diffuse cylindrical bronchiectasis and areas of tree-in-bud involvement with mucoid impaction in the periphery of both lungs (Figure 4). A presumptive diagnosis of suppurative mediastinitis was made and thoracic surgery services were consulted.
Twelve days after admission, primarily for diagnostic purposes, the patient underwent a bronchoscopy with a bronchoalveolar lavage and cervical mediastinoscopy under general anesthesia. Thoracic surgeons drained a large, pus-filled mediastinal cavity. Specimens taken from the mediastinum and the bronchoalveolar lavage fluid both had a heavy growth of B cepacia, which was sensitive to meropenem and ceftazidime. Pathological examination of the surgical specimen showed fibrotic connective tissue with a prominent mixed inflammatory cellular infiltrate. No evidence of granulomas or malignancy was found. Special stains for mycobacteria and fungal microorganisms were negative.
The patient quickly recovered from the surgery and defervesced over the next week (Figure 5). However, 11 days after the cervical mediastinoscopy, his maximum daily temperature rose again to 39.5°C and he complained of chills and rigors; a decision was made to proceed to open surgical drainage. Under general anesthesia, the right pleural space was entered via a standard posterolateral incision at the fourth intercostal space. Turbid pleural fluid suggested that the mediastinal abscess may have already ruptured into the pleural space. A mass lesion was apparent involving the anterior and apical segments of the right upper lobe. The lung was densely adherent medially. After careful dissection at the level of the superior mediastinum, free pus was suctioned from an inflammatory mediastinal mass. Microbiological and pathological tissue samples were obtained. The chest was irrigated with saline. Three chest tubes were placed: one in the mediastinal cavity through the anterior chest wall and two in the right pleural space along the diaphragm. Finally, a bronchoscopy was performed for pulmonary toilet and to collect further microbiological samples.
The patient remained sedated, intubated and mechanically ventilated, and was transferred to the intensive care unit. During the seven days he spent in the intensive care unit, he remained febrile despite attempts to actively cool him. The antibiotic regimen was altered to ceftazidime, meropenem and colistin based on results of multiple combination bactericidal testing on isolates from blood and the mediastinum. The patient transiently improved and was extubated on postoperative day (POD) 3; however, he was reintubated on POD 6 due to increasing respiratory distress, fatigue and chest discomfort that impaired performance of chest physiotherapy. On POD 7, he required vasopressor support for low blood pressure. A repeat CT scan of the thorax showed persistence of the mediastinal mass and worsening pulmonary opacities. It became increasingly apparent that the patient would not recover from this acute illness; following a family meeting, life-supporting therapies were withdrawn and the patient died shortly thereafter.
DISCUSSION
In the 1940s, an unknown pathogen was presumed to be causing soft rot among vegetation, predominantly onions ('onion rot'). When this pathogen was isolated in 1947 by Burkholder, it was named cepacia, meaning 'of or like onion'. It was known as Pseudomonas cepacia or Pseudomonas multivorans until 1992, when it was reclassified as the heterogeneous group of bacteria known as Burkholderia cepacia (5,9).
B cepacia is a diverse class of bacteria comprised of several species that make up the B cepacia complex. There are currently nine phylogenetically distinguishable species or genomovars (10). Not all of the genomovars are phenotypically distinguishable; for example, genomovars I and III cannot be distinguished, nor can genomovar VI and Burkholderia multivorans. Genomovar III is the predominant species that infects individuals with CF in Canada, accounting for 80% of cases; B multivorans is responsible for approximately 9% (11). However, in Canada, B multivorans has rarely been isolated outside of British Columbia. The prevalence of B cepacia is highest in Ontario and eastern Canada, with approximately 25% of CF patients in eastern Canada being infected (11). Unfortunately, the genomovar identity of our patient's strain of B cepacia is unknown. Given the predominance of genomovar III in eastern Canada, and the linkage between genomovar III and Cepacia syndrome, it is highly likely that genomovar III was the culprit organism in the present case.
Hypersecretion of mucus and luminal mucostasis associated with CF appears to predispose these individuals to colonization with B cepacia (12). Two genetic elements of B cepacia have been identified as virulence factors. The cblA gene encodes for the cable pilus of the bacterial structure that binds to tracheobronchial mucin, and a second appendage, the mesh pili, may facilitate adherence to cells and possibly interfere with clearance of secretions (12,13). A second genetic marker, the B cepacia epidemic strain marker, is a negative transcriptional regulator that encodes for a protein of unknown function (10,11). These genes are found predominantly in genomovar III and, specifically, in a highly transmissible epidemic strain of apparent enhanced virulence, the Edinburgh/Toronto 12 (ET12) strain, which has been linked to Cepacia syndrome (9)(10)(11).
In addition to adherence to respiratory epithelium, B cepacia produces both elastase and collagenase, which facilitates invasion of respiratory epithelium (10,13). B cepacia has been found in the epithelium of terminal and respiratory bronchioles (12). Presumably, the production of these proteolytic enzymes is responsible for the transcellular movement of B cepacia, which eventually may result in bacteremia.
Besides CF, B cepacia has been isolated in patients with chronic granulomatous disease (CGD) (5,10,14,15). In CGD, oxidative phagocytosis is disabled; B cepacia appears to be highly resistant to nonoxidative phagocytosis, leaving CGD patients particularly susceptible to B cepacia infections. It has been postulated that an imbalance between oxidative and nonoxidative phagocytosis in CF may contribute to patients' susceptibility to B cepacia colonization and infection (10).
The virulence of B cepacia also relates to endotoxin and antimicrobial resistance. Endotoxin and its induction of tumour necrosis factor-alpha have been shown to play a role in the pathogenesis of B cepacia infections (9). The cell envelope of B cepacia has a unique lipopolysaccharide structure that renders it resistant to aminoglycosides, and it produces a Bush group 4 beta-lactamase that is not inhibited by clavulanic acid (10,13). It is often susceptible to carbapenems, cephalosporins, quinolones and trimethoprim-sulfamethoxazole. Multidrug-resistant B cepacia was consistently recovered from our patient, and this fact likely contributed to the poor clinical response to medical management.
Cepacia syndrome has eluded a definitive definition but, in the past 50 years, the term has come to represent an acute deterioration secondary to B cepacia. It is typically characterized by sustained fever, an elevated WBC, necrotizing bronchopneumonia with rapid deterioration of pulmonary function, and septicemia (6,9,16). Cepacia syndrome is currently viewed as untreatable, despite the use of aggressive antibiotic regimens (9). Some CF centres have advocated the additional use of immunomodulatory agents, but evidence is lacking for this approach (9). Our patient developed radiographic evidence of bronchopneumonia late in his course, when the mediastinal infection was discovered. To our knowledge, no case of suppurative mediastinitis secondary to B cepacia infection has previously been described. Chaparro et al (17) described two CF patients colonized with B cepacia who died after lung transplantation; subsequent autopsies revealed localized lung abscesses. As early as 1984, Isles et al (3) described lung microabscess formation in autopsies of CF patients who had died of B cepacia infections. In 2000, Belchis et al (15) also described confluent areas of microabscess in patients without CF who had developed B cepacia pneumonia.
In our patient, the mediastinal infection was likely an extension of the infectious process originating in the pulmonary parenchyma. Treatment of mediastinitis typically involves appropriate antimicrobial coverage and drainage with a tube thoracostomy or percutaneous drainage (18). The location of our patient's abscess made a percutaneous approach to drainage less favourable. The patient responded poorly to the antimicrobial therapy. A thoracic surgeon performed a thorough debridement of necrotic and infected tissue. Unfortunately, given the intimate relationship of the infected inflammatory material to vital mediastinal structures, it was impossible to remove all necrotic tissue. Ultimately, the patient died of acute respiratory failure and hemodynamic collapse, likely due to Cepacia syndrome complicated by suppurative mediastinitis.
The present report describes a patient colonized with B cepacia who developed Cepacia syndrome and suppurative mediastinitis. Despite optimal medical and surgical therapy, this patient succumbed to his illness. Those caring for CF patients should be aware of the potential for this catastrophic complication of B cepacia infection, especially in the setting of Cepacia syndrome. | 2018-04-03T04:12:55.510Z | 2006-05-01T00:00:00.000 | {
"year": 2006,
"sha1": "fcd445ab9532d9a87a43309c414a4bdd4e1bc5da",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/crj/2006/495720.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "fc6c42bf4f0b246b639f36e39af818acdc61cdd9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202831216 | pes2o/s2orc | v3-fos-license | Decreased Expression of P16 Indicates Poor Postoperative Prognosis of Esophageal Squamous Cell Carcinoma Patients
Background: Expression of the P16 gene, which encodes a key regulatory protein of the cell cycle, has been linked with the prognosis of esophageal squamous cell carcinoma (ESCC) patients. Materials and method: Using immunohistochemistry, we examined the expression status of P16 in 110 ESCC patients on tissue microarrays (TMAs). Nuclear staining was quantified with an immunoreactivity score (range 0-12) and patients were split into two groups: a no-expression group and an overexpression group. Result: The median postoperative follow-up period was 70 months. Down-regulation of P16 expression significantly predicted decreased 5-year overall survival (P=0.001) and progression-free survival, as demonstrated by Kaplan-Meier estimates using the log-rank test. P16 protein also acted as an independent prognostic factor for overall survival and progression-free survival on multivariate Cox regression analysis (HR=0.046, 95% CI 0.006-0.333, P=0.002 and HR=0.064, 95% CI 0.009-0.466, P=0.005 for OS and PFS, respectively). Conclusion: P16 is a promising biomarker that is down-regulated in ESCC patients and a prognostic indicator of poor postoperative survival.
Patient population and data collection
We obtained esophageal squamous cell carcinoma tissue samples from 135 patients who underwent subtotal esophagectomy and esophagogastric anastomosis with regional lymph node dissection. The surgeries were performed, and the tissue samples collected, at Qilu Hospital of Shandong University in 2010 and 2011. Of the 135 patients, 110 were included as the study population; the remaining 25 were lost to follow-up. The diagnosis of ESCC was confirmed by pathological examination. Inclusion criteria were: no preoperative chemotherapy, radiotherapy or immunotherapy; any tumor stage; any lymph node status; and age over 25 years. From our hospital database we collected baseline clinical and investigational data, including age, gender, smoking and drinking habits, degree of differentiation, TNM stage, tumor stage, lymph node status and number of dissected lymph nodes. TNM staging followed the American Joint Committee on Cancer Staging Manual (7th edition, 2010). The research design and sample collection complied with our institutional protocols and were approved by the Ethics Board of Qilu Hospital of Shandong University. Written informed consent was obtained from all study participants.
Follow up
Patients were followed up until death or for at least 5 years from the date of surgery. During follow-up, physical examinations and imaging studies were performed every 3 months for the first 2 years after surgery and every 6 months from years 3 through 5. Routine radiological examination and esophagoscopy were performed when necessary. Remaining indicators were obtained from the in-patient database or the out-patient tumor registry of Qilu Hospital of Shandong University.
Immunohistochemistry
Fresh specimens were collected from the patients during surgery, fixed in 10% formalin and embedded in paraffin. The tissues were obtained from the Pathology Department of Qilu Hospital of Shandong University (2010-2011). All tissues were cut into 4 μm serial sections, de-paraffinized in xylene, rehydrated and subjected to antigen retrieval in 10 mM citrate buffer. Sections were incubated in 3% H₂O₂ in methanol for at least 20 minutes at room temperature and then incubated overnight at 4 °C in a humidified chamber with a primary anti-CDKN2A monoclonal antibody, ab108349 (1:150, Abcam, Cambridge, MA, USA). The next morning, the slides were incubated for 30 minutes at 37 °C with biotinylated secondary antibodies and a streptavidin-peroxidase complex. Finally, the slides were developed with 3,3'-diaminobenzidine solution, counterstained with hematoxylin and coverslipped with natural balsam. Negative controls were incubated with PBS instead of the primary antibody. After drying, the slides were examined under a light microscope and scored independently by two investigators; discordant scores were resolved by consensus. Staining was quantified with the immunoreactivity score (IRS) system: intensity was graded 0 (no staining), 1 (weak), 2 (moderate) or 3 (strong), and the final IRS was calculated by multiplying the intensity score by a score for the proportion of positive cells, yielding a value of 0-12.
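As a concrete illustration, the IRS logic above can be expressed in a few lines of code. This is a minimal sketch assuming the conventional Remmele-style proportion categories and an illustrative dichotomisation cut-off, since the paper does not report its exact cut-points; the function names are hypothetical.

```python
def immunoreactivity_score(intensity, percent_positive):
    """Immunoreactivity score (IRS), range 0-12.

    intensity: 0 = none, 1 = weak, 2 = moderate, 3 = strong.
    percent_positive: percentage of tumour cells with nuclear staining.
    The proportion categories follow the common Remmele scheme; the paper
    does not state its exact cut-points, so these are assumptions.
    """
    if percent_positive == 0:
        proportion = 0
    elif percent_positive < 10:
        proportion = 1
    elif percent_positive <= 50:
        proportion = 2
    elif percent_positive <= 80:
        proportion = 3
    else:
        proportion = 4
    return intensity * proportion


def expression_group(irs, cutoff=4):
    """Dichotomise the IRS; the cut-off of 4 is illustrative only."""
    return "Overexpression" if irs >= cutoff else "No-expression"


print(immunoreactivity_score(3, 90), expression_group(12))  # -> 12 Overexpression
```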
Study endpoints
The primary end point was overall survival (OS), defined as the time from the date of surgery to death or the last follow-up. The secondary end point was progression-free survival (PFS), defined as the time from the date of surgery to local or distant progression of disease. Local recurrence included regional lymph node metastasis or recurrence at the primary site.
Statistical analysis
All statistical analyses were performed with SPSS Statistics version 23.0 (SPSS Inc., Chicago, IL). Associations between P16 expression and clinical parameters were analyzed with the Chi-square test or Fisher's exact test. For prognostic value, univariate analysis was performed with the Kaplan-Meier method, and OS and PFS were compared with the log-rank test. To identify independent prognostic factors, variables were entered into a multivariate Cox proportional hazards regression. All tests were two-sided; P<0.05 was considered significant.
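For readers reproducing this kind of analysis outside SPSS, the same workflow (Kaplan-Meier curves, log-rank test and Cox regression) can be sketched with the Python lifelines package. The data file, column names and covariate coding below are hypothetical placeholders for the cohort described above, not the authors' actual dataset.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# One row per patient: follow-up time in months, event flag (1 = death for OS)
# and covariates. File and column names are illustrative.
df = pd.read_csv("escc_cohort.csv")
df["p16_overexpressed"] = (df["p16_group"] == "overexpression").astype(int)
high = df[df["p16_overexpressed"] == 1]
low = df[df["p16_overexpressed"] == 0]

# Kaplan-Meier curves by P16 expression group
km = KaplanMeierFitter()
ax = km.fit(high["os_months"], high["death"], label="P16 overexpression").plot_survival_function()
km.fit(low["os_months"], low["death"], label="P16 no expression").plot_survival_function(ax=ax)

# Log-rank test for the difference between the two groups
lr = logrank_test(high["os_months"], low["os_months"],
                  event_observed_A=high["death"], event_observed_B=low["death"])
print(f"log-rank P = {lr.p_value:.4f}")

# Multivariate Cox proportional hazards model (t_stage / n_stage assumed
# numerically coded)
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "p16_overexpressed", "t_stage", "n_stage"]],
        duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios, 95% CIs and P values
```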
Staining array
We examined CDKN2A (P16) protein expression in ESCC by immunohistochemistry (IHC) on tissue microarrays containing 110 FFPE tissue samples (Figure 1). IHC showed weak or absent expression in roughly 75-85% of tissues; overall, the cancer tissue of 88 patients (80%) showed low or no expression.
Clinico-pathological features of ESCC patients
A total of 135 patients met the study criteria; 110 were included because the remaining 25 were lost to follow-up. The median age was 65 years (range at diagnosis, 25-86 years); 40 patients were female and 70 were male. The median follow-up duration was 70 months (range 1-120 months). All four tumor stages were represented: stage T1, 12 cases (10.9%); stage T2, 43 cases (39.09%); stage T3, 35 cases (31.81%); and stage T4, 20 cases (18.18%). Lymph node status was grouped as no positive lymph nodes (N0), N1, N2 and N3, comprising 44 cases (40%), 24 cases (21.81%), 29 cases (26.36%) and 13 cases (11.81%), respectively. By degree of differentiation, 32 tumors (29.09%) were well differentiated, 31 (28.18%) moderately differentiated and 47 (42.72%) poorly differentiated. Two-sided χ² tests showed significant correlations between CDKN2A (P16) protein expression and the baseline characteristics of the ESCC study population. Of the 12 stage T1 cases, 50% were in the overexpression group and 50% in the non-overexpression group, whereas in stages T2, T3 and T4, respectively, 36 of 43 cases (83.7%), 27 of 35 cases (77.1%) and 19 of 20 cases (95%) showed weak or no expression. For lymph node status, weak or no expression was seen in 31 of 44 N0 cases (70.5%), 17 of 24 N1 cases (70.8%), 27 of 29 N2 cases (93.1%) and all 13 N3 cases (100%). Both variables were statistically significant and may act as independent markers for disease diagnosis and treatment assessment. The baseline characteristics of the 110 ESCC patients are summarized in Table 1.
Figure 2. (A) Kaplan-Meier analysis and log-rank test of P16 for OS of 110 patients; low P16 protein expression significantly predicted decreased OS.
(B) Kaplan-Meier analysis and log-rank test of P16 for PFS; low P16 protein expression was significantly associated with decreased PFS. (C) Kaplan-Meier analysis and log-rank test of T stage in relation to low P16 expression and OS; stages T3 and T4 showed poorer OS with low P16 expression than stages T1 and T2. (D) The corresponding analysis for PFS; stages T3 and T4 showed poorer PFS with low P16 expression than stages T1 and T2. (E) Kaplan-Meier analysis and log-rank test of N stage in relation to low P16 expression and OS; stages N2 and N3 showed poorer OS with low P16 expression than stages N0 and N1.
P16 lost its expression in most ESCC patients, with effects on survival and disease recurrence. Of the total study population, 60 patients died during follow-up; 59 of them (98.3%) were in the no-expression group, which was statistically significant (P=0.001). Likewise, 68 of the 69 recurrent cases (98.6%) were in the no-expression group (P=0.001). The median survival of the study population was 42 months (range 6-78 months). Among the baseline characteristics, age, smoking and degree of differentiation were not significant, whereas gender and drinking habit were significant (P<0.05). Of the 22 overexpression cases, 21 (95.45%) were alive. To determine correlations and prognostic importance, we performed univariate analysis using Kaplan-Meier estimates with the log-rank test. This showed that low P16 expression was significantly associated with reduced 5-year OS (26.4%, P=0.001) and 5-year PFS (18.2%, P=0.001) (Table 2; Figure 2A and 2B). Moreover, multivariate Cox regression identified P16 down-regulation as an independent prognostic factor for OS (HR=0.046, 95% CI 0.006-0.333; P=0.002) and PFS (HR=0.064, 95% CI 0.009-0.466; P=0.005) (Table 3). Kaplan-Meier analysis also showed that, among the analyzed baseline indicators, the conventional prognostic factors tumor stage (P=0.001), lymph node status (P=0.002) and degree of differentiation (P=0.001) were significantly associated with OS (Table 2; Figure 2C and 2E), and tumor stage (P=0.001), lymph node status (P=0.001) and degree of differentiation (P=0.001) with PFS (Table 2; Figure 2D and 2F). Indicators that were statistically significant were further analyzed by multivariate Cox regression to identify independent prognostic factors. In this analysis, tumor stage was associated with OS (HR 1.57, 95% CI 1.19-2.06; P=0.001) and PFS (HR 1.69, 95% CI 1.25-2.27; P=0.001), and lymph node status with OS (P=0.004) and PFS (P=0.001); both are statistically significant and act as promising prognostic indicators (Table 3).
Discussion
P16 is a cell cycle regulatory protein encoded by the CDKN2A gene on chromosome 9 (9p21.3); the gene comprises 3 exons and codes for a 16 kDa protein [8]. In cell cycle regulation this protein acts as a tumor suppressor, and in its inactivated state it is associated with tumorigenesis [9]. Promoter methylation is usually responsible for this inactivation, and in some cancers the prognostic effect of P16 remains unclear. The protein is also known as MTS1, INK4a, CDKN2 and CDK4I. P16 inhibits the cyclin D1-CDK4/6 complex, which is responsible for phosphorylation of the Rb protein, thereby initiating cell cycle arrest at the G1 stage [10].
In normal human cells it is highly expressed and restrains uncontrolled cell growth, while in cancer cells its expression is epigenetically repressed in approximately 30-40% of cases [11][12][13]. Strong cytoplasmic staining of the protein has also been detected in several cancers with or without HPV (human papilloma virus) infection, for instance cervical, endometrial, oral, nasopharyngeal, skin, thyroid and colorectal cancers [14][15][16][17]. P16 is a major gene in cell cycle regulation; through its expression status it participates in the negative regulation of cell proliferation, and its loss permits uncontrolled growth [18]. Several studies have shown that down-regulation of this gene abolishes its inhibitory effect on CDK4/CDK6, leading to malignancy, abnormal cell proliferation and rapid tumor development [19,20].
This phenomenon has been observed in many cancers. The prognostic and diagnostic significance of this biomarker in esophageal cancer has been assessed for several years, though the findings remain debatable. Most studies underline the importance of P16 expression status in determining prognosis: high expression of this protein is associated with favorable clinical outcomes, whereas low expression is associated with poor outcomes, decreased 5-year overall survival and recurrence. Fujiwara et al. reported that hypermethylation of the P16 gene promoter correlates with loss of P16 expression, resulting in poorer prognosis in esophageal squamous cell carcinomas [21]. Another study indicated that CD133 immunoreactivity is a promising prognostic biomarker for ESCC and that CD133 may play a major role alongside P27 and P16 in this disease [22]. A further study demonstrated that low P16 protein expression is associated with decreased overall survival [23]. In contrast, one study reported a linear relationship between up-regulation of CDKN2A protein and the severity of histological transformation of the mucosa leading to malignant transformation, a controversial finding among studies [24]. Over the past decades, many studies of P16 in ESCC have reported differing results. Taghavi et al. [26] demonstrated that P16 hypermethylation is the principal mechanism of low P16 protein expression and plays an important role in ESCC development. DNA methylation silencing the P16 gene varies among different tumor groups, and the role of P16 protein expression remains to be elucidated in many cancers. In gastric cancer, DNA methylation has been found to be responsible for P16 transcriptional silencing, causing neoplastic transformation [27].
To the best of our knowledge, this is the first report examining the expression status of P16 protein in esophageal squamous cell carcinoma patients to determine postoperative prognostic status. We found that patients whose cancerous tissues showed low CDKN2A expression had decreased 5-year overall survival and more frequent disease recurrence, whereas most patients with up-regulated P16 were alive (>95%). One limitation of our study is its small population and the loss of some patients during follow-up; a larger study group may yield stronger evidence and greater impact on the diagnosis and treatment of ESCC.
Conclusion
Weak or absent expression of P16 protein is significantly correlated with poor prognosis in esophageal squamous cell carcinoma patients, as well as with positive lymph nodes and advanced tumor stage. This protein is therefore a promising biomarker for postoperative prognosis and for guiding further treatment protocols. | 2019-09-17T02:47:58.069Z | 2019-07-30T00:00:00.000 | {
"year": 2019,
"sha1": "8ae06d0d7790cce0d7073c8f5ae70f0117683f05",
"oa_license": null,
"oa_url": "https://doi.org/10.31031/nacs.2019.03.000554",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6b273ff4c01b32442efbc1c8e1fb5cbbe69b735e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260927219 | pes2o/s2orc | v3-fos-license | Randomised controlled trial of perinatal vitamin D supplementation to prevent early-onset acute respiratory infections among Australian First Nations children: the ‘D-Kids’ study protocol
Introduction Globally, acute respiratory infections (ARIs) are a leading cause of childhood morbidity and mortality. While ARI-related mortality is low in Australia, First Nations infants are hospitalised with ARIs up to nine times more often than their non-First Nations counterparts. The gap is widest in the Northern Territory (NT) where rates of both acute and chronic respiratory infection are among the highest reported in the world. Vitamin D deficiency is common among NT First Nations neonates and associated with an increased risk of ARI hospitalisation. We hypothesise that perinatal vitamin D supplementation will reduce the risk of ARI in the first year of life. Methods and analysis ‘D-Kids’ is a parallel (1:1), double-blind (allocation concealed), randomised placebo-controlled trial conducted among NT First Nations mother–infant pairs. Pregnant women and their babies (n=314) receive either vitamin D or placebo. Women receive 14 000 IU/week or placebo from 28 to 34 weeks gestation until birth and babies receive 4200 IU/week or placebo from birth until age 4 months. The primary outcome is the incidence of ARI episodes receiving medical attention in the first year of life. Secondary outcomes include circulating vitamin D level and nasal pathogen prevalence. Tertiary outcomes include infant immune cell phenotypes and challenge responses. Blood, nasal swabs, breast milk and saliva are collected longitudinally across four study visits: enrolment, birth, infant age 4 and 12 months. The sample size provides 90% power to detect a 27.5% relative reduction in new ARI episodes between groups. Ethics and dissemination This trial is approved by the NT Human Research Ethics Committee (2018-3160). Study outcomes will be disseminated to participant families, communities, local policy-makers, the broader research and clinical community via written and oral reports, education workshops, peer-reviewed journals, national and international conferences. Trial registration number ACTRN12618001174279.
INTRODUCTION
Background and rationale
Acute respiratory infections (ARIs) remain the greatest global cause of childhood morbidity and mortality. 1 The largest burden occurs among socioeconomically disadvantaged populations. 1 In Australia, First Nations children are hospitalised with ARIs up to nine times more often than other children. 2
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Vitamin D deficiency (<50 nmol/L) is common among Australian First Nations infants at birth and linked to an increased risk of acute respiratory infection (ARI) hospitalisation in the first year of life. The most recent systematic review of clinical trials suggests that vitamin D supplementation could significantly reduce the risk of childhood ARI.
WHAT THIS STUDY ADDS
⇒ 'D-Kids' is the first randomised controlled trial (RCT) to evaluate vitamin D supplementation as a novel strategy for reducing early-onset ARIs among Australian First Nations infants. This study extends the best local and international evidence, implementing a practical weekly vitamin D dose over known high-risk period of both vitamin D deficiency and ARI. Importantly, this RCT will monitor vitamin D levels through pregnancy and infancy and determine whether supplementation impacts the paediatric immune response and the acquisition of respiratory pathogens.
The gap is widest in the Northern Territory (NT) where, despite government funded healthcare, high vaccine coverage and almost universal early breast feeding, 3 4 rates of early-onset pneumonia (20% hospitalised in the first year), 4 otitis media (OM) (90% at age 6 months) 5 and chronic suppurative lung disease (1 in 68 in Central Australia) 6 7 are among the highest reported in the world. Further, the burden of ARI hospitalisation in the NT has changed little over the last two decades. 3 8 Slow progress in improving health services and addressing the social determinants of health drives the need for novel, effective, community endorsed, evidence-based interventions to reduce the burden of ARIs in the region. Oral supplementation of vitamin D, integral to immune function, 9 10 is one such intervention that could reduce infants' susceptibility to ARIs. 11 12 Vitamin D is a steroid-like molecule generated predominantly on exposure of skin to sunlight, with the remainder coming from the diet. 13 Cutaneously generated (vitamin D3) or ingested vitamin D (vitamin D2 or D3) is hydroxylated by the liver into 25-hydroxyvitamin D (25OHD), which is considered the best measure of vitamin D status. The active hormonal form of vitamin D, 1,25-dihydroxyvitamin D3 (1,25OH2D3), is produced by the kidney or locally by specialised cells of other systems as required. Both 25OHD and 1,25OH2D3 are transported in the circulation by the vitamin D binding protein. 14 Active vitamin D regulates gene expression via the vitamin D receptor (VDR). 15 Conventionally, vitamin D is known for its role in regulating calcium metabolism, though the effects of vitamin D and its metabolites are much broader. 16 Most immune cells express both the VDR and the enzyme (1α-hydroxylase) necessary to locally convert circulating 25OHD into active 1,25OH2D3. As such, immune cell responses are affected by the availability of circulating 25OHD. On activation of innate immune cells, 1,25OH2D3 is involved in the modulation of over 200 human genes (~1% of the total), 17 including those involved in pathogen sensing and clearance (eg, ↑cathelicidin) and control of inflammatory responses (eg, ↓IFNγ, ↑IL10). 9 Vitamin D also influences adaptive immunity by modulating dendritic cell (DC) and T cell phenotypes, promoting a tolerogenic T-helper 2 response and inducing the expansion of regulatory T cells. 18 The net effects of vitamin D against infection appear to be simultaneous promotion of antimicrobial activity and control of excessive inflammatory and adaptive immune responses.
Vitamin D deficiency (<50 nmol/L) is prevalent in many populations globally and has been repeatedly linked to an increased risk of childhood ARI. [19][20][21][22][23] Importantly, pregnancy is a period of increased demand for vitamin D. Levels tend to decline towards term [24][25][26][27] and breast milk offers a relatively poor source of vitamin D. 28 As such, vitamin D deficiency is common at birth and it can take up to 6 months for infants to achieve sufficiency. 24 27 Our prospective cohort study in northern Australia demonstrated that mean cord blood 25OHD levels were 48% lower than maternal 25OHD levels at 32-week gestation, resulting in a high rate of neonatal deficiency (44% <50 nmol/L). 24 Importantly, this study found that mean cord blood vitamin D was significantly lower among infants who were subsequently hospitalised with an ARI in the first year of life than in those who were not (37 nmol/L v 56 nmol/L; p=0.025). 24 To address deficiency in pregnancy, doses of 2000-4000 IU/day from 12 to 16 weeks gestation have been shown to be safe and effective. 29 For infants with deficiency, supplementation with 400 IU (for levels between 30 and 50 nmol/L) to 1000 IU (for <30 nmol/L) per day for 3 months is recommended depending on the severity of deficiency. Infant doses up to 1000 IU/day 27 and bolus doses of 100 000 IU 30 have been used safely. It remains unclear whether vitamin D supplementation in pregnancy and infancy can reduce the burden of early-onset infant ARIs.
A 2017 meta-analysis of individual-level data from 10 933 participants in 25 randomised controlled trials (RCTs) 12 and a subsequent 2021 update of aggregate data from over 48 000 participants in 46 RCTs 31 found that vitamin D supplementation reduced the odds of experiencing at least one ARI episode by 12% (OR 0.88 (95% CI 0.81 to 0.96)) 12 and 8% (OR 0.92 (95% CI 0.86 to 0.99)), 31 respectively, compared with placebo. This reduction was found despite considerable population (ethnicity, age, socioeconomic status, baseline vitamin D) and design (vitamin D dosage/duration and outcome measure) heterogeneity. Importantly, the data available at the commencement of our study suggested both daily and weekly vitamin D dosing were effective against ARI (OR 0.81 (95% CI 0.72 to 0.91)). 12 More recently, the largest effects have been seen in those receiving daily supplementation of ≥400 IU (OR 0.70 (95% CI 0.55 to 0.89)). 31 In general, children 1-15 years of age appear to receive the most benefit (OR 0.71 (95% CI 0.57 to 0.93)) and large bolus doses at monthly or greater intervals have been ineffective. 12 31 A New Zealand RCT found that, compared with placebo controls, perinatal vitamin D supplementation (mothers from 27 weeks gestation to birth, infants from birth to age 6 months) maintained infant vitamin D status >50 nmol/L at birth and in infancy 27 and significantly reduced ARI primary care presentations among infants (relative risk reduction of 12% (≥1 ARI, 99% vs 87%) and incidence rate reduction of 37.5% (4.0 vs 2.5 ARI/child/year)) in the 12 months post-supplementation (infant age 6-18 months). 11 Overall, few of the reviewed RCTs dosed weekly (6/46) or during the perinatal or neonatal period (4/46), 12 31 where deficiency is common. 24 26 27 As far as we are aware, none have done both.
Objectives
The primary aim of our 'D-Kids' RCT is to determine whether practical weekly vitamin D supplementation (compared with placebo) of mothers (from 28+0 to 34+6 weeks gestation until birth) and their infants (from birth until age 4 months) reduces the incidence of ARI (hospital or primary care presentations) among high-risk Australian First Nations infants during their first 12 months of life.
The secondary aims are to determine whether the supplementation strategy above: (A) reduces vitamin D deficiency at birth and infancy; (B) enhances neonatal immune responses and (C) reduces the prevalence of nasal respiratory pathogens in infancy.
Hypotheses
We hypothesise that perinatal vitamin D supplementation will reduce the incidence of ARIs (vs placebo); maintain vitamin D levels >50 nmol/L at birth and throughout infancy; promote optimal immune responses to pathogen challenge; and reduce the frequency of pathogens detected in the nose.
Trial design and setting
'D-Kids' is a parallel (1:1), double-blind, allocation-concealed, randomised, placebo-controlled trial of weekly perinatal vitamin D supplementation (figure 1). The trial is being conducted among First Nations families in urban and remote communities of Australia's NT. The NT is sparsely populated with approximately 229 000 residents spread across 1.4 million km² (0.16 people per km²). 32 Approximately 70% of NT First Nations families reside in remote communities. 32 Recruitment started in February 2019 and the study is scheduled for completion by the end of 2024.
Patient and public involvement
The 'D-Kids' trial was built on respectful engagement and longstanding relationships with Australian First Nations families, hospitals and healthcare centres. Prior to study conduct, we sought the views of First Nations community councils and parents regarding the trial intervention and design acceptability. Our small prestudy survey of local families indicated a willingness to participate in such a study (24/25, 96%). Formal approval for this RCT was received from all communities involved and we have implemented a First Nations Reference Group model of governance. Importantly, the study team includes First Nations investigators, clinicians, health practitioners and trainees.
Eligibility criteria
Inclusion criteria
Pregnant First Nations women with a current gestation of 28+0 to 34+6 weeks, aged 17-40 years (inclusive), residing in a participating community and intending to do so until their infant reaches 12 months of age. Eligibility includes dichorionic diamniotic (DCDA) twin pregnancies and previously enrolled mothers.
Exclusion criteria
Enrolment in other research that could influence the outcomes of this study, monochorionic diamniotic twin pregnancies, current use of prescribed vitamin D in pregnancy >400 IU/day (or equivalent), current self-supplementation with vitamin D >400 IU/day, current use of illicit or cytotoxic drugs (excluding marijuana and alcohol), antenatal hypercalcaemia in this pregnancy (serum calcium >2.8 mmol/L or a urinary calcium:creatinine ratio >1 on two occasions), uncontrolled thyroid disorders, chronic kidney disease (≥stage 4), known anaphylactic allergy or a history of or current kidney/bladder stones.
Additional criteria
Obstetrician approval is necessary for mothers with a history of >2 preterm births at <34 weeks gestation, DCDA twin and other high-risk pregnancies. Paediatrician approval is sought for ongoing infant study participation if born <36 weeks, admitted to the intensive or special care units, discharged on vitamin D 400 IU/day (Pentavite) or found to have a congenital anomaly. Babies born <36 weeks gestation and entering the special or intensive care nursery will not receive any study medicine until cleared to do so by the special care clinical team.
Intervention
Eligible mother-infant pairs are randomised with equal probability to receive either weekly vitamin D (coconut oil plus cholecalciferol) or placebo (coconut oil only).
Both liquid active and placebo study medicines are manufactured and supplied by Ddrops, Ontario, Canada. Ddrops recommends storage in an upright position between 5°C and 30°C though the product has passed long term stability tests at 40°C. Stock medicine bottles and prefilled syringe doses (capped and placed in a labelled envelope) were both stored in the Menzies School of Health Research (MSHR) pharmacy at ambient temperature (approximately 21°C) prior to dosing. Mothers who received doses for self-administration were instructed to keep them in a secure, unrefrigerated location inside their house.
The maternal vitamin D dose is 14 000 IU/week (equivalent to 2000 IU/day); the infant dose is 4200 IU/week (equivalent to 600 IU/day). Maternal vitamin D or placebo commences at enrolment (28+0 to 34+6 weeks gestation) and continues until delivery (table 1, figure 1). Infant vitamin D or placebo commences at birth and continues until 4 months of age. Infants discharged from hospital with a recommendation to take oral vitamin D (400 IU/day) receive a reduced vitamin D dose of 3000 IU/week (equivalent to 430 IU/day).
Study medicines are self-administered orally via prefilled syringes (mother: 0.4 mL; infant: 0.2 mL), though study staff assist as requested. Doses are ideally taken 7 days apart. A dose delayed more than 11 days is considered a missed dose and the next dose is taken as scheduled. There is no catch-up for missed doses, and more than 3 consecutive missed doses is considered a protocol deviation.
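The missed-dose rule above reduces to simple date arithmetic. The sketch below is illustrative only (the trial's actual record-keeping software is not described): it counts a dose taken more than 11 days after the previous one as a missed dose, with no catch-up, and flags more than 3 consecutive misses as a protocol deviation.

```python
from datetime import date

def dosing_summary(dose_dates):
    """Count missed weekly doses and flag protocol deviations.

    dose_dates: sorted list of datetime.date objects on which doses were taken.
    Returns (number of missed doses, deviation flag). A gap of more than
    11 days since the previous dose counts as one missed dose, per the rule
    described above; this accounting is an assumption, not trial software.
    """
    missed, consecutive, deviation = 0, 0, False
    for prev, nxt in zip(dose_dates, dose_dates[1:]):
        if (nxt - prev).days > 11:
            missed += 1
            consecutive += 1
            deviation = deviation or consecutive > 3
        else:
            consecutive = 0
    return missed, deviation

# Example: two doses 15 days apart -> one missed dose, no deviation.
print(dosing_summary([date(2019, 3, 1), date(2019, 3, 16), date(2019, 3, 23)]))
```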
Adherence
Participant retention and adherence to the protocol are facilitated through regular contact via telephone, short message service, clinic or home visits. Contact is weekly during dosing (starting 7 days from previous dose) then monthly thereafter until the final study visit. A maximum of three consecutive days of contact are attempted for each scheduled visit or dose. Where possible, ingestion of study medicine is supervised by clinic, hospital or study staff. Unsupervised, self-administered doses are verified by phone. When dosing is complete the remaining volume of the used bottle is recorded, and it is stored securely and separately from the unused medicines. Empty dosing syringes are not collected.
Relevant concomitant care
'D-Kids' is a pragmatic trial. Outside the eligibility criteria, all concomitant care and interventions will be allowed unless a specific contraindication to vitamin D therapy arises and discontinuation is recommended.
Criteria for discontinuing
Participants can withdraw from the study at any time: either entirely (no further medication, visits or collection of information) or partially (withdraw from medication and/or study visits). Partial withdrawals continue to contribute passive follow-up data (ie, medical record reviews in line with the primary study outcomes).
Primary outcome
The primary outcome is the incidence of ARI episodes receiving medical attention in the first 12 months of the infant's life. ARI episodes are identified via electronic medical records using established methods. 33
Secondary outcomes
Pneumonia, bronchiolitis and OM (incidence) will be analysed as specific ARI subgroups (online supplemental material 1). Hospitalisation, antibiotic and oxygen therapy will be used as measures of severity. Circulating 25OHD concentrations are measured in maternal blood at baseline (≤34+6 weeks) and at birth, in cord blood, and in infant blood at birth, 4 and 12 months, using high-performance liquid chromatography (HPLC) as described previously. 36 37 Blood samples for vitamin D analysis include both plasma and dried blood spots (DBS). DBS-based 25OHD concentrations will be adjusted for haematocrit as necessary. 38 Vitamin D levels <50 nmol/L are considered deficient.
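The haematocrit adjustment for DBS values is cited to a separate methods paper, 38 so the exact formula used is not shown here. A common generic approach, offered purely as an assumption, is to divide the whole-blood spot concentration by the plasma fraction:

```python
def dbs_to_serum_equivalent(dbs_25ohd, haematocrit):
    """Convert a dried-blood-spot 25OHD value (nmol/L) to a serum-equivalent
    concentration by dividing by the plasma fraction of whole blood.
    This is a generic correction, assumed here; the trial cites its own
    adjustment method. haematocrit is a fraction (e.g. 0.45), not a percent.
    """
    return dbs_25ohd / (1.0 - haematocrit)

# Example: a DBS reading of 30 nmol/L at haematocrit 0.50 corresponds to a
# serum-equivalent 25OHD of 60 nmol/L, above the 50 nmol/L deficiency cut-off.
print(dbs_to_serum_equivalent(30, 0.50))
```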
The prevalence of nasal pathogens will be determined via nasal swabs collected from infants at ages 4 and 12 months. Swabs will be screened for key respiratory bacteria (Streptococcus pneumoniae, non-typeable Haemophilus influenzae (NTHi)) and viruses (respiratory syncytial virus, RSV) using WHO culture 39 and standardised RT-PCR methods, respectively. 40 41
Tertiary outcomes
Immune function will be assessed through analysis of whole blood, isolated cord blood mononuclear cells (CBMCs), 42 plasma, breast milk and saliva samples. Immune cell populations will be characterised at birth (cord blood, CBMCs) and infant age 4 months (venous blood) using flow cytometry. 43 CBMC-mediated immune responses to in vitro pathogen challenge will be assessed as previously described. 42 Immune markers will be measured directly in infant plasma and saliva samples at birth and age 4 months using ELISA. 42 Maternal pertussis vaccine induced antibodies will be measured in cord blood plasma and breast milk collected at birth using ELISA. 44 Infant pneumococcal conjugate vaccine (PCV) induced antibodies will be measured in saliva and blood samples at birth and 4 months using ELISA. 45 Given their explanatory and mechanistic nature, immunological outcomes will be presented separately from the main outcomes.
Future outcomes
Where volumes allow, blood aliquots will be stored in RNA preservative (RNA later, Invitrogen) for further characterisation of immune effector genes. Circulating calcium levels (via routine pathology services), breast milk vitamin D levels (via an adapted HPLC method) 38 and polymorphisms in key vitamin D pathway genes 46 (eg, the vitamin D receptor gene, VDR) will also be evaluated. Stool samples are also collected (OMNIgene, DNA Genotek) from a subset of participants for gut microbiome analysis. 47 48 Additional funding has been secured (NHMRC 2014930) for follow-up of infant lung function to age 6 years.
Recruitment strategies
'D-Kids' partners with local NT hospitals, community healthcare and pathology services. These services facilitate recruitment (and follow-up) through notification of pregnant women receiving care at their site. Once identified, potential participants are engaged by the study team at antenatal care visits.
Study visits
Enrolment, study intervention, follow-up (table 1) and biological sampling (table 2) are achieved through four main study visits.
Visit 0, screening: Families are approached during antenatal appointments (20+0 to 34+6 weeks gestation) at participating hospitals and community healthcare clinics and screened for interest and basic eligibility (maternal and gestational age, ethnicity, community of residence).
Visit 1, enrolment: Pregnant mothers are eligible for enrolment at 28+0 to 34+6 weeks gestation. At this visit, we provide a detailed explanation of the study rationale and requirements using plain language study material (including a participant information sheet, pictorial flipchart and consent form). First Nations team members translate materials as necessary. Interested families are formally invited to participate in the study. With written informed consent and confirmation of eligibility, mothers are enrolled and randomly allocated to the intervention or placebo arm. A baseline blood sample is collected prior to supervised administration of the first dose of study medicine.
Visit 2, birth: At delivery, cord blood is collected by the attending obstetricians or midwives and transported to the laboratory for processing within 24 hours. Study staff visit mothers and their babies within 14 days of birth. The visit coincides with the end of the maternal supplementation. Blood and breast milk (at least 3 days post partum) are collected from mothers. Blood and saliva are collected from neonates. Infant study medicine is commenced, and the first dose is administered.
Visit 3, infant age 4 months: Families are visited when infants are 3.5-6 months of age, coinciding with the end of the infant supplementation period (16th week post partum). An infant blood, nasal swab and saliva sample are collected.
Visit 4, infant age 12 months: The final study visit occurs when infants are 11-18 months of age. An infant blood and nasal swab are collected. Families receive a small thank-you gift for their contribution.
Routine data are collected at each visit, including personal, household and community demographics, medical and vaccination history, smoke exposure, time spent indoors and use of vitamin D or other supplements, breastfeeding status and infant growth metrics. Serious adverse events (SAEs) and clinical outcomes are monitored via maternal and infant medical records (hospital and primary care) throughout follow-up.
Sample size
Our sample size of 314 (figure 1) is expected to provide 90% 'intention-to-treat' and 80% 'per-protocol' study power to detect a 27.5% relative reduction 11 in new ARI episodes among those receiving vitamin D supplementation (2.54/year) compared with placebo (3.50/year). Power calculations are based on local ARI rates, 3 8 49 14% non-compliance and 3% loss to follow-up, and assume the outcome data fit a negative binomial distribution (dispersion factor, k=0.4). 50 Our RCT's predicted effect size is consistent with a similar New Zealand study by Grant et al. 11 We expect a low drop-out rate because the primary outcome is informed by passively collected medical record data for each child.
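To make these assumptions concrete, the following is a minimal Python simulation sketch, not the trial's actual power calculation, using only the parameters stated above (3.50 vs 2.54 episodes/year, dispersion k=0.4, 157 infants per arm); non-compliance and loss to follow-up are omitted for brevity:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
N_PER_ARM, K = 157, 0.4                 # infants per arm; NB dispersion (Var = mu + mu^2/k)
MU_PLACEBO, MU_ACTIVE = 3.50, 2.54      # episodes/child/year; 1 - 2.54/3.50 is ~27.5% reduction

def draw(mu, n):
    # numpy's NB(r, p) has mean r(1-p)/p; r=K, p=K/(K+mu) yields mean mu
    return rng.negative_binomial(K, K / (K + mu), size=n)

def one_trial():
    y = np.concatenate([draw(MU_PLACEBO, N_PER_ARM), draw(MU_ACTIVE, N_PER_ARM)])
    arm = np.repeat([0.0, 1.0], N_PER_ARM)
    fit = sm.GLM(y, sm.add_constant(arm),
                 family=sm.families.NegativeBinomial(alpha=1 / K)).fit()
    return fit.pvalues[1] < 0.05        # Wald test on the treatment coefficient

print("simulated power:", np.mean([one_trial() for _ in range(500)]))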
Assignment of interventions
Sequence generation
Sequential randomisation codes were computer-generated using permuted blocks (two differing block sizes), stratified by community (urban and remote). The allocation ratio within these strata is 1:1 (vitamin D:placebo). There is one random allocation per mother-infant pair.
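For illustration, a minimal sketch of stratified permuted-block randomisation follows; the block sizes (4 and 6) and stratum sizes are assumptions for illustration only, as the trial's actual values are not given here:

import random

def permuted_block_sequence(n, block_sizes=(4, 6), seed=2018):
    rng = random.Random(seed)
    seq = []
    while len(seq) < n:
        size = rng.choice(block_sizes)                   # vary block size to mask boundaries
        block = ["vitamin D", "placebo"] * (size // 2)   # 1:1 ratio within each block
        rng.shuffle(block)
        seq.extend(block)
    return seq[:n]

# One independent sequence per stratum; each mother-infant pair shares one code.
codes = {stratum: permuted_block_sequence(160) for stratum in ("urban", "remote")}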
Allocation concealment
An independent study statistician generated the randomisation codes. Medication bottles are labelled by an independent clinical pharmacist from Royal Darwin Hospital. Labels incorporate a clear specification for mother or infant to avoid mix-up. Sequentially labelled mother and infant bottle pairs are stored in strata-specific boxes at room temperature in a secure facility. Active and placebo study medicine bottles and administration syringes are identical. Liquid medicines are indistinguishable in appearance and taste.
Allocation implementation
Good Clinical Practice (GCP)-trained research nurses or First Nations health practitioners allocate the study medicine to each mother-infant pair by selecting the next sequentially labelled (prerandomised) study medication from the appropriate stratification group. The allocation sequence number is recorded by the research team on the data collection form (DCF), in the database and in the participant's medical record. Infants receive the same allocation as their mother.
Blinding
The study is double-blinded. All investigators, participants, carers, hospital and clinic staff are blinded to the treatment group until completion of study follow-up. Unblinding is permissible if the independent data safety monitoring committee (iDSMC), principal investigator or appropriate qualified delegate is compelled by evidence of a safety concern.
Data collection, management and analysis
Data collection methods
All study visit and clinical data are recorded by research nurses on standardised paper-based DCFs. Protocol adherence (deviations/violations) is continuously monitored and documented by study staff. Established methods are used to document episodes of respiratory infection from medical records. 33 34 35 Collection and processing times, and quality measures, are recorded for all biological samples. A GCP-compliant protocol is employed if corrections are made to paper-based records.
All study documents are securely retained at MSHR.
Sample collection
Established methods are being used for the collection and processing of blood, breast milk, nasal, stool and saliva samples. 4
Data management
Data are entered by the research nurses into a REDCap electronic data capture (EDC) tool with web-based interface, 52 53 hosted by MSHR at Charles Darwin University.
Laboratory data are entered by the laboratory scientist. The EDC tool includes built-in validation ranges to facilitate accurate data entry. An EDC audit trail maintains a record of entries and changes made. Logic checks and 10% data checks (EDC cross-checked against DCFs) are performed regularly. Error rates greater than 1% trigger 100% checks of the targeted data fields and further 10% checks of all data.
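The escalation rule can be expressed as a short sketch (illustrative only; the record structure and field names are invented, not the study's):

import random

def audit(edc, dcf, fields, sample_frac=0.10, threshold=0.01, seed=7):
    # edc/dcf: dicts mapping participant ID -> dict of field values
    ids = random.Random(seed).sample(list(edc), max(1, int(sample_frac * len(edc))))
    flagged = []
    for field in fields:
        errors = sum(edc[i][field] != dcf[i][field] for i in ids)
        if errors / len(ids) > threshold:
            flagged.append(field)      # escalate this field to a 100% check
    return flagged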
Statistical analyses
The RCT is being conducted and reported according to CONSORT (Consolidated Standards of Reporting Trials) guidelines. 54 A detailed statistical analysis plan will be developed by the investigators and study statisticians prior to unblinding and the final analysis. The primary 'intention-to-treat' analysis will compare the incidence of infant ARI (episodes/child/12 months) between active and placebo groups using a negative binomial regression model, producing an estimate of the incidence rate ratio (IRR) with 95% CIs. The model will include terms to account for repeat mothers and twins. There will be no imputation for missing data. Supplementation efficacy will be defined as (1−IRR)×100.
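A minimal sketch of this analysis in Python follows; the toy data frame and column names ('ari_count', 'arm', 'years') are illustrative and not the trial's, and the terms for repeat mothers and twins are omitted:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({"ari_count": [4, 2, 3, 1, 5, 2, 3, 4, 1, 2, 6, 3],
                   "arm":       [0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1],   # 0 placebo, 1 vitamin D
                   "years":     [1.0, 1.0, 0.9, 1.0, 1.0, 0.8, 1.0, 1.0, 1.0, 0.9, 1.0, 1.0]})

fit = smf.glm("ari_count ~ arm", data=df, offset=np.log(df["years"]),
              family=sm.families.NegativeBinomial()).fit()
irr = np.exp(fit.params["arm"])                  # incidence rate ratio
lo, hi = np.exp(fit.conf_int().loc["arm"])
print(f"IRR {irr:.2f} (95% CI {lo:.2f}-{hi:.2f}); efficacy {(1 - irr) * 100:.0f}%")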
Secondary analyses (clinical) will include between-group comparisons of (1) the proportion of children having any ARI episode at <12 months (using a generalised linear model, with effects presented as risk ratios), (2) time to first ARI presentation (using Cox regression, with effects presented as HRs) and (3) the incidence of ARI subgroup outcomes (pneumonia, bronchiolitis and OM, using negative binomial regression with effects presented as IRRs). Sensitivity analyses will include (1) hospitalised outcomes and (2) clinical outcomes requiring antibiotics. A per-protocol analysis (excluding those who missed >3 consecutive doses) will also be conducted for all clinical outcomes.
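As a sketch of the time-to-first-ARI comparison, here using the lifelines package with toy data (column names are illustrative, not the trial's):

import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({"time_to_ari": [0.2, 0.5, 1.0, 0.8, 1.0, 0.3, 0.6, 1.0],
                   "event":       [1, 1, 0, 1, 0, 1, 1, 0],   # 0 = censored at 12 months
                   "arm":         [0, 0, 0, 1, 1, 1, 0, 1]})

cph = CoxPHFitter().fit(df, duration_col="time_to_ari", event_col="event")
print(cph.hazard_ratios_)    # HR for vitamin D (arm=1) vs placebo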
Secondary analyses (biological) will compare between-group differences in (1) circulating vitamin D levels: mean plasma 25OHD at each visit and breast milk 25OHD at birth (Student's t-test), (2) microbiological outcomes: proportions of nasal swabs positive for S. pneumoniae, NTHi and RSV, using χ2 tests and reporting risk ratios with 95% CIs, and (3) immune function measures: median immune cell population counts in cord blood at birth and infant blood at 4 months; median circulating inflammatory marker levels in cord blood at birth and infant blood at 4 months; median CBMC population counts and CBMC challenge-induced cytokine levels (Wilcoxon rank-sum test); geometric mean concentrations of IgG and IgA to pertussis vaccine antigens in maternal plasma and breast milk; and geometric mean concentrations of IgG and IgA to PCV serotype antigens (five types) in infant saliva and plasma at age 4 months (Student's t-test). All data will be analysed using Stata Statistical Software: Release V.17 (StataCorp) and GraphPad Prism V.9 (GraphPad Software, USA).
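These comparisons map onto standard tests; the sketch below uses SciPy with invented placeholder values (the Wilcoxon rank-sum test is run via its Mann-Whitney U equivalent):

import numpy as np
from scipy import stats

active, placebo = np.array([78.0, 65.2, 90.1]), np.array([49.5, 55.0, 60.3])
print(stats.ttest_ind(active, placebo))          # mean plasma 25OHD (nmol/L)

swabs = np.array([[12, 38], [22, 28]])           # rows: arm; columns: NTHi +/- counts
print(stats.chi2_contingency(swabs)[1])          # p value for carriage proportions

print(stats.mannwhitneyu([310, 150, 98], [420, 260, 305]))   # cytokine levels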
For all analyses, p values <0.05 will be considered statistically significant. No adjustments will be made for multiple comparisons.
Monitoring
Data monitoring and harm assessment
An iDSMC was established prior to any participant recruitment. The committee includes experienced, study-independent statisticians, clinicians, endocrinologists and epidemiologists. SAEs (hospital admissions) and other AEs are monitored by study staff through hospital visits, participant contact and regular review of participant medical records. Each SAE is reported to an independent medical monitor (an experienced paediatrician not on the iDSMC) to review causality. SAEs deemed potentially related to the study are reported to the iDSMC and the Human Research Ethics Committee (HREC) within 72 hours. At regular intervals (at least every 6 months), the iDSMC assesses the totality of information relating to participant recruitment and retention, protocol deviations and violations, and AEs and SAEs. Unblinding can be requested where concerns arise. With compelling evidence, the iDSMC can recommend cessation of the trial. Given vitamin D is low risk, there are no predefined stopping rules or planned interim analyses. A First Nations Reference Group oversees cultural aspects of the study.
DISCUSSION
Our 'D-Kids' RCT is the first to evaluate perinatal vitamin D supplementation against medically attended ARIs among Australian First Nations infants during the first year of life. The sample size of 314 infants is expected to provide 90% power to detect a 27.5% relative reduction in new ARI episodes between groups. Importantly, our mechanistic outcomes will further characterise the effects of vitamin D supplementation on infant immune function and respiratory microbiology.
The 'D-Kids' trial has several strengths. It addresses ARIs, an unmet health need, particularly among socioeconomically disadvantaged children. The study premise is guided by vitamin D and ARI data specific to the target population 3 24 and the design incorporates key strengths of previous trials 12 27 31 and guidelines, 55 including dose timing, concentration and frequency, and potential effect size. Our study also addresses important knowledge gaps regarding effectiveness among neonates and of practical weekly dosing regimens, and integrates immunological and microbiological outcomes. The weekly maternal (equivalent 2000 IU/day) and infant supplementation doses (equivalent 600 IU/day) were chosen to maintain vitamin D sufficiency (>50 nmol/L) throughout the active study period, taking into account previously published local infant vitamin D levels at birth. 24 Importantly, there have been no SAEs associated with vitamin D supplementation in RCTs; 12 31 as such, the balance of risk versus reward of a vitamin D strategy is likely to be highly favourable.
Of note, recent evidence suggests daily dosing is most effective clinically. 31 Further, there is emerging biological evidence to suggest bolus dosing is ineffective due to upregulation of 24-hydroxylase (which converts 25OHD and 1,25(OH)2D3 into less active 24-hydroxylated products) and FGF23 (which inhibits the 1α-hydroxylase necessary to locally convert circulating 25OHD into active 1,25(OH)2D3), both of which act to balance the vitamin D response. 56 However, these studies 57 58 compare large bolus doses of >150 000 IU with daily doses, and the biological effect of moderate weekly doses (<14 000 IU) remains unclear. One meta-analysis suggests that daily dose equivalents of 2000 IU or less in individuals with circulating 25OHD <100 nmol/L are unlikely to induce the same downregulation in vitamin D function as large bolus doses. 59 Our study will contribute valuable data on the clinical and immunological effects of weekly vitamin D dosing in infants.
There are also many challenges. The trial is being conducted throughout a pandemic. While the D-Kids trial has never officially paused, lockdowns, isolation and quarantine measures designed to reduce the spread of SARS-CoV-2 have restricted face-to-face contact, interaction with medical facilities and general travel, impacting participant recruitment and follow-up, and reducing the rates of ARI. As of December 2022, 184 infants had been recruited to the study. Several strategies have been implemented to mitigate slow recruitment, including the addition of new study sites (notably Alice Springs in Central Australia), self-administered dosing, clinic-assisted follow-up visits, and recruitment of twins and previously enrolled mothers. Notably, while self-administered dosing mimics the real-world setting, adherence (phone-based reporting) becomes more difficult to monitor accurately. With pandemic measures easing in 2023, we are optimistic about achieving our recruitment target.
Feasible interventions such as vitamin D supplementation show considerable potential against ARI, but more evidence is required. Our study outcomes will make an important contribution to clinical practice and the medical literature, and could have profound implications for disadvantaged populations where ARIs are common.
DISSEMINATION
Registration
'D-Kids' is registered with the Australian New Zealand Clinical Trials Registry: http://www.ANZCTR.org.au (ACTRN12618001174279). The current protocol version is 2.1 (last updated 16 September 2022).
Protocol amendments
All protocol modifications are reported to the NT HREC for review and approval. Trial registries are regularly updated as required. Investigators, the iDSMC and other stakeholders, including participating community health services, are advised of important protocol amendments, such as those that may impact participant safety, scientific validity, scope or ethical rigour. Substantive protocol amendments are agreed by the 'D-Kids' investigator team and approved by the HREC before implementation. Minor administrative amendments are documented in notes to file.
Consent
Only appropriately trained staff conduct the informed consent process. Information is provided to the mother in written, verbal and pictorial formats, with language translation where requested. The study is explained to expectant mothers face to face, and they are provided with sufficient time to ask questions, discuss and consider participation of themselves and their child with relevant others, and obtain further study details prior to signing and dating the informed consent form. The consent process includes explanations of all elements of consent according to GCP, the Declaration of Helsinki, National Health and Medical Research Council (NHMRC) requirements, and local requests to ensure cultural safety (as recommended by the First Nations Child Health Reference Group).
Additional consent is sought from parents or guardians to use participant data and biological specimens for future research relating to child health respiratory studies. Options to refuse each or all requests are provided. Participants are also asked if they would like to be contacted about future research studies.
Confidentiality
All identifiable information on study participants is retained in password-protected files and locked cabinets at study sites. Access to this information is only provided to authorised study staff, unless required by legislative or regulatory agencies and the HREC. No identifying information will be included in study reports. Clinical specimens are labelled with the participant randomisation number only and will be destroyed as per the NHMRC-based ethics statement.
Declaration of interests
MSHR (NT, Australia) is the trial sponsor. Study investigators working at MSHR and partnering institutes are solely responsible for the design, conduct and reporting of this RCT. The investigators and protocol authors declare no conflicts of interest. The trial is funded by the NHMRC. The study medicine and placebo are manufactured and supplied free of charge by Ddrops, Ontario, Canada. Neither Ddrops nor the NHMRC has had, or will have, any role in the trial study design, conduct, analysis or reporting.
Access to data
The final trial dataset will be under the custody of the trial sponsor, MSHR, NT, Australia. The principal investigator, study statistician and senior data manager at MSHR will have access to all study data. Third party access to the final anonymised dataset will require written requests to be approved by the HREC, iDSMC, study investigators and the director of MSHR.
Ancillary and post-trial care
All participants have access to ancillary care from their usual healthcare provider (local community health centre). Trial participants will be insured and indemnified by MSHR for their involvement in the study. Injury due to study procedures will be considered trial related.
Dissemination policy
Trial results will be communicated in aggregate to participant families and their communities via written and oral presentations. Trial results will then be published in peer-reviewed international journals, presented at relevant national and international conferences, and reported to local policy-makers (eg, NT and Australian government, Therapeutic Goods Administration, First Nations Reference Group). Results will be disseminated regardless of the magnitude or direction of effect. There will be no publication restrictions. Applications for third party access to deidentified trial data will be considered by study investigators if appropriately justified and compliant with ethical and privacy policies.
Contributors MJB designed and wrote the trial protocol and analysis plan. SJP, AK, PSM, HD'A, TS and ABC contributed to the design and analysis plan. VP, ML and JN guided the protocol logistics and GCP compliance. MJB and ASB drafted the manuscript. All named authors contributed to subsequent drafts and approved the final manuscript.
Funding The trial was funded by a 5-year NHMRC of Australia project grant (1138604).
Competing interests None declared.
Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details.
Patient consent for publication Not applicable.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement No data are available. This is a protocol paper, and therefore, there are no associated data.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
"year": 2023,
"sha1": "5c0d826853942d959599865e16ec88a9d4281bcc",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "99b0d5fe5d915874eb75144ee3a466aeef848df0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Electronic textiles for energy, sensing, and communication
Summary
Electronic textiles (e-textiles) are fabrics that can perform electronic functions such as sensing, computation, display, and communication. They can enhance the functionality of clothing in a variety of convenient and unobtrusive ways, and have thus garnered significant research and commercial interest in applications ranging from fashion to healthcare. Recent advances in materials science and electronics have given rise to a variety of e-textile components, including sensors, energy harvesters, batteries, and antennas on flexible and breathable textile substrates. In this review, we discuss recent advances in the development of e-textiles for energy, sensing, and communication. In addition, we examine challenges in the integration of components to realize e-textile systems, and highlight opportunities enabled by innovations in materials science, engineering, and data science.
INTRODUCTION
Wearable technologies allow digital tools to be conveniently and unobtrusively integrated into our everyday lives. Electronic textiles (e-textiles) represent an important example that takes advantage of clothing as a platform for sensing, actuation, display, communication, energy harvesting, energy storage, and computation. Whereas earlier e-textiles were designed by simply attaching conventional electronic components onto clothing, recent advances in materials science and electronics have enabled e-textiles that perform a wide variety of electronic functions while being flexible and breathable. Such e-textiles have gained significant attention in both industry and academia and have been demonstrated for a broad range of applications, including the Internet of things (IoT), artificial intelligence (AI) (Matijevich et al., 2020), body motion tracking (Chun et al., 2018; Kim et al., 2019b), gaming (Zhou et al., 2018), pressure mapping (Lim et al., 2020), rehabilitation, healthcare (Li et al., 2018a; Teferra et al., 2019), smart wearables (Carneiro et al., 2020; Fernández-Caramés and Fraga-Lamas, 2018; Gong et al., 2019), and smart garments (Castano and Flatau, 2014; Ou et al., 2019; Yin et al., 2018b). E-textile-related technologies have been drawing great attention from researchers, and most review articles on e-textiles approach them from the point of view of materials or methods of fabrication (Yong Zhang et al., 2021; Zhang et al., 2021). In this article, we present the key components needed to build independent e-textile systems and review recent progress in the development of e-textiles by functionality: sensing, communication, and energy harvesting and storage, with emphasis on limitations and opportunities for their integration into functional systems. E-textile systems require several key components to perform basic functions with a sufficient level of autonomy, including sensors for data acquisition, energy sources for system power supply and regulation, communication modules for data transmission and interfacing, and reliable interconnections that connect different modules into an integrated system. Figure 1 shows a person wearing a smart running suit that is equipped with various textile-based components. Here, the blue arrows indicate the transmission of data captured by different textile-based sensing elements. Specifically, the physical and bio/chemical data are transmitted from the sensors via conducting elements (e.g., conductive threads) to a wireless communication hub, which then sends the data wirelessly to a computing unit for further analysis. The red arrows indicate how the independent smart suit is powered, using either energy harvesters or energy storage devices. These components (sensors, energy harvesters/storage, communication devices, and interconnections) assemble into an independent smart e-textile system and are discussed in detail in the following sections.
Physical sensors
One example of a textile-based physical sensor is a resistive strain sensor built from a liquid-metal-filled PDMS hollow tube; owing to its small size, the sensor can be woven into fabrics and even a glove, as shown in Figure 2A. By stretching the textile, the sensor inside the fabric is elongated, deforming the PDMS hollow tube and displacing the liquid metal inside, which increases the electrical resistance of the sensor. In another example, a dual-core capacitive microfiber sensor was fabricated. This microfiber sensor comprises a dual-lumen elastomeric microtube filled with liquid metallic alloy, which enables continual strain perception even after being completely severed.
As shown in Figure 2B (Yu et al., 2019), the microfiber sensors were sewn into a fabric glove, enabling the glove to capture hand gestures and monitor respiration rate based on the capacitive change ΔC/C0.
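As a point of reference (a textbook approximation, not the calibration reported for this sensor), the capacitance of an idealized parallel-plate element and its response to axial strain for a nearly incompressible elastomer (Poisson ratio ν ≈ 0.5) are

\[ C = \frac{\varepsilon_0 \varepsilon_r A}{d}, \qquad \frac{\Delta C}{C_0} \approx \varepsilon, \]

so the relative capacitance change tracks the applied strain ε with a gauge factor close to 1.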
Rather than using intrinsically conductive fibers, a number of studies have focused on coating conventional textile fibers with conductive materials. For example, a wearable silk fabric based on carbonizing the pristine silk fiber was reported for stretchable strain sensing. The sensor showed a maximum strain of 520% and withstood 6,000 tensile cycles at 100% strain. A similar carbonization method was used in other studies on e-textiles. These include a piezoresistive MoS2-coated carbonized silk fabric pressure sensor (Lu et al., 2020), polyaniline- and carbon nanotube-coated Au/nylon fiber (Zhao et al., 2020a), and conductive graphene-based e-textile produced via a bubble-exfoliation method and dip coating. The popular dip-coating method has been used to produce resistive e-textiles by simply coating a conductive layer onto the surface of a fabric (Bi et al., 2018; Cai et al., 2017; Li et al., 2019; Lian et al., 2020; Yang et al., 2018a; Yang et al., 2020; Yang et al., 2018b). Another 'all-textile-based' strain sensor used an elastic fabric as a substrate, with conductive yarn as the resistive sensing component woven into the substrate in different patterns (Park et al., 2019). The sensor is reported to detect bending and rotation of human joints. Beyond resistive and capacitive textile-based sensors, triboelectric nanogenerator technology can also be used for pressure-sensing e-textiles. A machine-washable and breathable pressure sensor based on triboelectric nanogenerators was introduced. Two types of yarns were designed: Cu-coated polyacrylonitrile (denoted Cu-PAN) yarns and parylene-coated Cu-PAN (denoted parylene-Cu-PAN) yarns. When the Cu-PAN yarns and parylene-Cu-PAN yarns are in contact with each other in a fabric, a voltage signal is produced when the contact area changes under applied external pressure. The study also showed that the yarns can be knitted into a pressure-sensing fabric glove using a knitting machine, as shown in Figure 2C.
Apart from mechanical force/pressure-detecting e-textile sensors, e-textiles have also been developed to sense other physical parameters in the environment. For example, environmental humidity was reported to be detectable with a flexible capacitive humidity-sensing system, in which the sensor is composed of two copper wires with a layer of yarns in between as the dielectric layer, as illustrated in Figure 2D. As the yarns absorb moisture from the atmosphere, the permittivity of the dielectric increases, leading to an increase in the capacitance of the textile sensor. The moisture-sensing capability of e-textiles can be further extended to moisture management. A double-sided synergetic Janus textile was developed for moisture/thermal management, as demonstrated in Figure 2E, with one side of the textile coated with a hydrophilic polymer and the other side coated with a hydrophobic polymer. The difference in moisture absorption produces a difference in textile pore size, leading to thermal-management capability. Without any coating on the fabric, another moisture-sensing e-textile was fabricated by simply twisting silk yarns, turning the silk into an artificial torsional silk muscle (Jia et al., 2019). This sensor provided a reversible torsional stroke of 547° mm⁻¹ when exposed to water fog, and the coiled-and-thermoset silk yarns provided a 70% contraction when the relative humidity was changed from 20% to 80%. E-textiles can also be designed with multiple sensing capabilities. A silk composite electronic textile combo sensor was designed for measuring both temperature and pressure. As shown in Figure 2F, the fiber sensor is composed of an external Ecoflex sealing layer, silk fibers encapsulated with CNTs and [EMIM]Tf2N as the thermally conductive middle layer, and polyester fibers as the supporting core. Temperature change, detected via the resistance of the silk-fiber middle layer, achieved a sensitivity of 1.23% °C⁻¹, while pressure, sensed via the capacitance between two contacting sensor fibers, achieved a sensitivity of 0.136 kPa⁻¹. By using a chemical vapor deposition (CVD) method to deposit trilayer graphene (TLG) on top of a polypropylene (PP) textile, a carbon-graphene temperature-sensing e-textile was reported to operate at voltages as low as 1.0 V (Rajan et al., 2020).
Figure 2. Textile-based sensors: (B) dual-core capacitive microfiber sensor for e-textile application (Yu et al., 2019); (C) machine-knittable smart glove for pressure sensing; (D) yarn-type humidity sensor; (E) smart Janus textile for moisture/thermal management; (F) silk composite e-textile temperature sensor; (G) WS2 quantum dots on e-textile as a wearable UV photodetector (Abid et al., 2020); (H) biosensing textile platform for chloride ion and pH sensing (Possanzini et al., 2020); (I) integrated e-textile sensor patch for real-time and multiplexed sweat analysis.
For light sensing, RGO and WS2 quantum dots were coated on a piece of pure cotton textile, as shown in Figure 2G (Abid et al., 2020); placed on the back of a finger, the smart fabric detected a 405 nm illumination source with photoresponsivity of up to 5.22 mA W⁻¹ at 1.4 mW mm⁻² power density.
Chemical/biochemical sensors
In addition to the great effort in developing textile-based physical sensors, many chemical/bio/electrochemical sensors have been developed for sensing chemical biomarkers and external environmental markers closely associated with our daily lives. Considering that most e-textiles are in close contact with the human skin, textile-based sensors entail intimate contact that endows easy access to various biofluids. As shown in Figure 2H, a textile-based biofluidic sensor can be produced from simple threads. After being coated with the conducting polymer poly(3,4-ethylenedioxythiophene):poly(styrene-sulfonate) (PEDOT:PSS), the thread is functionalized with a nanocomposite and a chemical-sensitive dye to detect chloride ion and pH level in sweat (Possanzini et al., 2020). Another flexible sweat-analysis patch sensor was designed based on a silk-fabric-derived carbon textile for simultaneous detection of six health-related biomarkers in sweat: glucose, lactate, ascorbic acid, uric acid, Na+, and K+. As demonstrated in Figure 2I, the intrinsically nitrogen (N)-doped porous structured carbon textile (SilkNCT) was used (or combined with other components) as the working electrode of the electrochemical sensors. The sweat sensor patch was further integrated with signal collection and transmission components, making it possible to conduct real-time monitoring of biomarkers in sweat. A mask-printing method was applied to fabricate another sweat-chemical-sensing system by printing thin-layer electrodes on the surface of a glove to measure diverse biomarkers of natural sweat, including zinc, ethanol, pH, and chloride (Bariya et al., 2020; Tang et al., 2021; Wang et al., 2018).
While smart wearable sensors can sense chemicals in our biofluids and provide critical information regarding the health status of our body, they can also be applied to the detection of foreign chemicals for environmental, forensic, or military applications. For example, a wearable electrochemical glove-based sensor was reported to conduct rapid, on-site detection of fentanyl, in order to help prevent drug abuse (Barfidokht et al., 2019). In this device, the flexible electrochemical sensors were integrated on the fingertips of the glove using a screen-printing method. The electrochemical sensing of fentanyl is based on its irreversible oxidation on the composite electrode, which consists of multi-walled carbon nanotubes (MWCNTs) and ionic liquid (shown in Figure 2J) (Barfidokht et al., 2019). The glove sensor can detect fentanyl in both liquid and powder forms with a detection limit of 10 μM using square-wave voltammetry. Similarly, textile-based electrochemical sensors for other chemicals have been developed for the detection of nerve agents, pollutants, or explosives (Bandodkar et al., 2013; Goud et al., 2021; Malzahn et al., 2011). Chemicals under the human skin can also be detected with a bandage-based sensor with minimally invasive microneedles for skin melanoma screening (Ciui et al., 2018). In another study, a textile-based potentiometric electrochemical pH sensor was reported with a thick-film graphite composite as the sensitive electrode and Ag/AgCl as the reference electrode. Both electrodes were printed on cellulose-polyester blend cloth, and the sensor was able to measure pH ranging from 6.0 to 9.0 (Manjakkal et al., 2019). Wearable electrochemical sensors can even detect the tyrosinase (TYR) enzyme, a skin-cancer biomarker, using a catechol substrate. As illustrated in Figure 2K, in the presence of TYR, catechol is oxidized into benzoquinone, which can be detected amperometrically (Manjakkal et al., 2019).
Instead of sensing chemicals in liquid or solid forms, wearable sensors can also sense chemicals in gaseous form. Shown in Figure 2L is a colorimetric gas-sensing e-textile fabricated by applying optically responsive dyes on a thread substrate, before being put into acetic acid for cleaning and PDMS for physical entrapment of the dye (Owyeung et al., 2019). Three types of dyes were tested: 5,10,15,20-tetraphenyl-21H,23H-porphine manganese(III) chloride (MnTPP), methyl red (MR), and bromothymol blue (BTB), for sensing two volatile gases, ammonia and hydrogen chloride. Gas concentrations from 50 to 1000 ppm were tested.
Another inevitable part of these sensors is the conductive component (carbon, eGaIn, etc.), which allows the electrical signal to pass within or on the surface of the substrate. Thus, some of the sensors share similar fabrication methods, especially (strain or pressure) sensors. Dip coating is one of the most popular ways to produce such sensors, as by using this method one can easily coat a conductive/functional layer on top of a normal piece of fabric or textile. However, with different functional components, specifications such as sensitivity, range, and cycle life can vary widely across sensors. Table 2 compiles the different bio/chemical sensors. It can be seen that these types of sensors are comparatively more complex than physical sensors. In order to obtain chemical-sensing capability, electrodes are necessary to support chemical reactions on a flexible substrate; thus, screen printing or mask printing is very popular for depositing small electrodes on the substrate.
Notably, novel 2D carbon-based materials have contributed significantly to the fast development of e-textile sensors. Among these materials, graphene, RGO, MWCNTs, and MXene are the most commonly used. In e-textile sensors, 2D carbon-based materials not only perform as the conducting or sensing component, but their small size and low thickness also help ensure the flexibility of the sensors.
ENERGY FOR E-TEXTILE SYSTEM
The operation of various e-textile sensing modules and the downstream data processing, transmission, and interfacing will have to rely on a compatible energy system. In a self-sustainable independent e-textile system, wearable energy harvesters scavenge energy from various sources and energy storage modules (Abid et al., 2020) regulate the harvested energy and enhance system reliability. In this section, we discuss the commonly employed strategies in developing various energy harvesters and storage devices, and the integration thereof. The specific requirements to meet the standards of an e-textile module and their current limitations are also summarized.
Textile-based energy harvesters
As the power source for an energy-independent autonomous system, the performance of energy harvesters determines the admissible functionality of the system. To fully utilize the diverse sources of energy, energy harvesters based on different energy-generation mechanisms have been developed, harvesting solar or thermal energy from the surrounding environment (Ding et al., 2020; Elmoughni et al., 2019; Hashemi et al., 2020; Hinckley et al., 2021; Pu et al., 2016b; Wen et al., 2016, 2020), or bioenergy associated with human activities and metabolism (Bandodkar, 2017; Bandodkar and Wang, 2016; Dong et al., 2019; Jeerapan et al., 2016; Lund et al., 2018; Xiong and Lee, 2019; Zhang et al., 2015b). In general, energy harvesters commonly seen in e-textile systems fall into two types: those that harvest environmental energy, namely, solar cells that harvest via the photovoltaic effect and thermoelectric generators (TEs) that exploit the Seebeck effect (Bell, 2008); and those that harvest from the human body itself, namely, piezoelectric nanogenerators (PENGs) and triboelectric nanogenerators (TENGs) that harvest biomechanical energy from the movements of the human body, and biofuel cells (BFCs) that generate electricity using microbial or enzymatic redox reactions fueled by metabolites in human biofluids (Dong et al., 2019; Jeerapan et al., 2020; Pang et al., 2017; Ryu et al., 2019). Other types of wearable energy harvesters have also been proposed, based on motion-powered electromagnetic generators (Quan et al., 2015; Zhang et al., 2015a, 2019), breathing-powered pyroelectric generators (Thakre et al., 2019; Xue et al., 2017), or antenna-based electromagnetic radiation harvesters (Abadal et al., 2014), but these are less relevant to the scope of this review and are not discussed here. In general, the fabrication of textile-based energy harvesters can be differentiated into yarn/wire/thread-based devices that constitute the e-textile system in a 'bottom-up' approach, and devices directly fabricated onto fabrics/textiles in a 'top-down' approach. The materials and characteristics of the examples discussed below are summarized in Table 3.
The fabrication of textile-based solar cells requires extensive material and structural engineering to obtain the desired flexible and wearable form factors. As opposed to traditional silicon-based photovoltaic materials, textile-based solar cells that rely on novel organic photovoltaic (OPV), dye-sensitized, and perovskite materials can be fabricated by solution-compatible processes owing to their thin-film nature, which enables flexibility (Hatamvand et al., 2020; Li et al., 2015; Liu et al., 2018; Qiu et al., 2016; Xu et al., 2020). As shown in Figure 3A, one of the most common strategies for integrating solar cells on textile is through the functionalization of fibers and yarns, which can thereafter be weaved or sewn into the fabric (Chen et al., 2016).
TEs made of various organic and inorganic thermoelectric materials have been used for harvesting energy from the temperature gradient between the human body and the surrounding environment. As the temperature gradient between the human body and the surroundings ranges from 5 to 20 K, a single cell can only generate an extremely low voltage. TEs using n-type and p-type thermoelectric materials can be connected in series to increase voltage and power (usually on the order of 10-10² mV and 10⁻¹-10³ pW). The power of a TE varies with the load, with the ideal load equal to the internal resistance of the device. In addition to the serial connection, the direction of each p-n junction must be parallel with the temperature gradient, so the design of the harvester requires skillful spatial arrangement. As shown in Figure 3D, the p-type material (PEDOT:PSS) and the n-type material (poly[Na(NiETT)]) were arranged in a hexagonal layout and connected with a Hilbert curve to reach a high fill factor of 30%, and the interconnections were printed to sequentially connect the n-type and p-type thermoelectric materials in series (Elmoughni et al., 2019). Alternatively, the thermoelectric material can be extruded into fibers with segments of p-type materials (CNT) and n-type materials (PEI-CNT), which were weaved into textile to establish a hierarchical structure of p-n junctions and generate >80 pW of power per square of textile (Figure 3E) (Ding et al., 2020).
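For reference, the open-circuit voltage of N thermocouples in series and the maximum power transferred to a matched load follow the standard thermoelectric relations (a generic result, not specific to the devices above):

\[ V_{\mathrm{oc}} = N\,(S_p - S_n)\,\Delta T, \qquad P_{\max} = \frac{V_{\mathrm{oc}}^2}{4 R_{\mathrm{int}}}, \]

where S_p and S_n are the Seebeck coefficients of the p- and n-type legs, ΔT is the temperature difference across the junctions, and R_int is the internal resistance of the device; this is why both serial connection and low internal resistance matter.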
PENG and TENG bioenergy harvesters scavenge energy produced by human movements, and hence do not rely on the external environment and can generate power on demand. Invented by Wang et al. in 2006 (Wang and Song, 2006), PENGs scavenge energy from mechanical deformation of piezoelectric materials, which induces charge separation within the material (Figure 3F) (Wang and Song, 2006). Wearable PENGs based on inorganic materials such as ZnO and lead zirconate titanate (PZT), or organic materials such as poly(vinylidene fluoride-co-trifluoroethylene) (PVDF-TrFE), are able to generate nW-mW power with alternating voltages of several to tens of volts from daily human movements (Khan et al., 2012; Lee et al., 2012; Mokhtari et al., 2020; Wu et al., 2012). As an example, shown in Figure 3G, piezoelectric PVDF melt-spun into microfibers and weaved into textile generates power from bending, twisting, and pulling (Lund et al., 2018). TENGs harvest energy from the relative motion between two materials that have different electron affinities (Fan et al., 2012). TENG-based energy harvesting has a variety of configurations, harvesting energy from vertical contact-separation or from lateral sliding, and collecting the charge movement either between one electrode and ground or between two electrodes (Figure 3H) (Dong et al., 2019). As all materials have a certain electron affinity, the selection of materials is rather unlimited, with common negative electrode materials selected from electron-rich materials such as PTFE, PVC, PE, PP, and PS, and common positive electrode materials selected from positively charged materials such as aluminum, nylon, and cellulosic materials (Fan et al., 2012). The triboelectric materials can be deposited onto flexible substrates such as textiles or fabricated into yarn-type materials that weave directly into textiles (Figure 3I), thus harvesting energy from body movements (Paosangthong et al., 2019; Wen et al., 2019). Similar to PENGs, the power generated by TENGs is alternating high voltage (tens to hundreds of volts) and requires regulation before the generated energy can be harvested and stored.
BFCs, promising wearable energy harvesters, collect energy from metabolites in human biofluids, such as glucose, urea, alcohol, and lactate. As lactate has the highest concentration in sweat, lactate-based BFCs have been widely studied (Bandodkar and Wang, 2016; Chen et al., 2019; Jeerapan et al., 2020). Lactate-based BFCs rely on an enzyme-catalyzed oxidation reaction to convert lactate into pyruvate on the bioanode, complemented by an oxygen reduction reaction on the cathode catalyzed by Pt or BOx (Figure 3J) (Jeerapan et al., 2020; Jia et al., 2014; Yin et al., 2021a). As the BFC operates based on the availability of lactate in sweat, high-intensity exercise is usually required to generate a significant amount of sweat. Unlike PENG- and TENG-based harvesters, the sweat can be stored in reservoirs or hydrogels for subsequent use, allowing energy harvesting even after movement stops. BFCs can be fabricated in yarn form factors or printed onto textile substrates, and integrated onto shirts or garments to harvest energy from human perspiration (Figures 3K-3L) (Jeerapan et al., 2016; Kwon et al., 2014).
Textile-based energy storage devices
The energy storage devices in wearable e-textile systems can be generally classified into two types: batteries and supercapacitors, both relying on the storage of charge in electrochemical cells. In general, a battery stores energy based on the redox conversion of the anode and cathode materials or the intercalation and deintercalation of cations that shuttle between the anode and cathode hosts. Supercapacitors store energy through surface reactions on capacitive and pseudocapacitive electrodes, relying on high-surface-area materials (e.g., CNT, graphene, and MXene) for non-faradaic double-layer charge adsorption and desorption, and on highly reversible redox materials (e.g., conductive polymers, Prussian blue analogs, and TMDs) (Borenstein et al., 2017; Hu et al., 2020a; Ke and Wang, 2016; Manjakkal et al., 2020). Batteries feature high capacity and energy density but slower reaction rates, whereas supercapacitors support higher power density, due to their fast reaction rates, and high cycle life, yet have lower energy density compared with batteries.
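As a point of reference for these trade-offs, the energy stored in a supercapacitor of capacitance C charged to voltage V, and the maximum power deliverable through its equivalent series resistance R_ESR, follow the standard relations (generic results, not specific to any device above):

\[ E = \tfrac{1}{2} C V^2, \qquad P_{\max} = \frac{V^2}{4 R_{\mathrm{ESR}}}, \]

which is why increasing either the capacitance or the operating voltage window is the main route to higher energy density.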
Wearable Li-ion batteries have been developed with good flexibility and stretchability endowed by structural innovations (Xu et al., 2013; Yin et al., 2018a). However, they are prone to overheating and explosion and are deemed less suitable for wearable applications. In contrast, Zn-based batteries are much safer, easy to fabricate, and variable in form factor (Li et al., 2018b, 2018c; Mo et al., 2020; Parker et al., 2017). Pairing Zn with oxygen in the air or with the oxides of Mn, Ag, and Ni as cathodes, a wide selection of flexible, stretchable, and wearable batteries has been developed in printable planar configurations or in wire or yarn configurations, ready to be integrated with wearable electronics (Figures 3M-3N) (Kumar et al., 2017; Li et al., 2018b, 2018c). Printable Zn-based batteries have achieved areal capacities up to 54 mAh/cm² and currents of up to tens of mA, demonstrating the ability to steadily power various kinds of microcontrollers and integrated systems with display and sensing functionalities (Yin et al., 2021c). High-capacitance supercapacitors have also been developed that can supply instant high power to electronics and be rapidly recharged. Conductive polymer (e.g., PEDOT:PSS, PPy, and PANi) or 2D-material-coated yarns can be used to fabricate textile supercapacitors with hierarchical structures (Figure 3O) (Anasori et al., 2017; Qu et al., 2016; Sun et al., 2016; Xu et al., 2017). Similarly, such capacitive or pseudocapacitive materials can be formulated into printable inks for printing on textile, which can be combined with special structures (e.g., serpentines) for structural stretchability or with elastomeric binders that endow intrinsic stretchability (Figure 3P) (Pu et al., 2016a). Table 4 summarizes the key characteristics, structures, and fabrication of the selected examples discussed above.
Integration of textile-based energy devices
With the development of various textile-based energy harvesters and storage devices, integrating different kinds of energy devices is a promising route to unprecedented performance. Specifically, integrating different energy storage mechanisms enables both high power density and high energy density (Forouzandeh et al., 2020; Zuo et al., 2017). As examples, Figures 4A and 4B show textile-based battery-supercapacitor hybrid devices based on VO2 and Ni-Co selenide, respectively (Sahoo et al., 2019; Wang et al., 2020a). These devices allow rapid charge and discharge due to the use of highly redox-reversible pseudocapacitive transition metal oxides and dichalcogenides, while maintaining relatively high energy density.
Likewise, the hybridization of energy harvesters has also been widely explored (Lee et al., 2016; Li et al., 2020b; Ryu et al., 2019; Xu et al., 2021; Yin et al., 2021a). Integrations have been demonstrated for harvesters with similar working mechanisms, such as PENGs and TENGs (Song et al., 2018; Zhang et al., 2015b; Zhu et al., 2019), and for harvesters with different working mechanisms, such as TENGs and photovoltaic materials (Pu et al., 2016b). As Figures 4C-4D show, a textile-based hybrid harvester integrates solar cells and TENGs to scavenge energy from two different sources, which enhances system reliability when one of the energy sources is unavailable (Pu et al., 2016b).
To further enhance system reliability, energy storage devices are integrated with energy harvesters. Energy sources for wearable harvesters are highly irregular and uncontrollable, so storage units are required to store or output energy on demand. Furthermore, the storage units can discharge at high currents that the harvesters alone cannot supply, allowing utilization of high-power electronics on e-textiles. As shown in Figures 4E-4G, examples integrating solar cells, TENGs, and BFCs with supercapacitors on textile have been explored (Chai et al., 2016; Lv et al., 2018; Pu et al., 2015; Wen et al., 2016). Such integration allows the energy harvested under sunlight or during movement to be stored for later use after the supply of sunlight, movement, or sweat stops, hence extending the operation time of any electronics powered by these harvesters. Implementing these concepts, many self-powered, autonomous systems that incorporate energy harvesters, storage, power management circuits, and data acquisition and transmission electronics have recently been reported (Yin et al., 2021a; Yu et al., 2020). Many works utilize a similar power utilization scheme, which stores the generated energy in capacitors or supercapacitors and releases it in pulses to power microcontrollers or systems-on-chips, performing the data acquisition-processing-transmission cycle within a few hundred milliseconds. Systems powered by BFC arrays or TENGs have been reported to transmit sensing data on glucose, urea, temperature, or pH of sweat to cell phones without any external power supply (Yu et al., 2020). Alternatively, e-textile systems that combine several harvesters and storage devices have been explored, aiming to establish a microgrid-on-shirt and display the sensing result directly using an electrochromic display, further removing the need for external mobile devices (Yin et al., 2021a, 2021b). Currently, as the energy scavenged by the harvesters is still limited to the microwatt range, the functionality of the integrated systems is rather limited, compatible only with open-circuit potentiometric sensors. The integrated systems also rely on inconvenient power input, such as exercise or direct sunlight, impeding the practicality of the devices. Further increases in the power of on-body harvesters, together with reduced requirements for energy input, are needed to truly expand the practicality and reliability of such self-powered systems.
Figure 4. Integration of textile-based energy devices: (A, B) battery-supercapacitor hybrid devices (Sahoo et al., 2019; Wang et al., 2020a); (C) integrated hybrid harvester combining wire-shaped solar cells and textile-based triboelectric generators, and (D) integration of triboelectric yarn and photovoltaic yarn into a hybrid energy harvesting textile (Pu et al., 2016b); hybrid harvesting-storage devices integrating (E) yarn-based supercapacitors and photovoltaic cells, (F) triboelectric nanogenerator textile and batteries, and (G) printed biofuel cells and supercapacitor on textiles (Chai et al., 2016; Lv et al., 2018; Pu et al., 2015); self-powered systems combining (H) biofuel cells, capacitor, electrochemical sensors and Bluetooth modules, (I) triboelectric nanogenerator, capacitor, and electrochemical sensors with wireless modules, and (J) a textile-based all-printed system integrating biofuel cells and triboelectric generators as harvesters, a supercapacitor as storage, and microcontroller-driven sensors with displays (Yin et al., 2021a; Yu et al., 2020).
WIRELESS COMMUNICATION FOR E-TEXTILE SYSTEM
Over the past decades, developments in materials and fabrication methods have yielded a wide range of sensors that can be implanted in the body (Stuart et al., 2021), attached to the skin (Tricoli et al., 2017), and integrated into textiles (Hatamie et al., 2020) to acquire physiological signals. Textiles, as a second human skin, provide a unique platform for integrating wireless functionality (Weng et al., 2020; Zeng et al., 2014), eventually establishing a digital communication network that wirelessly interconnects these sensors with the digital world (Xie et al., 2020). Unlike the direct wiring method that is widely used in clinical and research settings, such a wireless communication network enables continuous health monitoring without temporal and spatial restraint (Cao et al., 2009; Cui et al., 2019; Liang and Yuan, 2016). In this section, we will introduce the mechanism of wireless communication, integration of wireless modules with textiles, textile antennas, and textile-based body sensor networks. We briefly summarize typical materials, fabrication methods, and features of textile-integrated wireless modules in Table 5.
Wireless communication transfers information between two or more devices through electromagnetic fields ("RFID Handbook," 2010). Near-field communication (NFC) and Bluetooth are the most widely used approaches. In these wireless technologies, the reader antenna generates a time-varying magnetic field, which develops a time-varying electric field by electromagnetic induction; the mutual dependence of these time-varying fields generates a chain effect of electric and magnetic fields in space. In the near field, where the distance between the reader and the transponder is within the wavelength of the electromagnetic field, wireless interconnection is achieved through inductive coupling (Figure 5A). In the far field, such interconnection is established through backscatter coupling, where a small proportion of the emitted electromagnetic field reflected by the transponder is received by the reader antenna (Figure 5B). The transponder microprocessor converts the data stream to switch the load resistor connected with the antenna on and off, which modulates the inductive or backscatter coupling and eventually transmits the data to the reader ("RFID Handbook," 2010).
Advances in CMOS technology have enabled miniaturization of electronics and incorporation of wireless modules into tiny chips with millimeter dimensions. Integrating embedded chips and passive components directly on textile remains a challenge and requires innovation in electronic materials and fabrication methods. Alternatively, a flexible printed circuit board (PCB) is used to assemble all electronics and is then physically attached or adhered to the textile (Niu et al., 2019). The wireless module can be further connected with sensors either wirelessly or by wire. In the wireless approach, the sensor is part of a passive LC circuit and converts the sensing signal into a resonant frequency shift or magnitude variation (Figure 5C) (Nie et al., 2019; Niu et al., 2019). As there is no physical connection, the sensor can be not only on textile (Nie et al., 2019) but also on skin and even implanted in deep tissue (Boutry et al., 2019; Niu et al., 2019; Yeon et al., 2019). The LC circuit can be free of fragile silicon-integrated circuits and completely soft, offering a conformal skin-mimicking interface. However, the inductive coupling between the wireless module and the sensor may be affected by the surrounding environment, such as moisture, human touch, and relative motion, and thus affect data accuracy ("RFID Handbook," 2010). The wired method is generally used to connect the wireless module with textile-integrated sensors (Figure 5D) (Kassal et al., 2018; Mishra et al., 2018). These textile-integrated systems are wearable versions of bench-top devices and can employ most conventional methods, such as electrochemical, electrical, and optical measurements, to detect various kinds of physiological and biochemical signals (Kassal et al., 2018).
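The operating principle of such passive LC sensing can be summarized by the resonant frequency of the tank circuit and its first-order sensitivity to a capacitance change (a standard circuit result, independent of any specific device above):

\[ f_0 = \frac{1}{2\pi\sqrt{LC}}, \qquad \frac{\Delta f_0}{f_0} \approx -\frac{1}{2}\,\frac{\Delta C}{C_0} \quad (\Delta C \ll C_0), \]

so a sensor that transduces its measurand into a capacitance change shifts the resonance that is read out inductively by the external coil.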
Even though miniaturization makes wearable electronics less obtrusive for users, miniaturizing antennas, the critical component for wireless communication, generally deteriorates antenna performance. Instead, a textile antenna, composed of a textile conductive element and another textile material acting as the substrate, is a promising candidate for constructing an unobtrusive wearable communication network. The textile material enables antennas to be thin, lightweight, flexible, robust, inexpensive, and easily integrated into a garment, thus making textile antennas comfortable to wear and durable for long-term usage (Figure 5E) (Xu et al., 2019). Antenna performances such as radiation pattern, gain, resonant frequency, and bandwidth are significantly affected by material characteristics (Brebels et al., 2004; Koski et al., 2014; Salvado et al., 2012). For instance, antenna bandwidth and efficiency are significantly affected by the permittivity and thickness of the dielectric substrate. In general, textiles present a very low dielectric constant, with relative permittivity close to one, as they are very porous materials. As the porous structure can be easily deformed by bending and stretching, and can exchange air and moisture with the environment under the effect of temperature and humidity, the textile's permittivity may change dynamically and result in unstable antenna performance. Textile conductive threads generally have much lower electrical conductivity than metal tracks, resulting in high power loss and low antenna efficiency (Salvado et al., 2012). Innovation in materials, fabrication processes, and antenna design could enable textile antenna performance similar to that of conventional metal antennas, and even maintain performance under various circumstances such as mechanical deformation and harsh environmental factors (Figure 5F) (Kiourti and Volakis, 2015; Lilja et al., 2012; Wang et al., 2012, 2014). Wireless body sensor networks, which simultaneously record signals from multiple anatomical locations, can enhance the utility and reliability of the sensors in broad applications ranging from vital-sign monitoring to fitness tracking (Yang, 2014). Conventional wireless body sensor networks rely on radio-based technologies, such as Bluetooth, and require each sensor node to be separately powered, typically using rigid batteries or bulky energy harvesters. These components limit the degree of skin conformability and user comfort that can be achieved and require periodic replacement or the availability of specialized energy sources for long-term function. In addition, the radiative nature of data transmission results in vulnerability to eavesdropping and necessitates the use of cryptographic techniques to address privacy concerns (Yang, 2014). To overcome these shortcomings, textile-based wireless body sensor networks have been developed.
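The sensitivity of antenna tuning to substrate permittivity discussed above can be illustrated with a first-order half-wave patch estimate; the sketch below uses hypothetical dimensions and permittivity values and ignores fringing fields and effective-permittivity corrections.

```python
import math

# First-order illustration (our sketch, hypothetical numbers): a half-wave
# patch resonates near f0 ~ c / (2 * L * sqrt(eps_r)). A porous textile has
# eps_r near 1 when dry, but moisture uptake raises eps_r and detunes the
# antenna away from its design band.
c = 3.0e8            # speed of light (m/s)
L_patch = 0.050      # patch length (m), sized for ~2.45 GHz on dry fabric

def patch_f0(eps_r, L=L_patch):
    return c / (2 * L * math.sqrt(eps_r))

for eps_r, label in [(1.5, "dry textile"), (1.9, "humid"), (2.6, "soaked")]:
    print(f"{label:11s} eps_r = {eps_r:.1f} -> f0 = {patch_f0(eps_r)/1e9:.2f} GHz")
```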
Near-field-enabled clothing relies on inductive coupling to establish wireless power and data connectivity around the human body (Figure 5G) (Lin et al., 2020). Specifically, the near-field-enabled clothing is fabricated by using computer-controlled embroidery to integrate low-cost conductive threads into textiles as near-field-responsive inductor patterns. By placing devices near these patterns, the time-varying magnetic field generated by a reader such as a smartphone can be relayed from the hub pattern (the one in proximity to the reader) to other connected patterns at meter-scale distances, and then to the respective sensor nodes. Metamaterial textiles, which are clothing structured with conductive textiles, can support surface-plasmon-like modes at communication frequencies and thus provide a platform for the propagation of radio waves around the body (Figure 5H) (Tian et al., 2019). When standard wireless devices are placed near metamaterial textiles, their interconnection is achieved through the propagation of wireless signals as surface waves instead of radiation into the surrounding space. Both near-field-enabled clothing and metamaterial textiles transfer the wireless signal across the conductive textile rather than over the air, enabling the network to operate with high efficiency. The physical localization of wireless signals on the body surface renders the networks immune to interference and inherently secure. In contrast with prior efforts to integrate wireless modules into textiles, near-field-enabled clothing and metamaterial textiles do not incorporate fragile silicon integrated circuits or require physical connectors with nearby devices; they are entirely fabric-based and robust to daily wear.
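To give a rough quantitative feel for why confining signals to the textile improves efficiency, the following sketch evaluates a standard closed-form expression for the maximum efficiency of one resonant inductive hop and treats a relay chain as independent hops; the coupling coefficients and quality factors are hypothetical, and cross-coupling between non-adjacent patterns is ignored.

```python
import math

# Rough sketch, hypothetical values: for one resonant inductive hop with
# coupling coefficient k and coil quality factors Q1, Q2, the maximum link
# efficiency is x / (1 + sqrt(1 + x))**2 with x = k^2 * Q1 * Q2 (a standard
# result from wireless power transfer analysis). A relay chain is modeled
# here as independent hops, so end-to-end efficiency is the per-hop product.
def link_efficiency(k, Q1, Q2):
    x = k * k * Q1 * Q2
    return x / (1 + math.sqrt(1 + x)) ** 2

Q = 80.0
for k in (0.01, 0.05, 0.20):
    per_hop = link_efficiency(k, Q, Q)
    print(f"k = {k:.2f}: per hop {per_hop:5.1%}, 3-hop chain {per_hop**3:6.2%}")
```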
CONDUCTORS FOR E-TEXTILES
An electrical conductor that can be integrated on textiles, termed a textile conductor, is a critical component to interconnect discretely distributed modules around the human body and form an independent e-textile system (Mulatier et al., 2018; Wang and Facchetti, 2019). Textile conductors should not only have electrical conductivity as high as metal conductors, to form a power bus and data network, but also maintain conventional textile properties to enable durable and comfortable wearing; they therefore require innovation in both materials and fabrication methods.

[Figure 5 caption residue: (Niu et al., 2019), (E and F) seamlessly integrating antennas (Kiourti and Volakis, 2015; Xu et al., 2019), and (G and H) building wireless body sensor networks (Lin et al., 2020; Tian et al., 2019).]

Several methods have been developed to fabricate textile conductors, and they can be classified into two categories. One is to integrate conductive threads into textiles using conventional textile methods such as knitting, weaving, sewing, and embroidering (Figure 6A) (Ismar et al., 2020; Mohamadzade et al., 2019; Roh, 2017, 2018; Sanchez et al., 2021; Tsolis et al., 2014). The conductive threads include commercially available yarns, such as metal-plated, metal-filament, and stainless-steel yarns, and polymer threads functionalized with nanomaterials such as nanowires, nanoparticles, and carbon materials. While these integration processes are completely solvent-free, compatible with conventional textile fabrication equipment, and largely maintain textile properties, they generally achieve only millimeter-scale pattern resolution, and the threads are subjected to serious mechanical deformation during fabrication.
The other method is to functionalize textiles with conductive material through printing, coating, or deposition (Figure 6B) (Andrew et al., 2018; Jin et al., 2017; Mohamadzade et al., 2019; Wang and Facchetti, 2019). As textiles are 3D porous structures consisting of a network of interconnected fibers or yarns, these methods create conductive paths on the textiles by filling the voids or coating the network with conductive ink, paste, or precursor, followed by thermal curing or reaction to form a metal composite/coating. While the metal conductive paths can achieve high electrical conductivity, they generally stiffen the textile, block moisture, and are vulnerable to cracking or delamination under mechanical deformation.
For textile conductors to support durable and comfortable wearing, innovations in materials and fabrication methods should endow them with several distinct properties. Textile conductors should maintain high electrical conductivity under repeated mechanical deformation, as they are frequently subjected to stretching, bending, and washing. Pure metallic conductors such as metallic filament yarns generally have a low yield point and are thus susceptible to breakage under bending and washing (Figure 6C) (Hardy et al., 2020). Metallic composites consist of conductive fillers added into a polymer matrix to increase the yield strain, which confers stretchability on the conductor at the cost of decreased electrical conductivity (Figure 6D) (Lee et al., 2015; Matsuhisa et al., 2015, 2017). Achieving high electrical conductivity together with robustness remains a key challenge for textile conductors. To maintain the wearing comfort of the textile, the textile conductor should also be lightweight, breathable, and flexible. As such, the textile should maintain its 3D porous structure after being functionalized with conductive materials (Figure 6E) (Kim et al., 2019a; Wu et al., 2018). Finally, textile conductors should have an insulating layer to protect the wearer and shield the circuit from the effects of temperature, sweat, moisture, and accidental splashes (Yin et al., 2018b).
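The trade-off between filler loading and textile properties noted above can be caricatured with a classical percolation model; the parameters in the sketch below are hypothetical rather than fitted to any cited composite.

```python
# Illustrative percolation picture of the conductivity/comfort trade-off
# (our sketch; sigma0, phi_c and the exponent are hypothetical values):
# above the percolation threshold phi_c, composite conductivity follows
# sigma = sigma0 * (phi - phi_c)**t, so adding filler raises conductivity
# but also stiffens the textile and degrades breathability.
sigma0 = 1e5     # prefactor (S/m)
phi_c = 0.15     # percolation threshold (volume fraction)
t_exp = 2.0      # critical exponent, ~2 for 3D random networks

def composite_conductivity(phi):
    return sigma0 * max(phi - phi_c, 0.0) ** t_exp

for phi in (0.10, 0.20, 0.30, 0.40):
    print(f"filler fraction {phi:.2f} -> sigma = "
          f"{composite_conductivity(phi):8.1f} S/m")
```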
CONCLUSIONS
Owing to their considerable potential for wide application in various fields, e-textile systems have attracted much attention from researchers. These applications include physical, chemical, and biological sensing; energy harvesting and storage; and data interfacing with other smart devices. Studies conducted on e-textiles address washability, nontoxicity, biocompatibility, and mechanical performance, all of which are crucial for practical applications. Nevertheless, limitations still exist in e-textile systems that impede their development into commercial consumer products.
Limitations of e-textile sensors
Firstly, the quality and repeatability of e-textile sensors are difficult to control. Compared with ordinary electronics, the dimensions of the electronics in e-textiles are comparatively small so that flexibility and wearability can be achieved. However, the small size of these fibers or thin coating layers may make high quality and repeatability difficult to achieve. Most reported smart e-textiles are produced in the laboratory and remain at the "proof of concept" stage, without taking the quality and repeatability of the sensor into consideration.
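As a simple illustration of how repeatability could be quantified in practice, the sketch below computes the coefficient of variation of sensor sensitivity across one hypothetical fabrication batch; all numbers are made up for illustration.

```python
import statistics

# Hypothetical illustration of screening batch-to-batch repeatability:
# sensitivities measured on nominally identical textile pressure sensors
# from one batch (made-up numbers). The coefficient of variation (CV) is
# a common figure of merit for fabrication quality control.
batch_sensitivity = [0.92, 1.05, 0.88, 1.10, 0.97, 1.21, 0.84, 1.02]

mean = statistics.mean(batch_sensitivity)
stdev = statistics.stdev(batch_sensitivity)
cv = 100 * stdev / mean
print(f"mean = {mean:.3f}, stdev = {stdev:.3f}, CV = {cv:.1f}%")
print("acceptable" if cv < 10 else "needs tighter process control")
```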
Secondly, there is a lack of mass-production capability. Most smart e-textiles are produced in the laboratory by hand. Moreover, most of these e-textiles are produced either with expensive materials or with complicated fabrication methods, which may lead to high production cost and lower market acceptance. Thus, mass-production capability is difficult to achieve for most laboratory-produced e-textiles.

[Figure 6 caption residue: (A) integrating conductive threads (Ismar et al., 2020), and (B) coating conductive material. Key performances of the textile conductive materials include robustness against (C) washing (Hardy et al., 2020) and (D) mechanical deformation, and (E) water/air permeability (I. Kim et al., 2019a).]

Thirdly, there is a lack of standardization. Even for the same application, different e-textile sensors may have different testing ranges, resolutions, cycle lives, hysteresis, and other characteristics owing to the different materials, fabrication methods, and working mechanisms behind different e-textiles. As such, it is difficult to evaluate or compare different e-textiles. To make e-textiles commercially available in the future, a standard evaluation system is necessary, at least for e-textiles made with mainstream production methods.
Furthermore, the functionality of some e-textile sensors, especially bio/chemical sensors, relies heavily on reactants that have been integrated into the textile. Once the reactants are depleted below a certain level, these sensors lose their sensing capability until the reactants are replenished.
In addition, a lack of compatible technologies may also hinder the development of e-textile sensors. Most reported e-textiles focus mainly on a single component: sensor, energy harvester, or connection. However, no single component can work by itself; each requires other compatible technologies to support it. For example, some e-textile sensors may require a high-power energy device, which may not be currently available in the e-textile market. To use such sensors, a bulky battery may need to be connected, thus compromising the flexibility and wearability of the e-textile.
Limitations of e-textile energy harvesters and storage systems
Energy harvesting still remains the most significant bottleneck for the energy self-sufficiency of the wearable electronics ecosystem, as the energy generated is sufficient to power electronics only in very limited applications. We envision that future developments in novel materials (e.g., 2D materials, conductive polymers, and high-entropy alloys), fabrication methods, and device structures for existing e-textile energy harvesters will bring improvements in performance, wash durability, and stretchability. Furthermore, as more energy-harvesting mechanisms are explored, harvesting systems that work in more diverse environments and scenarios may become available to diversify the energy input to e-textile systems.
Current energy storage devices are limited by their energy density, power density, and cycle life (Yin et al., 2021c). Hence, the functionality of e-textile systems is limited not only by storage performance but also by the need for frequent recharging. Although some attention has been directed to incorporating energy harvesters in wearable systems, their power is generally limited, and no harvesters are yet commercially available for use in e-textiles. Nevertheless, the concept of the wearable microgrid has been proposed recently, advocating the careful budgeting of energy and power in e-textile systems to enhance the practicality and reliability of wearable energy systems; its development will rely on multidisciplinary collaboration to make it a success (Yin et al., 2021a, 2021b).
Limitations of e-textile communication systems
Textiles have been demonstrated as a unique platform for integrating wireless functionalities and bridging the human body with the surrounding physical world without temporal and spatial constraints. To enable public acceptance of wireless e-textiles as conventional clothing, several key challenges remain to be overcome.
Firstly, material and manufacturing innovations are required to seamlessly integrate wireless functionalities into conventional textiles and achieve both high wireless communication performance and durability and comfort in wear. For example, textile conductive materials are the basic elements of textile antennas. However, current textile conductive materials suffer either from low electrical conductivity, which seriously limits wireless power-transfer efficiency, or from low mechanical robustness, which causes e-textiles to lose their functionality during daily wearing or washing.
Secondly, novel wireless technologies are required to build a secure wireless network between multiple devices distributed on textiles, on skin, and even inside the body. Such a network would enable continuous monitoring of physiological signals that were once invisible and achieve clinical-quality data outside the hospital. Power and data must be transmitted reliably across those devices during daily activities without temporal and spatial constraints. The network should also be highly secure against eavesdropping to maintain personal data privacy. | 2022-03-31T15:22:14.488Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "f7499e8d99ac2145c6f7d94a17cd699fd911b729",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ab7a71c8f002410eb80928f53f58ce0bc9882d0a",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
38213269 | pes2o/s2orc | v3-fos-license | Quantum phase transitions without thermodynamic limits
A new microcanonical equilibrium state is introduced for quantum systems with finite-dimensional state spaces. Equilibrium is characterised by a uniform distribution on a level surface of the expectation value of the Hamiltonian. The distinguishing feature of the proposed equilibrium state is that the corresponding density of states is a continuous function of the energy, and hence thermodynamic functions are well defined for finite quantum systems. The density of states, however, is not in general an analytic function. It is demonstrated that generic quantum systems therefore exhibit second-order (continuous) phase transitions at finite temperatures.
The derivation of phase transitions in quantum statistical mechanics typically requires the introduction of a thermodynamic limit, in which the number of degrees of freedom of the system approaches infinity. This limit is needed because the free energy of a finite system is analytic in the temperature. But phase transitions are associated with the breakdown of the analyticity of thermodynamic functions such as the free energy. Hence in the canonical framework the thermodynamic limit is required to generate phase transitions. Although the existence of this limit has been shown for various systems (see, e.g., [1]), the procedure can hardly be regarded as providing an adequate description of critical phenomena.
One can consider, alternatively, a derivation based on the microcanonical ensemble. The usual construction of this ensemble [2] is to define the entropy by setting $S = k_B \ln n_E$, where $n_E$ is the number of energy levels in a small interval $[E, E+\Delta E]$. The temperature is then obtained from the thermodynamic relation $T\,\mathrm{d}S = \mathrm{d}E$. This approach, however, is not well formulated because (a) it relies on the introduction of an arbitrary energy band $\Delta E$, and (b) the entropy is a discontinuous function of the energy. To resolve these difficulties, a scheme for taking the thermodynamic limit in the microcanonical framework was introduced in [3]. For finite systems, however, the difficulties have remained unresolved.
The purpose of this paper is to demonstrate the following: (i) if the microcanonical density of states is defined in terms of the relative volume, in the space of pure quantum states, occupied by the states associated with a given energy expectation E, then the entropy of a finitedimensional quantum system is a continuous function of E, and the temperature of the system is well defined; and (ii) the density of states so obtained is in general not analytic, and thus for generic quantum systems predicts the existence of second-order phase transitions, without the consideration of thermodynamic limits.
It is remarkable in this connection that similar types of second-order transitions have been observed recently for classical spin systems, for which the associated configuration space possesses a nontrivial topological structure [4].
The paper is organised as follows. We begin with the analysis of an idealised quantum gas to motivate the introduction of a new microcanonical distribution. This leads to a natural definition of the density of states $\Omega(E)$. Unlike the number of microstates $n_E$, the microcanonical density $\Omega(E)$ is continuous in $E$. As a consequence, we are able to determine the energy, temperature, and specific heat of elementary quantum systems, and work out their properties. In particular, we demonstrate that in the case of an ideal gas of quantum particles, each particle being described by a finite-dimensional state space, the system exhibits a second-order phase transition, where the specific heat decreases abruptly.
Ideal gas model. Let us consider a system that consists of a large number $N$ of identical quantum particles (for simplicity we ignore issues associated with spin statistics). We write $\hat{H}_{\rm total}$ for the Hamiltonian of the composite system, and $\hat{H}_i$ ($i = 1, 2, \ldots, N$) for the Hamiltonians of the individual constituents of the system. The interactions between the constituents are assumed to be weak, and hence to a good approximation we have $\hat{H}_{\rm total} = \sum_{i=1}^{N} \hat{H}_i$. We also assume that the constituents are approximately independent and thus disentangled, so that the wave function for the composite system is approximated by a product state.
If the system as a whole is in isolation, then for equilibrium we demand that the total energy of the composite system should be fixed at some value $E_{\rm total}$. In other words, we have $\langle \hat{H}_{\rm total} \rangle = E_{\rm total}$. It follows that $\sum_{i=1}^{N} \langle \hat{H}_i \rangle = E_{\rm total}$. Now consider the result of a hypothetical measurement of the energy of one of the constituents. In equilibrium, owing to the effects of the weak interactions, the state of each constituent should be such that, on average, the result of an energy measurement is the same. That is to say, in equilibrium, the state of each constituent should be such that the expectation value of the energy is the same. Therefore, writing $E = N^{-1} E_{\rm total}$, we conclude that in equilibrium the gas has the property that $\langle \hat{H}_i \rangle = E$. That is to say, the state of each constituent must lie on the energy surface $\mathcal{E}_E$ in the pure-state manifold for that constituent. Since $N$ is large, this will ensure that the uncertainty in the total energy of the composite system, as a fraction of the expectation of the total energy, is vanishingly small. Indeed, this follows from the Chebyshev inequality for any choice of $x > 0$. Therefore, for large $N$ the energy uncertainty of the composite system is negligible. For convenience, we can describe the distribution of the various constituent pure states, on their respective energy surfaces, as if we were considering a probability measure on the energy surface $\mathcal{E}_E$ of a single constituent. In reality, we have a large number of approximately independent constituents; but owing to the fact that the respective state spaces are isomorphic we can represent the behaviour of the aggregate system with the specification of a probability distribution on the energy surface of a single "representative" constituent.
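The scaling behind this argument can be made explicit with a short numerical sketch (our illustration, with arbitrary per-constituent values): because variances of independent constituents add, the fractional uncertainty of the total energy falls off as $1/\sqrt{N}$.

```python
import math

# Sketch of the scaling behind the Chebyshev argument: for N weakly
# interacting constituents in a product state the variances add, so the
# total uncertainty grows like sqrt(N) while the mean grows like N.
# E and dH below are illustrative per-constituent values.
E = 1.0      # per-constituent energy expectation <H_i>
dH = 0.5     # per-constituent uncertainty Delta H_i

for N in (10, 1_000, 100_000):
    fractional = (math.sqrt(N) * dH) / (N * E)   # Delta H_total / E_total
    print(f"N = {N:>7}: fractional energy uncertainty = {fractional:.5f}")
```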
Microcanonical equilibrium. In equilibrium, the distribution is uniform on the energy surface, since the equilibrium distribution should maximise an appropriate entropy functional on the set of possible probability distributions on $\mathcal{E}_E$. From a physical point of view we can argue that the constituents of the gas approach an equilibrium as follows: on the one hand, weak exchanges of energy result in all the states settling on or close to the energy surface; on the other hand, the interactions will induce an effectively random perturbation in the Schrödinger dynamics of each constituent, causing it to undergo a Brownian motion on $\mathcal{E}_E$ that in the long run induces uniformity in the distribution on $\mathcal{E}_E$. We conclude that the equilibrium configuration of a quantum gas is represented by a uniform measure on an energy surface of a representative constituent of the gas.
The theory of the quantum microcanonical equilibrium state presented here is analogous in many respects to the symplectic formulation of the classical microcanonical ensemble described in [5]. There is, however, a subtle difference. Classically, the uncertainty in the energy is fully characterised by the statistical distribution over the phase space, and for a microcanonical distribution with support on a level surface of the Hamiltonian the energy uncertainty vanishes. Quantum mechanically, however, although the statistical contribution to the energy variance vanishes, there remains an additional purely quantum-mechanical contribution. Hence, although the energy uncertainty for the composite system is negligible for large $N$, the energy uncertainties of the constituents will not in general vanish. An expression for $\Delta H$ will be given in equation (9) below.
Density of states. To describe the equilibrium represented by a uniform distribution on the energy surface $\mathcal{E}_E$, it is convenient to use the symplectic formulation of quantum mechanics. Let $\mathcal{H}$ denote the Hilbert space of states associated with a constituent. We assume that the dimension of $\mathcal{H}$ is $n+1$. The space of rays through the origin of $\mathcal{H}$ is a manifold $\Gamma$ equipped with a metric and a symplectic structure. The expectation of the Hamiltonian along a given ray of $\mathcal{H}$ then defines a Hamiltonian function $H(\psi) = \langle\psi|\hat{H}_i|\psi\rangle / \langle\psi|\psi\rangle$ on $\Gamma$, where the ray $\psi \in \Gamma$ corresponds to the equivalence class $|\psi\rangle \sim \lambda|\psi\rangle$, $\lambda \in \mathbb{C}\backslash\{0\}$. The Schrödinger evolution on $\mathcal{H}$ is a symplectic flow on $\Gamma$, and hence we may regard $\Gamma$ as the quantum phase space. Our approach to quantum statistical mechanics thus unifies two independent lines of enquiry, each of which has attracted attention in recent years: the first of these is the "geometric" or "dynamical systems" approach to quantum mechanics, which takes the symplectic structure of the space of pure states as its starting point [6]; and the second of these is the probabilistic approach to the foundations of quantum statistical mechanics in which the space of probability distributions on the space of pure states plays a primary role [7].
The level surface $\mathcal{E}_E$ in $\Gamma$ is defined by $H(\psi) = E$. The entropy associated with the corresponding microcanonical distribution is $S(E) = k_B \ln \Omega(E)$, where the density of states is

$\Omega(E) = \int_\Gamma \delta\big(H(\psi) - E\big)\,\mathrm{d}V_\Gamma. \quad (2)$

Here $\mathrm{d}V_\Gamma$ denotes the volume element on $\Gamma$. In a microcanonical equilibrium the temperature is determined intrinsically by the thermodynamic relation $T\,\mathrm{d}S = \mathrm{d}E$, which implies that $k_B T = \Omega(E)/\Omega'(E)$, where $\Omega'(E) = \mathrm{d}\Omega(E)/\mathrm{d}E$. Since the density of states $\Omega(E)$ is differentiable, the temperature is well defined. Other thermodynamic quantities can likewise be precisely determined. For example, the specific heat $C(T) = \mathrm{d}E/\mathrm{d}T$ is given by

$C(T) = \frac{k_B\,[\Omega'(E)]^2}{[\Omega'(E)]^2 - \Omega(E)\,\Omega''(E)}.$

Consider a large system composed of two independent parts, each in a state of equilibrium. Each subsystem is thus described by a microcanonical state with support on the Segré variety corresponding to disentangled subsystem states. Let us write $\Omega_1(E_1)$ and $\Omega_2(E_2)$ for the associated state densities, where $E_1$ and $E_2$ are the initial energies of the two systems. Now imagine that the two systems interact weakly for a period of time, during which energy is exchanged, following which the systems become independent again, each in a state of equilibrium. As a consequence of the interaction the state densities of the systems will now be given by expressions of the form $\Omega_1(E_1+\epsilon)$ and $\Omega_2(E_2-\epsilon)$, for some value of the exchanged energy $\epsilon$. The value of $\epsilon$ can be determined by the requirement that the total entropy $S(E) = k_B \ln[\Omega_1(E_1+\epsilon)\,\Omega_2(E_2-\epsilon)]$ should be maximised. A short calculation shows that this condition is satisfied if and only if $\epsilon$ is such that the temperatures of the two systems are equal. This argument shows that the definition of temperature that we have chosen is a natural one, and is physically consistent with the principles of equilibrium thermodynamics.
Phase transitions. The quantum microcanonical ensemble introduced here is applicable to any isolated finite-dimensional quantum system for which the ideal gas approximation is valid. The volume integral in (2) can be calculated by lifting the integration from $\Gamma$ to $\mathcal{H}$ and imposing the constraint that the norm of $|\psi\rangle$ is unity. Then we can write (up to an overall normalisation)

$\Omega(E) = \int_{\mathcal{H}} \delta\big(\langle\psi|\hat{H}|\psi\rangle - E\big)\,\delta\big(\langle\psi|\psi\rangle - 1\big)\,\mathrm{d}V_{\mathcal{H}}, \quad (3)$

where $\mathrm{d}V_{\mathcal{H}}$ is the volume element of $\mathcal{H}$. Making use of the standard Fourier integral representation for the delta function, and diagonalising the Hamiltonian, we find that (3) reduces to a series of Gaussian integrals (see [8] for details). Performing the $\psi$-integration we then obtain the integral representation (4) for the density of states. The two integrals appearing in (4) correspond to the delta functions associated with the energy constraint $H(\psi) = E$ and the norm constraint $\langle\psi|\psi\rangle = 1$. Carrying out the integration we find that the density of states is given by equation (5), where $\mathbf{1}\{A\}$ denotes the indicator function ($\mathbf{1}\{A\} = 1$ if $A$ is true, and $0$ otherwise). In (5) we let $m$ denote the number of distinct eigenvalues $E_j$ ($j = 1, 2, \ldots, m$), and we let $\delta_j$ denote the multiplicity associated with the energy $E_j$, so that $\sum_{j=1}^{m} \delta_j = n+1$. In the nondegenerate case, for which $\delta_j = 1$ for $j = 1, 2, \ldots, m$, this reduces to the simpler expression (6). With these expressions at hand we proceed now to examine some explicit examples.

Nondegenerate spectra. In the case of a Hamiltonian with a nondegenerate spectrum of the form $E_k = \varepsilon(k-1)$, $k = 1, 2, \ldots, n+1$, where $\varepsilon$ is a fixed unit of energy, the density of states (6) reduces to a piecewise polynomial form: $\Omega(E)$ is a polynomial of degree $n-1$ in each interval $E \in [E_j, E_{j+1}]$, and for all values of $E$ it is at least $n-2$ times differentiable. In Fig. 1 we plot $\Omega(E)$ for several values of $n$. For a system in equilibrium the accessible values of $E$ are those for which $\Omega'(E) \geq 0$. States for which $\Omega'(E) < 0$ have "negative temperature" in the sense of Ramsey [9]. The structure of the space of pure states in quantum mechanics is intricate, even for relatively elementary systems. In particular, as the value of the energy changes, the topological structure of the energy surface undergoes a transition at each eigenvalue [10]. For example, in the case of a nondegenerate three-level system, the topology of the energy surface changes according to $\mathrm{point} \to S^3 \to S^1 \times \mathbb{R}^2_{\#} \to S^3 \to \mathrm{point}$ as the energy is raised from $E_{\min}$ to $E_{\max}$ ($\mathbb{R}^2_{\#}$ denotes a two-plane compactified into $S^2$ at a point corresponding to the intermediate eigenstate). These structural changes in the energy surfaces induce a corresponding nontrivial behaviour in the thermodynamic functions.
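The construction above lends itself to a direct numerical check. The following sketch (our illustration, not the authors' code) samples pure states uniformly on $\mathbb{CP}^3$ by normalising complex Gaussian vectors, histograms the energy expectation to estimate $\Omega(E)$ for the equally spaced four-level spectrum, and obtains $k_B T = \Omega/\Omega'$ by finite differences; the negative-temperature branch appears where $\Omega' < 0$.

```python
import numpy as np

# Numerical sketch of the construction above: uniform pure states on CP^n
# are sampled by normalising complex Gaussian vectors; histogramming
# E = <psi|H|psi> estimates the density of states Omega(E), and
# k_B T = Omega/Omega' follows by finite differences. We use the equally
# spaced four-level spectrum E_k = eps*(k-1) discussed in the text.
rng = np.random.default_rng(0)
eps = 1.0
levels = eps * np.arange(4)                          # nondegenerate 4 levels

n_samples = 200_000
z = rng.normal(size=(n_samples, 4)) + 1j * rng.normal(size=(n_samples, 4))
psi = z / np.linalg.norm(z, axis=1, keepdims=True)   # uniform on CP^3
energies = (np.abs(psi) ** 2) @ levels               # <psi|H|psi>, H diagonal

hist, edges = np.histogram(energies, bins=120, density=True)
E_mid = 0.5 * (edges[:-1] + edges[1:])
omega = hist                                  # proportional to Omega(E)
omega_prime = np.gradient(omega, E_mid)
with np.errstate(divide="ignore", invalid="ignore"):
    kT = omega / omega_prime                  # noisy k_B T(E); Omega' < 0
                                              # gives the negative-T branch
idx = np.argmin(np.abs(E_mid - 1.0))
print(f"Omega(E) peaks near E = {E_mid[np.argmax(omega)]:.2f} eps")
print(f"rough k_B T at E = 1.0 eps: {kT[idx]:.2f} eps")
```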
As an illustration we consider a four-level system and compute the specific heat as a function of temperature. The result is shown in Fig. 2, where we observe that the specific heat drops abruptly from $2k_B$ to $\tfrac{1}{2}k_B$ at the critical temperature $T_c$ defined by $k_B T_c = \tfrac{1}{2}\varepsilon$. Therefore, this system exhibits a second-order phase transition, in this case at the critical energy $E_c = \varepsilon$. This example shows that the relationships between phase transitions and topology discovered recently in classical statistical mechanics [11] carry over to the quantum domain where, arguably, they may play an even more basic role.
For a system with a larger number of nondegenerate eigenstates, the specific heat also increases abruptly as $T$ is reduced. In this case the specific heat is continuous, and the discontinuity is in a higher-order derivative of the energy. For a system with $n+1$ nondegenerate energy eigenvalues, the $(n-1)$-th derivative of the energy with respect to the temperature has a discontinuity. The phenomenon of a continuous phase transition is generic, and is also observed if the eigenvalue spacing is not uniform.
Degenerate spectra. In a system with a degenerate spectrum, the phase transition can be enhanced. In particular, the volume of $\mathcal{E}_E$ increases more rapidly as $E$ approaches the first energy level from below, if this level is degenerate. This leads to a more abrupt drop in the specific heat (Fig. 2). | 2017-09-07T07:28:51.880Z | 2005-11-16T00:00:00.000 | {
"year": 2005,
"sha1": "9c8817c2d8dc617c18f3865b2a9715f605f27732",
"oa_license": null,
"oa_url": "http://spiral.imperial.ac.uk/bitstream/10044/1/1205/1/QuantumPhaseTransitionsWithoutThermodynamicLimits.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b02db26f8d9dc81f943a5b809ed2d51ba3dc92c6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
81576380 | pes2o/s2orc | v3-fos-license | EVALUATION OF PREOPERATIVE DIAGNOSTIC ACCURACY OF MODIFIED ALVARADO SCORING SYSTEM IN ACUTE APPENDICITIS
BACKGROUND The Modified Alvarado Scoring System (MASS) has been shown to be an easy, simple and cheap diagnostic tool for supporting the diagnosis of acute appendicitis. However, its application and usefulness in the diagnosis of acute appendicitis had not been evaluated in the current study setting. Hence, this study was conducted. A cross-sectional study was conducted over a period of 18 months among 107 patients presenting at the surgery OPD, Sri Siddhartha Medical College Hospital and Research Centre, Tumkur, with complaints of pain in the right iliac fossa and suspected features of acute appendicitis. Informed consent was taken. After applying the inclusion and exclusion criteria, study subjects were categorised based on the Modified Alvarado Scoring System. Subjects with scores of ≥ 7 underwent appendicectomy, and those with scores of < 7 were re-scored after conservative management. The preoperative diagnostic accuracy was evaluated. The most frequent complaint was nausea and/or vomiting. The sensitivity, specificity, positive (PPV) and negative predictive values (NPV) and diagnostic accuracy were 97.4%, 79.3%, 92.7%, 92.0% and 92.5% respectively. The negative appendicectomy rate was 7.3%. A MASS score of ≥ 7 was found to have high sensitivity, PPV and NPV and good specificity, and hence can be used as a diagnostic indicator of acute appendicitis in low-resource settings.
It has been stated that ultrasonography is one of the preoperative evaluation techniques that dramatically reduces the number of appendicectomies in patients without appendicitis. 8 Alvarado developed a scoring system for the early diagnosis of acute appendicitis in 1986. Based on clinical signs, symptoms and differential leucocyte count with a left shift of neutrophil maturation, it yielded a maximum score of 10. 2 Kalan M et al produced the Modified Alvarado Score, a 9-point scoring system that helps increase the accuracy of preoperative diagnosis and thus reduce the negative appendicectomy rate. A score of 7 or more has been recommended for surgery. 9,10 The MASS has been shown to be an easy, simple and cheap diagnostic tool for supporting the diagnosis of acute appendicitis, especially for junior surgeons. 11,12 However, its application and usefulness in the diagnosis of acute appendicitis had not been evaluated in the current study setting, and hence this study was conducted with the following objectives: 1) to evaluate the specificity and sensitivity of the Modified Alvarado Scoring System in the diagnosis of acute appendicitis and 2) to assess the rate of negative laparotomies.
MATERIALS AND METHODS
A cross-sectional study was conducted at the outpatient Department of Surgery, Sri Siddhartha Medical College Hospital and Research Centre, Tumkur district, Karnataka, for a period of 18 months (October 1st 2015 to March 31st 2017) among patients presenting at the surgery OPD with complaints of pain in the right iliac fossa and suspected features of acute appendicitis. Considering an average prevalence of acute appendicitis of 89.3% as per other studies, a 95% confidence interval and a permissible error (L) in the estimate of 'p' of 10%, a total sample size of 96.94 was calculated using the formula n = z²pq/L², where z = 1.96 at the 95% confidence interval, p = estimated prevalence (89.3%), q = 100 − p (10.7%) and L = permissible error (10% of p). Rounding the total sample size (n) of 96.94 to 97 and adding a 10% non-response (97 + 9.7 = 106.7), 107 patients were considered for the study. Patients provisionally diagnosed as having acute appendicitis and willing to give consent were included, while patients with appendicitis in pregnancy, with an appendicular mass or abscess, on chemotherapy or radiotherapy, and immunocompromised patients were excluded. Ethical approval was obtained from the IEC Committee of Sri Siddhartha Medical College Hospital and Research Centre, Tumkur district, Karnataka. After obtaining written informed consent from patients, a detailed clinical history was taken as per the proforma. All patients were examined clinically and subjected to routine blood investigations (complete haemogram, bleeding time, clotting time, urine sugar, albumin and microscopy, blood sugar levels, blood urea, serum creatinine), chest x-ray, ECG and ultrasonography of the abdomen and pelvis. The study subjects were categorised based on the modified Alvarado score. Those with scores of 1-3 were sent back with oral antibiotics and asked to report back if symptoms persisted after the course of antibiotics. Those with scores of 4-6 were admitted, given parenteral antibiotics and reassessed over the next 24 hours for revision of the score. If the score became ≥ 7 or the clinical condition was highly suspicious of acute appendicitis, they underwent appendicectomy. Those with scores of 7-9 were taken for surgery. For those with modified Alvarado scores of ≥ 7 and < 7, positives were determined based on histopathological confirmation; for negatives, HPE confirmation was considered for patients with scores of ≥ 7 and USG for patients with scores of < 7, as not everybody in the study was subjected to surgery. Following the determination of positives and negatives, the modified Alvarado score was evaluated in terms of sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy. Negative laparotomies were assessed only for those with scores of ≥ 7.
Statistical Analysis
The collected data were entered into an Excel sheet. The data were expressed as means and proportions, and presented in the form of tables and graphs wherever necessary. The mean and standard deviation of age, and the median and interquartile range of modified Alvarado scores, were calculated. Sensitivity, specificity, positive predictive value and negative predictive value were calculated using the standard formulae, e.g. Sensitivity = [True Positives / (True Positives + False Negatives)] × 100. The analysis was done using a standard statistical package. The association between the scores and other standard evaluation techniques was assessed using Fisher's exact test. A P-value of < 0.05 was taken as statistically significant.
RESULTS
The mean age of the study participants was 26.54 ± 10.46 yrs. The majority, i.e. 45.8%, were in the age group of 21-30 yrs. Males predominated (67.3%). The most frequent complaint was nausea and/or vomiting (86.7%), followed by loss of appetite and migration of pain. 95.2% had right iliac fossa tenderness, followed by rebound tenderness in 79.0%. 80.0% had an elevated temperature and 49.5% had leucocytosis. 76.6% had a modified Alvarado score of ≥ 7 and the rest scored < 7, among whom 10/25 (40.0%) had scores of 1-3 and 15/25 (60.0%) had scores of 4-6 (Table 1). The median modified Alvarado score was 7 with an interquartile range (IQR) of 7-8.
[Table 1. Characteristics of the study participants; age in years (mean ± SD): 26.54 ± 10.46.]

[Table 3. Sensitivity, specificity, PPV and NPV of modified Alvarado scoring correlated with preoperative USG and/or HPE confirmation; column totals: 78 (a+c) disease-positive, 29 (b+d) disease-negative, 107 overall. *Indicates statistical significance at p < 0.05.]
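For transparency, the reported metrics can be reproduced from the underlying 2×2 table. The cell counts in the sketch below (a = 76, b = 6, c = 2, d = 23) are our back-calculation from the published column totals and rates, not values printed in the paper.

```python
# The 2x2 cell counts below are our back-calculation from the reported
# column totals (78 disease-positive, 29 disease-negative, n = 107) and
# the published rates; the paper itself prints only totals and percentages.
a, b, c, d = 76, 6, 2, 23   # TP, FP, FN, TN for MASS >= 7 vs. USG/HPE

sensitivity = a / (a + c)              # 76/78  -> 97.4%
specificity = d / (b + d)              # 23/29  -> 79.3%
ppv = a / (a + b)                      # 76/82  -> 92.7%
npv = d / (c + d)                      # 23/25  -> 92.0%
accuracy = (a + d) / (a + b + c + d)   # 99/107 -> 92.5%
neg_appendicectomy = b / (a + b)       # 6/82   ->  7.3%

metrics = [("sensitivity", sensitivity), ("specificity", specificity),
           ("PPV", ppv), ("NPV", npv), ("diagnostic accuracy", accuracy),
           ("negative appendicectomy rate", neg_appendicectomy)]
for name, value in metrics:
    print(f"{name}: {100 * value:.1f}%")
```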
DISCUSSION
Acute appendicitis being one of the common acute emergencies, many diagnostic techniques have been recommended, viz. clinical scoring systems, USG, CT scans, MRI and laparoscopy, to identify the condition with adequate accuracy, avoid negative laparotomies and prevent the complications of delayed diagnosis such as appendiceal perforation. USG is a cheap, quick and non-invasive diagnostic technique with an accuracy rate of 71%-90% for the diagnosis of acute appendicitis. An absolute and confirmed diagnosis is only possible at surgical exploration and histopathological examination of the removed appendix. The Modified Alvarado scoring system is one such technique to aid diagnosis. 3,13,14 In the present study, a total of 107 patients were included and the age of the participants ranged from 2-65 yrs. with a mean of 26.54 ± 10.46 yrs. Similarly, Kanumba ES et al included a total of 127 patients in their study, with ages ranging from 8 to 76 years (mean 29.64 ± 12.97). 15 Sabhnani G et al reported that the majority of patients with acute appendicitis were in the age group of 21-40 yrs., which parallels the current study findings, where the majority were in the 21-30 years age group. 16 Kumar SK et al also included participants ranging between 7-65 yrs., and males dominated the study, which is similar to the current study findings. 8 Gujar N et al reported migration of pain to the right iliac fossa as the commonest symptom, followed by nausea and vomiting and anorexia. 2 However, in the current study, the commonest symptom was nausea and vomiting, followed by anorexia and migration of pain to the right iliac fossa. The slight difference may be due to the varied presentation of appendicitis. Among the examination findings, the majority (95.2%) had right iliac fossa tenderness, which has been reported as 100.0% in other studies. 2,17 Following right iliac fossa tenderness, the majority had fever and rebound tenderness, similar to the findings of Rithin PS et al. Leucocytosis was seen in 60.0% of the patients in our study, and similarly it was found in 65.9% of the patients as noted by Rithin PS et al. 17 Rithin PS et al noted that 79% of the patients presented with a modified Alvarado score of ≥ 7 and 21% with a score of < 7. Similarly, in our study 76.6% had a modified Alvarado score of ≥ 7 and 23.4% had a score of < 7. The median modified Alvarado score was 7, similar to the findings of Kanumba ES et al. 15 Vandakudri AB et al found 33 patients in the score group of 5-6, of whom 9 were operated on for a clinically high probability of acute appendicitis; similarly, 25 were in the score group of < 7, of whom 2 underwent appendicectomy. 18 The sensitivity, specificity, positive predictive value and negative predictive value of modified Alvarado scoring in the present study were 97.4%, 79.3%, 92.7% and 92.0% respectively. Diagnostic accuracy was found to be 92.5%. Similarly, Alamgir et al reported a sensitivity of 94.14%, in agreement with the present study, but a specificity of 66.66%, lower than in the present study. 19 Raghavan SN et al documented nearly similar results, with a sensitivity and specificity of 90.4% and 81.25% respectively. 20
Singh SK et al reported positive and negative predictive values for the Modified Alvarado Score of 91.42% and 65% respectively, with a diagnostic accuracy of 81.82%; however, the negative predictive value was higher in the present study. 8 HPE and USG findings were significantly associated with the modified Alvarado scores in the current study, similar to the findings noted by Ramachandra J et al. 21 The negative appendicectomy rate in the present study was 7.3%, and similarly Kodliwadamath HB et al found a comparable rate (7.6%) of negative laparotomies. 10 However, differences in diagnostic accuracy have been observed, as the scores were applied to various populations and clinical settings. 8
Limitations
The study needs to be conducted with larger samples to generalise the results. As surgery was not performed in all study subjects, the sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy of modified Alvarado scoring were assessed by comparison with combined USG and HPE results.
CONCLUSION
The modified Alvarado score in the current study setting was found to have high sensitivity, positive predictive value, negative predictive value and overall diagnostic accuracy in diagnosing acute appendicitis. It also has good specificity, and hence aids in fairly differentiating conditions mimicking acute appendicitis. The negative laparotomy rate being low, it also reduces misdiagnosis. Thus, we conclude that the use of the Modified Alvarado score in a low-resource setting can be a very good tool to identify acute appendicitis early and prevent the progression of the disease and its complications. | 2019-03-18T14:04:23.623Z | 2018-07-09T00:00:00.000 | {
"year": 2018,
"sha1": "613b0279ead66b9bbf83ee32dc00897277a82387",
"oa_license": null,
"oa_url": "https://doi.org/10.14260/jemds/2018/731",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cc8ca754aa1a86276b515092ca30709058118234",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265345000 | pes2o/s2orc | v3-fos-license | Research on the Training Model of Network Security Talents in Local Universities under the Background of "Double First Class" Construction
Abstract
I. INTRODUCTION
At present, cyberspace is increasingly linked to many fields such as national life, the economy and education, and international competition over the right to develop, dominate and control cyberspace is becoming increasingly fierce. Cyberspace security has become an important cornerstone of national security. In the new era, the key to maintaining our country's national security and building a strong network power lies in network security talents. In June 2016, the Central Leading Group Office of Cybersecurity and Informatization, together with five other departments, issued the "Opinions on Strengthening the Construction of Cybersecurity Discipline and the Cultivation of Talents", 1 which clearly stated that it is necessary to accelerate the construction of cybersecurity academic disciplines and professional departments, and to innovate the training model for cybersecurity talents. The state attaches great importance to the cultivation of cybersecurity talents. In December 2016, the State Internet Information Office issued the "National Cyberspace Security Strategy", 2 and in 2017 it issued the "Regulations on the Security Protection of Critical Information Infrastructure (Draft for Comment)", 3 which repeatedly emphasized the important role of cyberspace security talents in critical infrastructure. To expedite "double first-class" construction, in August 2018 the Ministry of Education, the Ministry of Finance and the National Development and Reform Commission issued the "Guiding Opinions on Accelerating the Construction of 'Double First-Class' in Colleges and Universities", which pointed out that emphasis should be placed on serving major national and regional strategies, integrating educational resources from multiple parties, and strengthening the training of professionals in urgently needed fields such as national security and international organizations. 4 In January 2022, the Ministry of Education, the Ministry of Finance and the National Development and Reform Commission issued the "Several Opinions on Further Promoting the Construction of World-Class Universities and First-Class Disciplines", 5 pointing out that efforts should be made to solve problems such as the insufficient supply of high-level innovative talents that still exist in "double first-class" construction. This series of policies indicates that the construction of cybersecurity disciplines and the cultivation of cybersecurity talents have risen to an unprecedented height. 6 However, there are still deficiencies in the training of cybersecurity talents in China. Various white papers on cybersecurity talents issued by authoritative institutions show that there is still a shortage of cybersecurity talents in our country, and the talent training model remains to be improved. 7 Therefore, training high-quality cybersecurity talents is an urgent task of the times.
In this paper, starting from the basic status quo of cybersecurity talent cultivation in local colleges and universities, we analyze the challenges faced in cultivating cybersecurity talents along multiple dimensions. Specifically, we explore the cybersecurity talent cultivation mechanism of local colleges and universities under the background of "double first-class" construction from four aspects: policy guarantee, teacher guarantee, service guarantee and fund guarantee. We also study effective paths for implementing cybersecurity talent cultivation in local colleges and universities.
II. The Current Situation and Challenges of Cybersecurity Talent Cultivation in Local Colleges and Universities under the Background of "Double First-Class" Construction
In order to promote the construction of a cyber power and provide talent support for maintaining cyberspace security, it is urgent for local colleges and universities to accelerate the construction and improvement of a high-level training system for network security talents and to explore a training mechanism suitable for colleges and universities at present. 8 In recent years, some cybersecurity education experts at home and abroad have investigated and analyzed the current situation of network security talent training in local universities, 9 and put forward countermeasures and suggestions on how to train qualified cybersecurity talents that meet the needs of the national cybersecurity strategy and local economic development. These countermeasures and suggestions can be summarized in three aspects. 10-12 First, from the perspectives of government agencies, universities and enterprises, the practical dilemma of cybersecurity talent cultivation was analyzed at a macro level, and corresponding countermeasures were given in terms of local regional characteristics, school-enterprise cooperation methods, discipline characteristic construction, and the establishment of talent infrastructure. Second, drawing on the cybersecurity talent cultivation mechanisms of several major cyber powers in the world, the integrated "teaching, learning, training and operating" cybersecurity talent cultivation mechanism was proposed from the aspects of top-level strategy, academic education and safeguard measures. Third, the current situation and outstanding problems of cybersecurity talent cultivation in China were systematically explained from the perspective of talent supply and demand, and feasible suggestions were given in terms of talent cultivation orientation, teacher-team construction, school-enterprise cooperation, and experimental training.
In the current context of "Double First-Class" construction, the cultivation of cybersecurity talents in local colleges and universities still faces the following challenges [13,14].
The Mechanism for Cultivating Cybersecurity Talents is Not Yet Perfect, and the Cybersecurity Educational Culture with the University's own Characteristics Has Not Been Formed:
Cybersecurity talents bear great missions and responsibilities in maintaining national cyberspace security, and they need to possess firm political, moral and legal literacy as well as strong psychological quality. In terms of policy guarantee, with the central idea of cultivating morality and integrity, it is necessary to improve the training mechanism of cybersecurity talents and create a network security education culture with the university's own characteristics, starting from ideology and mechanism construction. However, this process faces huge challenges.
The Quality of Cybersecurity Talent Cultivation Faculty Needs to be Further Improved:
Qualified network security talents should have a solid theoretical foundation and application innovation ability, which is inseparable from the education and guidance of high-level teachers. In terms of strengthening the policy guarantee of cybersecurity talent cultivation, there are still many difficulties in further improving the high-level teaching staff system, establishing a multi-level teaching team, and consolidating the innovative teaching team.
The Application and Innovation Ability of Cybersecurity Talents Cannot Meet the Needs of Maintaining National Cybersecurity in the New Era:
Cybersecurity talents shall possess solid engineering practice ability, application innovation ability and a broad international vision. Therefore, in terms of improving the application and innovation ability of cybersecurity talents, it is still necessary to explore and improve vocational training for cybersecurity talents, establish a system for discovering and training special talents, and deepen educational cooperation at home and abroad.
Shortage of Funds for Running Schools Has Hindered the Rapid Development of High-Level Network Security Professional Departments:
In order to improve the quality and level of cybersecurity professional personnel training in colleges and universities, funding support is indispensable for the construction of a high-level teaching staff, continuous investment in and updating of experimental equipment, and the construction of industry-education and science-education integrated practice bases. However, at present, local colleges and universities face a shortage of educational funds from government financial investment, which hinders the rapid development of high-level cybersecurity professional departments.
III. Research and Practice on the Training Mechanism of Cybersecurity Talents in Local Universities under the Background of "Double First-Class" Construction
Based on the reasons mentioned above, this paper, combining the characteristics of cultivating high-quality innovative cybersecurity talents with the actual situation of local colleges and universities, takes the cultivation of cybersecurity talents at Chengdu University of Information Technology as an example, proposes and practices a new mechanism for cultivating cybersecurity talents from four aspects: policy guarantee, teacher guarantee, service guarantee and fund guarantee, and gives specific and effective implementation paths (as shown in Figure 1).
A. Strengthening Policy Protection and Improving the Cybersecurity Talent Cultivation Mechanism.
Cybersecurity talents bear major missions and responsibilities in maintaining national cyberspace security, and must have firm political, moral and legal literacy, as well as strong psychological quality. In terms of policy guarantees, we focus on cultivating people with moral integrity and concentrate on ideology, policy guarantees and mechanism construction to improve the cybersecurity talent training mechanism and create a cybersecurity talent education culture with local university characteristics.
Paying Attention to Ideological Education and Increasing Policy Support:
We should further strengthen the ideological and political education within cybersecurity personnel training, pay attention to psychological education, adhere to moral education as the center, and run ideological and political work throughout the whole process of cybersecurity personnel training. We must first adhere to political principles and the correct ideological direction. It is also suggested to strengthen the political review and conditional approval of cybersecurity talent training.
Starting from the establishment of an ideological and political education system, comprehensively promoting the construction of the "ideological and political curriculum" and innovating education and teaching methods, we should strengthen the integration of talent, technology and morality, and organically integrate ideological and political education, such as political identity, feelings for family and country, ideological and moral education, and laws and regulations, with the teaching of professional theoretical knowledge and skills, so as to create a cybersecurity education culture of "loving the Party and the country" with high political consciousness, a strong concept of the rule of law and good moral quality. The government should further increase policy support, strengthen policy coordination and matching, and achieve breakthroughs from zero in major national project plans. In view of the current shortcomings, the government should coordinate the financial funds for higher education and the reform and development of local colleges and universities, and actively guide and support the colleges and universities in achieving breakthroughs from zero and rapid development under the "Six Excellence and One Top" plan.
1) Overall Planning, Formulating Standardized Systems, and Improving Talent Training Mechanisms:
The government should build and strengthen the top-level design plan for cybersecurity talent training in local colleges and universities. It is proposed to establish an "Integrated Leadership Group for Cybersecurity Talent Training in Local Colleges and Universities" led by the local government and its relevant departments (such as the education department, science and technology department, industry and information technology department, human resources and social security department, public security department, and the Cyberspace Administration of China). The group should coordinate the promotion and implementation supervision of local cybersecurity development and talent training plans. At the same time, the government should further establish and improve the collaborative innovation mechanism for cybersecurity industry-education integration. Enterprises should be encouraged to participate deeply in the training of cybersecurity talents in colleges and universities, and the collaborative education of colleges and universities, scientific research institutes and industry enterprises should be promoted, so as to cultivate cybersecurity talents in a targeted manner and build collaborative innovation centers.
2) Supporting Universities in Implementing Cybersecurity Talent Training Programs and Talent Incentive Mechanisms:
We should implement talent training programs in cybersecurity-related majors (for example, the Excellent Engineer Education and Training Program) and establish cybersecurity talent incentive mechanisms that reflect the characteristics of local colleges and universities. The government should establish special funds, combined with funds from industry, to reward excellent cybersecurity talents, teachers, and standards. A selection and reward system for "Excellent Cybersecurity Teachers" and "Excellent Cybersecurity Students" in local colleges and universities should be established and carried out.
B. Improving and Optimizing the Teaching Staff to Enhance the Faculty Level for Cultivating Cybersecurity Talents.
Qualified cybersecurity talents should have a solid theoretical foundation and the ability to innovate in applications, which is inseparable from the education and guidance of high-level teachers. This project intends to strengthen faculty development in three respects: improving the high-level teaching staff system, establishing a multi-level teaching team, and consolidating the innovative teaching team.
1) Accelerating the Establishment and Improvement of a First-Class Team of Cybersecurity Talent Teachers:
To accelerate the establishment of a high-level, multi-level, and innovative cybersecurity faculty team at the local government and school levels, we specially invite experienced and highly skilled cybersecurity technology and management experts and industry-specific professionals to serve as part-time teachers. At the same time, we should vigorously support cybersecurity teachers in strengthening cooperation and exchange at home and abroad, conducting visiting research, and organizing and participating in all kinds of cybersecurity skills competitions and domestic and foreign academic conferences on cybersecurity. We can also invite well-known experts and scholars from home and abroad to visit and give lectures, and dispatch young teachers with development potential to well-known cybersecurity research institutions and enterprises for study visits and research.
2) Constantly Expanding the Team of Part-Time "Double-Qualified" Teachers in Cybersecurity:
We should establish a part-time teacher team composed of experts and engineers from renowned universities at home and abroad and from the cybersecurity industry and enterprises, to participate in formulating the development plans and construction of relevant cybersecurity disciplines and in guiding the construction of research platforms and practice training platforms. In addition, the part-time teachers can take part in teaching and scientific research activities; guide students' practical training, graduation designs and theses, career planning, and employment; and deliver academic reports on cutting-edge cybersecurity theories and technologies to teachers and students in colleges and universities. Engineers and technicians from enterprises and institutions are particularly encouraged to bring topics and projects to the school for medium- and short-term teaching and research work.
3) Supporting Innovative Breakthroughs of Cybersecurity Teachers in Fundamental Theoretical Research and Applied Innovation:
The government should further support, through policies and financial investment, fundamental theoretical research and applied innovation in cybersecurity by high-level talents and enterprises. It should establish corresponding incentive mechanisms and special funds, increase the number of graduate students and scientific and technological projects in cyberspace security, encourage outstanding faculty and scholars to conduct independent innovative research, and promote core breakthroughs and the long-term development of the local cybersecurity industry.
C. Enhancing Service Support and Improving the Application and Innovation Capabilities of Social Cybersecurity Talents:
Cybersecurity talents should possess solid engineering practice ability, application innovation ability, and a broad international vision. Therefore, to improve the application and innovation ability of cybersecurity talents, we mainly strengthen three aspects: the vocational training of cybersecurity talents, the establishment of a system for discovering and training special talents, and educational cooperation at home and abroad.
1) Further Improving the Career Training System for Cybersecurity Talents:
The local government should establish coordinated, unified certification standards and a training model shared among the government, certification bodies, vocational training institutions, colleges and universities, and cybersecurity enterprises and institutions; strengthen the on-the-job training of cybersecurity practitioners; and establish a unified, standard training system for local cybersecurity practitioners.
2) Establishing a Discovery System for "Gifted" and "Expert" Talents and Funding Their Cultivation:
To fund these efforts, the government can raise money through channels such as government grants, corporate and institutional donations, and crowdfunding. The government can establish a local cybersecurity development fund under its supervision, or launch high-level cybersecurity technology competitions through school-enterprise cooperation, to discover and cultivate talents of different levels and capabilities. Furthermore, individual differences should be taken into account in their education and cultivation.
3) Deepening Domestic and Foreign Education Cooperation in the Field of Cybersecurity and Establishing a Special Mechanism for Domestic and Foreign Communication and Coordination:
The government should emphasize the promotion of domestic and foreign joint undergraduate, graduate, and dual-degree training programs, and strengthen the construction of high-level demonstration models for cooperative education at home and abroad. At the same time, the government should formulate policies and measures for international scientific research cooperation in cybersecurity, the introduction of high-level foreign talents, the construction of an internationalized faculty, student exchanges at home and abroad, the cultivation and management of international students, and the construction of international programs, and should establish a specific management department responsible for this work.
D. Increasing Financial Support to Promote the Rapid Development of High-Level Cybersecurity Professional Colleges and Universities.
To improve the quality and level of cybersecurity personnel training in colleges and universities, funding support is indispensable for building a high-level teaching staff, for the continuous investment in and updating of experimental equipment, and for constructing practice bases that integrate industry with education and science with education.
IV. ACHIEVEMENT
Relying on the national first-class major in information security and the Sichuan Province "first-class discipline" construction point in cyberspace security, the cybersecurity talent cultivation of Chengdu University of Information Technology is based on the construction idea of "facing national strategies, docking social needs, building brands with quality, and promoting development with innovation." It is committed to cultivating application-oriented, innovative cybersecurity talents with healthy minds and bodies, good humanistic quality, systematic theoretical knowledge, and solid engineering ability for national and local economic development.
After more than two years of research and practice, the reform of the cybersecurity talent cultivation mechanism has achieved initial success. A high-level cybersecurity teaching team has been established, with doctoral degree holders accounting for 61% and "double-qualified" teachers accounting for 94.6%. Take the 2023 graduates of our school's information security major as an example: the students obtained more than 40 intellectual property authorizations (more than 10 patents and more than 30 software copyrights), published more than 20 papers, won more than 50 awards in scientific and technological competitions (more than 10 national first prizes and more than 20 provincial and ministerial first prizes), obtained more than 30 innovation and entrepreneurship projects (8 national, 12 provincial, and 16 school-level), and participated in more than 100 research projects of teacher teams. The employment rate of graduates reached 95.2%, demonstrating a good talent training effect.
V. CONCLUSIONS
In the new era, the key to maintaining China's national security and enhancing China's strength in cyberspace lies in cybersecurity talents. However, the cultivation of cybersecurity talent in China is still insufficient. White papers on cybersecurity talents issued by authoritative institutions show that there is still a shortage of cybersecurity talents in China and that the talent cultivation model needs improvement. The cultivation of high-quality cybersecurity talents is urgent.
To cultivate cybersecurity application innovation talents adapted to national and local economic development, local colleges and universities can combine the characteristics of cultivating high-quality cybersecurity innovation talents with their own actual situations, and explore and practice cybersecurity talent cultivation mechanisms from four aspects: policy guarantees, faculty guarantees, service guarantees, and funding guarantees, focusing on how to cultivate cybersecurity application innovation talents with healthy minds and bodies, good humanistic quality, systematic theoretical knowledge, and solid engineering ability. This article provides reference opinions on how local colleges and universities can explore their own brand characteristics in cybersecurity talent training, and helps them comprehensively improve the system construction, quality, and efficiency of cybersecurity talent training.
"year": 2023,
"sha1": "a7c86485428f12937d802d683f9c90a27c8bc6b8",
"oa_license": null,
"oa_url": "https://doi.org/10.36347/sjet.2023.v11i11.001",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "944cc2a153653549607bd7b8d2988a31205d6f82",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": []
} |
Shape Control of Deformable Linear Objects with Offline and Online Learning of Local Linear Deformation Models
The shape control of deformable linear objects (DLOs) is challenging, since it is difficult to obtain the deformation models. Previous studies often approximate the models in purely offline or online ways. In this paper, we propose a scheme for the shape control of DLOs, where the unknown model is estimated with both offline and online learning. The model is formulated in a local linear format, and approximated by a neural network (NN). First, the NN is trained offline to provide a good initial estimation of the model, which can directly migrate to the online phase. Then, an adaptive controller is proposed to achieve the shape control tasks, in which the NN is further updated online to compensate for any errors in the offline model caused by insufficient training or changes of DLO properties. The simulation and real-world experiments show that the proposed method can precisely and efficiently accomplish the DLO shape control tasks, and adapt well to new and untrained DLOs.
I. INTRODUCTION
Deformable linear objects (DLOs) refer to deformable objects in one dimension, such as ropes, elastic rods, wires, cables, etc. The demand for manipulating DLOs is reflected in many applications, and a significant amount of research efforts have been devoted to the robotic solutions to these applications. For example, wires are manipulated to assemble devices in 3C manufacturing [1]; belts are manipulated in assemblies of belt drive units [2]; and in surgery, sutures are manipulated to hold tissue together [3].
The manipulation tasks of DLOs can be divided into two categories [4]. In the first category, the goals are not about the exact shapes of DLOs; rather, they concern high-level conditions such as insertion [5], tangling or untangling knots [6], obstacle-avoidance [7], flex and flip [8], etc. The second category is about manipulating DLOs to desired shapes, where one key challenge is to obtain the unknown deformation models, i.e., how the robot motion affects the DLO shapes. This paper focuses on the shape control tasks.
Different from rigid objects, it is challenging to obtain the exact models of DLOs, because they are hard to calculate theoretically, and may vary significantly among DLOs. Some analytical physics-based modeling methods can be used to model DLOs, such as mass-spring systems, position-based dynamics, and finite element methods [9]. However, all are approximate models, and require accurate parameters of DLOs which are difficult to acquire. Data-driven approaches have been applied to learn the deformation models, without studying the complex physical dynamics. A common method is to first learn a forward kinematics model offline, and then use model predictive control in manipulation [10]-[13]. Although this allows for learning accurate forward kinematics of an offline-trained DLO, problems may arise when manipulating a different untrained DLO since there is no online update. Reinforcement learning methods have also been studied [14], [15], but they are less data-efficient, and the transfer from trained scenarios to untrained scenarios is challenging. Besides these offline approaches, some studies have used purely online methods to estimate the local linear deformation model of manipulated DLOs, which can be applied to any new DLO [16]-[19]. However, these online estimated models are less accurate because only a small amount of local data can be utilized.

[Fig. 1. Overview of the proposed scheme for DLO shape control. The shape of the DLO is represented by multiple features along the DLO. Some of the features are chosen as target points, and the task is defined as moving the target points to their desired positions. In the offline phase, an initial estimation of the deformation model is learned. Then, in the online phase, the shape control task is executed, and the model is further updated to compensate for offline modeling errors.]
In this paper, we propose a scheme for the shape control of DLOs, where the unknown deformation model is estimated with both offline and online learning, shown in Fig. 1. It allows more accurate modeling through offline learning and further updating for specific DLOs via online learning during manipulation. Specifically, we use a radial-basis-function neural network (RBFN) to model the mapping from the current state to the current local linear deformation model. In the offline phase, the RBFN is trained on the collected data. The offline model then directly migrates to the online phase as an initial estimation. In the online manipulation phase, an adaptive controller is proposed to control the shape, in which the RBFN is further updated to adapt to the manipulated DLO concurrently. Thus, the offline learning and online learning complement each other. In addition, we apply proper state representations and domain randomization methods to improve the model's generalization ability. The simple structure of RBFN and the linear format of the deformation model enable data-efficient offline training and fast online adaptation. Compared to the nonlinear offline methods, our method is more data-efficient and can adapt to untrained DLOs; compared to the purely online methods, our method is more stable and can handle more complex tasks. The stability of the closed-loop system and the convergence of task errors are analyzed using the Lyapunov method. Simulation and real-world experiment results are presented to demonstrate the better performance of the proposed scheme compared to the previous data-driven methods. The video and code are available at https://mingrui-yu.github.io/shape_control_DLO/.
II. RELATED WORK
In this section, existing approaches for the shape control of DLOs will be discussed.
Analytical physics-based modeling of DLOs has been researched over the past several decades [9]. Some works about shape control were based on physics-based models. In [20], finite element model (FEM) simulation of DLOs was used for open-loop shape control. An approach using reduced FEM to closed-loop shape control of DLOs was proposed in [21]. These methods highly rely on the accuracy of the analytical model, requiring the accurate DLO parameters which are hard to obtain in reality.
Data-driven approaches have been applied to the shape control of DLOs recently, dispensing with analytical modeling. In [22]- [24], the shaping of DLOs was addressed by learning from human demonstrations. Robots could reproduce human actions for specific tasks. Reinforcement learning (RL) has also been applied to learn policies for DLO shape control in an end-to-end manner. A simulated benchmark of RL algorithms for deformable object manipulation was presented in [14], in which the Soft Actor Critic (SAC) algorithm performs best in rope straightening tasks. RL policies for shape control of elastoplastic DLOs were learned in [15]. Like other RL applications, these methods suffer from high training expenses and challenging transfer from simulation to real-world scenarios.
Different from the end-to-end methods, many works first learn forward kinematics models of DLOs offline, and then use model predictive control (MPC) to control the shape. The forward kinematics model predicts the shape at the next time step based on the current shape and input action. In [10], [11], an encoder from the image space to the latent space, and a forward kinematics model in the latent space, were jointly trained. A more robust and data-efficient approach is to estimate the DLO state first and then learn the forward kinematics in the physical state space. A bi-directional LSTM network whose structure is similar to chain-like DLOs was applied in [12]. An interaction network was integrated with a bi-directional LSTM network in [13] to better learn the local interactions between segments of DLOs. The problem with these offline methods is that the generalization to different untrained DLOs cannot be guaranteed.
To control the shape of unknown objects, a series of methods tackle the shape control problem based on purely online estimation of the local linear deformation models of DLOs, in which a small change of the DLO is linearly related to a small displacement of the robot by a locally effective estimated Jacobian matrix. The control input can be directly calculated using the inverse of the Jacobian matrix.
In [17]- [19], the local Jacobian matrix was obtained using the (weighted) least square estimation on only the data in the current sliding window. However, the accuracy of the online estimated models is limited and cannot be improved with more data. Thus, these methods mostly handle tasks with local and small deformation.
III. METHODOLOGY
This paper considers the quasi-static shape control of elastic DLOs. 'Quasi-static' refers to the motion being slow, in which the shapes of DLOs are assumed to be determined by only their potential energies and no inertial effects [16]. As illustrated in Fig. 1, the robot end-effectors grasp the ends of the DLO and manipulate it to the desired shape. The overall shape of the DLO is represented by the positions of multiple features uniformly distributed along the DLO. The target points are chosen from the features, and the task is defined as moving the target points on the DLO to their corresponding desired positions. The specific choice of the target points depends on the task needs.
Some frequently-used notations are listed as follows. The vertical concatenation of column vectors a and b is denoted as [a; b]. The position vector of the end-effectors is represented as $r \in \mathbb{R}^n$. The position of the i-th feature is represented as $x_i \in \mathbb{R}^l$. The overall shape vector of the DLO is represented as $x = [x_1; \cdots; x_m] \in \mathbb{R}^{lm}$, where m is the number of features. The dimensions n and l are adjustable according to the requirements of the task.
A. Local Linear Deformation Model
One key problem of the shape control of DLOs is to study the mapping from the motion of the end-effectors to the motion of the DLO features. The velocity vector of the DLO features can be locally linearly related to the velocity vector of the end-effectors using a Jacobian matrix [16]-[19]. Different from the previous works, we estimate the Jacobian matrix by learning the mapping from the current state (x, r) to the current Jacobian matrix J, i.e., J can be obtained as a function of x and r:

$$\dot{x} = J(x, r)\,\dot{r} \qquad (1)$$

Proposition 1: With the quasi-static assumption, the velocity vector of the features on the elastic DLO can be related to the velocity vector of the end-effectors as (1).
Proof: Denote the potential energy of the elastic DLO as E, which is assumed to be fully determined by x and r. Under the quasi-static assumption, internal equilibrium holds at all states during the manipulation, where the DLO's internal shape x locally minimizes the potential energy E [25]. That is, ∂E/∂x = 0 at any state. Consider the DLO moved from state (x, r) to state (x + δx, r + δr), where δx and δr are small displacements of the features and the end-effectors. Denote ∂E/∂x as g(x, r), ∂²E/(∂x∂x) as A(x, r), and ∂²E/(∂x∂r) as B(x, r). Using a Taylor expansion and neglecting higher-order terms, we have

$$g(x+\delta x, r+\delta r) \approx g(x, r) + A(x, r)\,\delta x + B(x, r)\,\delta r \qquad (2)$$

where g(x + δx, r + δr) = g(x, r) = 0. Note that A and B physically represent the unknown stiffness matrices. Assuming the DLO has a positive and full-rank stiffness matrix around the equilibrium point, matrix A is invertible [16]. Then, it can be obtained that

$$\delta x = -\left(A(x, r)\right)^{-1} B(x, r)\,\delta r \qquad (3)$$

In slow manipulations, ẋ ≈ δx/δt and ṙ ≈ δr/δt with small δt. Then, denoting −(A(x, r))⁻¹B(x, r) as J(x, r), we derive (1) and prove the proposition.

Note that (1) can be rewritten as

$$\dot{x} = [\dot{x}_1; \cdots; \dot{x}_m] = [J_1(x, r); \cdots; J_m(x, r)]\,\dot{r} \qquad (4)$$

Thus, it can be obtained that

$$\dot{x}_k = J_k(x, r)\,\dot{r}, \quad k = 1, \cdots, m \qquad (5)$$

which indicates that different features correspond to different Jacobian matrices. This formulation makes it convenient to choose any subset of features as the target points in the manipulation task.
However, it is difficult to theoretically calculate the Jacobian matrix. We estimate the Jacobian matrix in a data-driven way, combining both offline learning and online learning.
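To make Proposition 1 concrete, the following is a minimal numerical sketch (not from the paper) that checks the relation δx ≈ −A⁻¹B δr on a toy quadratic energy model: two feature points coupled by springs to each other and to a single end-effector. All constants and the energy function are illustrative assumptions.

```python
# Check Proposition 1 on a toy energy model: the finite-difference Jacobian of
# the quasi-static equilibrium map r -> x(r) should match -A^{-1} B.
import numpy as np
from scipy.optimize import minimize

k1, k2, k3 = 2.0, 1.0, 1.5  # assumed spring stiffnesses

def energy(x, r):
    """Potential energy E(x, r); x = [x1, x2] feature positions, r end-effector."""
    x1, x2 = x
    return 0.5 * (k1 * x1**2 + k2 * (x2 - x1 - 1.0)**2 + k3 * (r - x2 - 1.0)**2)

def equilibrium(r):
    """Quasi-static assumption: the shape x minimizes E at fixed r."""
    return minimize(lambda x: energy(x, r), x0=np.zeros(2)).x

# Stiffness matrices A = d2E/dx2 and B = d2E/(dx dr) of this toy model.
A = np.array([[k1 + k2, -k2], [-k2, k2 + k3]])
B = np.array([[0.0], [-k3]])
J_prop = -np.linalg.solve(A, B)  # Jacobian predicted by Proposition 1

# Jacobian of the equilibrium map by central finite differences.
r0, eps = 2.0, 1e-5
J_fd = ((equilibrium(r0 + eps) - equilibrium(r0 - eps)) / (2 * eps)).reshape(-1, 1)

print(J_prop.ravel(), J_fd.ravel())  # the two estimates agree closely
```

This quantity, −A⁻¹B, is exactly the local Jacobian that the data-driven estimation described next is trained to predict.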
B. Offline Learning
Prior to the shape control tasks, a data-driven learning method is employed to obtain the initial estimation of the model, based on offline collected data.
We apply a neural network (NN) to approximate the Jacobian matrix, in which the input is the current state and the output is the Jacobian. Two properties of the Jacobian can be noticed intuitively: (1) translation-invariance: translation of the whole DLO without changes of the shape will not alter the Jacobian matrix; (2) approximate scale-invariance: DLOs with different lengths but similar overall shapes and the same number of features may have similar Jacobian matrices. Thus, to improve the NN's generalization ability, we modify the representation of the input state from [x; r] to a relative representation φ, defined in (6), in which the features and the grasped ends are expressed in a translation- and scale-normalized form.

[Fig. 2. An illustration of the proposed DLO state representation in (6). The position of the k-th feature x_k can be determined by the relative representation together with the positions and orientations of the left and right grasped ends.]

As illustrated in Fig. 2, this relative representation only determines the overall shape, ignoring the scale and overall translation. Therefore, it is much more data-efficient than the absolute representation [x; r], which requires a larger network and training dataset to guarantee generalization to different DLOs. Note that this representation avoids using relative positions between adjacent features, which may suffer from accumulated errors when perception errors exist.
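The exact construction of φ in (6) is not shown here, so the sketch below only illustrates the idea stated in the text: a state representation that preserves the overall shape while discarding overall translation and scale. The centering on the midpoint of the grasped ends and the scaling by the distance between them are hypothetical choices, and the end-effector orientations shown in Fig. 2 are omitted for brevity.

```python
# Hypothetical relative state representation in the spirit of (6); the exact
# construction used in the paper may differ.
import numpy as np

def relative_state(x, r_left, r_right):
    """x: (m, l) feature positions; r_left, r_right: (l,) grasped-end positions."""
    center = 0.5 * (r_left + r_right)                # assumed reference point
    scale = np.linalg.norm(r_right - r_left) + 1e-9  # assumed scale factor
    x_bar = (x - center) / scale         # translation- and scale-normalized features
    r_bar = (r_right - r_left) / scale   # unit vector between the grasped ends
    return np.concatenate([x_bar.ravel(), r_bar])
```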
Then, (5) can be rewritten as

$$\dot{x}_k = J_k(\phi)\,\dot{r}$$

We apply a radial-basis-function neural network (RBFN) to represent the actual Jacobian matrix as a function of φ:

$$\mathrm{vec}(J_k(\phi)) = W_k\,\theta(\phi) \qquad (9)$$

where vec(·) refers to the column vectorization operator, and W_k is the matrix of unknown actual weights of the RBFN for the k-th feature. The θ(φ) represents the vector of activation functions, θ(φ) = [θ₁(φ), θ₂(φ), ···, θ_q(φ)]ᵀ ∈ ℝ^q. We use the Gaussian radial function as the activation function:

$$\theta_i(\phi) = \exp\left(-\frac{\|\phi - \mu_i\|^2}{\sigma_i^2}\right)$$

where the parameters μᵢ and σᵢ are trainable in the offline phase but fixed in the online phase. Equation (9) can be decomposed as

$$J_{ki}(\phi) = W_{ki}\,\theta(\phi), \qquad \dot{x}_k = \sum_{i=1}^{n} J_{ki}(\phi)\,\dot{r}_i$$

where J_ki is the i-th column of J_k, W_ki is the corresponding block of weights, and ṙᵢ is the i-th element of ṙ. The estimated Jacobian matrix is represented as

$$\mathrm{vec}(\hat{J}_k(\phi)) = \hat{W}_k\,\theta(\phi)$$

where Ŵ is the matrix of estimated weights. The approximation error for the k-th feature e_k is specified as

$$e_k = \dot{x}_k - \hat{J}_k(\phi)\,\dot{r}$$
[Fig. 3. The architecture of the RBFN for learning the local linear deformation model. The network takes the state representation in (6) as the input and outputs the estimated Jacobian matrices, which relate the velocity vectors of the DLO features to the velocity vector of the robot end-effectors.]
The architecture of the RBFN is shown in Fig. 3. Note that the learning or estimation of the Jacobian matrices of different features is carried out in parallel. In the offline phase, the ends of the DLO are controlled to move randomly to collect the training dataset, which contains x_k, ẋ_k, r, ṙ (k = 1, ···, m). Then, the RBFN is trained on the collected data. Considering the noise and outliers in the data, we use the smooth L1 loss [26] of e_k for training.
The k-means clustering on a subset of the training data is used to calculate the initial values of μᵢ and σᵢ (i = 1, ···, q). Then, all parameters, including μᵢ, σᵢ, and Ŵ, are updated using the Adam optimizer [27]. We choose the RBFN for its simple structure, robustness, and online learning ability [28]. Though less expressive than some more complex network architectures, it performs well enough in this work.
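A hedged PyTorch sketch of the RBFN in Fig. 3 may help: Gaussian RBF activations of φ feed a linear layer that outputs the entries of the estimated Jacobians, and training minimizes the smooth L1 loss of the approximation error e = ẋ − Ĵṙ, as described above. The layer sizes, the parameterization of σ, and the random initialization of μ are illustrative assumptions (the paper initializes μᵢ and σᵢ via k-means).

```python
import torch
import torch.nn as nn

class RBFN(nn.Module):
    def __init__(self, phi_dim, q, lm, n):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(q, phi_dim))  # RBF centers (k-means in the paper)
        self.log_sigma = nn.Parameter(torch.zeros(q))    # RBF widths (log-parameterized here)
        self.W = nn.Linear(q, lm * n, bias=False)        # estimated weights W_hat
        self.lm, self.n = lm, n

    def forward(self, phi):                              # phi: (batch, phi_dim)
        d2 = ((phi[:, None, :] - self.mu[None]) ** 2).sum(-1)    # squared distances, (batch, q)
        theta = torch.exp(-d2 / torch.exp(self.log_sigma) ** 2)  # Gaussian activations
        return self.W(theta).view(-1, self.lm, self.n)           # J_hat: (batch, lm, n)

def jacobian_loss(model, phi, x_dot, r_dot):
    """Smooth L1 loss of the approximation error e = x_dot - J_hat r_dot."""
    J_hat = model(phi)
    e = x_dot - torch.bmm(J_hat, r_dot.unsqueeze(-1)).squeeze(-1)
    return nn.functional.smooth_l1_loss(e, torch.zeros_like(e))
```

Offline training would then iterate Adam steps of `jacobian_loss` over minibatches of (φ, ẋ, ṙ) collected by randomly moving the DLO ends.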
C. Adaptive Control through Online Learning
Considering the differences between the manipulated DLO in the online phase and the trained DLOs in the offline phase, online learning during manipulation is required. We propose an adaptive control scheme, in which the offline estimated model is treated as an initial approximation and then further updated during the shape control tasks.
The control objective is to move the target points on the DLO to the desired positions. The target points can be any subset of the features, whose indexes form the set C. Then, the target shape vector x_c, target Jacobian matrix J_c(φ), and target weights W_c are denoted as

$$x_c = [\cdots; x_k; \cdots], \quad k \in C \qquad (14)$$

$$J_c(\phi) = [\cdots; J_k(\phi); \cdots], \quad W_c = [\cdots; W_k; \cdots], \quad k \in C \qquad (15)$$

The velocities of the robot end-effectors ṙ are controlled, and the control input is specified as

$$\dot{r} = -\alpha\,\hat{J}_c(\phi)^{\dagger}\,\Delta x_c \qquad (16)$$

where $\hat{J}_c(\phi)^{\dagger}$ is the Moore-Penrose pseudo-inverse of the estimated Jacobian matrix. In addition, $\Delta x_c = x_c - x_c^{\mathrm{desired}}$, where $x_c^{\mathrm{desired}}$ is the desired position vector of the target points, and $\alpha \in \mathbb{R}$ is a positive control gain. In actual implementations, ṙ is bounded to avoid too-fast motion.
The online updating law of the j-th row of Ŵ_ki of the RBFN is specified as

$$\dot{\hat{W}}_{kij}^{T} = \dot{r}_i\,\theta(\phi)\left(\eta_1 \Delta x_{kj} + \eta_2 e_{kj}\right), \quad j = 1, \cdots, l \qquad (17)$$

where Δx_kj is the j-th element of the task error Δx_k, and e_kj is the j-th element of the approximation error e_k. The η₁ and η₂ are positive scalars. Such updating is done for all k ∈ C and i = 1, ···, n.
The proposed control scheme by (16) and (17) allows controlling the target points on the DLO to the desired positions while updating the RBFN concurrently to compensate for any offline modeling errors.
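A NumPy sketch of how one control cycle could implement (16) and (17) together is given below: servo with the pseudo-inverse of the estimated target Jacobian, then update the RBFN output weights from the task error and the approximation error measured over the last cycle. The array shapes, gain values, and the velocity bound are illustrative assumptions.

```python
import numpy as np

def control_and_update(W_hat, theta, x_c, x_c_des, x_dot_c, r_dot_exec, dt,
                       alpha=0.3, eta1=1e-3, eta2=50.0, r_dot_max=0.1):
    """One control cycle.
    W_hat: (lC, n, q) estimated weights for the target features;
    theta: (q,) RBF activations at the current state;
    x_dot_c, r_dot_exec: measured target-feature velocities and the executed
    end-effector velocities of the previous cycle (used in the update)."""
    J_hat = W_hat @ theta                      # estimated target Jacobian, (lC, n)
    dx = x_c - x_c_des                         # task error Delta x_c
    # Controller (16): pseudo-inverse servoing toward the desired positions.
    r_dot = -alpha * np.linalg.pinv(J_hat) @ dx
    speed = np.linalg.norm(r_dot)
    if speed > r_dot_max:                      # bound r_dot, as noted in the text
        r_dot *= r_dot_max / speed
    # Online update (17): driven by the task error and the approximation error.
    e = x_dot_c - J_hat @ r_dot_exec           # approximation error e_c
    drive = eta1 * dx + eta2 * e
    W_hat += dt * np.einsum('i,k,j->jik', r_dot_exec, theta, drive)
    return r_dot, W_hat
```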
The stability of the system is analyzed as follows. Below, J_c(φ), J_k(φ), and θ(φ) are shortened to J_c, J_k, and θ for simplicity. Premultiplying both sides of (16) by Ĵ_c, we have

$$\hat{J}_c\,\dot{r} = -\alpha\,\hat{J}_c\hat{J}_c^{\dagger}\,\Delta x_c \qquad (18)$$

Note that from (14) and (15), it can be obtained that

$$\dot{x}_c = \hat{J}_c\,\dot{r} + e_c \qquad (19)$$

where $e_c = [\cdots; e_k; \cdots]$, k ∈ C. Since the desired positions are fixed, substituting (19) into (18) yields

$$\Delta\dot{x}_c = -\alpha\,\hat{J}_c\hat{J}_c^{\dagger}\,\Delta x_c + e_c \qquad (20)$$

A Lyapunov-like candidate V, combining the task error and the weight estimation errors, is given as (21). Differentiating (21) with respect to time and substituting (20), (17), and (14) into it, we can obtain that V̇ ≤ 0 (22). As V > 0 and V̇ ≤ 0, the closed-loop system is stable. The boundedness of V ensures the boundedness of Δx_c from (21). If l × |C| ≤ n and Ĵ_c holds full row rank, Ĵ_c(Ĵ_c)† is an identity matrix. Then, it can be proved that Δx_c → 0 as t → ∞, following [29]. Otherwise, Ĵ_c(Ĵ_c)† is only positive semi-definite, resulting in an underactuated system. However, from (14), (16), and (22) it can be proved that V̇ = 0 if and only if ṙ = 0, which only happens when there are huge conflicts between the desired moving directions of different target points, so that robot movement in any direction cannot reduce Δx_c at this "local minimum point." In practice, this situation happens rarely owing to the coupling between the target points, so in most cases V̇ < 0 always holds and finally Δx_c → 0.
IV. RESULTS
We carry out both simulation and real-world experiments to validate the proposed method. The simulation of DLOs is based on Obi [30], a unified particle physics engine for deformable objects in Unity3D [31], as shown in Fig. 4. In the simulation, the two ends of the DLO are grasped by two grippers, which can translate and rotate. Both 2D and 3D tasks are tested. In the 2D tasks, the environment dimension l is 2 and the control input dimension n is 6; in the 3D tasks l=3, n=12. In the real-world experiments, the DLOs are placed on a table. One end of the DLO is grasped by a UR5 arm, and the other is fixed. Thus, l=2 and n=3. The shape of the DLO is represented by 8 features (m=8), and the positions of the features are obtained by measuring the markers on the DLO with a calibrated RGB camera in the experiments. Both the data collection frequency and control frequency are 10 Hz.
We choose three representative classes of methods for comparison. The first class is learning forward kinematics models of DLOs offline and using MPC for shape control (FKM+MPC). According to [12], we choose bi-directional LSTM (biLSTM) for modeling and Model Predictive Path Integral Control (MPPI) for control. The second class is estimating the Jacobian matrix online using weighted least square estimation (WLS). We specifically use the method in [19]. The third is based on reinforcement learning. We train an agent using Soft Actor Critic (SAC) [32].
A. Offline Learning of the Deformation Model
The offline data of DLOs are collected in simulation by randomly moving the ends of the DLOs. An RBFN with 256 neurons in the middle layer (q = 256) is first trained offline to learn the initial deformation model. First, we test the offline modeling accuracy on a certain DLO and its relationship to the amount of training data, in which we compare our local linear Jacobian model with nonlinear forward kinematics models based on multi-layer perceptrons (MLP) or biLSTM. Training data and 10k test data are collected on the same DLO. For testing, we use the trained models to predict the shape of the DLO after 10 steps. The prediction of the shape at the next step using our method is calculated as

$$\hat{x}_{[t+1]} = x_{[t]} + \hat{J}(\phi_{[t]})\,\dot{r}_{[t]}\,\Delta t$$

where the subscript [t] represents the variables at step t and Δt is the step interval. As shown in Fig. 5, the results indicate that our Jacobian model can achieve higher prediction accuracy with less training data. The biLSTM-based model incorporates the physics priors of chain-like DLOs [12], so it performs better than MLP when the training set is small. Our Jacobian model implies a strong local-linearity prior, which is theoretically and practically reasonable; hence, the learning efficiency is highly improved. Second, we further compare the performance of our method using the absolute state input [x; r] and the relative state input φ as in (6) in the RBFN. We collect data of 10 different DLOs in the simulation. For testing, we perform cross-validation, i.e., for each round the model is trained on 9×3k data of 9 DLOs and tested on 10k data of the remaining one. We also test the performance on the test set with constant position translation. The average results of 10 rounds, shown in Table I, reveal that the relative state input can achieve higher prediction accuracy on new DLOs with different lengths. In addition, when position translation is added, the relative state input is not affected, while the performance of the absolute state input significantly decreases.
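For reference, the 10-step shape prediction used in this evaluation can be rolled out as in the sketch below, repeatedly applying the local linear model with the recorded end-effector velocities. The helper names (`make_phi`, `model`) are hypothetical stand-ins for the state representation and the trained RBFN sketched earlier.

```python
import numpy as np

def predict_rollout(model, make_phi, x0, r0, r_dot_seq, dt, steps=10):
    """Roll the local linear model forward: model(phi) -> (lm, n) Jacobian,
    make_phi(x, r) -> state representation (both hypothetical helpers)."""
    x, r = x0.copy(), r0.copy()
    for t in range(steps):
        J_hat = model(make_phi(x, r))      # estimated Jacobian at step t
        x = x + J_hat @ r_dot_seq[t] * dt  # x_[t+1] = x_[t] + J(phi_[t]) rdot_[t] dt
        r = r + r_dot_seq[t] * dt          # integrate the end-effector motion
    return x                               # predicted shape after `steps` steps
```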
B. Shape Control with Online Learning
We evaluate the proposed method in DLO shape control tasks and compare it with other methods. All m features are set as target points for shape control. During the tasks, the offline-trained models are used, and in our method the model will be further updated concurrently. In addition, domain randomization is applied during offline training to improve the models' generalization abilities. All the offline methods are trained on data collected on 10 DLOs with different lengths or diameters in simulation, while our method uses much less data than other offline methods.
Several criteria are defined to evaluate the performance: (1) final task error: the Euclidean distance between the desired shape and the final shape within 30 s; (2) success rate: if the final task error is less than 5 cm, the case is regarded as successful; (3) average task error: the average of the final task errors over all successful cases; (4) average task time: the average time used to achieve success over all successful cases. Note that the task time is for reference only, since it depends on the control gain in servo methods or the sample range of the control input in MPC or RL.
1) Simulation: First, we test the performance in both 2D and 3D DLO shape control tasks in simulation. The manipulated DLO is an untrained DLO, and 100 cases with different feasible desired shapes are tested. We also perform an ablation study to validate the effect of online learning in our method. The parameters are set as α = 0.3, η₁ = 10⁻³, η₂ = 50. As shown in Table II, our method significantly outperforms the compared methods in both success rate and average task error, even using much less offline training data. The task time of the online method WLS is the highest because it needs to initialize the Jacobian by moving the DLO ends in each DoF every time it starts. The average task error of FKM+MPC is higher because it has no further updating for the untrained manipulated DLO. The poor performance of SAC may be due to insufficient training. The contrast is starker in the more challenging 3D tasks, where the success rates of the compared methods are very low (≤55%), including our method without online learning (69%), while our method with online learning achieves a 92% success rate and the lowest average task error.
2) Real-world experiments: We also evaluate these methods in real-world 2D tasks. The same offline models as in the simulation are used, which means no real-world data are collected for offline training. We separately carry out 5 tests with different feasible desired shapes on two DLOs: an electric wire with a length of 0.45 m and a diameter of 8 mm, and an HDMI cable with a length of 0.6 m and a diameter of 5 mm, as shown in Fig. 6. The parameters are set as α = 0.3, η₁ = 10⁻³, η₂ = 200. The results are shown in Table III. Fig. 7 visualizes the control processes of two cases. Since the control input dimension is only 3, all methods perform well in these relatively simple tasks except SAC. The processes of WLS are slow and unsmooth, and those of FKM+MPC are fast but less precise. Our method completes all 10 tasks and achieves the lowest average task error, with online learning enabling faster and more precise control.
V. CONCLUSION

This paper considers the shape control of DLOs with unknown deformation models. We formulate the deformation model in a local linear format, which is estimated in both offline and online phases. First, the offline learning well initiates the estimation of the model. Then, the adaptive control scheme with online learning further updates the model and achieves the shape control tasks in the presence of an inaccurate offline model. The experiments demonstrate that the offline learning of the local linear deformation model is accurate and data-efficient. By combining offline and online learning, our method outperforms the compared methods and adapts well to untrained DLOs. Future work will include a more detailed analysis of our method and validation in real-world 3D dual-arm manipulation tasks using a high-precision 3D camera.
"year": 2021,
"sha1": "5b5ebb502cc7d006b3bc236bac9b7779bfd345e7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5b5ebb502cc7d006b3bc236bac9b7779bfd345e7",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Lessons from the Field Beyond the Numbers: Narratives of Professionals on Women who Experienced Severe Maternal Morbidity
Objective Several factors might affect the health and the quality of life of women who had a severe maternal morbidity (SMM) or a maternal near-miss (MNM) episode. The objective of the present study was to explore the perspectives of the professionals on the repercussions of SMM or MNM after interviewing women who survived such episodes. Methods Selected cases that captured the attention of the professionals were reported. The professionals individually wrote 10 narratives, which were analyzed with the technique of content analysis. Results According to the perspectives of the professionals, women surviving a severe maternal condition and their families experienced clinical and psychosocial consequences. Some cases portrayed intense psychological distress in mourning for the loss of the fetus or of their reproductive capacity, and changes in family dynamics generating emotional overload, depression, and gender violence. Conclusion The analysis of narratives may offer an idea of the complexity of the perception of care by professionals and of the need for an interdisciplinary follow-up of women surviving an SMM or an MNM episode.
Introduction
Recent data indicate a considerable reduction of maternal mortality in Brazil. This decrease is partly due to improvements in the obstetric care offered to women and in emergency facilities. 1 However, it is also necessary to look beyond the surviving cases. This is a challenging task, because little is known about the long-term repercussions of maternal complications on the mental health and quality of life of these postpartum women. 2,3 In addition, information on the long-term repercussions of severe maternal morbidity (SMM) has not yet been fully explored. Available data are scarce; however, they are necessary to improve health care and prevent damage to surviving women. [3][4][5][6] Lack of knowledge might hinder the desirable convergence between the reduction in maternal deaths and the decrease in severe complications of pregnancy. 7 Recently, studies have been carried out to understand and assess risk factors and potential strategies for the prevention of maternal near-miss. 8 The combination of severe life-threatening complications might trigger intense physical and psychological distress and might culminate in posttraumatic stress disorder (PTSD), not limited to the postpartum period, among other problems. 7,[9][10][11] The assessment of pregnancy repercussions usually does not extend beyond the postpartum visit at 6 weeks after delivery. Nevertheless, women suffering from severe maternal morbidity might continue to have long-term negative consequences of this episode. 4 Survivors of obstetric complications are more physically and socially vulnerable. They are also more prone to developing postpartum mental issues, such as depression and anxiety. 12 Fear of death, loss of hope, concerns about possible upcoming surgical procedures, memory lapses, mourning for the loss of the baby or of reproductive capacity because of hysterectomy, and feelings of loss of female identity, among others, have already been recognized. 7,13 These results underline the need to make efforts in both directions: first, to reduce maternal losses; and second, to provide care beyond the postpartum period, considering that 9.5 million women suffer from complications during pregnancy, childbirth, or the postpartum period annually and survive. 14 Testimonies by female survivors are currently used to understand and identify management problems and other determining factors of the health-disease process in women. 4,7 However, beyond listening to the women, it is also fundamental to understand the difficulties of care from the perspective of the professionals. Therefore, the purpose of the current study was to understand the perceptions of health care professionals of these consequences and of the health care conditions offered to postpartum women who have suffered an SMM or a maternal near-miss (MNM) episode.
Methods
This is a descriptive qualitative study emerging from the narratives of health professionals involved in a retrospective cohort study aimed at understanding the long-term repercussions of SMM and MNM on the various domains of the lives of the women, in a multidimensional way. 15 The researchers (obstetricians, pediatricians, nurses, and psychologists) wrote the narratives during interviews with women who had survived an SMM episode. In the current analysis, we have selected some stories that drew the special attention of the professionals at the time the interviews were performed.
The women were contacted by telephone and invited to participate in the study. The perception of health status, reproductive history, quality of life, PTSD, sexual dysfunction, multiple disabilities and functioning, as well as growth and development conditions of the child were evaluated using specific tools for each assessment. 2,15 During the training for the interviews, the health professionals were instructed to describe the most interesting cases that called their attention, with a written report that should be a narrative from their own perspective. The instructions on how each narrative should be written were not predefined. The main orientation for the interviewers was for them to "listen beyond the questionnaires" and to register all that affected them while listening to the women during the interviews.
The narratives were different in form as they were the result of a spontaneous record of what had affected each professional. We have decided to analyze the narratives considering that they are important tools for health professionals to express their perception of their daily work, including emotional issues. Thus, they transcribed cases and life histories that most captured their attention. The reports gave important information from the perspective of each professional involved in the data collection, similarly to the process of describing and reporting narratives in their clinical practice.
The process of analysis was based on the Bardin content analysis technique, 16 which offers the possibility of a qualitative or quantitative exploration of messages and information on various documents and texts. It helps to reinterpret messages and to achieve comprehension of their meanings at a level that goes beyond the common reading. Therefore, in the present study, we have adopted the qualitative content analysis technique. All of the narratives were analyzed as they composed a diversity of clinical conditions and represented different forms of understanding the topic by different professionals. Each narrative was defined as a unit of analysis that would be submitted to interpretation, because it contained information with a complete significance in itself.
For the analysis, we used the following steps: presentation and reading of the reports for the group of researchers; coding records as narratives 1 to 10; several careful readings of all of the narratives to identify underlying themes; and finally, the thematic analysis of each narrative. Categorization was performed by similarity and analogy, and it was not defined a priori, but emerged from the reading and the proximity of the records. 16 Finally, we allocated the complete description of each record produced by the health professionals to defined categories. The categories were built during the analysis, not to generalize the results nor to test hypotheses, but in order to understand perceptions of the professionals on the health status and on the challenges of the women for the management of these conditions. The categories were titled arbitrarily, but the literature on maternal morbidity was pivotal.
The research proposal generating these records was approved by the Institutional Review Board (letter 233/2009). After information on the study proposal, all of the participants signed a written consent form. Their names were kept confidential.
Results and Discussions
A total of 10 narratives was selected for analysis. They illustrate the perception of health professionals of the clinical, emotional, and psychosocial repercussions of SMM and MNM on the lives of the women. It was assumed that the narratives of health professionals could bring singularities to each case and produce meta-histories of disorders, thus promoting a new understanding. The professionals needed to understand the significance of the disorder from the perspective of the woman to compose their narratives based on the symptoms of the patient. 17 Qualitative analysis did not permit generalization of the results, but it allowed the exploration and understanding of certain specific situations. Indeed, there was no intention of generalizing the results. 18 Based on the definition of the 10 units of analysis, after a thorough reading of the material, 4 categories emerged.

1 - The need for longitudinal care beyond the postpartum period:

Narrative 5
MMCR, age 42, was very anxious when we met. I remember that she was worried about the type of questions I would ask, because she could not remember things that had happened. [...] She underwent surgery because she had an ectopic pregnancy and one of her tubes had ruptured. She had her first pregnancy at 24 years old, which resulted in a spontaneous abortion. She did not know about the pregnancy and only remembered waking up in the intensive care unit (ICU). The doctor told her that the baby was dead, but she knew nothing about it. [...] After 3 years, she got pregnant. She cried a lot when she told her story. A year later, bleeding began and she underwent a new surgery for the removal of her uterus, which was full of fibroids.
Narrative 7
ECA, age 40, member of an evangelical church. She has only one baby but had four pregnancies. In her 1st pregnancy, she had an abortion at 36 years old. In the 2nd pregnancy, she had a baby who is 3 years old today. The 3rd pregnancy had no fetus [...] I explained that, occasionally, women might have a fertilized egg attached to the uterine wall but no development of an embryo. I told her that she and her husband were not to blame. The 4th pregnancy was a spontaneous abortion. During the interview, she cried a lot [...] I offered her psychological assistance, which she accepted, although she had never thought about it. Finally, I gave her support to go on.
Ectopic pregnancy is a severe obstetric condition, and the leading cause of maternal deaths in the 1st trimester of pregnancy; it might be diagnosed early and managed conservatively, 19 and might also result in infertility. 20 The text showed the emotional impact experienced by the woman. In a similar way, narrative 7 produced the clinical history of a woman who had repeated abortions, followed by a successful term pregnancy. From a medical and reproductive point of view, abortion is considered a clinically common event. It might be the reason why the event is not highly valued by either health care providers or society. 21 Testimonies on late consequences of ectopic pregnancy and abortion were shown, including emotional aspects of the woman and difficulty in understanding what happened to her body and health. Persistent doubts about the possible causes of abortions, and the challenge of explaining to women what actually happened, might generate equivocal beliefs in women, such as fantasies of guilt. As a result, the woman might become emotionally vulnerable to stress, 21 to sadness, and to feelings of helplessness. 22 The emotional impact generated by perinatal death might cause a state of deep mourning, affecting the women for a long period. This could generate marital conflicts, persistent depression, and social isolation. In contrast, in narrative 1, shown below, the woman had received longitudinal care at the institution where she was admitted during her 1st pregnancy. The hypertensive condition seemed to be a protective factor for her in terms of continued medical care.
The health professional described complications during the pregnancy, resulting in fetal death. Her needs were met in a timely way, since longitudinal care was offered. Health care was not provided only in an acute emergency event. The situation was perceived as a "relief" by the health professional describing the narrative, because a residual morbid condition allowed the woman to receive better medical care.
Narrative 1
A 38-year-old woman, obese, with chronic hypertension. She had her 1st delivery at the same institution 2 years before, when she developed superimposed preeclampsia and fetal growth restriction, with admission at 26 weeks of gestation for fetal monitoring. An intrauterine fetal death occurred at 28 weeks. She underwent labor induction and had a vaginal delivery. After 6 months postpartum, she had an open myomectomy for the removal of a large uterine fibroid. Today, she is still under monitoring at the institution to control her blood pressure and is preparing herself for a new pregnancy. She arrived smiling, confident, and humorous at the outpatient clinic. She was well-dressed and very peaceful about a new pregnancy. During the visit, she did not show any trace of anxiety or fear that a new SMM episode could occur during a future pregnancy. She barely spoke about the loss of her 1st child, a boy. There was sadness in her prepared speech and attitude, but she clearly preferred to maintain a positive outlook. A partner with a stable financial condition supported her. She asked a few questions and walked away looking very satisfied with the appointment, despite the hardships she went through.
There is evidence that medical and psychological care for a woman suffering from early reproductive loss might have a significant effect on her experience and physical/emotional recovery. 21 Furthermore, professional health care extended to the family nucleus is an important resource to reduce the period of mourning and its negative consequences. 23 However, health professionals do not always acknowledge this, since early reproductive loss is interpreted as a common event and the preservation of the life of the woman might be felt as the end of their action. 24 Health professionals find it difficult to approach emotionally painful situations such as the loss of a baby. 25 Adequate management with friendly, sensitive treatment influences how the woman experiences this event. 10,26 Therefore, preparing and providing health professionals with tools for dealing with their own emotions is indicated as a necessary resource for the quality of care. 24 Furthermore, the approach to female mental health should not be limited to specialized facilities or to professionals such as psychiatrists or psychologists, because they are not always available.
2 - The impact of severe maternal morbidity and maternal near-miss on the mental health of women and management difficulties in a women's health care facility:

Narrative 6
A 30-year-old woman, who first became pregnant during adolescence and had an elective cesarean section. After a few years, she got married to another partner and became spontaneously pregnant with twins. This pregnancy evolved with a twin-to-twin transfusion syndrome (TTTS) in the 5th month of pregnancy and intrauterine loss of both children. Birth was induced. She stated that she became very sad and tearful, requiring treatment with psychotropic drugs and help from a mental health professional. After 2 years, she presented with a new spontaneous pregnancy, when she was enrolled in the study. It was another twin pregnancy, also with progression to severe TTTS between the 4th and 5th months. She was hospitalized for several days for respiratory failure because of polyhydramnios of the hydropic fetus, which was repeatedly drained. She claimed that a cesarean section was indicated at 26 weeks of gestation and that there was little hope for the survival of the newborns. The babies were born and admitted to the neonatal intensive care unit (NICU), where they spent several months. During the admission to the NICU, she became clinically depressed. Now, it has been 2 years since this pregnancy. Even though both children survived without complications, the patient refers mood swings, tearfulness, insomnia, panic attacks, and barely leaves the house out of fear of the world around her. She reports a worse quality of life since the last pregnancy. She almost has no sex and experiences a troublesome marital relationship.
Narrative 9
H., 36 years old, completed higher education, circus acrobat, married. She underwent a cesarean section in her 1st pregnancy 3 years earlier because of "lack of dilation". The patient reported that she was ready for vaginal delivery but "was in labor for 15 days and did not dilate". Her prenatal care was in a private health facility, and one of the physicians was her sister, "who did everything possible for her to deliver vaginally". In her 2nd pregnancy, she started bleeding in the 5th month and was referred to the institution for follow-up. At that moment, she was using supplemental health insurance for prenatal care. She was then diagnosed with total central placenta previa and placenta percreta. She also developed severe preeclampsia, requiring the use of magnesium sulfate and prolonged hospitalization. She underwent a cesarean section at 37 weeks of gestation, with uterine artery embolization and subsequent postpartum hysterectomy. She was admitted to the ICU and received transfusion therapy. It has been 8 months since childbirth, and she is still breastfeeding. She cried a lot during the visit. She feels mutilated for not having a uterus and not being able to deliver anymore. She almost never has sex. She says that the abdominal scar is horrible and disfigures her body. She was unable to lose the 10 kg that she gained during pregnancy, so she feels diminished in terms of physical aesthetics, an area of great value to her. She is unable to practice all the activities of her performance, and her gym clothes no longer fit her. At the same time, she refused to undergo psychological follow-up. She considered that she has no right to be sad, because she is alive and has healthy children and a loving husband.
Narrative 3
AS, 38 years old, in her 2nd pregnancy, presented with bleeding in the 1st and 2nd trimesters. She was diagnosed with total placenta previa and placenta percreta. She was referred to the institution, where she underwent an elective cesarean section at 32 weeks of gestation, uterine artery embolization, and subtotal hysterectomy. During the late postoperative period, she developed a vesicovaginal fistula, which required medical and surgical treatment. The urological follow-up lasted for several months, which compromised her quality of life, the care of her premature boy, and breastfeeding.
In the narratives above, the professionals described potentially fatal obstetric and clinical complications that the women survived. The opportunity to know histories based on the narratives of women provides health professionals with the chance to understand the facts from a perspective biased toward success: these women were alive, in spite of everything. 27 In contrast, studies have shown that even after hospital discharge, survivors of an MNM event were more vulnerable to debilitating physical, psychological, and social consequences up to 1 year after childbirth. 5,27-29 They might even be at risk of developing PTSD, 9 and postpartum depression. 28 In the three narratives above, opportune obstetric interventions were noted, as were the actions responsible for the survival of these women. Nevertheless, the medical care provided during hospitalization appeared insufficient to avoid the emergence or maintenance of suffering and mental illness in these women after discharge. It is known that the mental health of a woman might be affected not only by a particular event, but also by the cumulative effect of various circumstances in life. 5 In narrative 6, the neonatal outcome was positive compared with the previous pregnancy, in which intrauterine fetal death occurred. In narrative 9, despite the complications, the baby was also born alive. The literature indicates that the successful experience of giving birth to a live baby after a near-miss episode might be a protective factor for the mental health of a woman. 5 Nevertheless, each woman attributes a singular meaning to a lost pregnancy. Difficulties generated by family or marital tensions, devaluation of body image, and sexual, economic, and social difficulties, among others, might compromise the recovery of the health conditions desired by women.
The psychiatric impact of SMM on the lives of women is uncertain because of the multiple factors that might be associated with mental disorders. However, recent studies have suggested that there might be a potential relationship between SMM and PTSD symptoms. 9 These symptoms might impair the quality of affective and social relationships, 11 and have negative repercussions on the well-being of a woman. 28 A recent review of qualitative studies on patients' perception of SMM 10 concluded that women were frequently affected by comorbid conditions and required health care after discharge. The negative impact of SMM on mental health might be expressed long after the postpartum period. However, the quantitative results of the current study, although showing negative impacts on maternal functioning, sexual health, and quality of life, [29][30][31] were not able to identify an important impact on the occurrence of PTSD among women experiencing SMM at least up to 5 years after the event. 32
3 - The woman in a vulnerable state and the challenges of daily practice in health care facilities
Narrative 10 It was a case of a diabetic mother, with complications because of severe preeclampsia in her unplanned 1st pregnancy. It seems that she was adequately referred to the University hospital. Apart from ICU admission and all the clinical complications, there were no major consequences. She wanted her next pregnancy, was not afraid, and had no further complications. Her 2nd prenatal care was not even classified as high-risk and received no referral. Thank God she had no complications in her 2nd pregnancy (I cannot explain how!). But now she is completely involved in taking care of her kids, with no time to control her diabetes or to attend routine medical appointments. She is not feeling well. Her glucose levels are very high, and she has symptoms of neuromyopathy. She uses no contraceptive method. While we were filling in the research questionnaires, she burst into tears… she felt she had the right. At that moment, she realized all the things she had been through and all that was missing in her life. She was happy for the boys, but also worried, maybe for the first time. She promised to try hard to control her diet and medication. She promised to schedule an appointment with her endocrinologist and another with her gynecologist. Promises… However, this was ONE appointment… I might never know what happens to this nice woman… unless I see her again during her next pregnancy… I hope not as a new near-miss event.
Narrative 8 EFC, 23 years old, epileptic. She could not enlist the help of her partner, receiving neither financial nor emotional support (they never lived together). In late pregnancy, she was diagnosed with depression, which persists to the present date. Last year, she had to leave her sister, who was going through serious financial difficulties. By that time, she brought her mother and son with her. She attempted suicide in July 2011 after quarrelling with her sister. She was hospitalized for 30 days in a psychiatric clinic. Now she reports partial improvement of the depression, especially after leaving home. She underwent psychiatric and neurological monitoring and described an important improvement in her medical status. She also stated that, during an acute phase of depression, her mother was responsible for taking care of her son. The woman said that her ex-husband frequently bothers her, threatening to demand custody of their child.
Narrative 4 CLT is 35 years old, currently living in her niece's house. Her partner lives elsewhere. She is jobless and still using drugs. Talking to her niece on the first contact, the interviewer found out that CLT is a homeless drug user and a wanderer. CLT told me that she is currently smoking and using crack but had also used "oxy". I asked her what that drug was made of. She supposed that it is a mixture of crack and burnt car oil, because the small rocks are black. She noticed that her street friends looked like "animals" after using that substance. She confessed that she and her partner could live without using drugs for up to 2 weeks. However, they had frequent relapses and used drugs again. Her partner was in ill health. He had tuberculosis and was rather weak. I suspected that he was HIV positive. CLT is quite lucid, finished high school, and had an appointment with the general practitioner, plus blood tests at the health unit near her house. At present, she is followed in an outpatient clinic for mental health because she is depressed. She said that, in recent years, she had been living on the streets, where she became a prostitute. She was repeatedly a victim of sexual violence, even attacked by military police. She had six pregnancies, one normal delivery, two cesareans, and three abortions. Interestingly, she wanted four children in her life, and one more after her last son. In her 1st pregnancy, fetal death occurred. In her last pregnancy, she used crack all the time. She has three living children but does not have custody of any of them. She demonstrated deep love for her youngest child and said that she intends to get better in the future to regain custody of the children. Two sons are from the current partner. I offered psychiatric outpatient care at the institution for patients addicted to chemical substances, but at the time she had no interest in treatment. I asked if we could help her in any way, and she answered no. I finished the interview by making myself available, in case she needed any more help in the future. Participants usually receive some money to cover expenses. However, before the interview, the caregiver asked me not to tell her about the money, because she would use it to buy drugs. As CLT was leaving the room, she asked about her child's growth, whether it was normal according to the pediatrician. Then she went to meet her son and her niece, who had been waiting at the entrance of the building.
In these narratives, the health professionals not only reported on the fragile health conditions of the women, but also showed how vulnerability in health emerges and/or can be aggravated over time. Women in vulnerable conditions are more exposed to illness and have a higher risk of dying. In addition, resources must be taken into account, such as the basic conditions prior to the disease and the social, economic, and psychological aspects, as well as any situation that exposes or protects a person from morbid situations.
From this perspective, in the three narratives presented, it was possible to identify different conditions of vulnerability in these women. In narrative 10, the clinical history was not biased toward successful health interventions. According to the professional, the fact that the woman was still alive was attributed to luck and to a supernatural force. Across the three narratives, the professionals evidenced the emotional and social vulnerabilities to which the women were exposed. The narratives indicated that health interventions for these women needed to be extended beyond the period of disease control to minimize the vulnerable condition of the patient, because social issues of gender, cognition, and emotional aspects may keep these patients vulnerable and at a higher risk of death.
In narrative 10, the patient was expected to follow the instructions and seek health facilities, because she is at risk of a new pregnancy and her condition demands care. Similarly, in narrative 4, the health professional used interventions to understand the health conditions of the woman. However, these interventions were not very effective when he tried to refer the patient to the facility considered most appropriate for her at the time. Despite a history of six pregnancies and three abortions, the practice of prostitution, and the exposure to infectious-contagious diseases and violence, referral to a family planning program was not performed; the patient was referred only to a specialized mental health facility.
Furthermore, the search for and compliance with treatments are not simple processes; they involve the capacity to understand the problem, to incorporate knowledge, and to transform behavior based on the rapport established between health professionals and patients. Although morbidity may result from biological processes, another factor that complicates the management of these cases is that women respond to these events in a manner that is neither uniform nor solely biologically determined. 17 Even when health facilities are available, health care models focused on the management of acute conditions and the lack of longitudinal care may contribute to increased maternal morbidity and mortality. 27 Furthermore, the medical diagnosis of obstetric complications and the manner in which a woman understands and recognizes this diagnosis do not always agree. The medical point of view is not always the same as that of the woman, and this can determine care-seeking behavior and compliance with treatment. 28 Despite new guidelines and international recommendations, mental health care actions have still not been incorporated into the routine practice of specialized health services for women in Brazil. When pregnant women or those in the postpartum period become mentally ill, they are referred to a specialized mental health facility, which does not offer conditions to care for specific female issues.
4 - Violence against women, its impact on health, and the complexity of management in health services
Narrative 2 A 34-year-old woman. Her 1st pregnancy was diagnosed with total central placenta previa, which was treated with an elective cesarean section. In the immediate postoperative period, the patient developed postpartum hemorrhage and underwent a new laparotomy with subtotal hysterectomy, requiring massive blood transfusion and admission to the ICU. She had expected a larger family. The marital relationship was bad; the couple was about to split when she got pregnant and decided to maintain the marriage for the benefit of the child to be born. Before the pregnancy, she had also been a victim of aggression by her partner, besides having suffered repeated betrayals. After the hysterectomy, she felt less feminine and regretted the impossibility of further pregnancies. The marital relationship remains poor, although the attacks today are psychological rather than physical. The verbal abuse to which her partner subjects her includes references to a foul vaginal odor, not recognized either by the woman herself or by any of the gynecologists who examined her after childbirth. They do not have intercourse often, and her husband frequently repeats that she is no longer attractive to him. Eventually, when they do consummate their relation, he complains about "her vaginal odor", using degrading and insulting terms. She wept during the interview, desolate over what occurred during the birth of her daughter and over being tied to an aggressive conjugal relationship with no prospects. A mental health professional has followed her, but she has had little improvement of the clinical depression. She does not take any medication. She says that she often cries, but she appears well cared for and well nourished.
In narrative 2, the history of violence suffered in the domestic setting, with acts of aggression displayed by the intimate partner, and the obstetric outcome culminating in hysterectomy are highlighted. Both put the woman in a fragile position. Female infertility may generate negative social consequences, especially in social settings in which gender identities and values are defined by a woman's fertility. 5,33 In this narrative, the woman appears as someone who feels less feminine owing to the impossibility of new pregnancies, feeling incapable of satisfying her partner sexually. Nevertheless, she remains connected to him and suffers violence. Psychological and sexual violence, associated with the impairments resulting from obstetric complications, may increase women's vulnerability. 34 Dealing with the topic of violence is difficult, owing to its complexity and to the understanding that it is an intimate subject. It demands abilities that health professionals do not always have. They often have to deal with what is not said and what one is afraid to say because it may be considered rejected, forbidden, or embarrassing. 35 Therefore, they must approach these women and establish a bond, turning something nonexistent (and unmanageable) into something real.
The approach to violence against women in health care settings is challenging. In addition, it demands long-term intersectoral actions that are difficult to put into practice when health care models prioritize the approach to disease symptoms and lack resources to address the problem. 36 For this purpose, the organization of interdisciplinary health actions in follow-up models should not be directed only at investigating and managing diseases, but must also be concerned with situations that increase the risk and vulnerability of these women and with the resulting loss of quality of life. These are the main reasons why the World Health Organization (WHO) has recently recommended a more integrated approach to caring for pregnant or postpartum women experiencing not only maternal morbidities but also mental or social impairments. 37
Conclusion
The narratives provided information about the long-term repercussions of SMM and/or MNM, according to the perception of health professionals, and presented the singularities of each case. In addition, they showed how long-term maternal complications extended beyond the postpartum period and affected the health and lives of women, among other aspects. Complex life histories emerged, narrated beyond the clinical symptoms that are usually investigated in clinical practice. Active listening, alongside the biomedical aspects of clinical practice, provided health professionals with life histories and life narratives, bringing them into contact with another dimension of the health-disease process. Thus, from their perspective, these cases showed that the repercussions of SMM are not restricted to the pregnancy and postpartum period, and that they demand longitudinal and interdisciplinary care beyond the full postpartum period. These cases indicated the importance of careful listening.
Contributors
All of the authors contributed to the project and data interpretation, the writing of the article, the critical review of the intellectual content, and the final approval of the version to be published.
Conflicts of interest
The authors have no conflicts of interests to declare. | 2019-07-02T13:47:47.714Z | 2019-06-01T00:00:00.000 | {
"year": 2019,
"sha1": "28e32362999a8c7b764560351abc7d0a4c88a2bb",
"oa_license": "CCBY",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0039-1688833.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "daedcc38996e2afc9e29cc64ac04b5cfccbc7525",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266683236 | pes2o/s2orc | v3-fos-license | Acceptance, Knowledge, and Attitude of Parents Toward the Human Papillomavirus Vaccine in the Eastern Region of Saudi Arabia: A Cross-Sectional Study
Background Human papillomavirus (HPV) is a common sexually transmitted virus associated with conditions such as skin warts and cervical cancer. Although many individuals clear the infection, some face persistent risks. Cervical cancer, which is linked to certain types of HPV, is a major health concern both globally and in Saudi Arabia, with preventive measures including HPV vaccination. However, parental knowledge and attitudes toward vaccinating their children vary. Therefore, this research aims to assess parental acceptance and understanding of the HPV vaccine in the Eastern Region of Saudi Arabia. Methodology This cross-sectional study was conducted in the Eastern Region of Saudi Arabia using an online questionnaire during 2022-2023. The data were cleaned in Excel and analyzed using SPSS version 29 (IBM Corp., Armonk, NY, USA). The study assessed parents' knowledge and acceptance of HPV vaccination. Results A total of 380 participants were included in this study, the majority of whom were female, married, well-educated, and residents of Al-Ahsa, Saudi Arabia. Awareness about the HPV vaccine was modest, with only 46.6% of participants having heard of it. Most parents reported that their doctors did not mention the vaccine (62.9%), and 67.1% stated that their children had not received it. Factors influencing acceptance included support from the Ministry of Health and belief in the vaccine's effectiveness. Concerns about side effects and vaccine effectiveness were the main barriers to vaccination. Sociodemographic factors, including gender, age, education, employment, and number of children, significantly influenced both knowledge and acceptance. Notably, awareness of HPV was strongly associated with acceptance. Conclusions Our study revealed limited knowledge and vaccine acceptance among parents in the Eastern Region of Saudi Arabia. Sociodemographic factors, including gender, age, education, employment, and number of children, played a significant role in shaping these attitudes, emphasizing the need for targeted awareness campaigns and improved communication between healthcare providers and parents to enhance vaccine uptake.
Introduction
Human papillomavirus (HPV) is a double-stranded DNA virus and one of the most common sexually transmitted infections. It is mainly transmitted through sexual contact and infects the cutaneous and mucosal epithelium, which are associated with common skin warts and cervical cancer, respectively. Although most infected individuals will eventually clear the infection, a persistent infection remains a risk for all affected individuals [1]. The HPV family has over 200 genotypes, with types 16 and 18 causing approximately 70% of cervical cancer cases. HPV has also been associated with anal, vaginal, vulvar, penile, and oropharyngeal cancer [2].
Cervical cancer is the fourth most common cancer in females worldwide and the eighth most common cancer among females in Saudi Arabia. In 2020, it was estimated that approximately 358 cases of cervical cancer are diagnosed annually in Saudi Arabia. The annual number of deaths due to cervical cancer in Saudi Arabia is approximately 179, making it the seventh leading cause of cancer deaths in women 15-44 years of age [2].
HPV-related cancers can be prevented through primary prevention, which includes HPV vaccination [3]. In 2006, the U.S. Food and Drug Administration approved the quadrivalent HPV vaccine (Gardasil), which protects against HPV types 6, 11, 16, and 18, for use in females 9-26 years of age; by 2017, 71 countries worldwide had introduced the HPV vaccine into their national vaccination programs for young girls [4]. In Saudi Arabia, the vaccine was first approved by the Saudi Food and Drug Administration in 2010 and introduced into the updated national immunization schedule in 2019 for girls 11-12 years of age [5,6].
In 2020, the World Health Organization (WHO) adopted a global strategy to eliminate cervical cancer, marking the first global health strategy to eliminate a type of cancer. The strategy proposes that, by 2030, each country should reach a target of 90% of girls being fully vaccinated against HPV by 15 years old [3].
Because the HPV vaccine is best administered before exposure to HPV through sexual contact, WHO recommends vaccination of girls at 9-14 years of age [7]. In line with WHO's global strategy to eliminate cervical cancer, Saudi Arabia has launched a national vaccination program to vaccinate girls 9-13 years of age by visiting several schools to provide the vaccine [8,9]. The vaccine has also been provided in several hospitals and primary health centers, where the individual must visit the location to receive it. Parents' awareness and acceptance of the HPV vaccine are required for them to take their daughters to a healthcare center to get vaccinated.
Several studies globally have shown that parents have limited knowledge about the HPV vaccine. In Qatar, Hendaus et al. reported that >60% of parents were not aware that HPV can cause cancers such as cervical and genital cancers [10]. Nevertheless, 77% of parents responded as being "very comfortable" with giving their children a vaccine that would protect them from genital cancer, although <4% of parents said that their children's doctors had ever recommended the HPV vaccine [10]. In Ethiopia, a study with 638 participants showed that only 35.8% of parents were knowledgeable about HPV vaccination, and 44.8% were willing to have their children vaccinated [11].
In Saudi Arabia, only two studies have investigated parents' attitudes and knowledge toward the HPV vaccine, and these were conducted in Riyadh and the Western Region [12,13]. Parents' acceptance and knowledge are determining factors in the vaccination of the younger population but, to our knowledge, no study has explored this in the Eastern Region of Saudi Arabia. Therefore, our study aims to estimate the attitudes, knowledge, and acceptance of the HPV vaccine among parents in the Eastern Region of Saudi Arabia.
Materials And Methods
A cross-sectional study design was used in this investigation. A survey was undertaken online, with data gathered using a questionnaire administered and distributed on different social media platforms, which participants completed after providing their consent to participate in the study. The study area comprised the Eastern Region of Saudi Arabia, over a duration of 11 months from November 2022 to October 2023. A total of 380 eligible participants were included in the study, the sample size having been determined using the Raosoft Sample Size Calculator with a 95% confidence interval and a 7% margin of error. The inclusion criteria were as follows: ≥18 years old, residing in the Eastern Region of Saudi Arabia, and literate. The exclusion criteria were as follows: no children and data-entry errors. The questionnaire was entirely original, as no existing questionnaire related specifically to parents' orientation toward HPV vaccination was found. Accordingly, a new questionnaire was designed by gathering some questions from previously validated and reliable questionnaires covering elements related to awareness of HPV vaccines [10,12,14].
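To make the sample-size target reproducible, the sketch below implements the standard Cochran formula with the finite-population correction used by calculators such as Raosoft. The 50% assumed response distribution and the optional population size are illustrative assumptions, since the study reports only the 95% confidence level and the 7% margin of error.

```python
from math import ceil

def minimum_sample_size(margin_of_error=0.07, z=1.96,
                        response_share=0.5, population=None):
    """Cochran sample-size formula with an optional finite-population
    correction (the approach used by tools such as Raosoft).
    Inputs other than the margin of error and confidence level are
    illustrative assumptions, not values reported in the study."""
    x = z ** 2 * response_share * (1 - response_share)
    if population is None:
        return ceil(x / margin_of_error ** 2)
    return ceil(population * x / ((population - 1) * margin_of_error ** 2 + x))

# ~196 respondents for a 7% margin at 95% confidence; the 380 participants
# actually enrolled comfortably exceed this minimum.
print(minimum_sample_size())
```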
The questions were assessed for reliability using Cronbach's alpha, which was 0.68 for the scale data. Permission to undertake this research was sought, and approval was granted by the ethical committee of King Faisal University in Al-Ahsa (approval number: KFU-REC-2022-NOV-ETHICS337). The participants engaged in the survey were informed that their involvement was voluntary and that completing the distributed questionnaires implied that they had agreed to participate in the study. The questionnaire was designed following a comprehensive discussion with a team of experts in the field of general obstetrics and gynecology and was then validated by a team of experts, including consultants and an associate consultant from the Obstetrics and Gynecology Department at King Faisal University. The content of the questionnaire was translated into Arabic to preserve the meaning of the important elements it captured. The translated copy of the questionnaire was authenticated in terms of its face and content validity. The final copy of the questionnaire used in the actual study included 29 questions grouped into four sections. The first section contained eight questions on biographical data such as age, gender, marital status, educational level, and field of study. The second section contained 11 questions to assess general knowledge about the HPV vaccine. The third section contained seven questions on awareness of HPV vaccines. The fourth section contained three questions that assessed the willingness and acceptance of parents and their partners for their children to receive the HPV vaccine. The questions had multiple response choices, with one "I don't know" option to avoid guesses from the respondents. A score of 1 was assigned to each correct answer and 0 to each incorrect or "I don't know" answer. A higher score indicated that the respondent had better knowledge about HPV and its vaccine. The maximum score was 18 (second and third sections) and the minimum was 0. The total knowledge score was grouped as follows: 0-6 = poor knowledge, 7-12 = fair knowledge, and 13-18 = good knowledge. The percentage knowledge score was then calculated as the total score obtained divided by the maximum score (i.e., 18 points). Relationships between social variables and level of knowledge of HPV were measured using Fisher's exact test, where p < 0.05 was considered statistically significant. In the fourth section, we aimed to create a list of factors that influenced parents' decision-making and correlate that with their final level of knowledge. Data were extracted, coded, and analyzed using SPSS version 29 (IBM Corp., Armonk, NY, USA).
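To make the scoring and testing procedure concrete, the sketch below bands a respondent's total score into the poor/fair/good categories defined above and applies Fisher's exact test to a 2 × 2 knowledge-by-gender table (using SciPy here in place of SPSS, purely for illustration). The cell counts are hypothetical placeholders, not the study's data.

```python
from scipy.stats import fisher_exact

def knowledge_category(score, max_score=18):
    """Band a total knowledge score as described in the text
    (0-6 poor, 7-12 fair, 13-18 good) and return the percentage score."""
    if not 0 <= score <= max_score:
        raise ValueError("score out of range")
    band = "poor" if score <= 6 else "fair" if score <= 12 else "good"
    return band, 100 * score / max_score

# Hypothetical 2x2 table: rows = gender (female, male),
# columns = (higher knowledge, lower knowledge).
table = [[171, 68], [81, 60]]
odds_ratio, p_value = fisher_exact(table)

print(knowledge_category(10))               # ('fair', 55.55...)
print(f"Fisher's exact test: p = {p_value:.3f}")
```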
Results
Most participants worked in sectors other than healthcare (91.1%), and the number of children varied, with three to four children being the most common (32.7%) (Table 1). Figure 1 shows the proportion of participants who had heard about the HPV vaccine. Overall, 46.6% of participants had heard about HPV infection, whereas 42.6% had not, and 10.8% were not sure; overall knowledge levels were modest (mean score = 3.00 out of 11).
Table 2 provides insights into parents' acceptance of the HPV vaccine. A majority of parents reported that their doctors did not mention the vaccine (62.9%). Regarding vaccination, 67.1% of parents reported that their children had not received it. A minority of parents agreed to have their child vaccinated by age 12 (41.1%). Regarding spousal agreement, a notable proportion of parents (25.6%) reported their spouse's agreement to HPV vaccination.
Figure 2 shows the factors influencing parents' decision to have their child vaccinated against HPV. Overall, 38.5% of parents were encouraged by the Ministry of Health's support, and 35.1% by their belief in the vaccine's effectiveness in preventing the disease. Some parents cited no specific reason (16.7%), whereas others were influenced by their physician's advice (8.1%). A small proportion had other reasons for vaccination (1.6%).
FIGURE 2: Factors influencing parents' decision to vaccinate their child against human papillomavirus.
Figure 3 shows the factors preventing parents from vaccinating their children against HPV. Concerns about potential side effects affecting their child (29.8%) and uncertainty regarding the vaccine's effectiveness (27.9%) were prominent factors. Some parents cited their child's lack of sexual activity (17.4%) or the belief that their child does not need the vaccine (17.4%). A smaller percentage mentioned a lack of knowledge about the disease and the vaccine's importance (2.3%) or not knowing where to obtain the vaccine (2.4%). A few parents cited religious reasons (1.9%), whereas others had different concerns (0.9%).
FIGURE 3: Factors preventing parents from vaccinating their child against human papillomavirus.
Table 3 lists the significant associations between sociodemographic factors and general knowledge about HPV. Notably, gender played a role, with females showing higher knowledge (n = 171) than males (n = 81), a significant difference (p = 0.007). Age was influential, with older individuals exhibiting higher knowledge, particularly those 41-50 years of age (n = 99), and a weaker association among those <30 years of age (n = 29) (p < 0.001). Educational status also mattered, as those with a bachelor's degree or higher had greater knowledge (n = 190) compared with those with up to a high school education (n = 62) (p = 0.041).
Employment status was also significant, with employed individuals displaying higher knowledge (n = 138) (p = 0.047). Additionally, the number of children influenced knowledge, with those having three to four children displaying higher knowledge (n = 91) compared with those with >7 children (n = 17) (p = 0.025). Notably, having heard of HPV was strongly associated with higher knowledge (n = 165) compared with those who were not sure (n = 22) or had not heard of it (n = 65) (p < 0.001). (In Table 3, "no/lower knowledge" includes poor and fair knowledge.)
Table 4 shows the associations between sociodemographic features and acceptance of the HPV vaccine. Gender played a vital role, with females exhibiting higher acceptance (n = 166) than males (n = 72), a highly significant difference (p < 0.001). Age also had a substantial impact, as older participants demonstrated greater acceptance, particularly those 41-50 years of age (n = 96), whereas participants 18-30 years of age had the lowest rate of acceptance (n = 30) (p < 0.001). Educational status also mattered, with those holding a bachelor's degree or higher displaying higher acceptance (n = 180) compared with those with up to a high school education (n = 58) (p = 0.045). Employment status was significantly associated with acceptance of the HPV vaccine, with employed individuals showing higher acceptance (n = 132) (p < 0.001). The number of children also influenced acceptance, as those with three to four children displayed higher acceptance (n = 86) (p = 0.015). Importantly, having heard of HPV was strongly associated with higher acceptance (n = 154) compared with those who were not sure (n = 24) or had not heard of it (n = 60) (p < 0.001).
Discussion
HPV is a common sexually transmitted virus associated with conditions such as skin warts and cervical cancer. Although many individuals clear the infection, some face persistent risks. Cervical cancer, linked to certain types of HPV, is a major health concern both globally and in Saudi Arabia [14]. Our study aimed to assess parental awareness and acceptance of the HPV vaccine for their children and to identify the factors influencing acceptance or refusal. It yielded important findings, which are discussed here in the context of the existing medical literature.
Our study population is reflective of the Eastern Region of Saudi Arabia, with the majority of participants being female (62.8%) and married (97.3%). These results align with the cultural norms and expectations within the region, where mothers often take on a prominent role in childcare [15,16]. The high educational attainment we observed (72.2% with a bachelor's degree or above) was also encouraging, suggesting a well-educated sample.
However, knowledge levels about HPV were modest, with a mean knowledge score of 3.00 out of 11, indicating a significant knowledge gap among parents in the region. These findings are in line with previous research in Saudi Arabia, which also reported insufficient knowledge about HPV and the vaccine among parents. For example, Alkalash et al. found that only 32.9% of their study participants had heard about HPV, and their knowledge scores were similarly low [13].
The vast majority of parents (62.9%) noted that their doctors did not discuss the HPV vaccine, raising concerns about the role of healthcare providers in vaccine promotion. Previous research by Osaghae et al. emphasized the significance of healthcare provider recommendations and confidence in counseling hesitant parents, highlighting a need for enhanced communication about the importance of the HPV vaccine [17].
Furthermore, a significant proportion (67.1%) of parents indicated that their children had not received the HPV vaccine, reflecting concerningly low uptake despite the vaccine's potential to prevent cancer. These results align with the study by Alghamdi et al. in Saudi Arabia, which showed that the majority of parents exhibited positive knowledge, attitudes, and practices regarding vaccination, influenced by sociodemographic factors. Nonetheless, addressing vaccination hesitancy by further targeting the identified contributing factors is warranted [18].
Approximately 41.1% of parents expressed willingness to have their child vaccinated by age 12, which is a promising prospect: as HPV vaccination is recommended at 11-12 years of age, this indicates a receptive target group.
Understanding the factors that influence this willingness to vaccinate at the recommended age can provide insights for interventions to boost vaccination rates [19].
Various factors influence parents' decision to vaccinate their children against HPV. Notably, 38.5% cited the support of the Ministry of Health and 35.1% emphasized belief in the vaccine's effectiveness as the key drivers for acceptance, aligning with similar findings in international studies, such as Kolek et al.'s work in Kenya [20].
However, the fact that a significant percentage of parents cited "no specific reason" (16.7%) for vaccinating their children suggests a lack of awareness or passive attitudes toward vaccination.Improving communication and education campaigns to emphasize the safety and effectiveness of the HPV vaccine may address this group.
Various factors also prevented parents from vaccinating their children. Concerns about potential side effects affecting their child (29.8%) and uncertainty regarding the vaccine's effectiveness (27.9%) were prominent barriers. These concerns echo findings from various studies, including a review by Zheng et al., which highlighted concerns about safety as a consistent barrier to HPV vaccination [21]. Notably, a considerable percentage (17.4%) cited their child's lack of sexual activity as a reason not to vaccinate. This misconception is an essential point to address, as HPV vaccination is most effective when administered before sexual debut. Religious reasons (1.9%) were also cited as a barrier. Therefore, it is crucial to engage religious leaders and scholars to clarify that the vaccine is compatible with Islamic values. A study by Hamdi et al. drew on Islamic teachings and scholars' perspectives to understand the cultural tensions around sexuality, shedding light on barriers to vaccine acceptance [22].
There were significant associations between sociodemographic factors and general knowledge about HPV. Gender, age, educational status, employment status, and number of children were all linked to HPV knowledge levels. Women exhibited higher knowledge than men, and older individuals and those with higher education had better knowledge, as did employed individuals and those with three to four children. Additionally, having heard of HPV was strongly associated with increased knowledge. These findings mirror previous studies both from Saudi Arabia and internationally [23].
This study did have some limitations. First, the study was conducted using an online questionnaire; more accurate results might have been obtained with a physical form. Second, 62.8% of the participants were female (i.e., mothers); recruiting an equal number of male and female participants in future studies is recommended for better representativeness of the results.
Conclusions
This study reveals an insufficient level of knowledge about the HPV vaccine among both male and female Saudi parents in the Eastern Region of Saudi Arabia. It highlights the need for targeted awareness campaigns to improve HPV vaccine knowledge and acceptance, especially among males and younger individuals. Healthcare providers should actively recommend the vaccine, address concerns about its side effects, and emphasize its effectiveness. Collaboration with religious leaders and educators is crucial to bridge knowledge gaps and reduce HPV-related health disparities. This study informs public health strategies to enhance vaccine coverage and reduce HPV-related diseases.
FIGURE 1: Proportion of parents who heard about the human papillomavirus vaccine.
TABLE 3: Associations between sociodemographic features and general knowledge about the human papillomavirus (HPV).
*: P-value is calculated with Fisher's exact test.
TABLE 4: Associations between sociodemographic features and acceptance of the human papillomavirus (HPV) vaccine.
*: P-value is calculated with Fisher's exact test. | 2023-12-31T16:18:08.804Z | 2023-12-01T00:00:00.000 | {
"year": 2023,
"sha1": "6c2e4886156b5494abad23a2431aba00215bcbb0",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/206975/20231229-10218-173ojh0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c7b9404aa9af3966efd4f7b79d68ef00c2e7f86f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13289232 | pes2o/s2orc | v3-fos-license | Topical Delivery of Withania somnifera Crude Extracts in Niosomes and Solid Lipid Nanoparticles
Background: Withania somnifera is a medicinal plant native to India and is known to have anticancer properties. It has been investigated for its anti-melanoma properties, and since melanoma presents on the skin, it is prudent to probe the use of W. somnifera in topical formulations. To enhance topical drug delivery and to allow for controlled release, the use of niosomes and solid lipid nanoparticles (SLNs) as delivery vesicles was explored. Objective: The objective of this study was to determine the stability and topical delivery of W. somnifera crude extracts encapsulated in niosomes and SLNs. Materials and Methods: Water, ethanol, and 50% ethanol crude extracts of W. somnifera were prepared using 24 h Soxhlet extraction, and each was encapsulated in niosomes and SLNs. Franz cell diffusion studies were conducted with the encapsulated extracts to determine the release and skin penetration of the phytomolecules withaferin A and withanolide A. Results: The niosome and SLN formulations had average sizes ranging from 165.9 ± 9.4 to 304.6 ± 52.4 nm, with the 50% ethanol extract formulations having the largest size. A small particle size seemed to correlate with a low encapsulation efficiency (EE) of withaferin A but a high EE of withanolide A. There was a significant difference (P < 0.05) between the amounts of withaferin A and withanolide A released from each of the formulations, but only the SLN formulations managed to deliver withaferin A to the stratum corneum-epidermis and epidermis-dermis layers of the skin. Conclusion: SLNs and niosomes were able to encapsulate crude extracts of W. somnifera and release the marker compounds withaferin A and withanolide A for delivery to certain layers of the skin. SUMMARY Withania somnifera crude extracts were prepared using ethanol, water, and 50% ethanol as solvents. These three extracts were then incorporated into niosomes and solid lipid nanoparticles (SLNs) for use in skin diffusion studies, thus resulting in six formulations (ethanol niosome, water niosome, 50% ethanol niosome, ethanol SLN, water SLN, and 50% ethanol SLN). The diffusion of two marker compounds (withaferin A and withanolide A) from the formulations into the skin was then determined. Abbreviations used: API: Active pharmaceutical ingredient, ANOVA: Analysis of variance, ED: Epidermis-dermis, HPLC: High-performance liquid chromatography, HLB: Hydrophilic-lipophilic balance, NMR: Nuclear magnetic resonance spectroscopy, PDI: Polydispersity index, SLN: Solid lipid nanoparticle, SD: Standard deviation, SCE: Stratum corneum-epidermis, TEM: Transmission electron microscopy.
INTRODUCTION
Withania somnifera (also known as Ashwagandha, Indian ginseng, or winter cherry) is a plant well-known for its diverse medicinal properties in the Ayurveda system of natural medicine. Extracts from the plant leaves have a high anti-oxidant potential, and they contain a high concentration of bioactive compounds. [1,2] Therefore, the plant leaves were used in this study, as the aim was to prepare formulations containing different W. somnifera extracts for potential use in the treatment of skin conditions such as skin cancer (melanoma) and aging. The main bioactive compounds in W. somnifera are steroidal lactones collectively known as withanolides. [2,3] Throughout this study, the main focus was on withaferin A and withanolide A as bioactive marker molecules, which are known to be present in the leaves of W. somnifera. [4] Some of the medicinal properties of W. somnifera that have been identified to date include antidiabetic, antihypertensive, antibacterial, antiaging, and anticancer properties. [5,6] The plant extract is currently available on the market as a powder, tonic, and capsules. [2] In this study, it was decided to encapsulate different W. somnifera crude extracts in niosomes and solid lipid nanoparticles (SLNs) for topical delivery to the skin. The skin is the body's first line of defense and is thus rather impermeable to any foreign substances. [7,8] Nanovesicles such as niosomes, SLNs, liposomes, ethosomes, and ufosomes are being investigated for use in the delivery of medicinal compounds to and through the skin. [9,10] Nanoparticles are advantageous in cancer therapy because they can aid in the transport of therapeutic agents through barriers such as the skin, improve the pharmacokinetic profile of medicinal agents, and be used for targeted drug delivery (e.g., dermal vs. transdermal delivery). [11,12] Niosomes are known to enhance the absorption of compounds through the skin, increase the physicochemical stability of compounds, and protect the skin from the potentially irritating effects of medicinal compounds. [13,14] Various plant extracts have been successfully encapsulated in niosomes and delivered to the skin; [9,10] hence the use of niosomes for the topical delivery of W. somnifera crude extracts. SLNs have been reported to be suitable for topical drug delivery, resulting in reduced systemic delivery of medicinal compounds due to controlled and targeted drug delivery. [15,16] The aim of this study was to prepare three different W. somnifera crude extracts and encapsulate these extracts in niosomes and SLNs for use in Franz cell diffusion studies. A stability assessment of the formulations was conducted to determine whether certain marker molecules in the extracts remained stable in the niosomes and SLNs.
MATERIALS AND METHODS
Materials
The withaferin A and withanolide A USP analytical standard compounds were purchased from ChromaDex (Irvine, California, USA). Ethanol and methanol for plant extractions and analytical standard preparation were purchased from Associated Chemical Enterprises (Johannesburg, South Africa). High-performance liquid chromatography (HPLC) grade acetonitrile and deuterated chloroform (CDCl3) were obtained from Merck Chemicals (Johannesburg, South Africa). The Compritol 888 ATO (glyceryl dibehenate) that was used for formulation of the SLNs was a generous gift from Gattefossè (Lyon, France).
Preparation of plant extracts
W. somnifera leaves were purchased from Mountain Herb Estate Nursery (Kameeldrift-West, Pretoria, South Africa) and authenticated at the South African National Biodiversity Institute National Herbarium (Pretoria, South Africa). On receipt, the plant leaves were cleaned, air-dried, and crushed to a fine powder. A 24 h Soxhlet extraction was used to prepare three separate crude extracts from the leaf powder using water, ethanol, and ethanol/water (50:50) as the solvents. After the Soxhlet extraction, the ethanol was evaporated using a rotary evaporator, and the water was removed using a freeze dryer (VirTis, Gardiner, NY, USA). The dry end-products were stored in glass containers, protected from light, at −20°C.
Chemical characterization of Withania somnifera extracts with nuclear magnetic resonance spectroscopy
For each individual extract, approximately 50 mg of plant extract was weighed out, dissolved in 1.5 ml deuterated chloroform, and filtered into an NMR tube to remove any undissolved residue. Both ¹H-NMR and ¹³C-NMR spectra were obtained using an Avance III 600 MHz NMR Spectrometer (Bruker, Rheinstetten, Germany).
Chemical characterization of Withania somnifera extracts with high-performance liquid chromatography
The HPLC analytical method was developed in the Analytical Technology Laboratory of the North-West University, Potchefstroom, South Africa. This method was used for chemical fingerprinting of the plant extracts and for the detection of the marker compounds (withaferin A and withanolide A) throughout the study. The separation was carried out on an Agilent 1100 series HPLC equipped with a quaternary gradient pump, autosampler, diode array detector, and Chemstation A.10.01 data acquisition and analysis software (Agilent, Palo Alto, CA, USA) on a Venusil XBP C18 (2), 150 mm × 4.6 mm, 5 µm column (Agela Technologies, Newark, DE). A gradient elution method was used, in which mobile phase A was HPLC-grade water and mobile phase B was 100% acetonitrile. The run was started at 10% acetonitrile with a linear gradient to reach 100% acetonitrile after 10 min, holding to 20 min before re-equilibrating at the start conditions. The flow rate, injection volume, detection wavelength, and stop time were set to 1 ml/min, 50 µl, 210 nm, and 22 min, respectively. The standard solutions and samples for chemical fingerprinting were all prepared using analytical grade methanol and HPLC-grade water. To obtain a chemical fingerprint of the W. somnifera crude extracts, 10 mg of plant extract was dissolved in 1 ml of methanol with the aid of sonication and made up to 10 ml using Milli-Q water. The resulting solution was filtered and then analyzed using HPLC.
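As a plain restatement of the gradient timetable just described, the sketch below returns the acetonitrile fraction (%B) at any time in the 22 min run. It is not instrument control code, and treating the final 2 min of re-equilibration as an immediate return to 10% B is an assumption.

```python
def percent_acetonitrile(t_min):
    """%B at time t: 10% at t = 0, linear ramp to 100% at 10 min,
    hold to 20 min, then re-equilibration at start conditions until
    the 22 min stop time (modelled here as an immediate step back)."""
    if not 0 <= t_min <= 22:
        raise ValueError("outside the 22 min run")
    if t_min <= 10:
        return 10 + 90 * t_min / 10   # linear gradient segment
    if t_min <= 20:
        return 100.0                  # hold segment
    return 10.0                       # re-equilibration

for t in (0, 5, 10, 15, 21):
    print(t, percent_acetonitrile(t))  # 10, 55, 100, 100, 10
```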
Formulation of niosomes and solid lipid nanoparticles
The solvent injection method was utilized for the formulation of both the niosomes and the SLNs. Pando et al. reported that the solvent injection method for niosome formulation resulted in a higher resveratrol encapsulation efficiency (EE) and higher stability, and preformulation studies confirmed that this method of preparation was acceptable for encapsulation of the W. somnifera crude extracts. [17] For the niosomes, a 2:1 mixture of surfactant (Tween 80/Span 60) and cholesterol (w/w) was dissolved in diethyl ether while the aqueous phase was heated to 60°C ± 2°C. The diethyl ether solution was slowly injected into the preheated aqueous phase using a hypodermic needle. For the SLNs, a 2:1:1 mixture of surfactant, Compritol 888 ATO, and L-α-phosphatidylcholine (w/w) was weighed out and dissolved in the organic solvent. This organic phase was then slowly injected into a preheated (60°C ± 2°C) aqueous phase. The organic and aqueous phases were continuously stirred magnetically, and the temperature was maintained at 60°C ± 2°C until the organic solvent was driven off. The resulting formulation was cooled and sonicated on ice using a Hielscher UP 200ST sonicator (Hielscher Ultrasound Technology, Teltow, Germany). The ethanol and 50% ethanol extracts (2.0% w/w) were added to the organic phase, while the water extract was added to the aqueous phase before the injection step. Zorzi et al. advise that a maximum of 2.0% crude extract should be incorporated into nanoformulations. [18] In total, six formulations were prepared, as one niosome and one SLN formulation was prepared for each extract.
Physicochemical characterization of formulations
The physicochemical characteristics of the niosome and SLN formulations that were assessed in this study include morphology, particle size, zeta-potential, polydispersity index (PDI), pH, and EE (withaferin A and withanolide A). Transmission electron microscopy (TEM) was used to visualize the morphology of the formulations. Zeta-potential, size, and PDI were measured using a Zetasizer Nano ZS (Malvern Instruments, Worcestershire, UK). Approximately 1 ml of each formulation was injected into a disposable folded capillary cell for zeta-potential measurement, and the reading was taken using the Zetasizer Nano ZS. Freshly prepared formulations had their pH measured at 25°C using a Mettler Toledo pH meter (Mettler Toledo, Columbus, OH, USA). The EE of the formulations was determined according to the method described by Junyaprasert et al. [19] Briefly, the formulations were centrifuged in an Optima L-100XP ultracentrifuge (Beckman Coulter, Brea, California, USA) for 30 min at a speed of 30,000 rpm and a temperature of 4°C. The supernatant was then diluted and analyzed using the HPLC analytical method for withaferin A and withanolide A. The percentage EE (%EE) was then calculated as follows:

%EE = [(total amount of compound added − free amount of compound) / total amount of compound added] × 100%

Stability testing of formulations
A 3-month temperature stability assessment was conducted on lyophilized niosomes and SLNs. Niosomes and SLNs were formulated according to the described method, lyophilized using a VirTis freeze-dryer (Gardiner, NY, USA), and stored at room temperature for 3 months. The formulations were kept in temperature-controlled laboratories at a temperature of 22°C. The formulations were resuspended in Milli-Q water, and particle size, zeta-potential, pH, and %EE were then measured after 0, 7, 14, 28, 56, and 84 days.
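Since %EE was recalculated at every characterization and stability time point, a minimal sketch of the calculation defined by the %EE equation above may be useful. The input amounts are illustrative placeholders, not measured values from the study.

```python
def percent_ee(total_added, free_in_supernatant):
    """%EE = (total amount added - free amount in the supernatant)
    / total amount added x 100, as in the equation above.
    Both amounts must be in the same unit (e.g., micrograms)."""
    if not 0 <= free_in_supernatant <= total_added:
        raise ValueError("free amount must lie between 0 and the total added")
    return 100 * (total_added - free_in_supernatant) / total_added

# Illustrative numbers only: 200 ug added, 9.4 ug found free -> 95.3% EE
print(f"{percent_ee(200.0, 9.4):.1f}%")
```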
Skin preparation for skin diffusion studies
Caucasian female abdominal skin obtained from abdominoplasty patients was used for the skin diffusion studies. Informed consent was obtained from each patient, and the NWU Research Ethics Committee gave approval for obtaining, preparing, and using human excised skin for research purposes (ethical approval number: NWU-00114-11-A5). The collected skin was inspected for imperfections such as holes and stretch marks so that such areas could be excluded from the experimental skin samples. Split-thickness skin with a thickness of 400 µm was prepared using a Zimmer® dermatome (Warsaw, IN, USA). The skin was placed on Whatman® filter paper, wrapped in foil, placed in Ziploc® bags, and frozen at −20°C for not more than 3 months.
Franz cell diffusion studies
Franz cell membrane diffusion studies were done to determine the withaferin A and withanolide A release characteristics of the niosomes and SLNs. Subsequent to the membrane diffusion studies, skin diffusion studies were performed to assess the diffusion of withaferin A and withanolide A into and through the skin. Static Franz diffusion cells with a diffusion area of 1.075 cm² and a receptor capacity of at least 2 ml were used for both the membrane diffusion studies and the skin diffusion studies. The formulations were prewarmed to 32°C (the temperature at the surface of the skin), [20] and phosphate buffer solution (0.06 M NaOH and 0.08 M KH2PO4, pH 7.4) was prewarmed to 37°C (physiological temperature) in appropriately set water baths. This was done to mimic in vivo conditions. [20,21] The donor and receptor compartments were greased with Dow Corning® vacuum grease, and a magnetic stirring rod was placed inside the receptor compartment. A 0.45 µm polytetrafluoroethylene membrane filter (Whatman Plc, Maidstone, UK) or a piece of skin (stratum corneum facing the donor compartment) was placed between the donor compartment and the receptor phase. To avoid leaks, the two compartments were sealed and fastened together using vacuum grease and a horseshoe clamp. Two milliliters of buffer solution were added to the receptor compartment, and 1.0 ml of formulation was added to the donor compartment. Ten samples from the same skin donor (n = 10) were set up, and two Franz cells were set up with placebo formulations as the controls. The Franz cells were placed on a Franz cell stand in a 37°C water bath, with a Variomag® magnetic stirrer stirring the receptor phase to keep it homogenous. The receptor phase was extracted at predetermined time intervals and replaced with fresh buffer. The extracted receptor phase was then analyzed using HPLC. Extractions for the membrane diffusion studies were done every hour up to 6 h, while a single extraction after 12 h was done for the skin diffusion studies.
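For readers reconstructing release profiles from these measurements, the sketch below converts receptor-phase concentrations into cumulative amount permeated per unit area, correcting for the analyte withdrawn at each sampling and replaced with fresh buffer, and then estimates flux as the slope of the cumulative curve. The assumption that the entire 2 ml receptor phase is withdrawn and replaced at each extraction follows the procedure described above; the concentration values are illustrative.

```python
import numpy as np

def cumulative_permeation(conc_mg_ml, v_receptor_ml=2.0, area_cm2=1.075):
    """Cumulative amount permeated per unit area (mg/cm2) from receptor
    concentrations at successive sampling times, assuming the whole
    receptor volume is withdrawn and replaced with fresh buffer."""
    removed = 0.0
    q = []
    for c in conc_mg_ml:
        q.append((c * v_receptor_ml + removed) / area_cm2)  # in cell + taken out
        removed += c * v_receptor_ml                         # leaves with sample
    return np.array(q)

# Illustrative 6 h membrane-release profile; flux estimated as the slope of
# Q(t) (all six points are used here for simplicity).
t_h = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
q = cumulative_permeation([0.010, 0.018, 0.021, 0.025, 0.028, 0.030])
flux_mg_cm2_h = np.polyfit(t_h, q, 1)[0]
print(f"apparent flux: {flux_mg_cm2_h:.4f} mg/cm2/h")
```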
Tape-stripping studies
The tape-stripping technique was used to determine the amounts of withaferin A and withanolide A that had permeated into the different skin layers. This technique works by selectively removing the upper skin layers and analyzing the amount of compound within each stripped layer. [22,23] The method described by Pellet et al. [24] was followed for the tape-stripping study. After the skin diffusion study was completed, the skin was cleaned using a paper towel to remove any unabsorbed drug. Thereafter, a piece of 3M Scotch® Magic tape was applied to the diffusion area, removed, and discarded to strip off any unabsorbed compound on the skin surface. The stripping process was repeated with 15 pieces of tape, and these tape strips were all placed into a polytop containing 5 ml of phosphate buffer solution. The remaining piece of skin was cut into small pieces to increase the surface area and placed into a polytop containing 5 ml of phosphate buffer solution. This process was repeated for each Franz cell, and the polytops were stored at 4°C overnight. On the following morning, the buffer solution was filtered into appropriately labeled HPLC vials, and the samples were analyzed using the HPLC analytical method.
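A minimal sketch of the subsequent quantification step: the HPLC concentration of each 5 ml buffer extract (tape strips for the stratum corneum-epidermis, remaining skin for the epidermis-dermis) is scaled to the 1.075 cm² diffusion area. Complete extraction of the analyte into the buffer is an assumption.

```python
def layer_amount_per_area(hplc_conc_ug_ml, extraction_volume_ml=5.0,
                          area_cm2=1.075):
    """Amount of marker compound recovered per unit diffusion area
    (ug/cm2) from a tape-strip or remaining-skin buffer extract."""
    return hplc_conc_ug_ml * extraction_volume_ml / area_cm2

# Illustrative value: 0.2 ug/ml in the 5 ml extract -> ~0.93 ug/cm2
print(f"{layer_amount_per_area(0.2):.2f} ug/cm2")
```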
Statistical analysis
Statistical analysis of the Franz cell diffusion data was done using the Statistica data analysis software system (StatSoft Inc., version 12 [2015], Tulsa, Oklahoma, USA). The mean and median flux values were calculated for each experiment. One-way and two-way analyses of variance (ANOVA) were done together with t-tests to determine any significant differences within and between the different experiments.
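A minimal sketch of the comparisons described above, using SciPy in place of Statistica: a one-way ANOVA across three formulations followed by a pairwise t-test. The flux values are hypothetical placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical per-cell flux values (ug/cm2/h) for three formulations
niosome_ethanol = [1.2, 1.4, 1.1, 1.3, 1.5]
sln_ethanol     = [0.9, 1.0, 0.8, 1.1, 0.9]
sln_water       = [0.5, 0.6, 0.4, 0.7, 0.5]

f_stat, p_anova = stats.f_oneway(niosome_ethanol, sln_ethanol, sln_water)
t_stat, p_ttest = stats.ttest_ind(niosome_ethanol, sln_ethanol)

print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"t-test (niosome vs. SLN, ethanol extract): p = {p_ttest:.4f}")
```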
RESULTS AND DISCUSSION
Chemical characterization of Withania somnifera extracts
The HPLC analytical method was robust and suitable for use in the analysis of both withaferin A and withanolide A. Withaferin A eluted at approximately 7.5 min and withanolide A at approximately 8.5 min. Figure 1 shows the chromatograms of the individual standard compounds and those of the crude extracts. The analysis of the crude plant extracts revealed that the withaferin A content of the extracts was 0.98% w/w (water extract), 1.76% w/w (ethanol extract), and 4.55% w/w (50% ethanol extract), respectively. The withanolide A content of the three extracts was 5.04% w/w (water extract), 1.21% w/w (ethanol extract), and 3.04% w/w (50% ethanol extract), respectively. The ¹H-NMR and ¹³C-NMR spectra of the different W. somnifera crude extracts are shown in Figures 2 and 3, respectively.
Physicochemical characterization of formulations
The physicochemical properties of all the formulations are summarized in Table 1, which shows the mean of three independent experiments ± standard deviation. Figure 4 shows the TEM micrographs of the formulated placebo niosomes and SLNs. The 50% ethanol formulations displayed relatively larger average particle sizes compared with the other formulations. The different chemical compositions of the crude extracts possibly played a role, as the 50% ethanol extract was expected to contain both polar and nonpolar compounds due to the presence of both an aqueous and an organic solvent during the extraction process. All the freshly prepared formulations had pH values between 5.017 and 5.709, which is considered safe for topical application, as the skin's pH lies between 4 and 6. [21,25] The SLNs were generally the least homogenous, probably due to their high lipid content. It is possible that the lipids were affected by the energy released during the sonication process, resulting in aggregation of some particles and thus relatively higher PDI values. Becker Peres et al. found that a long sonication time (90 s) resulted in an increase in particle size, possibly due to slight destabilization that produced very small droplets that could not be completely covered by the surfactant in the formulation. [26] The presence of water-soluble compounds in the water extracts possibly contributed to the low absolute zeta-potential values of the water extract formulations by reducing the cohesive properties of the formulations. Use of a higher concentration of a surfactant with a high hydrophilic-lipophilic balance may be able to resolve the stability issues of the water extract formulations. These water extract formulations had a very low percentage encapsulation of withaferin A but a high withanolide A percentage encapsulation. It is apparent that a change in formulation (SLN vs. niosome) did not cause any major changes in particle size or in the EE of withaferin A. The SLNs exhibited a slightly higher encapsulation of both withaferin A and withanolide A than the respective niosome formulations. This effect of the SLNs was more apparent for the
Both vesicle types managed to encapsulate withaferin A and withanolide
A from all the extracts.The highest percentage encapsulation (95.3%) that was obtained was for withanolide A from the water extract SLNs.It is, however, possible that the nonencapsulated extract compounds could have been solubilized in the external aqueous phase or adsorbed on the surface of the carrier vesicles instead of being encapsulated in the vesicles. [18]
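Encapsulation efficiencies such as the 95.3% above are commonly obtained indirectly, by quantifying the free (unencapsulated) compound after separating the vesicles; below is a minimal sketch of that calculation under this assumption, with hypothetical numbers:

```python
# Minimal sketch of an indirect encapsulation-efficiency calculation, assuming the
# free compound is measured in the supernatant after separating the vesicles;
# the numerical values are hypothetical placeholders.

def encapsulation_efficiency(total_ug: float, free_ug: float) -> float:
    """EE% = (total drug - free drug) / total drug * 100."""
    return (total_ug - free_ug) / total_ug * 100.0

# Example: 100 µg withanolide A in the formulation, 4.7 µg found free
print(f"EE = {encapsulation_efficiency(100.0, 4.7):.1f}%")  # 95.3%, cf. water extract SLNs
```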
Stability testing of formulations
The changes that transpired over the 84-day test period are summarized in Table 2. The changes in the pH values of the formulations ranged from 0.089 (50% ethanol extract SLNs) to 0.890 (ethanol extract niosomes) over the 3-month period, and the final values were still within an acceptable range for topical application. Zeta-potential measurements of some of the formulations fluctuated over the 3-month period, with the changes per formulation ranging from 0.78 mV (ethanol extract niosomes) to 13.12 mV (water extract niosomes). The formulations with the most electronegative initial zeta-potential values (ethanol extract niosomes and SLNs) exhibited the smallest fluctuations in zeta potential, implying that these formulations were relatively stable colloidal systems. At the end of the 3-month testing period, all the average particle sizes were above 300 nm, with the ethanol extract niosomes having the smallest change (137.73 nm) and the 50% ethanol extract SLNs having the largest size increase (1454.33 nm). The changes in the PDI values ranged from 0.001 (water extract SLNs) to 0.185 (50% ethanol extract SLNs). Changes in the percentage encapsulation of withaferin A ranged from 2.03% (50% ethanol extract niosomes) to 26.00% (ethanol extract SLNs), while the changes for withanolide A ranged from 0.72% (water extract SLNs) to 37.61% (50% ethanol extract SLNs). The percentage encapsulation efficiencies of both withaferin A and withanolide A generally varied more for the SLN formulations than for the niosome formulations. SLNs stored at 4°C are said to have better stability than SLNs stored at room temperature; the higher storage temperature may therefore have been responsible for the instability that was observed. [27] Stability problems of nanovesicles are usually due to post-formulation expulsion of the active pharmaceutical ingredient (API) and particle aggregation. To increase the stability of nanoformulations, it may be necessary to increase the surfactant content, as this increases the physical stability of the nanoparticles and also results in a high concentration of smaller nanoparticles. [28] The relative instability of the formulations may also have been due to the presence of sodium cholate, which is capable of accelerating degradation of formulations in the long term. [29] Initial physicochemical characterization was done on freshly prepared samples, while the stability experiments were done after freeze-drying. The formulations in this study were freeze-dried in the absence of a lyoprotectant such as sucrose, mannitol, or trehalose, and this may have been responsible for some of the instability issues encountered. [30] Reports have been made that lyophilization of nanoformulations can cause instability with respect to particle aggregation, physical properties, osmolarity, pH, and drug loading. [31] The absence of lyoprotectants during freeze-drying can also affect the EE of compounds in liposomes, and a similar phenomenon may occur with encapsulation in niosomes and SLNs. [30] The presence of many unidentified compounds in the crude extracts may have also contributed to the physicochemical changes that were detected.
Franz cell diffusion studies
The average percentage release of withaferin A and of withanolide A after the 6 h membrane diffusion was calculated after the membrane release experiment and is presented in Table 3, together with the average cumulative amount of each compound released per unit area. After the 12 h skin diffusion study, neither withaferin A nor withanolide A was detected in the receptor phase. This led to the assumption that the compounds had only been retained within the skin and had not permeated through it to reach the receptor phase, so the tape-stripping study was conducted to determine the quantities of these marker molecules in the distinct layers of the skin.
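For context, cumulative release per unit area and steady-state flux are usually derived from receptor-phase concentrations sampled over time; the sketch below shows that reduction, where the sampling times, receptor volume, diffusion area and HPLC readings are hypothetical placeholders (and the correction for withdrawn sample volume is omitted):

```python
# Minimal sketch: derive cumulative release per unit area and steady-state flux
# from receptor-phase HPLC readings; all inputs are hypothetical placeholders,
# and the sample-withdrawal correction is omitted for brevity.
import numpy as np

AREA_CM2 = 1.075    # hypothetical diffusion area
VOLUME_ML = 2.0     # hypothetical receptor chamber volume

times_h = np.array([1, 2, 3, 4, 5, 6])
conc_ug_ml = np.array([0.8, 1.9, 3.1, 4.2, 5.0, 6.1])  # hypothetical HPLC readings

cumulative_ug_cm2 = conc_ug_ml * VOLUME_ML / AREA_CM2
flux, intercept = np.polyfit(times_h, cumulative_ug_cm2, 1)  # slope = flux
print(f"steady-state flux ≈ {flux:.2f} µg/cm².h")
```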
Tape-stripping studies
The average concentration of each compound detected in the stratum corneum-epidermis and in the epidermis-dermis was calculated and tabulated in Table 4. A comparison was made between the amounts of marker compound that reached the two skin layers, and a statistically significant difference was detected, implying that the difference was due to the physical, biological, and chemical differences between the stratum corneum-epidermis and the epidermis-dermis. The extent of skin penetration of the marker compounds was different for each formulation. Only the 50% ethanol extract SLNs managed to deliver both withaferin A and withanolide A to both the epidermis and the dermis, and permeation to the epidermis-dermis level was only achieved by the SLN formulations. It is thus conceivable that the SLNs had a greater ability than the niosomes to deliver withaferin A and withanolide A to the deeper skin layers. In any study, it is imperative to select the most appropriate nanocarrier, as this determines whether the required amounts of API reach the desired skin layers. [21] The 50% ethanol SLNs were the only formulation able to deliver both withaferin A and withanolide A to the stratum corneum-epidermis and the epidermis-dermis. These results suggested that the 50% ethanol SLN formulation was the optimum formulation, as it was capable of delivering the two marker compounds to the target skin layers for topical cancer chemotherapy. Melanoma penetrates vertically into the dermis before metastasizing; therefore, delivery of an API into the dermis is ideal for potential skin cancer treatment. The 50% ethanol niosomes, however, showed the highest average concentration of withaferin A in the stratum corneum-epidermis (1.364 µg/ml), followed by the 50% ethanol SLNs (0.489 µg/ml), water SLNs (0.299 µg/ml), ethanol niosomes (0.298 µg/ml), and finally the ethanol SLNs (0.061 µg/ml). The withaferin A content of the extracts influenced the permeation of withaferin A into the skin, as reflected by the 50% ethanol SLNs and niosomes producing the highest concentrations of withaferin A in the stratum corneum-epidermis; the 50% ethanol extract contained 4.55% withaferin A, which was considerably higher than the withaferin A content of the water (0.98%) and ethanol (1.76%) extracts. With respect to withanolide A, all the SLN formulations delivered withanolide A to the stratum corneum-epidermis and the epidermis-dermis, while the 50% ethanol extract niosomes only delivered withanolide A to the stratum corneum-epidermis. The SLNs were clearly superior to the niosomes in their ability to deliver withaferin A and withanolide A to the deeper skin layer (epidermis-dermis). This is similar to what was observed by Dwivedi et al. in the topical delivery of artemisone, whereby SLNs delivered artemisone to the stratum corneum-epidermis and epidermis-dermis while the niosomes only delivered artemisone to the stratum corneum-epidermis. [32] The lack of ability to penetrate right through the skin barrier may be the reason why niosomes have conventionally been used for topical delivery of APIs to the stratum corneum rather than for transdermal delivery. [10] The occlusive effect of SLNs, which inhibits transepidermal water loss, may also have influenced the observed result of the SLNs. [31]
None of the marker compounds was detected in the stratum corneum-epidermis or the epidermis-dermis after the water extract niosome diffusion study, which was consistent with the membrane release and skin diffusion results. The lack of information on all the phytocompounds in the crude extracts makes it difficult to account for all the differences that were observed. This, however, reflects that relatively high variation can be expected in the medicinal use of plant extracts, as there is no set standard for composition and expected effects or outcomes. There is a need for standardized plant extracts or methods for extract preparation so as to ensure that the expected treatment outcomes are achieved. [33] The use of pure compounds to avoid issues due to complex mixtures may be tempting, but it has been found that isolating pure compounds at times results in loss of activity and chemical instability, and it eliminates possible synergism. [18]
CONCLUSION
The results of the membrane release studies showed that withaferin A and withanolide A were released from the niosome and SLN formulations to varying extents; these compounds would therefore be available for diffusion into and through the skin. The different release characteristics of the formulations and the differences in the skin samples were partly responsible for the differences observed in the skin diffusion studies. [34] During the 12 h skin diffusion study, relatively low concentrations of withaferin A and withanolide A diffused into the skin. A longer time frame would possibly have resulted in higher concentrations of compound being detected, since SLNs are said to allow sustained release of encapsulated APIs into the skin because the API must first diffuse through the solid lipid matrix. [35] It has been suggested that nanovesicles above 20 nm in diameter do not permeate the skin but rather accumulate in hair follicles, where they act as API reservoirs. It is possible that the marker compounds in this study were slowly being released from the nanovesicle reservoirs and penetrating the skin barrier to reach the stratum corneum-epidermis and epidermis-dermis. [8] Withaferin A and withanolide A, being fairly lipophilic compounds, could easily overcome the stratum corneum barrier, but the aqueous layer beneath the horny layer was possibly the biggest deterrent to reaching the deeper layers (dermis). [34] This study also revealed that a high EE does not necessarily correlate with high drug release and skin permeation, as other researchers have also reported. [17]
Figure 1: High-performance liquid chromatography chromatograms of the withaferin A and withanolide A standards (a), ethanol extract (b), 50% ethanol extract (c), and water extract (d) for HPLC fingerprinting.
Figure 2: 1H-NMR spectra of the water (a), 50% ethanol (b), and ethanol (c) crude extracts for NMR fingerprinting.
Figure 3: 13C-NMR spectra of the water (a), 50% ethanol (b), and ethanol (c) crude extracts for NMR fingerprinting.
Figure 4: Transmission electron micrographs of placebo niosomes (a and b) and placebo solid lipid nanoparticles (c and d).
Table 1: Average values for the physicochemical properties of freshly prepared formulations ± standard deviation (n = 3).
Table 2: Mean initial (day 0) and final (day 84) physicochemical values recorded for the stability study, with an indication of the percentage change over the period; a negative value indicates that the absolute value dropped over the test period. EE WFA: encapsulation efficiency of withaferin A; EE WNA: encapsulation efficiency of withanolide A; NW: water extract niosomes; SW: water extract solid lipid nanoparticles; NE: ethanol extract niosomes; SE: ethanol extract solid lipid nanoparticles; N50: 50% ethanol extract niosomes; S50: 50% ethanol extract solid lipid nanoparticles.
Table 3: Total amount of marker compound released as a percentage of the initial amount in the donor formulation, and average cumulative amount of marker compound released after the 6 h membrane diffusion studies ± standard deviation (n = 10). Superscript letters signify a significant difference between comparisons. *Not detected; a P = 0.000132; b P < 0.000000; c P = 0.000132; d P = 0.000132; e P < 0.000000; f P = 0.000132. NW: water extract niosomes; SW: water extract solid lipid nanoparticles; NE: ethanol extract niosomes; SE: ethanol extract solid lipid nanoparticles; N50: 50% ethanol extract niosomes; S50: 50% ethanol extract solid lipid nanoparticles. | 2018-04-03T02:26:41.745Z | 2017-07-01T00:00:00.000 | {
"year": 2017,
"sha1": "0b98d189b3871250bfb1bd9beb9087eac692a790",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc5669113",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "c3d8e6bc4d737dd0cad0940b28a26b64228a8b10",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
245823121 | pes2o/s2orc | v3-fos-license | Study on Overlying Strata Movement and Surface Subsidence of Coal Workfaces with Karst Aquifer Water
The overlying strata of coal workfaces with karst aquifer water normally cause serious safety problems due to precipitation, drainage and water inrush, such as wide-ranging and long-term surface subsidence. In this study, taking the 10301 working face of the Daojiao coal mine in Guizhou Province as the engineering background, a fluid-solid coupled numerical model of the water-bearing strata was established using UDEC to illustrate the laws of overlying strata movement and surface subsidence. A theory model was proposed to calculate the surface settlement caused by the drainage of the aquifer, based on the principle of effective stress modified by the Biot coefficient αb. The results showed that the maximum value (0.72 m) and the range of the surface subsidence in the presence of karst aquifer water were larger than those of the overlying strata without karst aquifer water (the maximum value being about 0.1 m greater). Moreover, the surface subsidence caused by the drainage of the aquifer accounted for 17.8% of the total surface subsidence caused by coal mining. According to the field monitoring of surface subsidence at the 10301 working face, the maximum value was 0.74 m, which was highly consistent with the results of the numerical simulation and theoretical analysis, verifying the accuracy and reliability of the numerical model and the theory model proposed in this study.
Introduction
As one type of geological disaster, surface subsidence generally occurs slowly and is difficult to detect, and once the surface has subsided it is almost impossible for it to recover completely. The ecological environment deteriorates markedly as a result of surface subsidence, which has caused serious safety problems and economic losses in industrial and agricultural production, transportation and people's lives.
The dynamic movement, deformation and failure of the overlying strata during coal mining form a complex process, owing to the special occurrence conditions of the coal bed and the particular failure mechanism of the coal seam. Therefore, accurately describing the overlying strata movement and surface subsidence plays a vital role in ensuring safe, efficient, intelligent and green coal mining. Normally, the overlying strata can be divided into three different moving zones in longwall mining, and general descriptions of the three zones in the horizontal and vertical directions have been proposed [1][2][3][4][5][6]. Liang et al. established an ANSYS numerical model to illustrate the surface-subsidence laws of thick alluvium caused by coal mining [7]. Xu et al. revealed that the number of key overburden strata in deep mining is generally larger than that in shallow mining, and that the distance between the main key overburden stratum and the mined coal seam in deep mining is generally greater than that in shallow mining.
Engineering Background
The Daojiao coal mine is located in Songkan Town, Tongzi County, Guizhou Province, with geographical coordinates 106°53′49″–106°54′25″ E and 28°30′14″–28°31′32″ N. The designed production capacity of the technical transformation of the mine was 300,000 t/a. C3, the main coal seam in the Daojiao coal mine, is located in the middle and upper part of the Longtan Formation, about 26 m away from the limestone of the Maokou Formation; the thickness of the coal seam ranges from 1.86 to 2.26 m, with an average of 2.02 m. In addition, the dip angle and the average buried depth are 7° and 210 m, respectively. The 10301 working face was laid out in this main coal seam, with a face length of 160 m, a mining height of 2.02 m and an advancing length of 540 m. Moreover, the roof lithology of the 10301 working face is mudstone, carbonaceous mudstone and locally intercalated siltstone, while the floor lithology is clay rock and mudstone, as shown in Figure 1. The main aquifers and impermeable strata in the mining area are the Quaternary (Q) pore aquifer, the impermeable strata of the Triassic system (T1y3, T1y1), the aquifer of the Triassic system (T1y2), the impermeable stratum of the Permian (P3l) and the aquifers of the Permian (P3c).
[Figure 1: Comprehensive stratigraphic column and geological characteristics of the mine field, from the Quaternary clays and gravels (0-7 m thick) and the Triassic mudstones, marls and limestones (more than 50 m, and 120-140 m, thick), down through the grey bioclastic limestones (about 50 m thick) and the coal-bearing sequence of grey mudstone, silty mudstone, clay rock, pyrite-bearing clay rock and coal (about 60 m thick), to the thick-bedded Maokou limestone (more than 100 m thick).]
Model Establishment and Parameters Selection
Taking the 10301 working face of the Daojiao coal mine in Guizhou Province as the engineering background, fluid-solid coupled numerical models with a length of 500 m and a height of 240 m were established to illustrate the influence of coal mining on overlying strata movement and surface subsidence, both with and without karst aquifer water, as shown in Figure 2. In this study, the Mohr-Coulomb criterion was adopted as the failure yield criterion, and the Coulomb slip contact model was selected for the structural planes. The upper boundary of the model was free, the lower boundary was restrained vertically, and lateral constraints were applied on the left and right boundaries. Because the two extremely thick karst aquifers in the overlying strata were porous, with poor integrity and low strength, a smaller block size was set in these layers. In addition, the impermeable mudstone stratum between the two karst aquifers, which had good integrity and high strength, was set with a larger block size, and the block size of the other overlying strata gradually increased with distance from the coal seam. Moreover, 50 m coal pillars were reserved on the left and right to eliminate boundary effects, with the working face advancing a total of 400 m.
According to the field-monitoring hydrogeological data from the coal mine, the average buried depth of the static water level in the mining area was 10 m, and there was a certain hydraulic connection between the two aquifers. Therefore, a water head pressure of 1.8 MPa, varying in gradient along the gravity direction, was applied at the bottom of the first karst aquifer, and the left, right and lower boundaries were set as impervious boundaries. Steady flow was set for the seepage calculation. To compare the influence of water drainage in the karst aquifer on overlying strata movement and surface subsidence, a reference numerical model with a karst aquifer without water was also established while keeping the other conditions constant, and a monitoring line with 30 points was set along the strike on the surface to analyze the overlying strata movement and surface subsidence. Table 1 illustrates the physical, mechanical and hydraulic parameters of each overlying strata layer.
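For illustration, the head boundary that "varies in gradient along the gravity direction" can be written as a simple hydrostatic profile; the sketch below assumes hydrostatic conditions above the stated 1.8 MPa aquifer-bottom pressure, with the elevations as placeholders:

```python
# Minimal sketch of the gradient-varying pore-pressure boundary on the karst
# aquifer: hydrostatic pressure decreasing upward from 1.8 MPa at the aquifer
# bottom (value from the text); the sample elevations are placeholders.

GAMMA_W = 9.81e-3   # MPa per metre of water column
P_BOTTOM = 1.8      # MPa at the bottom of the first karst aquifer (from the text)

def pore_pressure(z_above_bottom_m: float) -> float:
    """Pore pressure (MPa) at a height z above the aquifer bottom, floored at 0."""
    return max(P_BOTTOM - GAMMA_W * z_above_bottom_m, 0.0)

for z in (0.0, 50.0, 100.0, 150.0):
    print(f"z = {z:5.1f} m -> p = {pore_pressure(z):.2f} MPa")
```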
Overlying Strata Movement under the Condition of Karst Aquifer without Water
With the advance of the working face, the overlying strata movement and surface subsidence for the karst aquifer without water are illustrated in Figure 3. When the advance of the working face was 50 m, the immediate roof first became suspended under the action of self-weight stress, and separation of strata, or even collapse, could be observed in the middle part of the overlying strata. With the working face advancing to 100 m, the overlying strata began to move, separate and sink owing to the varying physical and mechanical properties of each rock layer, and the rock stratum in the middle of the goaf collapsed and was compacted. In addition, there was a large separation in the rock strata above the middle. With the continuous advance of the working face, as shown in Figure 3c-h, the caving strata in the middle of the goaf were continuously compacted, and large separation fractures appeared in the overlying strata near the open cut and the working face. A cantilever structure of the overlying strata formed at the open cut and the working face due to the support of the coal wall; therefore, a roughly triangular area could be observed between the open cut and the working face. Generally, the overlying strata underwent deformation, separation and collapse, generating the three zones in the horizontal and vertical directions.
Table 2 illustrates the front and rear collapse angles of the overlying strata at different advance distances. It can be observed that the rear collapse angle of the overlying strata stayed constant at 63°, while the front collapse angle increased with advancing distance at first and then stabilized at 65° after the face had advanced 300 m. In addition, the front collapse angle was equal to the rear collapse angle when the advance distance of the working face was 200 m.
Figure 4 illustrates the surface vertical displacement with increasing advance distance of the working face for the karst aquifer without water. The surface subsidence caused by coal mining was relatively small (<0.06 m) until the working face had advanced 150 m. When the advancing distance was 200 m, the surface subsidence in the middle of the goaf was greater than that on both sides, and the growth rate of the maximum surface subsidence increased significantly until the advancing distance reached 300 m. Correspondingly, the maximum surface subsidence reached 0.62 m, and the point of maximum subsidence was close to the middle of the goaf. In addition, the surface-subsidence curve was symmetrical about the center of the goaf. Moreover, the surface subsidence above the open cut was always greater than that above the working face throughout the advance.
In terms of the vertical displacement of the different overlying strata layers, the curves were generally symmetrically distributed as shown in Figure 5. The vertical displacement of the overlying strata in the goaf decreased with increasing distance from the coal seam, while the vertical displacement of the overlying strata above the open cut and the coal wall slightly increased with increasing distance from the coal seam. To be specific, the maximum vertical displacement of the overlying strata 10 m, 34 m and 84 m away from the coal seam was 1.79 m, 1.65 m and 0.94 m, respectively.
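The collapse angles reported in Table 2 are geometric quantities read off the model; the sketch below shows one way such an angle could be computed from model coordinates, assuming it is measured between the horizontal and the line joining the seam edge to the top of the caved zone (the coordinates are hypothetical):

```python
# Minimal sketch: estimate an overburden collapse angle from model coordinates,
# assuming the angle lies between the horizontal and the goaf-edge boundary line;
# the offsets below are hypothetical.
import math

def collapse_angle_deg(horizontal_offset_m: float, caving_height_m: float) -> float:
    """Angle of the caved-zone boundary line above the horizontal, in degrees."""
    return math.degrees(math.atan2(caving_height_m, horizontal_offset_m))

# Hypothetical: caved-zone top 100 m above the seam, 51 m inward from the face
print(f"{collapse_angle_deg(51.0, 100.0):.0f} degrees")  # ~63, cf. the rear collapse angle
```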
Overlying Strata Movement under the Condition of Karst Aquifer with Water
Figure 6 illustrates the overlying strata movement for the karst aquifer with water. The separation and collapse of the overlying strata were first observed in the middle position when the advance distance of the working face was 50 m, and a further large separation between the sandy mudstone and the bottom of the first karst aquifer appeared, due to the influence of mining stress and aquifer seepage, when the advance distance was 100 m. With the advance distance increasing to 150 m, the separation range between the sandy mudstone and the bottom of the first karst aquifer continued to expand; this separation was then compacted when the advance distance reached 200 m, a behavior not observed in the overlying strata movement for the karst aquifer without water. Meanwhile, a large separation between the bottom of the second karst aquifer and the mudstone was generated. With the continuous advance of the working face, the separated strata were gradually compacted again.
Table 3 illustrates the collapse angles of the overlying strata for the karst aquifer with water. Similarly, the rear collapse angle was still 63°, while the front collapse angle increased at first and then remained constant as the workface advanced, and the front collapse angle was equal to the rear collapse angle when the advance distance of the working face was 250 m. Generally, the overburden collapse angles in this condition were roughly the same as those for the karst aquifer without water at each advancing distance of the working face.
Figure 7 illustrates the surface vertical displacement with increasing advance distance of the working face for the karst aquifer with water. The surface subsidence caused by coal mining was relatively small (<0.07 m) until the working face had advanced 150 m. When the advancing distance was 200 m, the surface subsidence in the middle of the goaf was greater than that on both sides, and the growth rate of the maximum surface subsidence increased significantly until the advancing distance reached 300 m. Correspondingly, the maximum surface subsidence reached 0.72 m, which was greater than that for the karst aquifer without water.
In terms of the vertical displacement of the different overlying strata layers for the karst aquifer with water, the curves were generally symmetrically distributed as shown in Figure 8. The vertical displacement of the overlying strata in the goaf decreased with increasing distance from the coal seam, while the vertical displacement of the overlying strata above the open cut and the coal wall slightly increased with increasing distance from the coal seam. Compared with the condition of the karst aquifer without water, the vertical displacement in the different rock layers increased. To be specific, the maximum vertical displacement of the overlying strata 10 m, 34 m and 84 m away from the coal seam was 1.87 m, 1.74 m and 1.2 m, respectively.
As shown in Figure 9, the surface subsidence for the karst aquifer with water was greater than that for the karst aquifer without water. To be specific, the maximum vertical displacement was 0.62 m and the subsidence coefficient was 0.305 for the karst aquifer without water, while the corresponding values were 0.72 m and 0.35, respectively, for the karst aquifer with water. This can be explained by the fact that the water in the karst aquifer flowed to the working face along fractures as the face advanced, because the fractures in the rock strata were connected to the karst aquifer. In addition, the drainage of the working face led to a decrease in the osmotic pressure of the karst aquifer and an increase in the effective stress of the fractured limestone.
Figure 9. Comparison of surface vertical displacement.
Calculation Model of Surface Subsidence Caused by Aquifer Drainage
When considering the consolidation drainage of a soil aquifer, the compression deformation is usually calculated by Equation (1) [28], which in standard form reads εz = av·Δp/(1 + e0). According to the principle of effective stress, a decrease Δp in the pore-water pressure of the soil produces an equal increment Δp in effective stress. The axial deformation of the limestone aquifer (εz) in the drainage calculation, however, is related to factors such as the osmotic pressure drop (Δp), the initial seepage pressure (p0) and the confining pressure (σ3). When the osmotic pressure in the fractured limestone decreases by Δp, the resulting stress increment differs depending on the initial osmotic pressure (p0) and the confining pressure (σ3), which does not agree with the classical principle of effective stress. Therefore, the principle of effective stress modified by the Biot coefficient αb is used to calculate the consolidation-drainage deformation of the karst aquifer, and Equation (1) becomes Equation (2): εz = αb·av·Δp/(1 + e0), where αb is the Biot coefficient; εz is the axial displacement deformation; av is the compressibility coefficient of the rock and soil mass; e0 is the initial void ratio of the rock and soil mass; and Δp is the pore pressure drop.
According to the characteristics of aquifer drainage, the consolidation-drainage-deformation model of the karst aquifer was established as shown in Figure 10.
"a" was set as the buried depth of the groundwater level before aquifer drainage. If the rock mass below z = a is saturated, the element stress of the plane "dcdh" at depth h can be expressed as σz = γs·a + γsat·(h − a), where γs is the volume weight of the rock mass above the groundwater level and γsat is the volume weight of the saturated rock mass. The pore-water pressure can be expressed as p = γw·(h − a), where γw is the volume weight of pore water. The particle stress of the rock-mass skeleton is then the difference between the two. The pore-water pressure released by the drainage of the aquifer will be borne by the skeleton particles of the rock mass; according to the principle of effective stress modified by the Biot coefficient αb, the stress increment of the rock-mass skeleton particles is αb·Δp. According to Equation (2), the small compression deformation of element "dcdh" under the action of the stress increment (αb·Δp) is ds = αb·av·Δp·dh/(1 + e0). According to the random medium theory, after the drainage of the rock mass, the movement of the rock mass above the unit is dWwater(x), due to the small subsidence (ds) caused by the drainage of the karst aquifer. Meanwhile, the movement of the rock mass above the unit is dW(x) + dWwater(x) due to the micro subsidence (dh), where dW(x) is the micro unit of surface subsidence caused by the rock mass without considering drainage.
Therefore, the corresponding expression for dWwater(x) follows. According to the random medium theory, the surface forms a small unit subsidence basin: in the drained rock and soil mass, any unit can produce slight compression, and the sinking of a water micro unit can be written as dWwater(x) = (1/rw)·exp(−πx²/rw²)·ds, where rw is the main influence radius of surface subsidence caused by drainage. According to the principle of superposition, the surface subsidence caused by aquifer drainage, Wwater(x), is obtained by integrating these micro units over the drained thickness; combining Equations (4)-(8) then gives the drainage-subsidence formula. According to the literature, when the drainage of the rock mass is not considered and the mining width of the coal seam is "c", the surface subsidence can be expressed by the probability integral method [28], where W0 is the maximum surface subsidence, r is the main influence radius of surface subsidence, m is the mining height, and α is the dip angle of the coal seam. Using the probability integral function erf, Equation (12) can be transformed into the following form.
where erf is the probability integral function, erf(x) = (2/√π)∫0^x e^(−t²) dt; its values can be obtained by looking up the probability integral table.
By substituting Equation (13) into Equation (11), the following formula can be obtained.
It can also be expressed in expanded form. The change of the geotechnical void ratio can be expressed accordingly, and the surface subsidence caused by underground mining and aquifer drainage can then be written as W′(x) = W(x) + Wwater(x), where W′(x) is the surface subsidence caused by underground mining and aquifer drainage, and W(x) is the surface subsidence caused by mining obtained by the probability integral method.
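Because the numbered equations of this derivation were lost in extraction, the block below is a hedged LaTeX reconstruction assembled only from the variable definitions given in the surrounding text; it uses standard consolidation and probability-integral forms, so the published numbering and exact expressions may differ:

```latex
% Hedged reconstruction of the lost equations from the surrounding definitions;
% standard consolidation and probability-integral forms, not verbatim originals.
\begin{align}
\varepsilon_z &= \frac{a_v}{1+e_0}\,\Delta p && \text{soil consolidation, cf. Eq. (1)}\\
\varepsilon_z &= \alpha_b\,\frac{a_v}{1+e_0}\,\Delta p && \text{Biot-modified, cf. Eq. (2)}\\
\sigma_z &= \gamma_s a+\gamma_{\mathrm{sat}}(h-a),\qquad p=\gamma_w(h-a)\\
\mathrm{d}s &= \alpha_b\,\frac{a_v}{1+e_0}\,\Delta p\,\mathrm{d}h && \text{element compression}\\
W_{\mathrm{water}}(x) &= \int \frac{1}{r_w}\exp\!\Bigl(-\frac{\pi x^2}{r_w^2}\Bigr)\,\mathrm{d}s && \text{superposition}\\
W(x) &= \frac{W_0}{2}\Bigl[\operatorname{erf}\Bigl(\tfrac{\sqrt{\pi}}{r}x\Bigr)-\operatorname{erf}\Bigl(\tfrac{\sqrt{\pi}}{r}(x-c)\Bigr)\Bigr],\quad W_0=mq\cos\alpha\\
W'(x) &= W(x)+W_{\mathrm{water}}(x)
\end{align}
```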
Theoretical Calculation
According to the above numerical-simulation results, under the influence of mining the water in the overlying aquifer was recharged downward to the lower aquifer, as shown in Figure 11, and there was a hydraulic connection among the three rock strata. However, with the advance of workface mining, the following processes occurred in turn. When the water-conducting fissure zone extended to the first karst aquifer, the first karst aquifer was drained. There was then a head difference between the first karst aquifer and the second karst aquifer; when the initial seepage gradient of the rock and soil strata was reached, overflow from the phreatic water to the confined water could be observed.
As the seepage expanded into the impermeable mudstone, the overall permeability of the mudstone increased, and the amount of water supplemented by the second karst aquifer to the first karst aquifer through the mudstone increased as well.
When the overall water impermeability of the mudstone stratum was reduced to a certain value, the first and second karst aquifers became connected, and the drawdown of the water level increased with the increase in aquifer thickness, which intensified the surface subsidence caused by the consolidation-drainage deformation of the aquifer. Therefore, the two overlying karst aquifers can be treated as a single aquifer in the calculation.
According to field data from the Daojiao coal mine, without considering the drainage of the aquifer, the calculation parameters of surface subsidence after coal mining are as follows.
The coefficient of mining subsidence: q = 0.305. The main influence tangent angle was determined from the internal friction angle ϕ (30°), and the main influence radius of surface subsidence after C3 coal-seam mining was then obtained. On this basis, the maximum surface subsidence caused by aquifer drainage was calculated as Wwater = 0.13 m, while the maximum subsidence caused by mining alone was W0 = 0.6 m. The maximum value of surface subsidence caused by underground mining and aquifer drainage is therefore Wmax = Wwater(x) + W0 = 0.73 m.
According to the calculation results, the surface subsidence caused by aquifer drainage accounted for 17.8% of the total surface subsidence, indicating that the increment of surface subsidence caused by multi-aquifer drainage cannot be ignored.
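As a quick numeric cross-check of the figures above, the sketch below recomputes the combined maximum subsidence from the stated values (q = 0.305, m = 2.02 m, α = 7°, Wwater = 0.13 m); note that W0 = m·q·cos α is the standard probability-integral expression and is an assumption here:

```python
# Minimal numeric cross-check of the combined-subsidence estimate; W0 = m*q*cos(a)
# is the standard probability-integral form and is assumed, not quoted.
import math

q, m, alpha_deg = 0.305, 2.02, 7.0
W0 = m * q * math.cos(math.radians(alpha_deg))  # max mining-induced subsidence, ~0.61 m
W_water = 0.13                                  # max drainage-induced subsidence (from text)

W_max = W0 + W_water
print(f"W0 = {W0:.2f} m, W_max = {W_max:.2f} m")   # ~0.74 m vs 0.73 m reported
print(f"drainage share = {W_water / W_max:.1%}")   # ~17.5% vs 17.8% reported (rounding)
```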
Observation Scheme
Laser-tracking technology has the advantages of high measurement accuracy and efficiency, and it is widely used in mine-surface measurement. Therefore, laser-tracking technology was used to observe the surface subsidence with the advance of the 10301 working face in the Daojiao coal mine. In order to reduce the impact of temperature fluctuation on the observation results, the observation points arranged on the surface were buried 300 mm below the topsoil. The observation data adopted the national standard third-class vertical control point as the reference point in order to ensure the accuracy of the observation results. Figure 12 illustrates the arrangement of surface-observation stations at the 10301 working face. An observation line was designed along the strike of the working face with a length of 500 m (baseline A), and an observation point was arranged every 20 m on this line (26 observation points in total). In addition, a total of nine observation lines were arranged along the inclination of the working face with a length of 200 m and observation points every 20 m on these lines. Specifically, observation line B was located above the coal pillar, 25 m away from the boundary, and observation lines C1 to C8 were located above the goaf, starting 25 m away from the open cut.
Results and Analysis
Laser-tracking technology was used to observe the surface subsidence during the whole mining period (about one month), and data records were compiled for each measuring point. Figure 13 illustrates the surface-subsidence curves in the strike and inclination directions of the 10301 working face. With the continuous advance of the working face, the point of maximum surface subsidence gradually moved toward the central position of the goaf, and the surface subsidence increased with workface advance until the face had advanced 300 m. The maximum surface subsidence was 0.74 m, and the subsidence curve was symmetrical about the middle of the goaf. In addition, the surface subsidence near the open cut was always greater than that near the working face. This can be explained by the fact that the surface area affected by mining became larger as the workface advanced, and the cracks caused by mining were gradually compacted as the collapsed rock filling the goaf supported the roof. Moreover, the consolidation drainage of the aquifer near the open cut was greater than that near the working face, and the stress originally borne by the pore water was transferred to the rock mass, increasing the effective stress, which caused consolidation of the rock and surface subsidence. This illustrates that the surface subsidence for the karst aquifer with water was the joint result of coal-seam mining and aquifer drainage.
Discussion
Based on the numerical simulation and theoretical analysis, the maximum surface subsidence of the mining area for the karst aquifer without water drainage was 0.62 m and 0.60 m, respectively, and the maximum surface subsidence when considering water drainage was 0.72 m and 0.73 m, respectively, consistent with the field-monitoring result of 0.74 m. This verifies the accuracy of both the numerical simulation and the theoretical calculation. The proposed theoretical model is based on the effective stress principle modified by the Biot coefficient α_b; it can effectively and accurately predict the surface subsidence while accounting for water drainage in a karst aquifer, and it also helps in understanding karst topography.
However, there were also some limitations in illustrating the surface subsidence and the overlying strata movement with respect to the water drainage of the aquifer. Specifically, the hydrogeological conditions were simplified in the numerical simulation, whereas the lithology and hydrogeological conditions of the overlying strata are complex and changeable in a real mining environment. In addition, drainage tests on aquifer rock samples could be performed to analyze the relationship between axial deformation and the relevant factors (e.g., pore pressure drop ∆p, initial seepage pressure p_0, and confining pressure σ_3). Moreover, more than two observation lines could be laid out along the strike of the workface to improve the accuracy of the observation results, subject to economic rationality.
Conclusions
In this study, taking the 10,301 working face of the Daojiao coal mine in Guizhou Province as the engineering background, comprehensive research methods (numerical simulation, theoretical analysis, and field monitoring) were adopted in order to illustrate the laws of overlying strata movement and the mechanism of surface subsidence with and without considering water drainage in multi-karst aquifers. The main conclusions are as follows:
(1) Compared with karst aquifers without water, the movement and deformation characteristics of the overburden for karst aquifers with water drainage were quite different, while the collapse angle of the overburden was roughly the same.
(2) With the advance of the working face, the fractures caused by mining activities can penetrate a karst aquifer, and the water in the aquifer can flow to the working face along the fractures. Water drainage decreases the osmotic pressure of the aquifer and increases the effective stress of the fractured limestone and the compression of the aquifer, which intensifies the surface subsidence.
(3) The maximum surface subsidence of the mining area for the karst aquifer without water drainage was about 0.6 m, while the maximum surface subsidence when considering water drainage was about 0.73 m. Therefore, the surface subsidence caused by drainage of the multi-aquifer accounted for 17.8% of the total surface subsidence.
(4) Based on the effective stress principle modified by the Biot coefficient α_b, the axial deformation of the aquifer when considering water drainage can be obtained, and the field-monitoring results of surface subsidence verify the accuracy of the theory-model results.
"year": 2022,
"sha1": "ec9bb81459d74ec163f66373f0e446beb50240d2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7390/10/2/169/pdf?version=1641461188",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1f738b14d171e40d794b52d605c3cb34dab76fd7",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
The roles of social status information in irony comprehension: An eye-tracking study
The literature on irony processing has mainly focused on contextual effects, leaving other factors (such as social factors) largely untouched. The current study investigated how social status information affects the online comprehension of irony. As irony might be more damaging when a speaker uses it toward a superordinate than the other way around, it was assumed that greater processing effort would be observed in the former case. Using an eye-movement sentence reading paradigm, we recruited 36 native Mandarin speakers and examined the role of social status information and literality (i.e., literal vs. ironic) in their irony interpretation. Our results showed that ironic statements were more effortful to process than literal ones, with an early and consistent effect on the target regions. The social status effect followed the literality effect, with more difficulty in processing ironic statements that targeted a superordinate than a subordinate; such an effect of social status was missing with literal statements. Besides, an individual's social skill appeared to affect the perception of status information in ironic statements, as socially skillful readers needed more time than socially unskillful readers to process irony targeting a subordinate in the second half of the experiment in the critical region. Our study suggests that irony processing might be further discussed in terms of the relative predictability of linguistic, social, and individual variabilities.
Introduction
Irony is traditionally viewed as a figurative expression that carries the meaning opposite to its literal meaning, thus violating the Quality Maxim (Grice, 1975). Among its various subtypes (e.g., sarcasm, jocularity, rhetorical questions, hyperbole, and understatement) (Gibbs, 2000), the most common use of irony is to criticize, i.e., ironic criticism infers the negative by stating the positive. It stems from our positive expectations in most cases, so the failure of an expectation may lead to an ironic remark (Wilson and Sperber, 2012). For example, after hiking on a rainy day, a man says "What a good weather!" He was hoping for good weather when he went hiking; since the weather is in fact bad, he is complaining about it by saying the opposite. On account of its typicality, many studies use irony as a synonym for ironic criticism to investigate how irony is produced or comprehended.
The major challenge in understanding irony is how to access the speaker's real intention hidden behind the literal meaning. This has been substantially investigated, with three major models discussing how the incongruency between context and the literal meaning leads to the ironic interpretation. The Standard Pragmatic View (Grice, 1975; Searle, 1979), a modular processing model, proposes that the literal meaning is always activated first. As readers notice the incongruency between the context and the literal interpretation, the ironic meaning is then activated, together with the suppression of the former. Following this view, irony comprehension is a more cognitively demanding process, with an extra processing effort following the activation of literal meaning. The Direct Access View (Gibbs, 1986, 2002) states that the context plays a predictive role in the interpretation of the forthcoming expressions, so that when the context is irony-biased, the ironic meaning can be directly activated without necessarily full access to the literal meaning. Therefore, no additional effort is required in irony comprehension compared with a literal one. The Gradient Salience Hypothesis (Giora, 1997, 2003; Giora and Fein, 1999) assumes that irony comprehension depends on the salience of meaning (e.g., frequency, conventionality, or familiarity), with the salient meaning being activated prior to other interpretations. Therefore, for conventional ironies, the literal and ironic meanings are equally salient, so that the two meanings can be activated in parallel. However, if the ironic meaning is unconventional, the salient literal meaning is first activated, followed by access to the ironic interpretation.
Previous experimental studies have yielded mixed results concerning which model better explains real-time irony processing (e.g., Standard Pragmatic View: Dews and Winner, 1999; Direct Access View: Ivanko and Pexman, 2003; Gradient Salience Hypothesis: Filik et al., 2014). However, factors other than the context-literal incongruency go beyond the scope of these three models. In view of this, Katz (2005) and Pexman (2008) have proposed the Constraint-Satisfaction Model, in which all available cues or constraints (e.g., familiarity, language experience, and prosody) are involved and compete in a parallel manner, and the human parser finally reaches the interpretation that best satisfies the available constraints. Hence, if the ironic meaning is supported by more constraints than the literal meaning, the expression will be interpreted ironically.
There is ample evidence that, in addition to the incongruency between contextual valence and literal meaning, other cues, social factors in particular, might constrain the interpretation of irony. For example, as irony normally conveys a critical attitude toward the addressee, it should be used with great caution. In some situations, irony distances the speaker from the addressee when the addressee realizes that what he/she is expected to do contrasts with what he/she actually does; it also seems to maintain or promote relationships, as it brings humor or emphasizes solidarity in a conversation (Dews et al., 1995; Jorgensen, 1996; Gibbs and Colston, 2001). Therefore, the speaker normally evaluates the relationship with the addressee to make sure that irony can be understood properly, and irony conducted in an inappropriate relationship may be perceived as somewhat offensive or even assaulting.
There are emerging studies to examine the role of social information in irony comprehension. For example, the addressee can be habituated to how ironic a speaker is and adopt the communicative style to comprehend the ironic utterance (Regel et al., 2010). Besides, children's understanding of irony that violates socially shared norms can improve reliably better than understanding that violates situationally defined norms (Massaro et al., 2014) with the increase in their age. It is hypothesized that the socially relevant information may be processed with the involvement of the right anterior superior temporal gyrus (Akimoto et al., 2014). As for the relationship between the communicators in an ironic scenario, previous studies mainly focused on the common ground shared by the communicators, namely, "the solidarity relation." They found the use of irony was deemed more appropriate when the speaker and the addressee share more common ground, such as friends, siblings, or couples (Jorgensen, 1996;Kreuz et al., 1999;Pexman and Zvaigzne, 2004). The literature has generally suggested that a higher degree of solidarity relation might bring a facilitation effect on irony comprehension (Kreuz and Link, 2002), especially at the early processing stage (Whalen et al., 2020). Yet, such a facilitation effect was missing in Pexman et al. (2010). Given the controversial results, further studies are needed to address the effect of social relations, especially when the addresser and the addressee are of unequal social status.
Social status, as a power relation of one speaker over the other (Brown and Gilman, 1960), plays a role in irony production and comprehension: irony is normally directed at a subordinate (Dews et al., 1995) and is deemed inappropriate if used with a superordinate (Jorgensen, 1996). The inappropriateness of irony may damage the relationship between the communicators. Social status plays an important role in Chinese culture, where communicators need to adjust their speech based on their social status in order to maintain politeness since ancient times (Gu, 1990). For example, in Modern Chinese, the second-person singular pronoun nin is an honorific pronoun addressing a respected, higher-status addressee, so the inappropriate overuse of nin to a subordinate can serve ironic purposes (Chao, 1956;Jiang et al., 2013;Ji, 2021). In a study by Jiang et al. (2013), a more prominent N400 and late positivity effect was reported when a superordinate used nin to a subordinate than the other way around, suggesting that Chinese readers had expectations over the use of honorifics, had integration difficulty when the honorifics fail to match the actual social status, and worked hard to figure out the pragmatic intent behind the deliberate misuse of honorifics. Similar effects are also observed with grammatically encoded honorific forms used in some languages, for example, Japanese. The use of Japanese status-inconsistent honorifics can also have an ironic flavor, and an ironic expression with honorific grammar targeting a subordinate is perceived as more ironic and offensive than irony without honorifics (Okamoto, 2002), suggesting that the perception of irony is influenced by status information. Also in Polish, irony initiated by a subordinate to a superordinate is considered more critical and offensive than that conducted in a high-to-low status direction, showing that there might be a culturally independent social norm regarding the risk of using irony to a superordinate (Gucman, 2016). However, the existing studies mainly employed offline methods, e.g., Likert scales, to reveal the effect of social status relations; little is known concerning the online effect of social status information in irony processing.
Apart from social factors, individual differences, social skills in particular, might bear on irony processing. In a self-paced reading study, Spotorno and Noveck (2014) adopted the Social Skill subscale of Baron-Cohen et al.'s (2001) Autism Spectrum Quotient (AQ). They divided trials into two halves in terms of presentation order and found that the socially unskillful participants tended to maintain the reading time difference between literal and ironic sentences, while the socially skillful gradually narrowed the reading time gap in the second half of the experiment (Experiment 2). To examine whether Chinese native readers perform in a similar manner, we also adopt the Social Skill subscale. The subscale contains 10 self-evaluation items, with higher scores indicating lower social skills. It also has the highest internal consistency reliability among all five subscales (i.e., Social Skill, Attention Switching, Attention to Detail, Communication, and Imagination) in the AQ (Austin, 2005; Hurst et al., 2007). Meanwhile, this subscale can serve as an indirect measurement of Theory of Mind, a mechanism underlying the social skill of inferring mental states (Premack and Woodruff, 1978; Baron-Cohen et al., 1985), which is widely examined in irony studies (Dews and Winner, 1997; Wang et al., 2006; Li et al., 2013).
The present study investigates how social status information affects online irony comprehension. As previous studies have shown the inappropriateness of irony used to address a superordinate relative to a subordinate (Jorgensen, 1996; Okamoto, 2002), the study examines the role played by status information in irony comprehension, especially as the time window unfolds. By adopting the eye-tracking reading paradigm, it is predicted that a longer reading time is needed to process irony targeting a higher-status addressee than a lower-status one. Meanwhile, the literal statements in the baseline condition mainly serve complementary purposes. A survey has shown that in Chinese culture, compliments are mostly conveyed toward a status-equal addressee (84.4%), with relatively fewer cases conveyed among the status-unequal (to a subordinate: 10.7%; to a superordinate: 4.9%) (Yu, 2005). On this basis, it is assumed that the reading time for literal statements might not be significantly affected by unequal status relations.
More importantly, the present study can help distinguish different theoretical accounts of irony comprehension: comparable reading times in processing ironic and literal statements would support the Direct Access account (Gibbs, 1986), while longer reading times at a later stage of processing ironic statements would agree with the Standard Pragmatic View (Grice, 1975; Searle, 1979). The Constraint-Satisfaction Model (Katz, 2005; Pexman, 2008) can be endorsed if both literality and status information are involved early in processing. Meanwhile, the present study follows Spotorno and Noveck (2014) in using the Social Skill subscale, with the purpose of understanding how readers' social skill affects real-time irony processing.
Materials and methods

Participants
Thirty-six subjects participated in the study. They were students at Shanghai Jiao Tong University (14 men, aged 19-27 years, mean age = 22.79 years, SD = 2.91; 22 women, aged 18-30 years, mean age = 22.55 years, SD = 2.85). All the participants were native Mandarin speakers born and raised in mainland China, using simplified Chinese as their daily reading and written language. They were all right-handed and had normal or corrected-to-normal vision. None of them reported language or hearing disorders. Participants were recruited in a voluntary manner via an online notice and signed a written consent prior to their participation.
Materials and design
Thirty-two sets of target items were designed for the present study. Each item followed a six-clause structure (see Table 1 for an example). The first clause introduced the background or topic of the scenario. As irony can normally be invited by expectation failure (Kumon-Nakamura et al., 1995; Campbell and Katz, 2012), the second and third clauses used numeric scales to show how the expectation was satisfied or violated, so that the context was literality-biased or irony-biased. The fourth clause revealed the social status relationship between the communicators, so as to manipulate the social relationship between the speaker and the addressee (high-to-low vs. low-to-high). The strategy marking the social status of the communicators was adopted from a study by Jiang et al. (2013). The fifth clause was a literally positive statement made by the speaker, having the linguistic structure of second-person pronoun ni + verb + adverb de + degree modifiers zhenshi tai + evaluative adverb + sentence-final particle le. This clause can be interpreted as literal when the context was positive (literality-biased), or ironic when the preceding context was negative (irony-biased). The sixth clause was an attitude-neutral clause, in which the first five characters served as the spill-over region for analysis. Hence, the study had a 2 (literality: literal vs. ironic) × 2 (status: high-to-low vs. low-to-high) within-subject design.
Two validation tests were conducted. A status validation test was conducted to examine the readers' perception of social status relationships: 12 participants who did not participate in the eye-tracking experiment were instructed to identify the one with higher social status among communicators in each item. Items were counterbalanced across conditions and presented in four lists, with each participant reading one condition within each item. A score of 1 would be given for each item if a participant chose the presumed communicator as having higher status, so the highest score for each item would be 12. Results showed that the average score of status identification was 9.97 (range: 5-12, SD = 1.71), higher than chance level (t = 13.106, p < 0.001). Besides, an additional 12 participants who did not participate in the eye-tracking experiment rated on a 5-point Likert scale the topic familiarity, smoothness, and scenario rationality of the test items, with 1 coded as "very unfamiliar/unsmooth/irrational" and 5 coded as "very familiar/smooth/rational." Items were also counterbalanced and presented in four lists, so that each participant only read one condition within each item. The overall familiarity, smoothness, and rationality were 3.96 (SD = 1.12), 3.41 (SD = 1.23), and 2.99 (SD = 1.49), respectively.
Test items were counterbalanced and divided into four lists, so that each list included an equal number of items of the four conditions, and participants would only read one condition within each item. Apart from 32 test items, 70 filler items with a similar six-clause structure were designed and added to each list. They included five types of scenarios: (1) evaluative (N = 20): similar to test items, the statement made by the speaker was evaluative, but there was no positive or negative context with numeric comparisons; (2) episodic (N = 20): daily communication episodes or Q&As; (3) scalar (N = 10): the scalar, numeric comparisons remained in the context, but no evaluative judgment was involved in the commentary clause; (4) comfort (N = 10): the context was negative through scalar comparisons but the statement made by the speaker was a comforting expression, and (5) dissatisfaction (N = 10): the context was positive through scalar comparisons but the statement made by the speaker showed his/her dissatisfaction toward the addressee. These fillers were added to minimize possible prediction of the literality of statements as participants got familiarized with the experimental procedure. All participants read the same filler items, so there were 32 test items plus 70 filler items for each list.
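The counterbalancing scheme amounts to a Latin square over items and conditions; a minimal Python sketch of such an assignment (with illustrative labels) is given below.

```python
# Latin-square assignment: 32 items x 4 conditions over 4 lists, so each list
# presents every item once and each condition equally often (8 per condition).
CONDITIONS = ["literal/high-to-low", "literal/low-to-high",
              "ironic/high-to-low", "ironic/low-to-high"]

def build_lists(n_items=32, n_lists=4):
    lists = {k: [] for k in range(n_lists)}
    for item in range(n_items):
        for k in range(n_lists):
            cond = CONDITIONS[(item + k) % len(CONDITIONS)]
            lists[k].append((item, cond))
    return lists

lists = build_lists()
print(len(lists[0]), sum(1 for _, c in lists[0] if c == CONDITIONS[0]))  # 32 8
```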
Procedure
Participants were tested individually in a sound-proof room. Eye movements were recorded with an SR EyeLink 1000, with a sampling rate of 1,000 Hz. Only the right eye was recorded. Materials were presented on a 21.5-in monitor (resolution: 1,024 × 768; refresh rate: 100 Hz) 73 cm from the eyes. Prior to the eye-tracking experiment, participants were instructed to read the text on the monitor at their normal reading rate and to complete comprehension questions upon finishing reading. Participants were seated in front of the monitor with their heads positioned on a chin and forehead rest to minimize head movements. After the 9-point calibration procedure, a fixation point appeared in the left quadrant at the start of each trial. Text materials were presented when participants fixated on the point. If their fixation did not match the point, recalibration was required. Once they completed reading each trial, they pressed the space bar, and a yes-or-no comprehension question appeared on the screen. Participants were asked to answer the question based on the content of the text. Half of the correct answers in each list were "yes" and half were "no." Feedback was given in each trial to maintain the attention of the participants, and to help the experimenter remind the participants if they provided incorrect answers in consecutive trials. Data were considered valid for a participant when his/her overall accuracy on the comprehension questions was above 75% (1.5 times chance; Geng et al., 2020). To familiarize themselves with the experimental procedure, participants first completed a practice session consisting of three practice items, which were similar to filler items. In the formal experiment, all items were presented in a pseudo-random manner to avoid the consecutive presentation of test items, and the first two trials presented were always filler items. Each character was displayed in a 26-point Song typeface and subtended about 1° of visual angle. Triple spacing was adopted in the presentation.
After the eye-tracking experiment, participants were required to complete the Social Skill subscale (Baron-Cohen et al., 2001) online to assess their social skill performance. The subscale was excerpted from Baron-Cohen et al.'s AQ, an assessment consisting of five subscales: Social Skill, Attention Switching, Attention to Detail, Communication, and Imagination. The Chinese translation was provided with the English originals attached for reference. Each participant only completed the Social Skill subscale of the AQ to investigate the relationship between irony understanding and participants' social skills.
Table 1. An example of the test and filler items (English translations; the original items were presented in Chinese, with the critical and spill-over regions marked by slashes).

Literal, high-to-low: Mr. Liu is shooting arrows with his boss Wang. People normally shoot for five or six points, while Mr. Liu normally shoots for nine or ten points. Boss Wang says to Mr. Liu: "You shoot / so precisely!" (critical) / and starts to think of (spill-over) / how he can shoot more precisely.

Literal, low-to-high: Mr. Liu is shooting arrows with his boss Wang. People normally shoot for five or six points, while Boss Wang normally shoots for nine or ten points. Mr. Liu says to Boss Wang: "You shoot / so precisely!" (critical) / and starts to think of (spill-over) / how he can shoot more precisely.

Ironic, high-to-low: Mr. Liu is shooting arrows with his boss Wang. People normally shoot for five or six points, while Mr. Liu normally shoots for one or two points. Boss Wang says to Mr. Liu: "You shoot / so precisely!" (critical) / and starts to think of (spill-over) / how he can shoot more precisely.

Ironic, low-to-high: Mr. Liu is shooting arrows with his boss Wang. People normally shoot for five or six points, while Boss Wang normally shoots for one or two points. Mr. Liu says to Boss Wang: "You shoot / so precisely!" (critical) / and starts to think of (spill-over) / how he can shoot more precisely.

Comprehension question: Are they throwing javelins?

Filler: Mr. Song is visiting Beijing. His friends invite him to have Beijing roast duck. When the duck is served, Mr. Song says to his friends: "The duck smells so good!" Then he has a taste.

Comprehension question: Does the friend treat Mr. Song to rolling donkey*?

*Rolling donkey: a snack in Beijing, consisting of glutinous rice rolls covered by bean flour.
Data analysis
Two target regions were involved in the analysis, as shown in Table 1. The critical region was formed by the part of the commentary statement that disambiguated literal or ironic interpretations. The spill-over region was the five characters following the critical region, as the reading time difference in the critical region may influence the processing of subsequent words (Shvartsman et al., 2014). For each region, four reading time measures (in milliseconds) were included: first fixation duration (the duration of the first fixation within the current region), gaze duration (or first-pass fixation duration, the sum of the fixation durations of the first run within the current region before the fixation point moves out of the region), regression path duration (the sum of fixations within the current region and the fixations in the prior regions if re-reading occurs in the current region), and total reading time (the sum of all fixation durations within the current region during the entire reading process). These measures reveal the possible time course of processing differences between literal and ironic expressions. Specifically, first fixation duration and gaze duration reveal the early processing of the text; regression path duration shows the difficulty of integrating the words with the current interpretation; and total reading time reflects the general processing difficulty of the region. In the preprocessing stage, fixations under 80 ms or above 1,200 ms were filtered out, and fixations from 80 to 140 ms were merged with neighboring fixations. Trials were eliminated if the first fixation duration in the current region of analysis was zero. This procedure removed 9.29% of the data in the critical region and 12.59% in the spill-over region. Logarithmic transformation of the reading time durations was applied in the further analysis to obtain generally normally distributed residuals. Fixations were further trimmed if the standardized residual of the fixation time in the current region exceeded 2.5. For the critical region, this trimming removed 2.20% of the remaining data in first fixation duration, 1.82% in gaze duration, 2.68% in regression path duration, and 1.72% in total reading time. For the spill-over region, it removed 2.78% in first fixation duration, 2.38% in gaze duration, 2.18% in regression path duration, and 1.79% in total reading time.
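A condensed Python sketch of this preprocessing pipeline is given below; the column names are assumptions about the data layout, the 80-140 ms merging step is noted but omitted (it requires fixation adjacency information), and the residual trimming is approximated by region-wise standardization.

```python
import numpy as np
import pandas as pd

def preprocess(fix: pd.DataFrame) -> pd.DataFrame:
    # 1. Remove fixations under 80 ms or above 1,200 ms.
    fix = fix[(fix["dur_ms"] >= 80) & (fix["dur_ms"] <= 1200)].copy()
    # (Fixations of 80-140 ms would be merged with neighboring fixations in
    # the full pipeline; that step needs adjacency information.)
    # 2. Log-transform durations to obtain roughly normal residuals.
    fix["log_dur"] = np.log(fix["dur_ms"])
    # 3. Trim fixations whose standardized residual exceeds 2.5 within region.
    z = fix.groupby("region")["log_dur"].transform(
        lambda x: (x - x.mean()) / x.std())
    return fix[z.abs() <= 2.5]
```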
Analyses of literality and status effects were conducted for the four measures in both critical and spill-over regions, using linear mixed effects (LME) models via the lme4 package (Bates et al., 2014) in R (Version 1.3.1093), with literality (literal vs. ironic), status (high-to-low vs. low-to-high), or their interaction as fixed effects, plus item and participant as random effects. Effect size (partial eta-squared) was calculated using the effectsize package (Ben-Shachar et al., 2020). Following Spotorno and Noveck (2014), data were reanalyzed with the interaction between participant's performance on the Social Skill subscale and literality as a fixed effect, and item as a random effect. This aimed to examine whether individual social skill affects the reading time in ironic relative to literal condition. Besides, if a status effect was reported for irony, analyses of the social skill effect would be conducted to examine whether social skill played a role in different status information within ironic trials. Apart from examining the social skill effect on the overall data, the whole trials were divided into two halves based on the order of presentation (i.e., trials 1-51 and 52-102). This was in line with Spotorno and Noveck (2014) to investigate whether an early-late effect can be reported as the experiment proceeded.
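As a rough illustration of the model structure (the original analyses used lme4 in R with crossed random effects for participants and items), a Python stand-in with a single grouping factor might look as follows; the column names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fixations.csv")  # hypothetical long-format data file
df["log_rt"] = np.log(df["total_reading_time"])

# Fixed effects: literality, status, and their interaction;
# random intercepts by participant (statsmodels allows one grouping factor).
model = smf.mixedlm("log_rt ~ literality * status",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```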
Results
The mean accuracy of answers to the comprehension questions was 94.93% across participants, all above the presupposed 75% threshold, suggesting that they all completed the eye-tracking experiment attentively. Figure 1 shows the mean and standard error of reading times for each condition in each target region. The summary of models is presented in Table 2.

Figure 1. Reading times for the literal and ironic expressions in high-to-low or low-to-high conditions in critical (A) and spill-over (B) regions. The x-axis represents the reading time measures and conditions, and the y-axis represents the mean value of reading times (in milliseconds). The error bars show the standard error.
Social skill effect
The coefficient alpha (Cronbach's alpha) for the Social Skill subscale in the present study was 0.768, indicating an internal consistency reliability above the "adequate" value of 0.7 (Kline, 2016). The average score on the Social Skill subscale was 3.78 (SD = 2.64), lower than that reported for autistic individuals (Baron-Cohen et al., 2001), suggesting that participants were unlikely to have autistic traits.
Analyses with linear mixed effects models showed that, in both the overall and the halved data, the literality × social skill interaction was insignificant for all measures in the critical and spill-over regions (ps > 0.05). For the status × social skill interaction within irony, the effect on total reading time in the second half of the experiment in the critical region was significant (B = 0.06, t = 2.218, p = 0.027). In this analysis, the reading time data of one participant (social skill score: 10) were excluded due to a lack of data in the irony, low-to-high condition. As shown in Figure 2, in the first half of the experiment, reading time did not vary significantly with readers' social skill performance (high-to-low: p = 0.830; low-to-high: p = 0.294). Post hoc simple linear effect analysis showed that in the second half, readers with lower social skills tended to spend a shorter time than those with higher social skills processing irony targeting a lower-status person (t = -2.31, p = 0.022), while the reading time difference remained insignificant for irony in the low-to-high condition (t = 0.75, p = 0.453).

Figure 2. Social skill and logarithmic total reading time of irony in the first (A) and the second half (B) of the experiment in the critical region. The plots represent the data points for each condition (light green: high-to-low, dark green: low-to-high). The light and dark green lines are the regression lines of irony in the high-to-low and low-to-high conditions, respectively.
Bonferroni-corrected significance
As suggested by a reviewer, multiple comparisons across eye-tracking measures increase the likelihood of Type I error (Von der Malsburg and Angele, 2017), so corrections of the alpha value should be applied. Following the recommendation of von der Malsburg and Angele, Bonferroni correction was conducted to keep the false positive rate at 0.05. In the present study, four measures in two regions were tested, so the corrected alpha threshold was 0.00625. Under this strict correction, significant effects of literality were retained for first fixation duration (p = 0.003), regression path duration (p < 0.001), and total reading time in the critical region (p < 0.001), as well as for total reading time in the spill-over region (p = 0.004). The effect of status and the literality × status interaction disappeared for all measures in the two regions. In the analyses of the social skill effect, the status × social skill interaction in total reading time in the second-half session became insignificant when Bonferroni corrections were applied.
Literality effect
By adopting an eye-tracking reading paradigm, the present study aimed to examine the real-time processing of irony, and particularly a time-window analysis of the effects of the literality cue and the social status cue. Compared with literal statements, results showed that irony took a reliably longer time to process in the four reading measures in the critical region, indicating that understanding irony is more demanding than understanding literal statements. As the reading times for irony in first fixation duration and gaze duration were longer relative to the literal condition in the critical region, readers can immediately perceive the incongruency between the valence of the previous context and the literal meaning of the ironic statements. Besides, the main effect of literality in the regression path duration of both the critical and the spill-over regions showed that there was an integration difficulty relative to literal statements when the literal meaning did not match the previous context (Filik et al., 2014), and the effect of literality was consistent. Results were in line with the Constraint-Satisfaction Model (Katz, 2005; Pexman, 2008), where the contextual constraint came at the early stage of processing. The findings were less compatible with the Direct Access View (Gibbs, 1986, 2002), which predicts comparable processing effort for irony and literal statements. Meanwhile, they were not in accordance with the Standard Pragmatic View (Grice, 1975; Searle, 1979), which assumes an extra processing effort for irony after the activation of literal meaning, whereas the processing difference between irony and literal statements occurred early in the present study. As for the Gradient Salience Hypothesis (Giora, 1997; Giora and Fein, 1999), the present study did not strictly manipulate the salience of ironic meaning. The present study supported an early processing effect of contextual constraint; the question concerning the interaction of the salience of irony and contextual constraint warrants further study.
Social status effect
The interaction of literality and status in the regression path duration in the spill-over region showed an asymmetric effect of social status relation: irony delivered in a low-to-high direction required a longer time to process than the other way around, while literal statements had similar processing times regardless of the social status relationship. Though this did not reach significance when the strict Bonferroni corrections were made, the p-value of the literality × status interaction was still low (p = 0.009), with the status effect remaining significant for irony (p = 0.001). The result was in line with the prediction for irony comprehension, as irony targeted at a superordinate is less appropriate (Jorgensen, 1996). In this case, readers can perceive the status information and apply the appropriateness of this information in their online reading. Besides, as predicted, the status effect on the comprehension of literal statements was not significant. This suggests that despite the more frequent occurrence of literal compliments when the recipient is a subordinate (10.7%) than a superordinate (4.9%) (Yu, 2005), readers were insensitive to the status information due to the overall rarity of status-unequal compliments. Meanwhile, there might be a general preference for hearing compliments over criticism, regardless of social status relation or situational background (Deutsch, 1961). Irony, though milder than literal criticism (Dews et al., 1995; Thompson et al., 2016), still has a damaging effect on account of its critical nature. There was also a significant status effect in the total reading time in the spill-over region, so a statement toward a higher-status person was more difficult to process than one toward a lower-status addressee, irrespective of its literality.
Interestingly, the effect of status processing was only observed in the spill-over region, which reveals that the processing of social status information came after the detection of literal/ironic meaning, as the literality effect was already present in the critical region. The result resembles a two-stage processing pattern, and the parallel Constraint-Satisfaction Model might be taken into further consideration. In some ERP studies, the comprehension of irony involves N400- or P600-like (late positivity) effects, where the N400 effect is usually interpreted as the semantic integration between context and literal meaning, while P600 might reflect the pragmatic inference of the ironic intent (Cornejo et al., 2007; Regel et al., 2011; Spotorno et al., 2013; Filik et al., 2014; Caffarra et al., 2019; Mauchand et al., 2021). Despite the fact that the literality and social status effects occurred in different regions in the current study, the time course of literality and status processing was similar in principle to the N400 and P600 effects found in previous studies. Therefore, it is likely that after readers figured out the ironic nature, they moved on to integrate the status relation to reason about the communicative intent or motivation behind the ironic statement.
One possible reason for the later effect of social status is that status information may not be weighted as heavily as literality in the prediction of irony. Literality is mostly an overwhelming factor in ironic interpretation (Deliens et al., 2018), while status information mainly adjusts the degree or appropriateness of irony, having little effect on the literal/ironic judgment. This can be further evidenced in view of the effect significance after Bonferroni correction, where only the literality effect survived, while the status effect became insignificant for all measures in both regions. The results were not in line with the early effect of sibling relationships reported in children (Whalen et al., 2020). This may be explained by the fact that, for children, internal state language (e.g., expressions about emotions, beliefs, and desires) constitutes an important part of sibling relations (Howe, 1991), thus making irony, a typical internal state language expressing belief and intent (Dews and Winner, 1997), possibly more predictable when children receive it from their siblings. Therefore, literality might be privileged in processing relative to status information.
Social skill effect
As for the individual differences as measured in social skill effect, the present study failed to report any social skill effect in processing ironic vs. literal statements, when analyzed as a whole or into two halves. This was contrary to the findings in Spotorno and Noveck (2014), where social skill played a part in the anticipation of irony as the experiment unfolded: the socially unskillful participants tended to maintain the reading time difference in the second half of the experiment, while the socially skillful performed alike in processing literal and ironic sentences. One possible explanation might be that their study constructed a one-to-one mapping between the negative context and irony, so that the socially skillful can gradually anticipate the occurrence of irony. In the present study, the well-designed filler items (e.g., the comforting statements) obscured the prediction of irony.
As for the interaction between social skill and social status for ironic statements, only in the second half of the experiment, in the total reading time of the critical region, was the reading time of irony delivered in a high-to-low direction negatively correlated with AQ scores. That is, those who were socially skillful tended to have longer reading times than the socially unskillful when they read irony directed at a subordinate. This might be attributed to the frequent use of indirect criticism (including irony) in Chinese culture (Tang, 2016; Lin, 2020), and the face-protecting function of irony transmitted in a high-to-low direction (Gucman, 2016). Since individuals with higher scores on the Social Skill subscale (i.e., lower social skill competence) are less likely to be extraverted and agreeable (Austin, 2005), they might welcome or expect such moderate commentary statements when placed in a negative context, thus having shorter reading times than socially skillful readers. Nevertheless, irony toward a superordinate violated the general social norm, so it was less expected by readers regardless of their social skills. Generally, though, the social skill effect should be interpreted with caution, given that it was only reported in the second half for the total reading time of the critical region, and that it became insignificant when the strict Bonferroni corrections were made. It is possible that groups with richer and more complicated social experiences than the participants in the present study (i.e., university students) may be more sensitive to the status information, thus showing a more prominent effect when their social skills and irony comprehension are tested. Still, the social skill × status interaction for irony found in the critical region suggests that the time of involvement of social status processing may vary across participants, as the main effect of status was only reported in the spill-over region. Taken together with the literality and status effects discussed earlier, the relative predictive power of irony afforded by available constraints (Deliens et al., 2018; e.g., literality, prosody, facial expression, and sociocultural information) may vary across individuals, and hence the Constraint-Satisfaction Model (Katz, 2005; Pexman, 2008) can be further discussed in terms of the priority of these constraints.
Conclusion
The current eye-tracking study examined the role that social status information plays in the time course of online irony comprehension, also addressing current processing models of irony. Results showed an early and long-lasting effect of literality, indicating more effortful processing of irony compared with literal statements. The findings are most consistent with the Constraint-Satisfaction Model (Katz, 2005; Pexman, 2008). However, social status had a delayed effect following the literality effect, with longer reading times for irony targeting a superordinate than a subordinate, suggesting that the violation of social norms causes processing difficulty and that the predictability of irony from the social status cue may not be as powerful as the context-literal incongruency (i.e., the literality cue). Finally, individual social skills revealed individual variation in the perception of status information in the critical region in the second half of the trials, indicating that current processing models should be further investigated in terms of individual variation.
Data availability statement
The original contributions presented in this study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by Ethics Committee of School of Foreign Languages, Shanghai Jiao Tong University. The patients/participants provided their written informed consent to participate in this study.
Author contributions
ZW collected and analyzed the data. Both authors collaborated on the experimental design, interpreted the data, conducted manuscript writing, and approved the submitted version.
"year": 2022,
"sha1": "5dc1219d5fa7beace9d0aab5e2697f019b3a08cb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "5dc1219d5fa7beace9d0aab5e2697f019b3a08cb",
"s2fieldsofstudy": [
"Linguistics",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A new sigmoidal fractional derivative for regularization
In this paper, we propose a new fractional derivative, which is based on a Caputo-type derivative with a smooth kernel. We show that the proposed fractional derivative reduces to the classical derivative and has a smoothing effect which is compatible with $\ell_{1}$ regularization. Moreover, it satisfies some classical properties.
Introduction
Fractional calculus has undergone significant developments in recent years and has found use in physics, engineering, economics, etc. [1,2,3]. Classical results about the Riemann-Liouville and Caputo derivatives, as well as fractional differential equations, can be found in [4,5,6]. In [11] and [48], Caputo and Fabrizio suggested a new fractional derivative, whose properties were investigated by Losada and Nieto [18]. This fractional derivative was utilized in various applications, including the fractional Nagumo equation in Alqahtani et al. [36], coupled systems of time-fractional differential problems in Alsaedi et al. [37], and Fisher's reaction-diffusion equation in Atangana et al. [38]. More applications of the Caputo-Fabrizio fractional derivative can be found in Aydogan et al. [39] and Atangana et al. [40].
For $0 \le \alpha \le 1$, $-\infty < a < t$, $f \in H^{1}(a,b)$ and $b > a$, the Caputo fractional derivative is defined by

$$D^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_{a}^{t} \frac{f'(s)}{(t-s)^{\alpha}}\, ds.$$

By replacing the singular kernel $(t-s)^{-\alpha}$ with $\exp\left(-\frac{\alpha(t-s)}{1-\alpha}\right)$ and $\frac{1}{\Gamma(1-\alpha)}$ with a normalization $\frac{M(\alpha)}{1-\alpha}$, one obtains the Caputo-Fabrizio fractional derivative

$${}^{CF}D^{\alpha} f(t) = \frac{M(\alpha)}{1-\alpha} \int_{a}^{t} f'(s)\, \exp\left(-\frac{\alpha(t-s)}{1-\alpha}\right) ds.$$

The Caputo-Fabrizio fractional derivative of a constant vanishes, as does the usual Caputo derivative; however, the new kernel $\exp\left(-\frac{\alpha(t-s)}{1-\alpha}\right)$ is no longer singular at $s = t$. Caputo and Fabrizio extend their definition in [11] to functions in $L^{1}$ by

$${}^{CF}D^{\alpha} f(t) = \frac{\alpha M(\alpha)}{1-\alpha} \int_{-\infty}^{t} \left(f(t)-f(s)\right) \exp\left(-\frac{\alpha(t-s)}{1-\alpha}\right) ds.$$

Algahtani et al. [36] show that the nonlinear fractional Nagumo equation, with $0 < \alpha < 1$ and constants $\beta, \gamma, \delta$, subject to suitable boundary conditions, has an exact solution. The authors show that this PDE can be reformulated in terms of a Lipschitz kernel. Existence of the exact solution is shown using a fixed point approach, and uniqueness is provided, given that suitable assumptions are made about the Lipschitz constant. Their study claims that an exponential kernel is in some sense a better kernel than a power function, since the lack of a singularity provides a better filtration effect. In the context of fractional differential equation applications, since the associated functions are not defined in a Banach space, only approximate solutions to certain fractional differential equations can be investigated. The methods used to handle fractional differential problems such as ${}^{CF}D^{\alpha} f(t) = g(t, f(t))$ cannot be extended to problems of higher or series-type order. In Baleanu et al. [17], the Caputo-Fabrizio fractional derivative on the Banach space $C_{\mathbb{R}}[0,1]$ is considered in the context of higher order series-type fractional integrodifferential equations. More precisely, an extended Caputo-Fabrizio type fractional derivative of higher order is provided. These authors use a standard fixed point approach to establish uniqueness of solutions to fractional series-type differential problems with initial condition $f(0) = 0$ and $\alpha, \gamma, \delta, \rho \in (0,1)$.
An extension of this type which is compatible with orders beyond (0, 1) has yet to be provided.
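For intuition, the sketch below numerically evaluates the Caputo and Caputo-Fabrizio definitions above for a simple test function; the normalization M(α) = 1 is an assumption made only for illustration.

```python
# Compare the Caputo and Caputo-Fabrizio derivatives of f(t) = t**2 on (0, 1).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def caputo(fprime, a, t, alpha):
    # 1/Gamma(1-alpha) * int_a^t f'(s) (t-s)^(-alpha) ds; the algebraic
    # endpoint singularity is handled by quad's 'alg' weight.
    val, _ = quad(lambda u: fprime(t - u), 0.0, t - a,
                  weight="alg", wvar=(-alpha, 0.0))
    return val / gamma(1.0 - alpha)

def caputo_fabrizio(fprime, a, t, alpha, M=1.0):
    kernel = lambda s: fprime(s) * np.exp(-alpha * (t - s) / (1.0 - alpha))
    val, _ = quad(kernel, a, t)
    return M / (1.0 - alpha) * val

fprime = lambda s: 2.0 * s  # derivative of f(t) = t^2
for alpha in (0.5, 0.9, 0.99):
    print(alpha, caputo(fprime, 0.0, 1.0, alpha),
          caputo_fabrizio(fprime, 0.0, 1.0, alpha))
# As alpha -> 1, both values approach f'(1) = 2.
```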
The Caputo-Fabrizio fractional derivative is discussed in the setting of distributions in [41]. Other types of fractional derivatives can be found in Katugampola [35] and Oliveira et al [42]. In de Oliveira [12], it is shown that the choice of kernel in a Caputo-type fractional derivative is connected to the Laplace transform via convolution.
Let $\mathcal{I}$ denote the Schwartz class of smooth test functions whose derivatives decay at infinity, and let $\mathcal{I}'$ denote the space of continuous linear functionals on $\mathcal{I}$. The distributional derivative $f'$ satisfies

$$\langle f', \varphi \rangle = -\langle f, \varphi' \rangle \tag{4}$$

for all smooth compactly supported test functions $\varphi$ on $\mathbb{R}$. The distributional Laplace transform is given by

$$\mathcal{L}\{f\}(s) = \langle f(t), e^{-st} \rangle,$$

where $s = \sigma + i\mu$, $\mu < 0$ and $\varphi(t)e^{-\sigma t} \in \mathcal{I}'$. Suppose that $f$ is supported on $(0, \infty)$ such that $\sigma > 0$ and $f(t)e^{-\sigma t} \in \mathcal{I}'$. It follows that the Laplace transform of the derivative is given by

$$\mathcal{L}\{f'\}(s) = s\, \mathcal{L}\{f\}(s).$$

One can define a more general fractional derivative as follows. Suppose that $\Phi(s, \alpha)$ is a fractional integrodifferential operator and $K(t, s): \mathbb{R}^{2} \to \mathbb{R}$ is a continuous kernel. Let the corresponding operator $\Phi(s, \alpha)$ be defined for some fractional derivative $D^{\alpha}$ such that

$$\mathcal{L}\{D^{\alpha} f\}(s) = \Phi(s, \alpha)\, \mathcal{L}\{f\}(s),$$

where $\Phi(s, 1) = s$, $\Phi(s, -1) = \frac{1}{s}$ and $\Phi(s, 0) = 1$. Then, letting $\Phi(s, \alpha) = s\, \mathcal{L}(K(s, t, \alpha))$ and proceeding with the Convolution Theorem, we are left with a Caputo-type fractional operator of the form

$$D^{\alpha} f(t) = \int_{a}^{t} K(t-s, \alpha)\, f'(s)\, ds, \tag{5}$$

which is dependent on the choice of kernel $K$. For $f \in H^{1}(a, b)$ and $n \in \mathbb{N}$, commonly used kernels include the Caputo kernel $K(t-s, \alpha) = \frac{(t-s)^{-\alpha}}{\Gamma(1-\alpha)}$.

The memory principle for fractional derivatives describes the history of $f(t)$ near the terminal point $t = a$. Let $L$ denote the memory length, satisfying $a + L \le t \le b$. Define the error in approximating the fractional derivative by

$$E(t) = D^{\alpha} f(t) - D_{t-L}^{\alpha} f(t), \tag{6}$$

where $D^{\alpha}$ is as in (5). If $|f'(t)| \le M$ for $a < t < b$ and $0 < \alpha < 1$, we have the following error estimate for the Caputo fractional derivative:

$$|E(t)| \le \frac{M}{\Gamma(2-\alpha)} \left( (t-a)^{1-\alpha} - L^{1-\alpha} \right).$$

Therefore, the Caputo fractional derivative with terminal $a$ can be approximated by the corresponding fractional derivative with lower limit $t - L$, with the level of accuracy described above.
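The bound above is easy to check numerically; the Python sketch below evaluates the neglected-history term for the Caputo kernel and compares it against the analytic estimate (here M = 1 since |f′| = |sin| ≤ 1).

```python
# Memory principle check: neglected history vs. the analytic error bound.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def caputo_piece(fprime, lo, hi, t, alpha):
    # 1/Gamma(1-alpha) * int_lo^hi f'(s) (t-s)^(-alpha) ds, with hi < t
    # so the integrand is nonsingular on the interval.
    val, _ = quad(lambda s: fprime(s) * (t - s) ** (-alpha), lo, hi)
    return val / gamma(1.0 - alpha)

alpha, a, t, L = 0.5, 0.0, 10.0, 2.0
fprime = np.sin                      # |f'| <= M = 1
err = abs(caputo_piece(fprime, a, t - L, t, alpha))
bound = ((t - a) ** (1 - alpha) - L ** (1 - alpha)) / gamma(2.0 - alpha)
print(err <= bound, err, bound)      # True, with err well under the bound
```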
In this work, we propose a different fractional derivative that has a smooth kernel. Our primary interest in defining this fractional derivative is the improvement of machine learning algorithms. Caputo-type fractional derivatives have been applied in machine learning, such as in Pu et al. [10]. In particular, fractional order gradient methods have been considered in order to improve the performance of integer order methods. For example, suppose that $f: \mathbb{R}^{n} \to \mathbb{R}$ is convex and differentiable with a Lipschitz gradient; then the integer order gradient method defined by

$$x_{k+1} = x_{k} - \eta\, \nabla f(x_{k}),$$

with step size $\eta > 0$, has a linear convergence rate. Improving the performance of the integer-order gradient method is critical in optimization problems. In recent literature, fractional calculus has been thought to improve the integer order gradient method due to nonlocality and the memory principle. Fractional order gradient methods have been proposed based on the Caputo fractional derivative that offer competitive convergence rates. For example, in [28], a Caputo fractional gradient method is proposed that is shown to be monotone and to exhibit strong convergence.
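For reference, the integer-order update is the familiar descent iteration; a minimal Python sketch on a convex quadratic is shown below.

```python
# Classical gradient descent: x_{k+1} = x_k - eta * grad f(x_k).
import numpy as np

def gradient_descent(grad, x0, eta=0.1, iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - eta * grad(x)
    return x

# f(x) = 0.5 * ||x||^2 has gradient x and unique minimizer at the origin.
print(gradient_descent(lambda x: x, x0=[3.0, -2.0]))  # close to [0, 0]
```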
Fractional derivatives were used in the backpropagation algorithm for feedforward neural networks and convolutional neural networks in [32,46]. In both studies, the rate of convergence was shown to exceed the rate of integer-order methods. Fractional-order methods have been used to investigate complex-valued neural networks in [24] and recurrent neural network models in [44]. In [28] and [22], gradients based on the Caputo fractional derivative are used to update parameters while integer order gradients are used to handle backpropagation allowing for simpler computation. The experiments therein are shown to improve the accuracy of the neural network's performance compared to integer-order methods while being equally costly.
In the training of machine learning models, one often needs to obtain weights of the features which optimize over the training data. In the case of maximum likelihood training, regularization is typically needed so that the model does not overfit the training data. In $\ell_{p}$ regularization, the weight vector is penalized by its $\ell_{p}$ norm. While the cases $p = 1$ and $p = 2$ are very common and result in similar levels of accuracy, $\ell_{1}$ regularization is much more practical. Due to its sparsity, $\ell_{1}$ regularization is less memory intensive and more time-effective than $\ell_{2}$ regularization. On the other hand, $\ell_{1}$ regularization is problematic in that, during the update process, the gradient of the regularization term is not differentiable at the origin, as the error function

$$E_{\ell_{1}}(x) = \|x\|_{1} = \sum_{j} |x_{j}| \tag{7}$$

has classical derivative

$$\frac{\partial E_{\ell_{1}}}{\partial x_{j}} = \operatorname{sgn}(x_{j}), \qquad x_{j} \neq 0.$$

A typical remedy to this problem is to use the stochastic gradient descent method, which approximates the gradient using the training data. Although time efficient for training, when the dimension of the feature space is large, the update process slows down significantly. Furthermore, the model becomes less sparse after training the data. The discontinuity induced by the regularizer proves to be problematic as it adjusts the direction of descent. The use of sigmoids in regularization problems has been previously explored, as in Krutikov [43], but not in the context of fractional derivatives. Another remedy to the aforementioned problem is the use of fractional gradients over classical descent methods. These methods are still in their infancy and problematic in that convergence to the local optimum is not always guaranteed, even when the algorithm converges, as in [9]. Furthermore, these methods often require an adjustment to the fractional derivative by truncation and methods based on the memory principle (6), due to the computational expense and the failure of the Caputo kernel to be smooth.
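The smoothing idea can be made concrete with a toy sketch: replace the discontinuous sign in the ℓ1 gradient by a sigmoid whose steepness grows as α → 1. The tanh scale below is an illustrative stand-in, not the paper's exact operator.

```python
import numpy as np

def l1_grad(x):
    return np.sign(x)                 # no well-defined direction at x = 0

def smooth_l1_grad(x, alpha=0.9):
    eps = 1.0 - alpha                 # smoothing width shrinks as alpha -> 1
    return np.tanh(x / eps)           # smooth sigmoid approximating sign(x)

x = np.array([-1.0, -1e-3, 0.0, 1e-3, 1.0])
print(l1_grad(x))
print(smooth_l1_grad(x, alpha=0.99))  # near sign(x) away from 0, smooth at 0
```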
We would also like our operator to be nonlocal. In [13], it is shown that, unlike the Caputo derivative, the Caputo-Fabrizio fractional derivative is not a nonlocal operator: the linear fractional differential equation defined with it is shown to reduce to a first-order ordinary differential equation. This means that the Caputo-Fabrizio derivative cannot sufficiently describe processes with nonlocality and memory. With the correct choice of kernel, this complication can be avoided.
Main results
In this section, we define a new left-sided fractional derivative. We show that the proposed fractional derivative reduces to the $H^{1}$ derivative as the order approaches 1. In the results to follow, for $0 < \alpha \le 1$, we let $C_{1}(\alpha)$ denote a normalization constant, $f \in H^{1}((a,b))$, $t > a$, and $\{f(t)\}'$ denotes the $H^{1}$ distributional derivative as in (4). We define the left sigmoidal fractional derivative by

$${}_{\sigma}D_{a}^{\alpha} f(t) = \frac{C_{1}(\alpha)}{1-\alpha} \int_{a}^{t} \{f(s)\}'\, \operatorname{sech}^{2}\!\left(\frac{t-s}{1-\alpha}\right) ds. \tag{8}$$

Now, we show that the left sigmoidal fractional derivative reduces to the $H^{1}$ derivative.
where the last result follows from the observation that δ(t) is the Dirac distribution.
In the following theorem, we show that this left sigmoidal fractional derivative is commutative with respect to the classical derivative.
Theorem 2.2. Suppose that f is at least twice continuously differentiable and
where 0 < α < 1.
Proof. From (8), integrating by parts yields
so we have (11), and appealing to the Leibniz integral rule yields (12). From (11) and (12), the desired result is obtained.
In the next theorem, we show that the left sigmoidal fractional derivative does not satisfy the memory principle in the sense of (6). More precisely, the theorem shows that the left sigmoidal fractional derivative can be approximated by the corresponding fractional derivative with lower limit t − L, with increased accuracy for orders at which C₁(α) is large.
Proof. Making use of the inequality, we have the desired bound, and the result follows.
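The truncation property above can be checked numerically. Since the kernel (5) is not reproduced in this excerpt, the sketch below uses the classical Caputo kernel (t − s)^(−α)/Γ(1 − α) as a stand-in, comparing a fractional derivative computed from the full lower limit a with one computed from the truncated limit t − L; the test function and all parameters are illustrative.

import numpy as np
from math import gamma
from scipy.integrate import quad

def frac_deriv(f_prime, a, t, alpha):
    # Caputo-type derivative: (1/Gamma(1-alpha)) * int_a^t (t-s)^(-alpha) f'(s) ds.
    # The algebraic weight handles the integrable singularity at s = t.
    val, _ = quad(f_prime, a, t, weight='alg', wvar=(0.0, -alpha))
    return val / gamma(1 - alpha)

f_prime = np.cos                     # f(t) = sin(t)
t, alpha = 20.0, 0.5
full = frac_deriv(f_prime, 0.0, t, alpha)
for L in (2.0, 5.0, 10.0):
    short = frac_deriv(f_prime, t - L, t, alpha)
    print(L, abs(full - short))      # error generally shrinks as the memory window L grows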
In the theorem below, we show that our new fractional derivative provides a sigmoidal approximation to functions that have a piecewise linear H¹ distributional derivative. For instance, the proposed left sigmoidal fractional derivative is compatible with ℓ1-regularization. In the case of the ℓ1 norm, it can be used to define a fractional gradient, which approximates its classical gradient via a family of sigmoids as α approaches 1. This is promising in the context of gradient descent algorithms.
Theorem 2.4 (Norm-1 compatibility). σD^α_a provides a smooth approximation to the ℓ1 norm as α → 1, in the sense that, for the error function E given in (7), σD^α_a E_ℓ1(x_j) is given by a family of sigmoids, where a > 0.
Proof. The result follows from the stated observation, where H(t) is the Heaviside function.
Theorem 2.5 (Mittag-Leffler function). Suppose that γ, η > 0 and 0 < a < t. Then
Theorem 2.6. Suppose that 1 < p < ∞, 0 < α < 1, and 0 < t ≤ T. If f ≥ 0 is differentiable with f′ ∈ L^p(R) and M is the maximal operator of f given by
The next theorem describes the effect of the Laplace and Fourier transforms, which extend to distributions as in de Oliveira [6]. The Convolution Theorem connects our choice of kernel in (5) via the operator Φ(s, α) = sL(K(s, t, α)). In this case, Φ(s, α) depends on the digamma function Ψ(z) = Γ′(z)/Γ(z). This shows that the left-sigmoidal fractional derivative does not reduce to the left-sided Riemann-Liouville fractional derivative.
The transform L(tanh t) is handled as follows. Because the sum Σ_{k=0}^{∞} (−1)^k e^{−2kt} converges absolutely and its partial sums are monotone, we can exchange integration and summation using the Lebesgue Monotone Convergence Theorem. Continuing, the identity used comes from the Lerch transcendent, defined by Φ(z, s, a) = Σ_{n=0}^{∞} z^n/(n + a)^s, where |z| < 1 and a ≠ 0, −1, −2, …; using the dilation property once more, the result follows.
(b) We proceed as in (a).
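The series manipulation sketched above can be sanity-checked numerically: summing Σ(−1)^k e^{−2kt} term by term under the integral gives L(tanh t)(s) = 1/s − Φ(−1, 1, s/2 + 1), which can be compared against direct quadrature and against the equivalent digamma form. The closed form here is my own restatement of the computation, since the displayed equations are not reproduced in this excerpt.

import mpmath as mp

def laplace_tanh(s):
    # Direct numerical Laplace transform of tanh t at s.
    return mp.quad(lambda t: mp.exp(-s * t) * mp.tanh(t), [0, mp.inf])

s = mp.mpf(3)
lhs = laplace_tanh(s)
# Closed form via the Lerch transcendent Phi(z, s, a) = sum_n z^n / (n + a)^s:
rhs = 1 / s - mp.lerchphi(-1, 1, s / 2 + 1)
# Equivalent digamma form, using Phi(-1, 1, a) = (psi((a+1)/2) - psi(a/2)) / 2:
rhs2 = 1 / s - (mp.digamma(s / 4 + 1) - mp.digamma(s / 4 + mp.mpf(1) / 2)) / 2
print(lhs, rhs, rhs2)  # the three values agree to working precision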
Theorem 2.8. Suppose that f is differentiable and 0 < α < 1. Then
Proof. Using the inequality, we have the bound that results in the leftmost inequality. Noticing that cosh²x ≥ 1 + x², we obtain the estimate finishing the last three inequalities.
has the solution
Proof. Differentiating the differential equation above, the problem reduces to an equation which can be integrated to obtain the result.
is a contraction. By the Banach fixed-point theorem, it has a unique fixed point, finishing the proof.
We note that this result is advantageous in that the analogous existence and uniqueness result for fractional differential systems defined by the Caputo derivative is highly dependent on the initial conditions imposed on the primary function of interest and its classical derivatives [4].
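The contraction argument is easy to visualize numerically. The snippet below is a generic illustration of Banach fixed-point iteration — geometric convergence of the iterates of a contraction — and not an implementation of the paper's specific integral operator, whose definition is not reproduced in this excerpt.

import math

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    # Banach iteration: for a contraction T with constant q < 1 on a
    # complete space, the iterates converge geometrically to the
    # unique fixed point.
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

# T(x) = cos x is a contraction on [0, 1] since |T'(x)| <= sin 1 < 1.
print(fixed_point(math.cos, 0.5))  # ~0.739085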
We now shift our attention to a gradient descent method. Suppose that f has a bounded derivative and a unique critical point t* such that f′(t*) = 0. For a ≤ t ≤ b and 0 < α < 1, define the scalar left sigmoidal fractional gradient descent method by the update rule (13), where 0 < µ < 1 is the learning rate.
Theorem 2.11 (Fractional Gradient Descent). Let f be as in (13). Then the left-sigmoidal fractional-order gradient method (13) converges to the true critical point t*.
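Since the update rule (13) is not reproduced in this excerpt, the sketch below assumes an iteration of the form t_{k+1} = t_k − µ · (fractional derivative of f at t_k) and uses the classical Caputo derivative of f(t) = (t − c)², in closed form with a fixed lower limit a, as a stand-in. The run also illustrates the caveat raised in the introduction: with a fixed lower limit, the iteration converges to the zero of the fractional gradient at a + (c − a)(2 − α), not to the true minimizer t* = c — precisely the failure mode a well-designed operator must avoid.

from math import gamma

def frac_grad(t, a=0.0, c=2.0, alpha=0.5):
    # Closed-form Caputo-type derivative of f(t) = (t - c)^2 with lower
    # limit a (a stand-in for the operator in (13)), using
    # D^alpha (t-a)^k = Gamma(k+1)/Gamma(k+1-alpha) * (t-a)^(k-alpha).
    return (2.0 * (t - a) ** (2 - alpha) / gamma(3 - alpha)
            + 2.0 * (a - c) * (t - a) ** (1 - alpha) / gamma(2 - alpha))

mu, t = 0.1, 5.0
for _ in range(300):
    t = t - mu * frac_grad(t)
print(t)  # settles near a + (c - a)*(2 - alpha) = 3.0, not t* = 2.0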
Conclusion
In this paper, we defined a new sigmoidal fractional derivative, which is compatible with certain weakly differentiable functions. We showed that this fractional derivative satisfies forms of several classical properties and is compatible with the ℓ1 norm via a sigmoidal approximation. For further research, we will investigate this operator in optimization and machine learning. We note that the left-sigmoidal fractional derivative can be applied in the context of gradient descent, which has applications in optimization and machine learning [7,8]. Recently, backpropagation and convolutional neural networks have been studied in the context of fractional derivatives, with Caputo-type derivatives typically being used for gradient descent. This idea is still novel and needs improvement. For example, the gradient descent method has been handled by Sheng et al. [32,33], Wang et al. [28], Wei et al. [9], and Bao et al. [22]. These methods are still early in development. The following topics still need to be fully addressed: convergence to an extreme point, extending the available range of fractional orders, more complicated neural networks, loss-function compatibility, and the usage of the chain rule. | 2020-01-03T10:24:41.000Z | 2020-01-03T00:00:00.000 | {
"year": 2020,
"sha1": "4912f3ca767e90c6a81f87dd122111f4d8e49515",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d07917943890a02495e87a37946d64cc254ef504",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
38044598 | pes2o/s2orc | v3-fos-license | Evolution on the bright side of life: microorganisms and the evolution of mutualism
Mutualistic interactions, where two interacting species have a net beneficial effect on each other's fitness, play a crucial role in the survival and evolution of many species. Despite substantial empirical and theoretical work in past decades, the impact of these interactions on natural selection is not fully understood. In addition, mutualisms between microorganisms have been largely ignored, even though they are ecologically important and can be used as tools to bridge the gap between theory and empirical work. Here, I describe two problems with our current understanding of natural selection in mutualism and highlight the properties of microbial mutualisms that could help solve them. One problem is that bias and methodological problems have limited our understanding of the variety of mechanisms by which species may adapt to mutualism. Another problem is that it is rare for experiments testing coevolution in mutualism to address whether each species has adapted to evolutionary changes in its partner. These problems can be addressed with genome resequencing and time‐shift experiments, techniques that are easier to perform in microorganisms. In addition, microbial mutualisms may inspire novel insights and hypotheses about natural selection in mutualism.
Introduction
Mutualisms are interactions between species where each has a net positive impact on the fitness of the other. 1 They affect the survival of practically all living organisms. In plants, mutualistic descendants of bacteria convert energy from the sun into chemical energy for plant cells. These plants are visited by insects that provide a means of efficient sexual reproduction in exchange for nectar, while microorganisms in the soil provide the roots with phosphorous in exchange for organic carbon. In animals and insects, mutualists aid in digestion and protection from antagonists and may even control behavior. [2][3][4][5] For microorganisms, survival often depends on the beneficial exchange of metabolites within communities. 6,7 Clearly, if we want to understand how any of these species evolves, we must understand mutualism as a source of selection that affects the dynamics and products of evolutionary change.
During the last several decades, many evolutionary biologists have addressed this challenge, producing a wealth of theoretical and empirical data on adaptations to mutualism. 1 Much of this work has aimed at learning how mutualists minimize the likelihood that their partner will negatively affect their fitness by cheating, but the possibility of coevolution has also been studied. 1,8,9 This research has relied heavily on several naturally occurring ancient mutualisms, where at least one of the partners was a multicellular eukaryote. 1 As a result, the development of theory and its application has been biased toward problems and contexts that may be most important for a relatively narrow range of species. In addition, this approach has limited the experimental power that can be applied to understanding mutualism.
More recently, scientists have begun experimenting with mutualistic interactions between microorganisms. [10][11][12][13][14][15][16] Diverse mutualisms have been engineered or evolved in the laboratory with the goal of understanding microbial physiology and metabolism and understanding how mutualisms can evolve and be stable. 10,12,13,17,18 These systems can be outstanding tools for experimentation, yet they have rarely been used to address some of the most pressing questions about the selection environment in mutualism.
The goal of this review is to facilitate the integration of these two parallel approaches to understanding mutualism. I do this by first describing microbial mutualisms and explaining why it is important to understand their evolution. I then provide an overview of our current understanding of adaptation and coevolution in mutualism, highlighting topics needing further study owing to neglect or experimental limitations and providing some suggestions for further experiments. Throughout this review, I use microbiological examples wherever possible.
A primer on microbial mutualisms
Why study microbial mutualisms?
The most important reason to study microbial mutualisms is that the vast majority of living beings are microorganisms. 19,20 If they are ignored, we cannot have a comprehensive theory of mutualism as an interaction that affects all of life. Instead, we will end up with a narrow theory of mutualism that only applies reliably to a small part of one domain of life, encouraging microbiologists to unnecessarily create a separate conceptual framework for understanding beneficial interactions between species.
There are also practical reasons to study the evolution of mutualistic interactions between microorganisms. Positive interactions, such as mutualism and commensalism, are thought to play a fundamental role in the functioning and flow of carbon through microbial communities. [21][22][23] Because they can often evolve so quickly, evolutionary changes in microbial populations are likely to be a part of the ecological processes affected by these interactions. [24][25][26][27] Thus, studying the evolution of microbial mutualisms may provide insight into numerous processes involving microbial communities. Such processes include behavior, immunity, and digestion in animals or plants, biogeochemical processes controlling the flux of greenhouse gases like carbon dioxide and methane, and degradation of hazardous waste. 3,5,[28][29][30][31][32][33] Finally, microorganisms can be powerful tools for studying evolution because of the ease of genetic manipulation and genome sequencing, of controlling environmental variation in the laboratory, of storing evolutionary intermediates in a dormant state, and of manipulating population size to alter the impact of natural selection. 34 Their use in evolution experiments has allowed scientists to address fundamental questions about the process of evolution, 35,36 foraging theory, 37-39 origins of multicellularity, 40 coevolution in host-parasite interactions, 41 and the evolution of cooperation. 27,[42][43][44] However, experimental evolution has only recently been applied to understanding mutualism. 10,11,15
Mutualistic interactions between microorganisms
In mutualistic interactions, one species provides a resource to another in exchange for a second resource or service that is provided by its partner. 1,45 In microbial mutualisms, this typically involves the provision of an essential metabolite in exchange for a different one or for a service, such as swimming or the removal of toxic by-products. For example, in the transition from aerobic to anaerobic zones in many freshwater lakes, two-thirds of the biomass is composed of aggregates of phototrophs surrounding a heterotroph that provides the service of swimming in exchange for photosynthate (Fig. 1A). [46][47][48] In the ocean, a significant portion of methane is removed as a result of a mutualistic interaction between bacteria and archaea. 49,50 The bacteria use the waste electrons generated by anaerobic methane-oxidizing archaea to obtain energy, which also serves to allow the archaea to gain energy from anaerobic methane oxidation. 50 Microbiologists have studied the physiology and biochemistry of these complex microbial mutualisms for decades. Here, I focus on two broad categories of microbial mutualisms that have been the focus of more recent work aimed at understanding their ecology and evolution: cross-feeding and a specialized form of cross-feeding called syntrophy.
Cross-feeding. Most microorganisms are not capable of synthesizing all of the amino acids and other molecular precursors that are required to make copies of themselves. Instead, they rely on other species to make and secrete them or release them through cell lysis. These interactions can be mutualistic if both species are providing resources to each other. Cross-feeding interactions can be pairwise, but in microbial communities, there may be complex networks of species releasing and using each other's metabolites. 22,23 Metabolites can be transferred through diffusion or through nanotubes that connect the cytoplasms of mutualists, or as a result of endosymbiosis of a bacterium by a protist (Fig. 1B and 1C). 10,11,13,14,[50][51][52][53] Variation in nutritional requirements and gene content across clones and species of microorganisms suggests that cross-feeding is common in most microbial communities and probably the cause of our inability to obtain pure cultures of microbes. 23,54 Research has shown that cross-feeding can be a phenotypically plastic response to changes in the environment, or it can result from genetic changes. Numerous cross-feeding interactions can arise as a result of acclimation, a phenomenon that can be predicted by genome-scale models, although their stability may depend on the density of each species or removal of toxic by-products. [54][55][56][57][58] Other species may be completely reliant on cross-feeding because of erosion of the functional genes required for metabolite production. Two hypotheses have been developed to explain the evolution of obligate cross-feeding. In the Black Queen hypothesis, the loss of genes is beneficial in an environment where nutrients are scarce, making DNA replication costly. Frequency-dependent fitness maintains metabolite producers. 6,59 The farm-and-forage hypothesis predicts neutral erosion of metabolic genes in nutrient-rich environments where they are not needed. When resources are scarce, these mutants survive by overproducing a nutrient like organic carbon to stimulate the growth of producer populations. 60 Empirical support exists for both of these hypotheses, but theory suggests that over-producing cross-fed nutrients and relying on other populations is inefficient relative to growing autonomously in most conditions. 10,59,61
Syntrophy. Syntrophy is a term that literally means "feeding together." Unlike other cross-feeding interactions, syntrophs feed together by working together to complete one energy-yielding reaction. 62 In oxygen-free environments with limited availability of electron acceptors (molecules used to capture and remove electrons during respiration, such as O₂, NO₃, SO₄, and oxidized metals), a special kind of cross-feeding interaction is responsible for reducing carbon completely to methane, allowing degradation processes at higher levels of the food chain to continue. 7,62,63 In these interactions, one species produces a by-product that becomes toxic unless it is kept at very low concentrations. The second species consumes this toxic by-product because it is its only source of energy. The toxic by-product is essentially excess electrons that are removed from the cell via hydrogen, formate, or other molecules, and they are toxic to the producer because of the extremely low energy yield of the fermentation reaction. 7
Some species are obligate syntrophs that have limited or no options for surviving outside of the interaction, 64 while others may grow syntrophically when electron acceptors have been used up and then revert to a more independent mode of metabolism when electron acceptors are available again. 62,65,66 Some syntrophs aggregate 13,67 and produce wires to transfer electrons directly, 13,50 while others interact in guilds where electrons are transferred in the form of hydrogen or formate through diffusion. 14,66 Syntrophies play crucial roles in the functioning of anaerobic communities in digesters used to degrade solids in wastewater (Fig. 1D), 68 in the rumen of cows, in lake sediments, and in rice paddies, 62 and they are primary producers in methane seeps in the depths of the ocean. 67
Mechanisms for adaptation to mutualism and their application to microorganisms
In by-product mutualisms, such as syntrophy, the resources being traded are waste materials that just happen to benefit another species in the community, so there are no fitness costs to participating in this interaction. 61,[69][70][71] In most other mutualisms, producing and delivering resources is thought to come at a cost to fitness. 71,72 Mutualists that maximize the ratio between benefits and costs are expected to have higher fitness in comparison with competitors that do not, all else being equal. 73,74 A schematic representation of a putative relationship between benefits, costs, and resource transfer for a mutualist A is presented in Figure 2 and explained below, along with examples of such adaptations in microbes.
In the schematic representation depicted in Figure 2, the costs incurred by A depend on how much resource it produces and provides to its partner B. Assuming a fixed cost to producing and delivering the resource, the total cost of mutualism is increasingly higher if A provides more resource to B (area of dark gray triangle increases with increasing quantity of resource produced by A). Similarly, the benefits it receives from mutualism depend on the quantity of resource provided by its partner. In panel A, the black dots and dotted lines indicate that species A gets enough resource from B to benefit from the mutualism despite the cost of providing resources to B. Stable resource exchange rates (e.g., Fig. 2A) in microbial mutualisms have been defined for several interactions through a combination of metabolic or population modeling and experimental tests of stability. 53,66,75 In addition, Douglas et al. 76 showed that the costs and benefits of methionine production in a cross-feeding mutualism varied among alleles that were substituted in independently evolving populations. As expected, there was an inverse relationship between the cost of production and benefit to the growth of the community. 76 Mutualist A in Figure 2B can increase the fitness benefits it receives from mutualism in one of the following ways. First, it can become more efficient at using, producing, or transferring resources. This kind of adaptation would change the relationship between the quantity of resources transferred and the benefits or costs received, causing the shape of the cost and benefit triangles to change. Efficiency improvements in resource transfer may decrease costs for both species and hence enhance their fitness. Changes in the use or production of resources by A could have indirect effects on the fitness of B if it results in more individuals of A that are producing benefits to B. Second, mutualist A could invest more in the mutualism, providing more resource to B so that it can get more resources from a larger population of B (Fig. 2C).
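A minimal numerical rendering of the trade-off in Figure 2 is given below, under assumed functional forms — a linear provisioning cost and a saturating benefit — with all parameter values illustrative rather than taken from any of the cited studies.

import numpy as np

def net_fitness(r_self, r_partner, cost_per_unit=0.2, b_max=2.0, k=1.0):
    # Net fitness = saturating benefit of the partner-provided resource
    # minus a linear cost of the focal mutualist's own provisioning.
    benefit = b_max * r_partner / (k + r_partner)
    cost = cost_per_unit * r_self
    return benefit - cost

for r_self in np.linspace(0.0, 5.0, 6):
    print(r_self, net_fitness(r_self, r_partner=2.0))
# Mutualism pays whenever the benefit from B outweighs the provisioning
# cost -- the benefit triangle exceeding the cost triangle in Fig. 2A.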
In microbial mutualisms, improvements in resource transfer or use have been observed in the lab within the first few hundred generations of evolution for several newly formed mutualisms. 10,11,13,14,76 In some cases, new structures evolved that allow metabolite transfer to occur without loss of material through diffusion. For example, when two species of Geobacter were forced to cooperate through the exchange of electrons, they adapted by producing wires coated in cytochromes that pass electrons directly from one species to the next. In a cross-feeding mutualism, metabolites were transferred through tubes connecting the cytoplasms of two species. 51 However, new structures for resource transfer were not required to improve the productivity of some microbial mutualisms. In the first 300 generations of evolution of the syntrophy I study, changes in both Desulfovibrio vulgaris and Methanococcus maripaludis contributed to substantial increases in the number of cells produced from a fixed amount of a single limiting resource and resulted in substantial increases in growth of the community. 11 By 1000 generations, the communities grew about three times faster than the ancestor, a change that was correlated in D. vulgaris with loss of its ability to respire sulfate, a trait that has a substantial impact on survival outside syntrophy and may impede performance in syntrophy. 18,77 Another way for mutualist A in Figure 2 to adapt would be to alter the rate of resource exchange between the partners so that it provides less while taking more, as depicted in Figure 2D. If this adaptation increases the fitness of A while simultaneously reducing the fitness of B, it may be referred to as cheating. 8,71,78 Left unchecked, such cheaters could potentially destabilize the mutualism, causing it to go extinct. 79 Finally, mutualist A could adapt by developing a mechanism to protect itself from cheating by B. Several putative defenses against cheating have been described theoretically, 71,80 and some have been empirically tested in diverse species, including microorganisms. 10,[81][82][83][84] Laboratory experiments with cross-feeding microorganisms have repeatedly documented selection for cheating within mutualism and characterized the impact of exploiters on pairs of mutualists. 10,84,85 However, most of this work has focused on identifying mechanisms for suppressing the effects of cheaters, often focusing on the impact of spatial structure and aggregation. 10,15,84 This research has shown that spatial structure and aggregation keep cooperative pairings together for multiple generations, allowing them to flourish through partner-fidelity feedback. 10,15,84 In partner-fidelity feedback, mutualists that are more cooperative will tend to be more productive and have higher fitness because they are providing more resources to each other compared with less-cooperative pairings. These higher levels of resource provisioning increase the fitness of the interacting partner more (e.g., Fig. 2C), allowing them to flourish and provide even more benefits in return. 71 Spatial structure also affects the distribution of resources that are being exchanged by mutualists and the potential impact of competition and exploitation on their stability. 61,69,75,[85][86][87] Cross-feeding can be inhibited by spatial structure if metabolites must diffuse too far between species. 61,75,87
In addition, the extent to which an exploiter can invade a population depends on whether cross-fed metabolites are universally shared or whether the producer privatizes them by keeping some or most of the metabolites for itself. 87 Finally, research with microbial mutualisms has revealed novel mechanisms for suppression of cheating. 12 Theory predicts that mutualists can become immune to cheating as a result of adapting to cheaters during the early stages of their evolution, although it does not describe exactly what these adaptations are. 88 An example of such immunization was observed by Waite and Shou 12 when they engineered yeast to cross-feed and then propagated them in conditions where no other known mechanism of resistance to cheating could occur. As the populations containing cheaters and cooperators adapted, they acquired new mutations. Cooperators dominated some populations, driving cheaters extinct, because they acquired new beneficial mutations before the cheaters, giving them a fitness advantage. 12 The examples described above may seem to imply that there is only one mechanism by which any given trait might affect fitness in mutualism. In reality, a trait could affect fitness in mutualism by multiple mechanisms (Fig. 2), depending on the context in which the species are evolving. 89,90 For example, the wires and tubes produced by microorganisms that allow direct metabolite transfer between species are assumed to have been selected to increase the efficiency of chemical transfer (although neither the fitness effects nor the efficiency of transfer have been tested), but they could also have other effects. 13,51 For example, wires produced by one species could be used to force a partner genotype to interact with it, even though that genotype would have higher fitness with another partner. Alternatively, like aggregation or spatial structure, wires and tubes may help keep genotypes together for multiple generations, enhancing partner-fidelity feedback. 70 In addition, while it seems unlikely for wires and tubes, the possibility that a trait has no effect on fitness, or is a spandrel that is a by-product of the construction of another trait, must always be considered when studying adaptation. 90 Distinguishing between these possibilities requires measuring the effects of the trait on the fitness of both partners as well as testing what the trait can do.
Challenges affecting the study of adaptation to mutualism
Despite an abundance of theory and the existence of traits thought to be adaptations for mutualism, we are far from having a comprehensive understanding of how mutualism causes selection in various circumstances. Here, I highlight two problems with the current approach to understanding adaptation to mutualism and how they may be addressed in microbial mutualisms.
The first problem with the current approach is bias in the choice of traits tested as putative adaptations. Because of the complexity of organisms, it is difficult to examine every feature simultaneously, so biologists must make an educated guess about features likely to be under selection and those that are not. This process can result in bias about what kinds of adaptations are studied. For example, in recent years, there has been a nearly singular focus on how mutualists defend themselves from cheaters, with very little work addressing other mechanisms of adaptation that are outlined in Figure 2. The reasons for this are unclear. Perhaps there has been an underlying assumption that the most commonly studied mutualisms have been evolving for so long that both species have optimized resource production and transfer efficiency. This may be true, but recent research suggests that fitness can continue increasing for longer than 60,000 generations, even for a simple organism evolving on a single limiting resource in a constant abiotic environment with no other species to interact with. 91 Does one expect mutualists to have a constant optimum and to somehow reach it faster? Alternatively, perhaps researchers have assumed that cheating would provide the greatest fitness advantage in mutualism because the cheater pays no cost, and as a result cheating dominates mutualism evolution and should be the primary focus of studies. 92,93 Recent research has questioned the validity of this assumption for legumes and nitrogen-fixing symbionts. Although selection on cheating has been demonstrated in one community of these mutualists, a broader analysis of data on these interactions suggests that cheating is rare and not a major cause of selection. 89,94,95 The second problem is that empirical studies of adaptation to mutualism have not consistently linked what a trait can do with how and why it actually affects fitness. 78 As a result, the validity of the most prominent examples of cheating in mutualism has been called into question. 78 As Jones et al. 78 explain, cheaters do not simply provide less resource to a partner. They must be able to procure enough from their partner to maintain a high benefit/cost ratio (i.e., in Fig. 2, the triangle for resources received must remain large) at the expense of their partner's fitness. In addition, to affect the stability of the association, cheaters must arise from within the mutualism. 78 Some commonly described examples of cheating in mutualism, however, have fallen short of these criteria. Some have been descriptions of outside species that exploit a mutualism. In other cases, cheaters were identified solely by their inability to provide resources to a partner without the tests of fitness effects that would be critical for ruling out alternative explanations, such as the possibility that they are poor mutualists. 8,78,94,96 Research on microbial mutualisms can contribute to generating a broader, rigorously tested picture of the selection environment caused by mutualism in numerous ways. First, synthetic biology can be used to test specific hypotheses and assumptions about when selection on cheating or other adaptations is expected to be strongest. For example, mechanisms for excluding cheaters can be removed to see if cheaters are more likely to evolve and become prevalent than they would otherwise. 12 Alternatively, the possibility of cheating could be removed to see if defenses against it quickly erode.
These experiments could be performed with mutualisms of varying dependencies.
Second, evolution experiments with mutualisms can be combined with genome sequencing and genetic engineering to perform rigorous, open-ended tests of the effects of mutualism on adaptation. Genome sequencing provides an unbiased picture of all the mutations (potential adaptations) substituted during evolution. 97 The fitness effects of all or a random subset of these mutations could be tested by removing or moving each mutation into the unevolved ancestor and then comparing the fitness of both partners. 98,99 Such experiments may provide an estimate of which kinds of adaptations are most commonly selected: those that decrease the fitness of a partner or those that do not.
Coevolution of mutualism
In the previous section, I explained how a mutualist could adapt to its interaction by maximizing the ratio of benefits to costs. When one species adapts to mutualism, how does that affect its partner's evolution? One possibility is that adaptation in one species affects the fitness of its partner, but it does not change the relationship between its partner's phenotype and fitness. For example, a methanogen adapting to syntrophy may become more efficient at converting energy from the interaction into biomass, increasing its abundance and capacity to remove hydrogen, and thereby benefitting both partners' fitnesses. Such a change may affect all genotypes in the fermenter population equally regardless of their phenotype. Alternatively, some fermenter phenotypes may be better able to profit from faster growth of the methanogen, giving them a competitive advantage in the presence of the evolved methanogen, resulting in evolution in the fermenter population. In this second scenario, evolution of the methanogen changes the relationships between phenotype and fitness in the fermenter population, causing the fermenter population to evolve. If partners adapt to each other's adaptations repeatedly, then they are coevolving. 100 There are several lines of evidence suggesting mutualisms coevolve, but few definitive tests demonstrating it, especially in interactions between microorganisms. Perhaps the most convincing evidence for coevolution comes from the existence in ancient mutualisms of complex structures and behaviors in each species that seem to match each other. 9 There may be multiple alternative explanations for such observations, but phylogenies of yucca and yucca moths and of Mycorrhizae and plants have demonstrated reciprocal evolution of morphologies mediating the interaction of both species. [101][102][103][104][105] Other researchers have demonstrated the potential for ongoing coevolution. In the interaction between Rhizobia and legumes, G × G interactions were observed, where the fitness of a genotype in one species differs depending on the genotype of its interacting partner. [106][107][108] Coevolution between flies and flowers was suggested by a correlation between fly proboscis length and floral tube length, and further supported by experiments estimating the fitness effects of floral tube length in the presence of varying fly proboscis sizes. 109 Other scientists have claimed that coevolution was occurring based on codiversification or correlations between traits of interacting species, but this could result from two species adapting in parallel to similar changes in an environment. 9 In microorganisms, intergenomic epistasis and adaptation to a mutualist partner have been observed, but ongoing coevolution in mutualism has rarely been described. [10][11][12]110 This work suggests that coevolution can shape mutualisms and continue to affect their evolution, but it is far from providing a clear picture of when and how it will happen in diverse species. Achieving this will require more theoretical work focused on coevolution in mutualism, along with rigorous experiments that test those theories. Some of this can be achieved by using genomics to test for molecular dynamics consistent with coevolution in natural populations of mutualists. 111,112 In microorganisms, time-shift experiments can be used to test whether and how coevolution is occurring at the phenotypic level. 41,113
In a time-shift experiment, contemporary populations of one interacting species are tested against populations of their partners from the past and sometimes the future. 113 Such experiments have been used successfully to test whether and how host-parasite interactions have coevolved in both natural and laboratory-based populations. Host-parasite time-shift experiments have been performed with microorganisms and with macroorganisms that have dormant stages. [113][114][115][116][117] They have also been used to understand the coexistence of cross-feeders in experimental populations of Escherichia coli. 118,119 Apart from these latter experiments as well as my own research on mutualism, time-shift experiments have rarely been applied to mutualism. 11,41 Below, a series of literal toy models is used to describe three mechanisms for coevolution in mutualism, their effects on evolutionary dynamics, and how they can be distinguished from one another and from coadaptation through time-shift experiments.
Coevolution and natural selection in mutualism
If a mutualist has coadapted with its partner but its partner has not evolved in response, its adaptations will have a similar effect on the fitness of all of the partners it has encountered during its evolution in mutualism. Thus, the pattern observed in a time-shift experiment will be a flat line, as depicted in Figure 3C. If a mutualist coevolved, however, then its fitness will vary depending on which partners it is paired with from the past or future (Figs. 3-5).
The observed patterns will depend on the kind of selection that is being caused by coevolution, as described below.
Coevolution in the early stages of adaptation to mutualism. Populations may coevolve during the early stages of adaptation to a new mutualism, as each species is acquiring or repurposing traits that maximize fitness in the interaction. An example of this process is presented in the literal toy model that is depicted in Figure 3A. This mutualism consists of species A and B, which are each capable of consuming nutrients produced by their partner, similar to a cross-feeding interaction. The process of coevolution begins on the left side of the figure, where the species are capable of trading nutrients but have not yet adapted to this new interaction. Species A first adapts to the mutualism by acquiring a trait that allows it to make a rudimentary vehicle to transfer nutrients more quickly between partners. Now that this rudimentary vehicle exists, species B is able to acquire a new set of traits. The figure follows the evolution of this nascent association as each species acquires a new trait that might affect resource exchange via changes to the vehicle. During the process, the vehicle becomes increasingly complex as the species build upon one another's adaptations over time. This process is similar to escalating coevolution. A key feature of escalating coevolution is that new adaptations are substituted successively as each species responds to changes in the other, resulting in correlated selective sweeps across species (Fig. 3B).
This scenario is analogous to any mutualism where each species appears to have acquired multiple adaptations affecting the interaction and where these evolutionary changes seem to affect each other. It could represent sequential acquisition, repurposing, and optimization of the numerous genes in legumes and Rhizobia that allow the bacteria to fix nitrogen in nodules within a root instead of near the root's surface. 120 It could also apply to the evolutionary history of ancient associations, such as Mycorrhizal fungi and plants. 105,121 Fungi first became endophytes in response to the presence of root exudates, then plants evolved recognition mechanisms to exclude pathogens, followed by the evolution of specialized plant and fungal cells. 105 It could describe the initial stages of adaptation to a hypothetical cross-feeding interaction where one species might adapt by evolving the ability to make nanotubes. Subsequent evolutionary steps in such associations have yet to be documented, but a logical hypothesis might include changes in cell surface structures in the nanotube recipient, followed by changes to the shape or length of nanotubes made by the producer species. When escalating coevolution is occurring through the processes modeled in Figure 3, the results of a time-shift experiment will depend on the relationships between the newly evolved traits as well as the sign of their effects on fitness. Population B1 (Fig. 3) can form an increasingly better car when it is paired with populations A, A1, and A1,2 from the evolutionary trajectory of species A. Assuming each new adaptation by A has a positive effect on the fitness of B, fitness in those pairings is expected to increase with each vehicle improvement (Fig. 3). This pattern can change when population B1 is paired with partners farther into its future, such as population A6. This population has adapted to pieces produced by descendants of B1 that B1 itself has not yet evolved the capacity to produce. There are basically three potential effects of the pieces that genotype A6 produces. First, they may improve the utility of the vehicle despite the lack of supporting pieces from B, resulting in increasingly higher fitness for B1 in pairings with A populations that have more vehicle adaptations (Fig. 3D). Second, the pieces contributed by A6 may not be produced in the absence of the proper supporting pieces, so that the most advanced vehicle produced is B1/A1,2, regardless of which future A partner B1 is paired with. In this case, the fitness of B1 with future partners will be the same as it was with A1,2, making a straight line. Third, the extra pieces produced by A6 could ruin the function of the vehicle without the supporting pieces produced by future populations of B, causing the fitness of B1 to be lower with each partner that produces more pieces.
Ongoing coevolution in mutualistic interactions. Coevolving populations may substitute new variants in response to their partners' adaptations or cycle between multiple existing variants, or coevolution may cause continuous purging of variants with unusual phenotypes. 41,111,121 The toy model in Figure 4 depicts ongoing escalating selection. In this model, selection favors a tower that is taller than that of its interacting partner. Initially, the tower of species A is one brick high, while the tower of species B is two bricks tall. The two-brick tower of species B causes selection for a taller tower in species A, causing variant A1 to become common in the species A population.
Species B then responds by evolving a taller tower, and this evolution of increasingly taller towers in both species continues. This process results in multiple correlated selective sweeps in species A and B, as shown in Figure 4B. 113 In a time-shift experiment pairing genotype B1 with all the evolutionary intermediates of species A, the fitness of B1 would be highest when paired with A, and its fitness would decline as its height advantage decreases and becomes negative, as shown in Figure 4C. Escalating coevolution in mutualism can occur whenever more extreme phenotypes are favored. A commonly described example is the relationship between pollinators and flowers, where longer pollinator tongue lengths select for longer floral tube lengths, which select for longer tongue lengths, and so on until one or both species reach the maximum that is viable. 9 In cross-feeding mutualisms, escalating selection could hypothetically result from successive losses of genes for secreted metabolites, the successive rewiring of metabolic pathways in both species as they maximize production and consumption of cross-fed metabolites, or optimization of the size and composition of aggregates.
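The expected time-shift signature of this escalating process can be sketched in a few lines of Python. Payoff is encoded simply as the focal genotype's height advantage, which is my stand-in for relative fitness in this toy setting.

# Tower heights escalate by one brick per substitution episode (Fig. 4).
heights_A = [1 + g for g in range(8)]      # successive A intermediates
h_B1 = 3                                   # a fixed B genotype from the past
# Time-shift assay: score B1 against each A intermediate by its height
# advantage, a stand-in for relative fitness in this toy setting.
payoffs = [h_B1 - hA for hA in heights_A]
print(payoffs)  # [2, 1, 0, -1, ...]: the monotone decline of Fig. 4C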
Coevolution can also occur when the relative fitnesses of genotypes within a mutualist population are frequency dependent. Interactions between species can cause negative frequency-dependent fitness, where the fitness of a genotype is higher when it is rare. 41,111 This situation can cause fluctuations in the abundance of genotypes in one or both species populations, especially if there are constraints on the available phenotypes. Such dynamics have been observed in host-parasite interactions where the host immune system selects for rare parasite variants. 122 In mutualism, coevolution can cause fluctuating selection in two mutualist partners if there are conflicts between species about which variant has the largest positive impact on fitness. [123][124][125] In Figure 5A, for example, blue or green genotypes of the circle species (A1 and A2) have higher relative fitness when their square mutualist partner population is of the opposite color. The square species B, however, has the opposite preference. This lack of congruence between the fitness interests of each species results in fluctuating selection in both species (Fig. 5). 123 Fitness conflicts like this may explain the prevalence of poor-quality mutualists or cheaters. 96,112 For example, the blue and green circles (A1 and A2) could both be poor mutualists embedded within a population of high-quality mutualists (black circles, not shown). In this scenario, imagine the green square (B1) is able to avoid the green circle (A2) but not the blue circle (A1) (e.g., because of variation in molecular signals produced by A genotypes and recognized by B genotypes, as proposed in Ref. 96), so the blue circle (A1) has the advantage of acquiring more partners when the green square (B1) is common, but the green square has less access to high-quality partners than its competitor (B2), because it cannot exclude low-quality partners. The blue square is at a similar disadvantage when the green square is common.
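One way to encode these conflicting partner preferences is a discrete replicator model in which A genotypes gain from opposite-color partners and B genotypes from same-color partners; the fitness functions and the strength parameter s below are illustrative assumptions, not quantities from the cited studies.

def step(p, q, s=0.5):
    # One generation of discrete replicator dynamics. p and q are the
    # frequencies of the blue variant in species A and B; A genotypes do
    # better with opposite-color partners, B genotypes with same-color.
    wa_blue, wa_green = 1 + s * (1 - q), 1 + s * q
    wb_blue, wb_green = 1 + s * p, 1 + s * (1 - p)
    p_new = p * wa_blue / (p * wa_blue + (1 - p) * wa_green)
    q_new = q * wb_blue / (q * wb_blue + (1 - q) * wb_green)
    return p_new, q_new

p, q = 0.6, 0.4
for gen in range(41):
    if gen % 10 == 0:
        print(gen, round(p, 3), round(q, 3))
    p, q = step(p, q)
# The two frequencies chase each other in cycles rather than settling,
# the fluctuating-selection signature sketched in Figure 5.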
In microbial mutualisms, asymmetry in partner preferences could hypothetically arise for a few reasons. In syntrophy, some partners could have higher fitness with a soluble electron carrier like formate than with hydrogen, while it may be more optimal for the fermenting species to produce hydrogen. 66,126 In cross-feeding mutualisms, the optimal rate at which a producer secretes a metabolite may not match what is optimal for a recipient, and vice versa. If genetic variation exists within populations for these features, then asymmetry in partner preferences could result. Physical interactions between microbial mutualists could also cause asymmetries in partner preference. Perhaps the optimal aggregate size or nanowire length or width for variants of one species is not optimal for that which is produced by the other. There could also be variation in the shape of molecules enabling attachment of appendages or cells, and the variant that attaches best may not be the variant that produces metabolite the best.
These descriptions of how coevolution can affect selection are simplified in two respects. First, with the exception of the frequency-dependent selection example, my descriptions have all been based on the assumption that evolution of one species has a positive effect on the other. However, mutualists may also adapt in such a way that they decrease their partner's fitness. This situation could result in evolution of defenses in the partner and coevolution that resembles that observed in host-parasite interactions, causing escalating selection or fluctuating selection that can be detected with time-shift experiments. 41,113 Time-shift experiment results for coevolution resembling the vehicle toy interaction described in Figure 3 could be especially complicated if some adaptations have the effect of decreasing a partner's fitness while others improve it. Second, my previous explanations are based on simple pairwise interactions, but several common features of natural communities are likely to affect these processes. Mutualistic interactions may commonly occur between guilds composed of functionally similar species, especially in microbial communities. 6,[127][128][129] It is theoretically possible for a species to coevolve with a guild, but the effects of coevolution may vary considerably among species within a guild. 127 In addition, coevolutionary outcomes can be altered by predators, competitors, or mimics. 85,[130][131][132][133] Another reason coevolution may be more complicated is that mutualists likely also adapt to the abiotic environment or interactions with other species, and this may cause selective sweeps (Fig. 3B, black lines) that interrupt escalating selection. 79,121,134 Finally, the relative abundance of these third parties and abiotic variables affecting the ecological success of mutualists often vary geographically, resulting in varying interactions and evolutionary outcomes across a species range. 135
Conclusions and future directions
Despite decades of progress on our understanding of mutualisms and how they evolve, we are still far from a comprehensive understanding of the impact of mutualism on adaptation and coevolution. Biologists have identified adaptations to mutualism that seem to fit theoretical expectations, but more must be identified and tested appropriately before the field can move from describing how mutualists can adapt to explaining how they will. In addition, while several models of coevolution in mutualism have been described, there are few examples where such models have been rigorously tested.
Throughout this review, I have described how microbial mutualisms could be used to address these gaps between theory and empirical research by providing in vivo representations of models. 136 I suggested manipulating the possibilities for cheating or defenses against it in order to learn how each affects adaptation to mutualism. In addition, genome sequencing of evolved mutualists may provide an unbiased list of putative adaptations to mutualism, allowing researchers to gain a more comprehensive view of how mutualists adapt. I also suggested the use of time-shift experiments to test whether and how coevolution affects the evolution of mutualists. These suggestions are just a sampling of what researchers could do with microbial mutualisms to increase the breadth of our understanding of how mutualisms evolve. Realizing the full potential of microbial mutualisms as tools requires more scientists to use them. Those already studying microbes must also increase their focus on testing broader theories about mutualism.
The use of microorganisms as tools, however, is not the only way in which they can benefit the field as a whole. Phenomena occurring within microbial communities have already inspired new theories about how mutual dependencies evolve through the loss of traits, theories that may apply to other organisms. 6,60 As improvements in technology help microbiologists to gain a better view of the inner workings of microbial communities, who knows what will be discovered and how it could shape our understanding of evolution in mutualism. | 2018-04-03T04:10:51.138Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "83c4e8059b5f96266228904b2d84d8e99b76f39d",
"oa_license": "CCBYNC",
"oa_url": "https://nyaspubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/nyas.13515",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "83c4e8059b5f96266228904b2d84d8e99b76f39d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
210839273 | pes2o/s2orc | v3-fos-license | Thin-film transistor electrical performance of hybrid MoS2–P3HT semiconductor layers
The hole carrier field-effect mobility of thin-film transistors (TFTs) with hybrid molybdenum disulfide (MoS2) nanoparticles suspended in poly(3-hexylthiophene) (P3HT) was found to be enhanced by nearly 10× compared to P3HT-only TFTs. The improvement in hole charge transport was found to be a function of the concentration of MoS2 in P3HT, with high MoS2 concentrations resulting in an increase in the on-current of the device. Both the hybrid and conventional polymeric TFTs exhibited a threshold voltage of 2 V and an on-off ratio of > 10.
I. INTRODUCTION
Organic semiconductors have been widely used for solution-based fabrication such as spin-coating and printing due to their good solubility in conventional solvents. The benefits of solution processing are its cost-effective, large-area, non-vacuum, low-temperature fabrication and, particularly, its effectiveness in additive processes [1], [2]. Solution-processed electronics include sensors, displays, and photovoltaics [3]-[8]. Meanwhile, the thin-film transistor (TFT) is a fundamental component of electronics. The field-effect mobility and the threshold voltage of a TFT are the most important factors in assessing device performance [9]. However, when it comes to channel materials for organic TFTs (OTFTs), organic materials have inherent limitations compared to inorganic materials, such as low field-effect mobility [10].
A novel approach to overcoming this limitation is an organic/inorganic hybrid semiconductor composite, which we expect to use as an ink for printed OTFTs. Through this approach, the limitations of both material classes, such as the low field-effect mobility of organic materials and the poor solution processability of inorganic materials, can be overcome. Previous works blending organic semiconductors with various inorganic materials such as carbon nanotubes [11], [12], zinc oxide (ZnO) nanorods [13], titania (TiO2) nanorods [14], and graphene [15], [16] have been reported. However, the main challenges of these works are a low on/off ratio of less than three orders of magnitude [12], [13], [15] and a large threshold voltage shift of over ~20 V [11], [12], [14]. To address these problems, we suggest a new solution-processable composite of molybdenum disulfide (MoS2) suspended in poly(3-hexylthiophene) (P3HT). The nanocomposite solution was used as the active channel layer to fabricate a bottom-gate TFT by spin-coating (Fig. 1). The dependence of the device performance on different concentrations of MoS2 is presented.
II. MATERIALS
P3HT is a well-known p-type polymeric semiconductor, widely studied over the last few decades and used in OTFTs since it is quite stable in ambient conditions and has high mobility [17], [18]. Fig. 2a shows the molecular structure of P3HT. Its long intermolecular side chains form inter-chain interactions, yielding a highly ordered pi-stacked polymer, as shown in Fig. 2b. Strong pi-stacking interactions are directly associated with the crystallization of P3HT, which contributes to efficient charge transport. Organic semiconductors follow a hopping transport mechanism due to the high density of impurities and traps, which are known as localized sites [19]. To increase charge transport, the polymer should have high molecular ordering. This can be achieved by a surface-treatment method using a self-assembled monolayer (SAM) such as hexamethyldisilazane (HMDS). The wettability of the substrate is reduced after HMDS treatment, which affects the molecular ordering of polymers at the dielectric/polymer interface [20], [21]. The interface between the insulator and the organic semiconductor is known to have a high density of trap states. The low surface energy of an HMDS-treated dielectric surface enables the polymer to crystallize better, which results in fewer interface states [19].
MoS2, one of the transition metal dichalcogenide (TMDC) materials, has attracted great attention due to its various superior electrical and optical characteristics. Distinct characteristics of MoS2 include high field-effect mobility in its two-dimensional state, good flexibility, transparency, and high air stability thanks to its atomically thin layers. There have been many attempts to use it in a solution process by exfoliating the bulk material into thin nanoflakes. Layered MoS2 is coupled via weak van der Waals forces between layers, and the layers can be separated by exfoliation, as shown in Fig. 2c. In this work, MoS2 is present as nanoparticles in the P3HT channel layer to improve the overall electrical characteristics of OTFTs. MoS2 in the channel layer effectively helps the charge transport of the P3HT organic film.
The hole field-effect mobility of each device in the saturation regime was calculated by the gradual-channel approximation, I_D = (W µ C_ox / 2L)(V_G − V_T)², where C_ox and µ are the gate oxide capacitance and field-effect mobility, respectively, W and L are the channel width and length, and V_G and V_T are the gate voltage and threshold voltage, respectively. The threshold voltage, V_T, was determined using a linear fit to the square root of drain current versus gate voltage. These calculated values are summarized in Table 1. The field-effect mobility also continuously increases with increasing concentration of MoS2. The field-effect mobility of the MoS2–P3HT nanocomposite TFT (P3HT/1.0 wt% MoS2) is 1.43×10⁻² cm²/V·s, which is five times higher than that of the baseline device; the mobility of the P3HT film in our baseline TFT is 2.60×10⁻³ cm²/V·s. These measurement results show that the fabricated TFT has a smaller threshold-voltage variation and a better on/off ratio compared with previous studies [11]-[15]. According to a previous study, the reported hole mobility in monolayer sheets of MoS2 was 96.62 cm²/V·s [22]. Hence, we expect that, due to the much higher hole field-effect mobility of MoS2 compared to the P3HT film, MoS2 works as a high-transport region within the organic film. From the energy band diagram of the device shown in Fig. 5, both the valence band edge of MoS2 and the HOMO level of P3HT can be found around 5.2 eV. Moreover, Au has a high work function of 5.1 eV, which is well matched to the HOMO level of P3HT. We find that MoS2 and Au have the proper energy levels for hole transport and injection. Furthermore, in the linear regime, the drain current exhibits a linear I-V characteristic, indicating that these devices are unaffected by the source/drain contact resistance. In the saturation regime, it was observed that the nanocomposite TFT has a higher on-state current than the P3HT-only TFT. These results provide experimental evidence that MoS2 enhances the charge-transport characteristics of the organic TFT.
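For readers reproducing this analysis, the extraction procedure described above reduces to a linear fit of √I_D versus V_G in saturation. The sketch below uses synthetic data and assumed device parameters (W, L, and Cox are placeholders, not the values of the devices reported here).

import numpy as np

# Synthetic saturation-regime transfer data; replace with measured arrays.
VG = np.linspace(-10.0, -40.0, 16)       # gate voltage (V), p-type sweep
ID = 1e-9 * (VG + 5.0) ** 2              # drain current (A), toy data
W, L, Cox = 1000e-4, 50e-4, 1.15e-8      # cm, cm, F/cm^2 (placeholders)

sqrtID = np.sqrt(np.abs(ID))
m, b = np.polyfit(VG, sqrtID, 1)         # linear fit of sqrt(ID) vs VG
VT = -b / m                              # threshold from the x-intercept
mu = 2.0 * L / (W * Cox) * m ** 2        # saturation mobility, cm^2/V-s
print(VT, mu)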
Meanwhile, the field-effect mobility does not increase proportionally with the concentration of MoS2 when comparing different MoS2 concentrations suspended in P3HT. We speculate that the presence of inorganic nanoparticles may hinder the molecular ordering of the P3HT film; finding the optimum concentration of MoS2 in the composite film will be the subject of future studies.
IV. CONCLUSIONS
MoS2 nanoparticles were introduced into P3HT TFTs. The effect of various concentrations of MoS2 in the organic TFT active layer was investigated. The I-V characteristics showed that the saturation field-effect mobility of the TFT increases with the concentration of MoS2, while the on/off ratio stays at the same order of magnitude. The hybrid MoS2/P3HT films have higher field-effect mobility than P3HT films alone. We expect that the MoS2 suspension helps the alignment of long-range-ordered P3HT. Furthermore, the MoS2 nanoparticles work as conducting bridges in the channel layer to enhance charge transport. As a solution-based approach, our hybrid ink can be applied to realize enhanced OTFTs for printed electronics. | 2020-01-22T02:01:13.106Z | 2020-01-21T00:00:00.000 | {
"year": 2020,
"sha1": "941ccf21469ad5e23666b5219ff0d45fbb5e0b43",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3079f92914c1c1da410a62f6cb118e3f1ecaad0c",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
268335068 | pes2o/s2orc | v3-fos-license | Complex analysis of the national Hereditary angioedema cohort in Slovakia – Identification of 12 novel variants in SERPING1 gene
Background Hereditary angioedema (HAE) is a rare autosomal dominant genetic disease characterised by acute episodes of non-pruritic skin and submucosal swelling caused by an increase in vascular permeability. Objective Here we present the first complex analysis of the national Slovakian HAE cohort, with the detection of 12 previously unpublished genetic variants in the SERPING1 gene. Methods In patients diagnosed with hereditary angioedema caused by deficiency or dysfunction of C1 inhibitor (C1-INH-HAE) based on clinical manifestation and complement measurements, the SERPING1 gene was tested by DNA sequencing (Sanger sequencing/massive parallel sequencing) and/or multiplex ligation-dependent probe amplification for the detection of large rearrangements. Results The Slovakian national cohort consisted of 132 living patients with confirmed HAE. We identified 51 index cases (32 families, 19 sporadic patients; 112 adults, 20 children). One hundred seventeen patients had HAE caused by deficiency of C1 inhibitor (C1-INH-HAE-1) and 15 patients had HAE caused by dysfunction of C1 inhibitor (C1-INH-HAE-2). The prevalence of HAE in Slovakia has recently been calculated to be 1:41 280, which is higher than the average calculated prevalence. The estimated incidence was 1:1 360 000. Molecular-genetic testing of the SERPING1 gene found 22 unique causal variants in 26 index cases, including 12 previously undescribed and unreported. Conclusion This first complex report on the epidemiology and genetics of the Slovakian national HAE cohort expands the knowledge of C1-INH-HAE genetics. Twelve novel causal variants were present in half of the index cases. A higher percentage of inframe variants compared to other studies was observed. A heterozygous deletion of exon 3 found in a large C1-INH-HAE-1 family probably causes a dysregulation of the splicing-isoform balance and leads to a decrease of the full-length C1-INH level.
INTRODUCTION
Hereditary angioedema (HAE, OMIM 106100) is a rare autosomal dominant genetic disease characterised by acute episodes of non-pruritic skin and submucosal swelling caused by a transient increase in vascular permeability. HAE can be caused by deficiency or dysfunction of C1 inhibitor (C1-INH-HAE-1 and C1-INH-HAE-2, respectively) due to genetic defects in the SERPING1 gene. 1 The prevalence of C1-INH-HAE varies from 1:50 000 to 1:100 000. 2 C1 inhibitor (C1-INH), a protease inhibitor, belongs to the serpins and is responsible for the inhibition of the complement system. 3 In the state of C1-INH dysfunction, the kallikrein-kinin system is overactivated and produces large amounts of bradykinin, which, by binding to bradykinin B2 receptors, increases the vascular permeability. 4 C1-INH is encoded by SERPING1, and defects in this gene may lead to misfolded or truncated C1-INH, or to mRNA degradation by nonsense-mediated decay (NMD), preventing protein formation altogether. Haploinsufficiency is a common feature in C1-INH-HAE, and, additionally, a variant product may inactivate the wild-type allele in a dominant-negative manner. 5 According to Drouet et al, 809 pathogenic or likely pathogenic variants have been identified in the SERPING1 gene, affecting 1494 families; 55.6% of causal variants originate de novo. Missense variants account for 32.2% of all variants, and small deletions/duplications/insertions with subsequent frameshift for 36.2%. 5 C1-INH-HAE-2 occurs due to missense variants in exon 8 of the SERPING1 gene, which affect the reactive loop (the active site of the molecule) and reduce the inhibitory effect of C1-INH on target proteins. 1 In the study of the Czech national HAE cohort (a neighbouring country of Slovakia), missense variants were carried by 35.3% of probands, splicing variants by 22.4%, frameshift variants by 18.8%, gross deletions by 16.5%, and nonsense variants by 4.7%; causative or probably causative variants were detected in 206 out of 207 probands (56 unique pathogenic or likely pathogenic sequence variants were found). 6 In the study of Hungarian HAE patients (Hungary being another neighbouring country of Slovakia), missense variants were found in 30.1%, large deletions or duplications in 20.6%, frameshift variants in 19.1%, nonsense variants in 17.6%, and splicing variants in 11.8% of the index cases. 7 According to the currently valid diagnostic criteria, genetic confirmation in SERPING1 is not necessary for the final establishment of the C1-INH-HAE diagnosis. However, genetic analysis can significantly contribute to confirming the diagnosis, especially in cases with unclear or conflicting clinical and laboratory results. 8
HAE has also been found to be caused by different mechanisms due to defects in other genes - factor XII (F12), 9 angiopoietin-1 (ANGPT1), 10 plasminogen (PLG), 11 kininogen (KNG1), 12 myoferlin (MYOF), 13 and heparan sulfate-glucosamine 3-O-sulfotransferase 6 (HS3ST6) 14 - in a small proportion of patients (13.2%) 5 with normal concentration and function of C1-INH (HAE with normal C1-INH).
Genomic DNA for genetic testing was extracted from EDTA-anticoagulated whole blood. Sequencing of the coding region (exons 2-8) and exon-intron boundaries of SERPING1 (NM_000062.3) was performed using standard Sanger sequencing protocols or massive parallel sequencing (MPS) with a minimum of 20x depth of coverage (Clinical Exome Solution Kit, Sophia Genetics; Illumina NextSeq 550, San Diego, California, USA) with in silico copy number variation analysis. Data were analyzed using Sophia DDM analysis software. Sanger sequencing was used for confirmation of variants identified by MPS. Primer sequences and polymerase chain reaction (PCR) conditions are available on request. Multiplex ligation-dependent probe amplification (MLPA) was performed after a negative sequencing result using the SALSA MLPA P243 SERPING1 Kit (MRC Holland, The Netherlands) in order to search for large deletions or duplications. When the disease-causing variant of a family was identified, DNA from family members was investigated by direct sequencing of the region (exon) carrying the variant or by MLPA as relevant. The nomenclature of identified variants follows Human Genome Variation Society (HGVS) recommendations. 15 Coding DNA nucleotide numbering and protein sequence numbering refer to GenBank reference sequences NM_000062.3 and NP_000053.2. Interpretation of identified variants was based on the criteria established by the American College of Medical Genetics and Genomics (ACMG) 16 using the Varsome database, 17 the InterVar tool 18 and the Uniprot database. 19
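As a side note on the nomenclature, the toy sketch below shows one way to mechanically sort the simple HGVS cDNA descriptions that appear in this report into coarse variant classes; it is an illustration only, not the pipeline used for the cohort, and real HGVS syntax covers far more cases than these three patterns.

```python
import re

# Minimal patterns for the HGVS cDNA forms quoted in the text,
# e.g. c.1346T>A (substitution), c.954del / c.82_95del (deletion),
# c.1331dup / c.1189_1191dup (duplication).
PATTERNS = [
    (re.compile(r"^c\.(\d+)([ACGT])>([ACGT])$"), "substitution"),
    (re.compile(r"^c\.(\d+)(?:_(\d+))?del$"), "deletion"),
    (re.compile(r"^c\.(\d+)(?:_(\d+))?dup$"), "duplication"),
]

def classify(hgvs: str) -> str:
    for pattern, kind in PATTERNS:
        if pattern.match(hgvs):
            return kind
    return "unrecognised"

for v in ["c.1346T>A", "c.954del", "c.1331dup", "c.1189_1191dup", "c.82_95del"]:
    print(v, "->", classify(v))
```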
RESULTS
The Slovakian national C1-INH-HAE cohort consisted of 132 living patients (56 males and 76 females) with confirmed HAE. We identified 51 index cases (32 families and 19 sporadic patients; 112 adults and 20 children). Among these, 117 patients were clinically and laboratory confirmed to have C1-INH-HAE-1, while 15 patients had C1-INH-HAE-2. Deceased patients (n = 21) were excluded from the study; 2 of them had experienced complications of HAE leading to death (laryngeal oedema), and a further 15 had died due to asphyxia before the clinical diagnosis of HAE. None of the patients died during the follow-up by the National Centre for Hereditary Angioedema at the University Teaching Hospital in Martin.
Molecular-genetic testing of the SERPING1 gene found 22 unique causal variants in 26 index cases, including 12 previously undescribed variants. Novel causal variants were present in 50% of the index cases (n = 13) from our cohort and represent 54.6% of the identified variants. In 44 patients, molecular-genetic testing was not performed due to their low compliance with care and clinical management. We identified causal variants in 72 of the 88 patients tested (81.8%). Patients with a negative family history represented 30.1% of index cases. All variants were detected in a heterozygous state.
Frameshift and missense variants were equally the most common in index cases (both n = 9, 34.6%), followed by gross deletions (n = 3, 11.5%) (Fig. 1). All frameshift variants led to the incorporation of a termination codon. We did not find any typical nonsense variants. Viewed another way, small deletions/duplications were present in 46.1% of cases.
Exon 8 was affected in one-third of all cases (n = 9, 34.6%). Exons 3 and 7 were the second most affected (n = 6, 23.1%) (Fig. 2). The localization of the variants identified in Slovakian HAE patients in the SERPING1 gene is shown in Fig. 3.
Table 1 presents the variants identified in the SERPING1 gene within the Slovakian HAE cohort. Whole-gene deletion was associated with a severe course of disease; the affected patients had an early onset of symptoms. We report a novel gross deletion (deletion of exon 3) in a large family with C1-INH-HAE-1, which is evaluated as pathogenic according to the ACMG criteria (PVS1, PM2, PP1, PP4). Patients in this family had an early onset of symptoms (from 2 to 8 years of age) and a severe course of disease (10-25 attacks per year), with angioedemas of the extremities, face, and larynx, and abdominal symptoms.
Five of the novel variants introduce premature stop codons by creating frameshifts due to a multiple-nucleotide deletion (c.82_95del), single-nucleotide deletions (c.954del, c.1038del, c.1127del), or a single-nucleotide duplication (c.1331dup). These variants are predicted to lead to degradation by nonsense-mediated mRNA decay (NMD), leaving no transcripts for protein production.
The inframe duplication (c.1189_1191dup) was associated with onset of symptoms in early adulthood (from 17 to 20 years of age) and a severe course of disease (20-30 attacks per year) with abdominal, orofacial, and laryngeal oedemas.
We report 3 novel missense variants. The novel missense variant c.1346T > A (p.Leu449Gln) was found in a large family with a clinical diagnosis of C1-INH-HAE-1 (Fig. 4). Patients had an onset of symptoms between 2 and 26 years of age; the male patient had only 1 attack per year, while female patients had from 20 to 51 attacks per year (the highest number in the whole cohort). Two other pathogenic missense variants at this nucleotide position have already been reported: c.1346T > C 22,26 and c.1346T > G. 22,26,41,48 The amino acid (p.Leu449) is part of the serpin domain (Fig. 3) and forms a beta strand (the breach region of the protein), suggesting structural damage of the SERPING1 protein by the amino acid change caused by these missense variants. Leucine, which has a hydrophobic side chain, is changed to glutamine, which has a polar uncharged side chain, due to c.1346T > A. The variant is evaluated as likely pathogenic according to the ACMG criteria (PM1, PM2, PP1, PP3, PP4). We evaluate this variant as causal considering the segregation analysis in the affected family, the ACMG criteria, the presence of 2 pathogenic missense variants at the same nucleotide position, and an amino acid change that probably affects protein function.
The novel missense variant c.416A > G was found in a sporadic female patient with a diagnosis of C1-INH-HAE-1. The patient has been symptomatic from 32 years of age (her current age is 44 years). She experienced abdominal symptoms (abdominal pain, diarrhoea) with a frequency of 2 attacks per year and showed low (nearly zero) C1-INH concentration and function. The affected codon encodes p.Glu139, an amino acid with a negatively charged side chain. This amino acid is part of the serpin domain (Fig. 3) and forms one of the helical structures. The variant is evaluated as likely pathogenic according to the ACMG criteria (PM1, PM2, PM6, PP4). We suppose that the change to the neutral and compact amino acid glycine affects the protein structure. We evaluate variant c.416A > G as causal considering the ACMG criteria and an amino acid change that probably affects protein function due to defective formation of the helical structure.
The novel missense variant c.517A > C was found in a family with a clinical diagnosis of C1-INH-HAE-1 (Fig. 4). It causes the amino acid change p.Ser173Arg; this residue is part of the serpin domain (Fig. 3) and forms a helical structure. The LOVD and ClinVar databases contain the variant c.518G > A, which is located in the same codon as c.517A > C, causes the protein change p.Ser173Asn, and is not considered to affect protein function (VUS). Serine and asparagine both have polar uncharged side chains; arginine, on the other hand, has a positively charged side chain. The variant is evaluated as likely pathogenic according to the ACMG criteria (PM1, PM2, PP1, PP3, PP4). We evaluate variant c.517A > C as causal considering the segregation analysis in the affected family and an amino acid change that probably affects protein function due to defective formation of the helical structure.
We identified 2 variants (c.1397G > A, c.5C > T) in 1 patient with C1-INH-HAE-1. Variant c.5C > T, p.Ala2Val, 39,49 is classified as a variant of uncertain significance according to the ACMG criteria (high allele frequency in the European [non-Finnish] population, 0.00127, and in the South Asian population, 0.00358; ClinVar interpretation: 3x likely benign, 1x uncertain significance). We also identified the pathogenic variant c.1397G > A, which fully explains the clinical manifestation. Although segregation analysis of c.5C > T in the family was not possible, we suppose this variant has no impact on the clinical and laboratory manifestation in our patient (the variant was therefore not included in Table 1).
DISCUSSION
The Slovakian national cohort consisted of 132 living patients with confirmed HAE. We identified 51 index cases (32 families and 19 sporadic patients). One hundred seventeen patients had clinically and laboratory confirmed C1-INH-HAE-1 and 15 patients C1-INH-HAE-2. The prevalence of C1-INH-HAE in the Slovak Republic according to this study is currently 1:41 280 and the incidence 1:1 360 000 (total population: 5 449 270 according to the 2021 population census). The prevalence is higher in comparison to data from other European countries: Sweden - 1:66 000, 50 Italy - 1:65 000, 51 Denmark - 1:70 900, 20 Greece - 1:90 000, 52 Spain - 1:91 700, 53 and the Czech Republic - 1:52 307, 6 and higher than the average calculated prevalence of 1:50 000. 2 Two patients from our cohort died due to complications of HAE (laryngeal oedema). Both patients had low compliance with care and clinical management. When reviewing family history with the patients from our cohort, we found 15 family members who had experienced asphyxia leading to death before the clinical diagnosis of HAE. These individuals had clinical manifestations of HAE (tissue swelling, oedema), and we consider these events complications of HAE (laryngeal oedema).
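For transparency, the headline prevalence can be reproduced directly from the cohort figures quoted above (a minimal sketch; the incidence denominator depends on the observation window, which is why it is not recomputed here):

```python
# 132 living confirmed patients against the 2021 census population.
population = 5_449_270
patients = 132

prevalence_denominator = population / patients
print(f"prevalence ~ 1:{prevalence_denominator:,.0f}")
# -> prevalence ~ 1:41,282, consistent with the reported 1:41 280 (rounded)
```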
The distribution of variant types in our cohort differs from that of neighbouring countries. 6,7 We found a higher proportion of frameshift and inframe variants. The proportion of missense variants and gross deletions is similar to the distribution according to the LOVD database. 5 We found a lower proportion of splicing variants (7.7%). We assume that examination with an RNA-based approach could help identify a causal splicing variant in some of the molecularly undiagnosed cases (16 HAE patients) in our cohort. The large HAE family with the exon 3 deletion is another interesting case to discuss. Exon 3 is involved in the formation of all 3 SERPING1 domains (Fig. 3). SERPING1 is a naturally alternatively spliced gene. Exon 3 skipping in hepatic cells and monocytes of healthy humans has already been reported in the literature, with a presence in approximately one-third of all transcripts. 45,54 The proportion of full-length and exon 3-skipped splicing isoforms probably has an impact on the overall C1-INH level. 54 We suppose that this variant identified in our cohort causes a dysregulation of the splicing-isoform balance, which leads to a decrease of the full-length protein level and the development of a severe course of the disease.
Variants that cause the premature introduction of a stop codon or NMD, as well as gross deletions, are assumed to be pathogenic and causal for developing HAE. In our cohort, these variants were present in 53.8% of cases. Determination of causality is more problematic for missense variants. The position of the variant is an important criterion: modification of the peptide sequence within the serpin domain has a great impact on C1-INH dysfunction. 5 Interestingly, 3 out of 4 missense variants causing C1-INH-HAE-1 detected in our cohort were novel. All our novel missense variants are rated as likely pathogenic according to the ACMG criteria, considering family history and segregation analysis.
A similar difficulty in determining causality is characteristic of inframe variants. 6,7 Interpretation is challenging due to the limitations of in silico prediction tools and of evaluating the impact on protein structure. All our novel inframe variants are rated as likely pathogenic according to the ACMG criteria, considering family history and segregation analysis.
CONCLUSIONS
Mutational heterogeneity of the SERPING1 gene with a high proportion of de novo variants has been observed in many countries, as well as in the Slovak Republic. The 22 unique causal variants, including 12 previously undescribed, expand the knowledge of C1-INH-HAE genetics. Novel variants were present in half of the index cases.
A higher percentage of inframe variants compared to other studies was observed. Three out of 4 missense variants causing C1-INH-HAE-1 detected in our cohort were novel. We are the first to report a heterozygous deletion of exon 3 in a large C1-INH-HAE-1 family with a severe disease course, which probably causes a dysregulation of the splicing-isoform balance and leads to a decrease of the full-length C1-INH level. The identification of 12 previously unreported variants in the SERPING1 gene could contribute to the current genetic databases, enlarge the understanding of the genetic background of C1-INH-HAE, and help in the diagnostic approach for patients with suspected HAE.
METHODS
Patients were recruited from a national survey of HAE in the Slovak Republic with the diagnosis of C1-INH-HAE. Most of the patients are followed by the National Centre for Hereditary Angioedema at the University Teaching Hospital in Martin (Slovakia) and were referred after a national online survey by general practitioners, immunologists, dermatologists, or individuals who directly contacted the centre for evaluation, treatment, and genetic testing. The diagnosis of C1-INH-HAE was established according to international consensus guidelines, based on clinical symptoms and serum levels of functional and antigenic C1-INH. 8 Patients were diagnosed as C1-INH-HAE-1 when both functional and antigenic C1-INH were <50% of normal values, and as C1-INH-HAE-2 when functional C1-INH was <50% and antigenic C1-INH was >50% of normal values. Informed written consent to mutational analysis from all patients was archived.
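The diagnostic rule stated above can be written compactly as follows; the thresholds come from the text, while the handling of values exactly at 50% is an assumption of this sketch.

```python
# C1-INH-HAE-1: functional and antigenic C1-INH both <50% of normal;
# C1-INH-HAE-2: functional <50% with antigenic >50% of normal.
def classify_c1_inh_hae(functional_pct: float, antigenic_pct: float) -> str:
    if functional_pct < 50 and antigenic_pct < 50:
        return "C1-INH-HAE-1"
    if functional_pct < 50 and antigenic_pct > 50:
        return "C1-INH-HAE-2"
    return "not C1-INH-HAE by these criteria"

print(classify_c1_inh_hae(10, 8))    # -> C1-INH-HAE-1
print(classify_c1_inh_hae(20, 95))   # -> C1-INH-HAE-2
```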
Fig. 1
Distribution of causal variants in the SERPING1 gene according to variant type. Frameshift and missense variants were the most common in index cases, followed by gross deletions. All frameshift variants led to the incorporation of a termination codon. We did not find any typical nonsense variants.
Fig. 2
Distribution of causal variants in the SERPING1 gene according to affected exons. Exon 8 was affected in one-third of all cases. Exons 3 and 7 were the second most affected, followed by exon 6 and intronic variants.
Fig. 3
Localization of the variants identified in Slovakian HAE patients in the SERPING1 gene. The upper part of the figure shows the exons (colourful boxes) and introns (white boxes) of the SERPING1 gene with the variants identified in our cohort marked; the lower part represents the corresponding domains of the C1 inhibitor protein. UTR - untranslated region.
Table 1.
Causal variants identified in the SERPING1 gene in the Slovakian HAE cohort. The index case represents the source patient in whom the origin of the causal variant was observed (a sporadic patient or the first documented and genetically confirmed patient in the family). Cells in the Effect on protein column are empty for splicing variants and gross deletions because of the nature of the variant effect. The resources are indicated in the Reference column. The Related patients column contains the number of affected members in the family with a description of their relations or a reference to Fig. 4, where pedigrees are shown. The note "without genetic confirmation" indicates HAE association based only on typical clinical manifestation; the cell in this column is empty for variants occurring in sporadic patients. Missense variants were evaluated by the in silico tool Combined Annotation Dependent Depletion (CADD). Other variant types were not evaluated by CADD, hence the empty cells in the CADD column. Variant c.1371_1373del was not published in the literature but was presented among the European Society for Immunodeficiencies 2014 oral presentations - marked with an asterisk (*). Abbreviations: ACMG - American College of Medical Genetics and Genomics; CADD - Combined Annotation Dependent Depletion; C1-INH-HAE-1 - hereditary angioedema caused by deficiency of C1 inhibitor; C1-INH-HAE-2 - hereditary angioedema caused by dysfunction of C1 inhibitor; HAE - hereditary angioedema.
Fig. 4
Pedigrees of large HAE families with specific causal variants in the SERPING1 gene. HAE-affected family members carrying the causal variant are shown in black, and healthy individuals are depicted by a blank symbol. Deceased family members are shown with a sloping line through the symbol. Divorce is shown as a horizontal line connecting 2 symbols with 2 diagonal hash marks. Individuals with only clinical symptoms indicating HAE who were not genetically tested are marked by an asterisk (*). Variant c.1220del was found in 2 large families, which are distinguished by the labels 1) and 2). | 2024-03-12T15:38:55.964Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "0f1dc8590a40b6b940ad1a566be43428ed4e7f98",
"oa_license": "CCBYNCND",
"oa_url": "http://www.worldallergyorganizationjournal.org/article/S1939455124000164/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "30fa7ab8ba82c5ad859af08986430e2dcacb8874",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13705671 | pes2o/s2orc | v3-fos-license | Suspended manufacture of biological structures
A method for the production of complex cell-laden structures is reported, which allows high levels of spatial control over mechanical and chemical properties. The potential of this method for producing complicated tissues is demonstrated by manufacturing a complex hard/soft tissue interface and showing that cell phenotype can be maintained over four weeks of culture.
DOI: 10.1002/adma.201605594
In this study, we describe a novel method of suspended manufacture for the production of complex soft structures of closely defined morphology, mechanical properties, and chemistry.
The process conditions are sufficiently mild that embedded populations of cells maintain high levels of viability and retain their phenotype. Given the simplicity of the process, it can be used with all existing gel materials without special modification. The method of manufacturing uses a "bed" of micrometer-sized gel particles (often referred to as fluid or sheared gels), [1] which behave in bulk as a viscoelastic fluid and can self-heal, thereby providing support to the complete part. [2] The final structure is formed through the dispersion of a gelling material into the interstices between the supporting fluid gel particles. This enables relatively complex structuring while providing sufficient support to prevent the structure collapsing under its own weight. Once the scaffold structure has been formed, the supporting phase may be removed through the gentle application of shear. This manufacturing process allows for the use of a wide range of polymeric materials, including many already approved by regulatory bodies. Ultimately it has the potential to produce structures that could make their way into clinical trials in the relatively short term. Here we demonstrate the power of this method by manufacturing anisotropic structures with spatially controlled mechanical and chemical properties, which support a coculture of viable cells. These scaffolds could be used for the production of osteochondral plugs for the augmentation of full-thickness cartilage defects.
Tissues are formed of populations of cells distributed within an extracellular matrix (ECM), which is structured down to the molecular level. Local variations in organization and biochemistry mean that the encapsulated populations of cells are exposed to environments that differ both mechanically and chemically. These environments have been shown to play a strong role in shaping cell phenotype. [3] For some time now researchers have sought to recapitulate tissue structure using a combination of isolated cells and polymeric hydrogels that bear a structural resemblance to the ECM. [4] Such specimens have been manufactured using the process of gel-casting; this allows for gross geometrical control, yet provides little control over the microscale geometry and the spatial and mechanical cues important to controlling cell behavior. [5] Additive layer manufacturing (ALM) offers the tantalizing possibility of creating structures with a greater level of complexity than traditional processing methods such as casting, and some degree of control over the distribution of cells and other important components throughout the structure. While the ALM of hard materials is relatively mature and a number of industries now utilize such technologies, at present the ALM of soft materials remains challenging. ALM using soft materials has been reported in the literature since the mid-2000s, when Boland et al. published on the production of "nose-like" specimens from alginate. [6] In the years since, many research groups have published on the manufacture of structures from soft solids, some of which allow for the incorporation of cells. [7] Most recently, Hwang et al. reported the production of a cartilage-like structure for auricular reconstruction. [8] Notably, the majority of additively manufactured soft-solid structures exhibit relatively low complexity [9] and are typically broader at their base than at their peak to reduce the risk of the structure collapsing. A number of research groups are working on the development of novel polymers for ALM, [7,10] but for the most part the structures they form with these polymers are highly simplistic, with a self-supporting "waffle arrangement" frequently being used to demonstrate process resolution. [7,9,11] Some papers report the use of harder materials, such as poly(caprolactone) (PCL) and hydroxyapatite, to support the structure [12,13] or have extruded materials into high-viscosity liquids, for example Pluronic F-127 hydrogel. [13] Additionally, there have been reports of additive manufacturing using suspending media consisting of either a shear-thinning synthetic hydrogel [14] or a slurry of gelatin particles, [15] respectively. These elegant approaches resulted in structures of previously unprecedented complexity, but neither group managed to codeposit multiple cell types or could demonstrate any localized modification of mechanical properties or chemistry, both of which are critical to biological performance. Furthermore, neither of these methods is conducive to the manufacture of structures that are suitable for the clinic, since the suspending medium would be very challenging to completely remove from the finished part. In this study, we have addressed these issues by using a self-healing particulate or fluid gel material, which is stable at room temperature and in culture conditions, as a supporting medium.
Tissues are formed of populations of cells distributed within an extracellular matrix (ECM), which is structured down to (2 of 6) 1605594 Adv. Mater. 2017, 29,1605594 supporting media. The strong surface interactions between gel particles form shortrange adhesions when in close contact causing the paste-like material to thicken. [16] The inter actions formed between the particles allow the particulate material to support a secondary phase of similar (or in some cases higher) density. This "true-gel" (G′ >>> G′′) particulate (Figures S1-S3, Supporting Information) microstructure makes this system physically distinct from highly viscous fluids, such as commercial shower gels, that are formed almost exclusively by polymer entanglement. [17] Importantly, since the gel particles are discrete entities they do not contaminate the surface of the manufactured sample and can actually be formed from the same material as that extruded into the particle bed (Figure 1) likely simplifying the translational pathway. Using particulate gels as a suspending agent, supports the fragile construct as it is formed, in a similar manner to the way amniotic fluid suspends the developing fetus. Using an XYZ stage, it was possible to (with 100 µm resolution) deposit the hydrogel polymer in a discrete 3D location, the resolution of which is limited only by the size of the droplet from the end of the extruding needle and the viscosity of the supporting medium.
A variety of hydrogel materials may be used for the production of the final part and the supporting bed. Initial experimentation demonstrated that it was possible to generate structures using combinations of gelatin, gellan, collagen, hyaluronic acid, agarose, and alginate. As a consequence of its relative robustness and capacity for physical modification using seeded hydroxyapatite, [18] gellan was selected for further use as the final part, and agarose as the supporting bed. The supporting bed, formed from agarose with particles in the size range 2-11 µm (Figure S4, Supporting Information), was sufficiently robust to suspend a cross-linked gellan gum structure such as the helix illustrated in Figure 1. This helical structure was loaded with colloidal hydroxyapatite nanocrystals in order to increase radio-opacity, enabling micro-computed tomography (micro-CT) imaging. Following treatment with calcium chloride solution, this helical structure was removed from the particle bed and was shown to be self-supporting (Figure 1). The shear forces applied during the extrusion process were not of sufficient magnitude to cause phase separation and were sufficiently mild that it was possible to maintain the viability of a population of human primary chondrocytes within the cultures (Figure 2). To investigate the influence of supporting-matrix viscosity on the resolution of the printing method, samples were made using a controlled concentration of gellan gum (1.5%) and a hypodermic needle of internal diameter 337 µm. An increase in the viscosity of the supporting medium resulted in a monotonic increase in resolution in the XY dimensions, but interestingly a smaller reduction in resolution in the Z dimension (Figure 2). At this scale, resolution is ultimately limited by droplet size, which is controlled by the internal dimensions and flow rate of the extruding aperture and other parameters of deposition, such as the viscosity of the extruded solutions. To further investigate factors that may influence resolution, structures were formed using a range of needle diameters.
Figure 1. A-D) A schematic showing the manufacturing process for a 3D soft-solid structure manufactured using the suspended deposition method. A) Briefly, a supporting "fluid-gel" matrix is created in a vessel. B) A secondary phase may then be extruded into the particle bed. C) The self-healing fluid gel supports the gel structure during the cross-linking process. D) Once cross-linked, the object may be removed from the particle bed. E) This was manipulated to fabricate a simple helix loaded with hydroxyapatite nanoparticles and imaged with micro-CT (scale bars = 5 mm).
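To make the XYZ-stage deposition concrete, the following toy script generates waypoints for a helical print path of the kind shown in Figure 1; the radius, pitch, and point density are invented values, since the text specifies only the positioning resolution and needle diameter.

```python
import numpy as np

# Parametric helix sampled as a sequence of extrusion targets for an
# XYZ stage. All geometry values are illustrative assumptions.
radius_mm, pitch_mm, turns = 3.0, 2.0, 4
points_per_turn = 72

theta = np.linspace(0, 2 * np.pi * turns, turns * points_per_turn)
x = radius_mm * np.cos(theta)
y = radius_mm * np.sin(theta)
z = pitch_mm * theta / (2 * np.pi)      # constant rise per revolution

waypoints = np.column_stack([x, y, z])  # (N, 3) array of targets [mm]
print(waypoints.shape, waypoints[:3].round(2))
```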
It was demonstrated that resolution was directly related to needle diameter, increasing up to the point that the hydrocolloid could no longer be extruded (Figure 2). The peak resolution achievable for the 1.5% gellan gum and the agarose supporting medium was 250 µm. Given the scale of the tissues to be produced for osteochondral repair and the need for cell viability, the needle diameter was set at 337 µm and the extrusion rate at no more than 125 µL s⁻¹. To demonstrate the complexity achievable with the suspended manufacturing process, scaffolds that mimic the structuring and cellular organization of an osteochondral defect were manufactured. This complex tissue region lies between articular cartilage and bone on an articulating joint surface [19] and may be severely damaged following trauma [20] or can deteriorate during the progression of osteoarthritis. [21] At present the standard of care is microfracture in the knee [22] or the transplantation of tissue that has been isolated from a cadaver or a non-articulating region of the joint. [22] Neither method has been shown to be absolutely successful, and this has driven research into the development of a range of synthetic osteochondral plugs. [23] The main reason for failure of these synthetic grafts is delamination at the hard-soft tissue interface. [24] Native osteochondral tissue exhibits a gradual structural change from disordered mineralized collagen at the subchondral bone, [25] through to collagen II- and glycosaminoglycan (GAG)-rich cartilage, [26] allowing stress to be distributed across the interface without stress localization and delamination occurring. [26,27] The region consists of four principal cell types, which secrete and organize their local environments. [25,26] Although a number of groups have attempted this in vitro, [28] the processes that they have employed did not mimic the structuring of this complex region at a length scale that is appropriate to the size of the defects encountered clinically.
Here, the suspended manufacturing process was used to form composite hydrogel structures with anisotropic mechanical properties mimicking the native osteochondral environment. Femoral condyle tissue was retrieved from patients following knee replacement and an osteochondral defect was introduced using a surgical drill. Excess retrieved tissues were digested to release the cells from the cartilage and bone samples. The structure was scanned using micro-CT and a 3D model of the defect was created. This 3D model was used to guide the manufacture of an osteochondral implant, where the lower surface was loaded with sol-HA, gellan, and osteoblast cells (Figure 3). The upper surface of the construct was manufactured using gellan gum alone, loaded with populations of chondrocytes (Figure 3). The suspended manufacturing process allowed for the production of osteochondral structures that fit tightly into the defects and matched the layer thicknesses for the bone and cartilage components. These samples were then placed in culture for a period of four weeks in order to identify whether the cell types in the different regions of the defect maintained their phenotype. Over the course of four weeks of in vitro culture the osteochondral plugs maintained their structural integrity; they could be easily handled and extracted from the defect without deterioration (Figure 3).
Mechanical spectra of the osteochondral constructs highlight the successful integration of two different materials into a single structure (Figure 3). Constructs were sliced into four regions and stress sweeps were conducted on each section to determine mechanical strength and elasticity. Samples were subjected to increasing stress (0.1-1000 Pa) and a range of mechanical properties was observed throughout the construct. The weakest areas, with the shortest linear viscoelastic region, were the chondral region and the uppermost surface of the construct (Regions A and D). Region C exhibited significantly higher gel strength and elasticity. This can be attributed to the nanocrystalline hydroxyapatite (nano-HA) interacting with gellan helices during gelation to create a highly homogeneous structure exhibiting higher strength in comparison with unloaded gellan (Figure 3). Interestingly, the incorporation of HA into the gellan hydrogel resulted in a more rapid relaxation response than gellan alone (Figure 3C). This is significant since matrices of elastic modulus >17 kPa that exhibit more rapid stress relaxation encourage mineralization to a greater extent when compared with those with a slower stress response. [29] Region A was comprised entirely of gellan gum without nano-HA, which explains the lower gel strength. It is likely that the nano-HA began to sediment prior to gelation due to its higher density compared with the gel phase (3.16 compared with ≈1 g cm⁻³), resulting in the top of the osteogenic region showing a lower modulus. At the interface (Region B), the construct exhibited mechanical properties intermediate between those of Regions A and C, providing evidence for a successful integration of the two different materials (Figure 3). Interestingly, the trend in mechanical properties observed in Regions A-C shows some similarity to reported changes in modulus across osteochondral tissue (Figure 3). A 2012 study by Campbell et al. outlined the indentation moduli of three osteochondral regions, namely subchondral bone, hyaline cartilage, and the osteochondral interface. [27] Subchondral bone exhibited the highest modulus and hyaline cartilage the lowest, with calcified cartilage (the interface) falling between the two, albeit closer to the modulus of bone. Indentation moduli of the tissue regions were orders of magnitude greater than the storage moduli of the respective construct regions, and the methods used to determine the two differed greatly. However, the parallels between the two trends highlight the level of control exhibited over mechanical properties within each region of the osteochondral constructs.
Polymerase chain reaction (PCR) data collected from the retrieved samples demonstrated that the expression of both collagen type II and aggrecan (ACAN) (both markers of cartilage formation) was highest in the chondral region of the scaffold (Figure 3) and that collagen type I expression was lowest at this point.
Figure 3. A) Samples manufactured using suspended manufacturing were cultured before being cut with a razor blade and mechanically characterized using a rheometer. B) The storage modulus of the construct reduced significantly from the core "boney" area of the structure (Regions C and D) into the chondral region (Regions A and B). Mechanical data reflected trends seen in native tissue, with an increase in modulus from hyaline cartilage (Region A) through the osteochondral interface (Region B) to subchondral bone (Regions C and D). [20] This demonstrates that it is possible to define not only the geometric but also the mechanical properties exhibited by the resulting structure. C) Stress relaxation measurements show that the addition of hydroxyapatite (GG + HAp) results in a faster relaxation response than gellan gum alone (GG). D) Following 4 weeks of culture within the human tissue defects (n = 6), the construct was removed, cells within the cartilage (CH), interfacial (IF) and bone (OB) regions were recovered for RNA isolation, and mRNA was analyzed by qRT-PCR. The cells in the cartilaginous section of the scaffold expressed the highest levels of coll II and aggrecan (ACAN) and the bone region expressed significantly more coll IA1 (mean ± SEM). This suggests that the cells deposited into discrete regions maintained not only viability but also their phenotype (*: P < 0.05, τ: P = 0.0793). E) Fluorescent immunohistochemistry (IHC) (DAPI (4′,6-diamidino-2-phenylindole) = blue, aggrecan = green) shows the production of aggrecan in the cartilaginous region of the structure (scale bars = 200 µm).
Immunohistochemical (IHC) analysis of the samples demonstrated the presence of aggrecan around the encapsulated cells in this area of the scaffold (Figure 3). Remarkably, the ratio of collagen II to collagen I changed gradually throughout the structure, in line with what would be expected of the native tissue region. This indicated that while the two sections of the osteochondral scaffold were well integrated, the embedded cell populations retained their native phenotypes. This is something that has proven challenging with existing technologies for tissue structuring. In comparison with the majority of ALM methods, where high temperatures, pressures, or cross-linking agents are a necessity, the suspended manufacture method allowed us to maintain viability and behavior while subtly modifying the local composition of the matrix. In this paper, a new method was reported to manufacture comparatively complex soft-solid structures by extruding a gelling polymer into a supporting particle-based matrix. The method allowed the structuring of soft-solid materials such that they exhibited distinct chemical and physical properties on the microscale. It was shown that suspended manufacture could recapitulate the structure of the osteochondral region as defined by CT scanning. The printed structure maintained its morphology and mechanical robustness over a period of four weeks of culture, during which the encapsulated cells retained their phenotype. Our findings suggest that this novel method of producing 3D tissue-like structures has significant promise for the regeneration and study of complex tissue structures and interfaces.
Experimental Section
Fluid Gel Formulation: Fluid gels were manufactured by cooling solutions of 0.5% w/w agarose from 85 to 20 °C under constant shear using a magnetic stirrer rotating at 700 rpm. This created a constant angular velocity of 74 rad s⁻¹. Fluid gels were sterilized for cell culture applications by autoclaving the agarose solutions prior to cooling.
Suspension of Helical Polymeric Structures: Aliquots of fluid gel were prepared in 6 mL Bijoux tubes. Solutions of 1.5% w/w low-acyl gellan mixed with 10% nanocrystalline hydroxyapatite (HA) at 60 °C (formulated by a precipitation method) [30] were extruded into fluid gel samples through a hypodermic needle with a 337 µm inner diameter using a 5 mL syringe. During extrusion, the syringe was manipulated precisely with respect to geometric position to enable the generation of the helix. The suspensions were then left at room temperature for 40 min to enable gelation to occur. Prior to extraction, samples were observed using micro-CT (Skyscan 1172; Bruker, Belgium) and the reconstructed data were visualized in 3D using CTVox software (Bruker). Helices were then extracted and excess fluid gel was washed away with deionized water.
Tuning Resolution of Suspended Constructs: Low-acyl gellan gum solutions of varying viscosity (as controlled by polymer concentration) were extruded into separate aliquots of fluid gel (contained in Petri dishes of 60 mm diameter and 15 mm depth). Gelation was triggered by temperature and ionic interaction via injection of 200 × 10⁻³ M CaCl2 around the constructs at 20 °C. After 30 min, the gelled structures were extracted and the resolution of each construct was measured.
Evaluation of Cell Culture Applications: Osteochondral tissue was donated by patients undergoing elective knee replacement surgery. This study was approved by the United Kingdom National Ethics Research Committee (Hertfordshire Research Ethics Committee 12/EE/0136). Articular cartilage was removed from human femoral condyle tissue before mincing and digestion by 2 mg mL⁻¹ collagenase for 4 h under agitation at 37 °C for the release of chondrocytes. Bone chips (4-5 mm³) from subchondral trabecular bone were cultured for the release of osteoblasts. Both cell types were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum, 1% L-glutamine, 1% PenStrep, and 1% non-essential amino acids. At passage 1, cells were trypsinized, counted, and resuspended at a density of 3 × 10⁶ cells mL⁻¹ before being mixed with sterile 1.5% low-acyl gellan gum. Cell-laden gellan gum was extruded into sterile agarose fluid gel to create linear constructs. Gelation at 20 °C was triggered with 200 × 10⁻³ M CaCl2 and excess calcium ions were washed away after 30 min using Dulbecco's phosphate-buffered saline (PBS). Cell-loaded constructs were cultured at 37 °C/5% CO2 in culture media (as above). Cell viability was visualized using calcein-acetoxymethyl (AM) and ethidium homodimer-1 fluorescent dyes.
Defect Formation and Reconstruction: Defects were introduced into femoral condyle tissue following surgery using an orthopedic drill. The resulting tissue was imaged using micro-CT (Bruker Skyscan 1172) and the reconstructed data were viewed using CTVox software (Bruker). The defect was then measured for reconstruction of the defect space in Simpleware (Synopsys, UK).
Implant Fabrication and Culture: Prior to implant fabrication, cells were isolated and cultured as above. Primary human osteoblasts and chondrocytes were trypsinized, counted, and resuspended at a density of 1 × 10⁶ cells mL⁻¹. Osteoblasts were loaded into 1.5% low-acyl gellan mixed with 5% nano-HA, while chondrocytes were mixed with 1.5% gellan. Guided by dimensions obtained from the defect reconstruction, single implants were fabricated containing a layer of chondrocyte-loaded gellan and a thicker layer of osteoblast-loaded gellan/HA via extrusion into sterile agarose fluid gel. Gelation at 20 °C was triggered with injection of 200 × 10⁻³ M CaCl2 around each suspended structure and the constructs were extracted after 30 min. Excess fluid gel was washed away and the constructs were implanted into the tissue defects. The construct-filled defects were then cultured as above in a humidified incubator at 37 °C, 5% CO2 for 30 d (n = 6).
Mechanical Spectra of Implants: Layered constructs were sliced laterally into four separate regions (see the mechanical spectra in Figure 3). Stress sweeps were conducted on each region using a Bohlin Gemini rheometer (Malvern, UK) with a 25 mm serrated parallel-plate geometry. Elastic and viscous moduli (G′ and G′′, respectively) were analyzed in response to increasing stress from 1 to 100 Pa at a constant temperature of 37 °C. For stress relaxation, gellan gum and gellan gum/hydroxyapatite constructs (height 8 mm, diameter 14 mm) were displaced by 2 mm and held for 300 s while the load was recorded (Bose ElectroForce 5500).
Rheological Measurements: All rheological measurements were performed on a Bohlin Gemini rheometer (Malvern, UK) using a 55 mm, 2° cone-and-plate geometry at an isothermal temperature of 37 °C, which was maintained by a Peltier-controlled lower plate.
Stress Sweeps: Samples of 0.5% agarose fluid gels were prepared and loaded onto the bottom plate of the rheometer. The samples were then subjected to a shear stress range of 0.1-100 Pa at a constant oscillatory frequency of 10 rad s⁻¹. Elastic and viscous moduli were measured in response to increasing shear stress. Results were analyzed to determine the linear viscoelastic region.
Frequency Sweeps: Elastic and viscous moduli of 0.5% agarose fluid gels were analyzed in response to increasing oscillatory frequencies from 0.1 to 10 rad s⁻¹ at a constant strain of 0.05%.
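One possible way to automate the determination of the linear viscoelastic (LVE) region mentioned under Stress Sweeps above is sketched below; the synthetic G′ data and the 5% deviation tolerance are assumptions, as the text does not state the criterion that was used.

```python
import numpy as np

# Locate the end of the LVE region: the first stress at which G'
# falls more than 5% below its low-stress plateau.
stress = np.logspace(-1, 2, 40)                  # 0.1-100 Pa sweep
G_prime = 250.0 / (1.0 + (stress / 30.0) ** 2)   # synthetic G' data [Pa]

plateau = G_prime[:5].mean()                     # low-stress plateau
tol = 0.05                                       # assumed 5% tolerance
idx = np.argmax(G_prime < (1 - tol) * plateau)   # first point outside the LVE
print(f"LVE limit ~ {stress[idx]:.1f} Pa (G' plateau {plateau:.0f} Pa)")
```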
Shear Sweeps: Shear ramps were performed at 37 °C on 0.5% agarose fluid gel samples. The shear rate was increased from 0.001 to 100 s⁻¹ over a 10 min period and the dynamic viscosity in response to increasing shear rate was subsequently analyzed.
Particle Size Distribution: Fluid gel samples were loaded onto glass slides and allowed to dry under a coverslip for 10 min. Samples were then visualized on a Keyence VHX 2000 digital microscope (Keyence, UK). Particle sizes were analyzed with VHX 2000 communication software. Particle size distribution was evaluated using images of fluid gel particles within an area of 135 µm × 120 µm. Images were divided into 12 grids of 11.25 µm × 10 µm. Within each grid, the number of particles was recorded and divided into categories based on size. A total of 96 grids and ≈2300 particles were counted. Particle size distribution was subsequently determined by comparing the number of particles within each size range and calculating the cumulative undersize.
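A minimal sketch of the cumulative-undersize calculation described above is given below; the per-bin particle counts are invented, since only the total (≈2300 particles) and the 2-11 µm size range are reported.

```python
import numpy as np

# Cumulative undersize: for each bin upper edge, the percentage of
# particles smaller than that size. Counts are illustrative only.
bin_edges = np.array([2, 4, 6, 8, 10, 12])     # particle diameter bins [um]
counts = np.array([300, 800, 700, 400, 100])   # assumed counts per bin

cumulative = np.cumsum(counts) / counts.sum() * 100
for upper, pct in zip(bin_edges[1:], cumulative):
    print(f"< {upper:2d} um: {pct:5.1f}% undersize")
```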
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. | 2022-04-30T09:40:22.007Z | 2017-04-01T00:00:00.000 | {
"year": 2017,
"sha1": "85530d1a457ac83cbb36a8dcc8828d7f6e8a51c6",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adma.201605594",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "85530d1a457ac83cbb36a8dcc8828d7f6e8a51c6",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
52031249 | pes2o/s2orc | v3-fos-license | Enhanced-generation of atom-photon entanglement by using FPGA-based feedback protocol
The enhanced generation of entanglement between one atomic collective excitation and a single photon (atom-photon entanglement) is very important for practical quantum repeaters and quantum networks based on atomic ensembles and linear optics. We present a feedback-loop algorithm based on a field programmable gate array (FPGA) that achieves a 21.6-fold increase in the generation rate of atom-photon entanglement at a storage time of 51 μs compared with the no-feedback protocol. The generation rate of the atom-photon entanglement is ~3190/s (2100/s) for an excitation probability of 1.65% at a storage time of 1 μs (51 μs). The Bell parameter and the fidelity of the atom-photon entanglement at a storage time of 1 μs are 2.40 ± 0.02 and 85.5% ± 0.6%, respectively. The detailed FPGA-based feedback-loop algorithm can be flexibly extended to the multiplexing of atom-photon entanglement, which is expected to further increase the generation rate of atom-photon entanglement. © 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement. OCIS codes: (270.0270) Quantum optics; (210.4680) Optical memories; (270.5565) Quantum communications; (270.5585) Quantum information and processing. DOI: 10.1364/OE.26.020160
References and links
1. L. M. Duan, M. D. Lukin, J. I. Cirac, and P. Zoller, "Long-distance quantum communication with atomic ensembles and linear optics," Nature 414(6862), 413-418 (2001).
2. N. Sangouard, C. Simon, H. de Riedmatten, and N. Gisin, "Quantum repeaters based on atomic ensembles and linear optics," Rev. Mod. Phys. 83(1), 33-80 (2011).
3. Z. S. Yuan, Y. A. Chen, B. Zhao, S. Chen, J. Schmiedmayer, and J. W. Pan, "Experimental demonstration of a BDCZ quantum repeater node," Nature 454(7208), 1098-1101 (2008).
4. N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, "Quantum cryptography," Rev. Mod. Phys. 74(1), 145-195 (2002).
5. C. Clausen, I. Usmani, F. Bussières, N. Sangouard, M. Afzelius, H. de Riedmatten, and N. Gisin, "Quantum storage of photonic entanglement in a crystal," Nature 469(7331), 508-511 (2011).
6. H. Zhang, X. M. Jin, J. Yang, H. N. Dai, S. J. Yang, T. M. Zhao, J. Rui, Y. He, X. Jiang, F. Yang, G. S. Pan, Z. S. Yuan, Y. Deng, Z. B. Chen, X. H. Bao, S. Chen, B. Zhao, and J. W. Pan, "Preparation and storage of frequency-uncorrelated entangled photons from cavity-enhanced spontaneous parametric downconversion," Nat. Photonics 5(10), 628-632 (2011).
7. D. C. Burnham and D. L. Weinberg, "Observation of simultaneity in parametric production of optical photon pairs," Phys. Rev. Lett. 25(2), 84-87 (1970).
8. P. G. Kwiat, K. Mattle, H. Weinfurter, A. Zeilinger, A. V. Sergienko, and Y. Shih, "New high-intensity source of polarization-entangled photon pairs," Phys. Rev. Lett. 75(24), 4337-4341 (1995).
9. P. G. Kwiat, E. Waks, A. G. White, I. Appelbaum, and P. H. Eberhard, "Ultrabright source of polarization-entangled photons," Phys. Rev. A 60(2), R773-R776 (1999).
10. F. König, E. J. Mason, F. N. C. Wong, and M. A. Albota, "Efficient and spectrally bright source of polarization-entangled photons," Phys. Rev. A 71(3), 033805 (2005).
11. D. N. Matsukevich, T. Chanelière, M. Bhattacharya, S. Y. Lan, S. D. Jenkins, T. A. Kennedy, and A. Kuzmich, "Entanglement of a photon and a collective atomic excitation," Phys. Rev. Lett. 95(4), 040405 (2005).
12. D. N. Matsukevich, T. Chanelière, S. D. Jenkins, S. Y. Lan, T. A. Kennedy, and A. Kuzmich, "Entanglement of remote atomic qubits," Phys. Rev. Lett. 96(3), 030405 (2006).
13. H. de Riedmatten, J. Laurat, C. W. Chou, E. W. Schomburg, D. Felinto, and H. J. Kimble, "Direct measurement of decoherence for entanglement between a photon and stored atomic excitation," Phys. Rev. Lett. 97(11), 113603 (2006).
14. S. J. Yang, X. J. Wang, J. Li, J. Rui, X. H. Bao, and J. W. Pan, "Highly retrievable spin-wave-photon entanglement source," Phys. Rev. Lett. 114(21), 210501 (2015).
15. S. Chen, Y. A. Chen, B. Zhao, Z. S. Yuan, J. Schmiedmayer, and J. W. Pan, "Demonstration of a stable atom-photon entanglement source for quantum repeaters," Phys. Rev. Lett. 99(18), 180505 (2007).
16. M. Dąbrowski, M. Parniak, and W. Wasilewski, "Einstein-Podolsky-Rosen paradox in a hybrid bipartite system," Optica 4(2), 272-275 (2017).
17. P. Farrera, G. Heinze, and H. de Riedmatten, "Entanglement between a photonic time-bin qubit and a collective atomic spin excitation," Phys. Rev. Lett. 120(10), 100501 (2018).
18. R. Ikuta, T. Kobayashi, T. Kawakami, S. Miki, M. Yabuno, T. Yamashita, H. Terai, M. Koashi, T. Mukai, T. Yamamoto, and N. Imoto, "Polarization insensitive frequency conversion for an atom-photon entanglement distribution via a telecom network," Nat. Commun. 9(1), 1997 (2018).
19. C. Laplane, P. Jobez, J. Etesse, N. Timoney, N. Gisin, and M. Afzelius, "Multiplexed on-demand storage of polarization qubits in a crystal," New J. Phys. 18(1), 013006 (2015).
20. C. Xiong, X. Zhang, Z. Liu, M. J. Collins, A. Mahendra, L. G. Helt, M. J. Steel, D. Y. Choi, C. J. Chae, P. H. W. Leong, and B. J. Eggleton, "Active temporal multiplexing of indistinguishable heralded single photons," Nat. Commun. 7(1), 10853 (2016).
21. O. A. Collins, S. D. Jenkins, A. Kuzmich, and T. A. B. Kennedy, "Multiplexed memory-insensitive quantum repeaters," Phys. Rev. Lett. 98(6), 060502 (2007).
22. S. Y. Lan, A. G. Radnaev, O. A. Collins, D. N. Matsukevich, T. A. B. Kennedy, and A. Kuzmich, "A multiplexed quantum memory," Opt. Express 17(16), 13639-13645 (2009).
23. M. Razavi, M. Piani, and N. Lütkenhaus, "Quantum repeaters with imperfect memories: cost and scalability," Phys. Rev. A 80(3), 032301 (2009).
24. R. Chrapkiewicz, M. Dąbrowski, and W. Wasilewski, "High-capacity angularly multiplexed holographic memory operating at the single-photon level," Phys. Rev. Lett. 118(6), 063603 (2017).
25. Y. F. Pu, N. Jiang, W. Chang, H. X. Yang, C. Li, and L. M. Duan, "Experimental realization of a multiplexed quantum memory with 225 individually accessible memory cells," Nat. Commun. 8(1), 15359 (2017).
26. M. Parniak, M. Dąbrowski, M. Mazelanik, A. Leszczyński, M. Lipka, and W. Wasilewski, "Wavevector multiplexed atomic quantum memory via spatially-resolved single-photon detection," Nat. Commun. 8(1), 2140 (2017).
27. N. Sinclair, E. Saglamyurek, H. Mallahzadeh, J. A. Slater, M. George, R. Ricken, M. P. Hedges, D. Oblak, C. Simon, W. Sohler, and W. Tittel, "Spectral multiplexing for scalable quantum photonics using an atomic frequency comb quantum memory and feed-forward control," Phys. Rev. Lett. 113(5), 053603 (2014).
28. P. Jobez, N. Timoney, C. Laplane, J. Etesse, A. Ferrier, P. Goldner, N. Gisin, and M. Afzelius, "Towards highly multimode optical quantum memory for quantum repeaters," Phys. Rev. A 93(3), 032327 (2016).
29. M. Bonarota, J. L. Le Gouët, and T. Chanelière, "Highly multimode storage in a crystal," New J. Phys. 13(1), 013013 (2011).
30. D. N. Matsukevich, T. Chanelière, S. D. Jenkins, S. Y. Lan, T. A. B. Kennedy, and A. Kuzmich, "Deterministic single photons via conditional quantum evolution," Phys. Rev. Lett. 97(1), 013601 (2006).
31. S. Chen, Y. A. Chen, T. Strassel, Z. S. Yuan, B. Zhao, J. Schmiedmayer, and J. W. Pan, "Deterministic and storable single-photon source based on a quantum memory," Phys. Rev. Lett. 97(17), 173004 (2006).
32. Z. S. Yuan, Y. A. Chen, S. Chen, B. Zhao, M. Koch, T. Strassel, Y. Zhao, G. J. Zhu, J. Schmiedmayer, and J. W. Pan, "Synchronized independent narrow-band single photons and efficient generation of photonic entanglement," Phys. Rev. Lett. 98(18), 180503 (2007).
33. X. S. Ma, S. Zotter, J. Kofler, T. Jennewein, and A. Zeilinger, "Experimental generation of single photons via active multiplexing," Phys. Rev. A 83(4), 043814 (2011).
34. L. Tian, Z. Xu, L. Chen, W. Ge, H. Yuan, Y. Wen, S. Wang, S. Li, and H. Wang, "Spatial multiplexing of atom-photon entanglement sources using feedforward control and switching networks," Phys. Rev. Lett. 119(13), 130505 (2017).
35. M. Mazelanik, M. Dąbrowski, and W. Wasilewski, "Correlation steering in the angularly multimode Raman atomic memory," Opt. Express 24(19), 21995-22003 (2016).
36. Y. L. Wu, L. Tian, Z. X. Xu, W. Ge, L. R. Chen, S. J. Li, H. X. Yuan, Y. F. Wen, H. Wang, C. D. Xie, and K. C. Peng, "Simultaneous generation of two spin-wave-photon entangled states in an atomic ensemble," Phys. Rev. A 93(5), 052327 (2016).
37. B. Zhao, Y. A. Chen, X. H. Bao, T. Strassel, C. S. Chuu, X. M. Jin, J. Schmiedmayer, Z. S. Yuan, S. Chen, and J. W. Pan, "A millisecond quantum memory for scalable quantum networks," Nat. Phys. 5(2), 95-99 (2009).
38. S. J. Yang, X. J. Wang, X. H. Bao, and J. W. Pan, "An efficient quantum light–matter interface with sub-second lifetime," Nat. Photonics 10(6), 381-384 (2016).
39. Z. Xu, Y. Wu, L. Tian, L. Chen, Z. Zhang, Z. Yan, S. Li, H. Wang, C. Xie, and K. Peng, "Long lifetime and high-fidelity quantum memory of photonic polarization qubit by lifting Zeeman degeneracy," Phys. Rev. Lett. 111(24), 240503 (2013).
40. J. Laurat, H. de Riedmatten, D. Felinto, C. W. Chou, E. W. Schomburg, and H. J. Kimble, "Efficient retrieval of a single excitation stored in an atomic ensemble," Opt. Express 14(15), 6912-6918 (2006).
Introduction
The quantum repeater (QR), which can overcome the distance limit of direct quantum communication, is the basic unit for realizing long-distance quantum communication [1], large-scale quantum networks [2,3], and quantum cryptography [4]. Atomic ensembles, which serve as the quantum memories that are an essential building block of the QR, have recently attracted considerable attention. Motivated by the seminal proposal of Duan, Lukin, Cirac, and Zoller (DLCZ), both photon-photon entanglement and atom-photon entanglement have been demonstrated from an atomic ensemble. The generation rate of atom-photon entanglement is an important quantity that not only determines the transfer rate of a practical quantum network but also restricts the maximum distance between two neighboring quantum nodes. To improve the performance of a practical quantum network, one therefore has to increase the generation rate of atom-photon entanglement.
Atom-photon entanglement can be generated by storing one photon from an entangled photon pair, produced through spontaneous parametric down-conversion (SPDC) [7-10], in an in-out quantum memory [5,6]. Spontaneous Raman scattering (SRS) provides a simpler method of generating atom-photon entanglement [2]. The procedure is as follows: the atoms first interact with the write beam in the write process of SRS, emitting Stokes photons and simultaneously creating spin-wave excitations. The spin-wave excitations are imprinted on the atomic ensemble. After a controllable delay, the spin-wave excitations are mapped into anti-Stokes photons in the read process of SRS [1]. This scheme has spurred intense experimental efforts on the generation and improvement of atom-photon entanglement via SRS [11-17]. Moreover, these entangled photon pairs can be converted to telecom wavelengths by frequency down-conversion to implement long-distance quantum communication over fiber-optic networks [18].
However, limited by multiexcitation events, the generation rate of entangled atom-photon pairs in these experiments has to be kept at a very low level. The key challenge is therefore to enhance the generation rate of atom-photon entanglement without introducing multiexcitation errors. One promising line of current research for increasing the generation rate is the experimental implementation of multiplexed interfaces, including temporally multiplexed [19,20], spatially multiplexed [21-26], and spectrally multiplexed schemes [27,28]. Recently, a spatially multiplexed quantum memory with more than 665 spatially separated modes was experimentally demonstrated [26]. Based on frequency and temporal multiplexing, researchers have also achieved quantum memories with 500 frequency modes [27] and 400 temporal modes [29]. It can be inferred from current multiplexing capacities that one could simultaneously store 10^8 qubits. These schemes, which combine the simultaneous storage of multiple qubits, can enhance the total generation rate of the system on the premise that the generation rate of each single mode is also increased. The single atom-photon entanglement source is the building block for constructing multiplexed interfaces. Here, we aim to increase the generation rate of a single atom-photon entanglement source so as to further improve the performance of multiplexed interfaces.
Feedback circuits have been used to generate atom-photon entanglement [13] and deterministic single photons [30], and to increase the generation rate of a single-photon source by manipulating the time sequence of the write and read processes [31]. Subsequently, the same authors implemented the synchronized generation of two independent single-photon sources from two remote atomic ensembles [32]; the two synchronized single photons were further used to demonstrate the efficient generation of entangled photon pairs. Ma et al. demonstrated a fourfold enhancement of the output photon rate by routing four single-photon sources based on the feed-forward technique [33]. However, these feedback loops were each limited to a single application: either the enhanced generation of single atom-photon entanglement or the router control of multiplexed interfaces.
Recently, our group demonstrated the spatial multiplexing of enhanced-generation photonic entanglement using feedforward control and switching networks [34]. In spatially multiplexed protocols, multiple Bell-state measurement (BSM) signals acquired from the elementary links should be processed and acted upon as quickly as possible. However, that work did not describe the construction of the feedback-loop algorithm.
In this paper, we present a feedback-loop algorithm based on a field-programmable gate array (FPGA) to realize the enhanced generation of controllable atom-photon entanglement. The algorithm performs multi-channel data acquisition and multi-threaded parallel processing via a self-designed program. One thread is responsible for the acquisition, storage, judgement, and execution of the feedback signal; the other thread performs the buffer reading, coincidence counting, and transfer of the processed results. The design simultaneously improves the performance of the single atom-photon entanglement source and the router control of multiplexed interfaces. By using the feedback-loop algorithm, we achieve a 21.6-fold increase in the generation of atom-photon entanglement at a storage time of 51 μs compared with the protocol without real-time feedback. When the excitation probability is 1.65%, the enhanced generation rate of atom-photon entanglement pairs is ~3190/s (2100/s), the measured Bell parameter is 2.40 ± 0.02 (2.23 ± 0.12), and the fidelity of the entangled state is 85.5% ± 0.6% (83.7% ± 0.8%) at a storage time of 1 μs (51 μs). It is worth noting that the scheme can be easily extended to multiplexed atom-photon entanglement [34].
Experimental setup and analysis
The experimental setup is illustrated in Fig. 1(a). A cold 87Rb atomic cloud with a temperature of about 130 µK is prepared to generate the atom-photon entanglement. The size and optical density of the atomic cloud are ~5 × 2 × 2 mm^3 and ~7, respectively. After 22.5 ms of loading atoms into the magneto-optical trap (MOT), the cold atoms are first prepared in the initial level |a⟩ by using cleaning beams (C), including a right- (σ+-) polarized laser beam (tuned on …). Both of the coupling efficiencies of the two fibers are ~80%. We collect the Stokes/anti-Stokes photons in the direction that forms a 0.4° angle with the write/read beam. The strongly correlated photon pairs and polarization-entangled photon pairs should be generated under the condition of excitation probability χ<<1 [11,36]. In addition, the single-mode fibers, as mode selectors, introduce an extra relative phase difference between the horizontal (H) and vertical (V) polarizations of the optical field, which degrades the fidelity of the entanglement. In order to eliminate this phase-shift difference, a phase compensator (PC) is inserted into the optical path after the output coupler. The phase compensator is a combination of a quarter-wave plate (QWP), a half-wave plate (HWP), and a QWP, which can generate any unitary transformation.

The polarization correlations of the photon pairs are analyzed in the H-V, D-A, and R-L (σ+-σ−) polarization settings. The WP_S and WP_AS are half-wave (quarter-wave) plates when analyzing the photon polarization in the D or A (σ+ or σ−) polarization setting and are removed when analyzing the photon polarization in the H or V polarization setting. In the measurements of the Bell parameter, the WP_S and WP_AS are half-wave plates used for setting the polarization angles. Finally, two pairs of single-photon detectors (SPDs) with multimode-fiber-coupled inputs are adopted to detect the emitted Stokes and anti-Stokes photons. The output of the SPDs is acquired and analyzed by an FPGA (NI PXIe-7966R). Most importantly, the operation of the FPGA depends on the self-designed coincidence-count and multiplexing program. The generated entangled two-photon state is of the maximally entangled form given in [11,13].

The time sequence of the experimental cycle is shown in Fig. 2. The duty cycle repeats with a repetition rate of 30 Hz. One duty cycle includes 22.5 ms of preparation time, 0.5 ms of Sisyphus cooling time, and 10 ms of operational time. In every operational window, independent write sequences with a period of 1.5 μs are continuously applied to the atomic ensemble until a Stokes photon is detected. Each write sequence contains a cleaning pulse and a write pulse. We retrieve the spin-wave excitations with a fixed delay after a successful write, whereupon the spin-wave excitations can be converted into anti-Stokes photons. The pulse widths of the cleaning pulse, the write pulse, and the read pulse are 250 ns, 80 ns, and 100 ns, respectively. In every write sequence, we release a cleaning pulse with a fixed delay T1 = 670 ns after a write pulse. The time interval T2 between two neighboring write sequences is 500 ns, and we carry out the next write sequence with the same delay T2 = 500 ns after a read pulse. The time sequence described above is realized by a self-designed operation control program stored on the FPGA hardware platform. The FPGA manipulates independent AOMs to actuate the on-off action of the pulse sequences.
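The pulse bookkeeping implied by this sequence can be emulated in a few lines. The sketch below is our toy model, not the authors' FPGA control program; the per-sequence Stokes detection probability is an assumed value, and pulse-width details are simplified.

```python
import random

# Durations in nanoseconds, taken from the text above.
SEQ_PERIOD = 1500   # write-sequence period (cleaning + write + gaps)
T2 = 500            # delay before the next write sequence after a read
READ_WIDTH = 100    # read-pulse width

def operational_window(p_stokes, storage_ns, window_ns=10_000_000):
    """Count write and read pulses in one 10 ms operational window.
    p_stokes is the per-sequence Stokes detection probability (assumed)."""
    t = 0
    n_write = n_read = 0
    while t + SEQ_PERIOD <= window_ns:
        n_write += 1
        t += SEQ_PERIOD
        if random.random() < p_stokes:       # Stokes photon detected
            n_read += 1                      # retrieve after the storage delay
            t += storage_ns + READ_WIDTH + T2
    return n_write, n_read

random.seed(1)
print(operational_window(p_stokes=0.005, storage_ns=1000))
```

The essential feedback behavior is visible in the branch: the storage delay and read pulse are scheduled only when a Stokes photon is detected.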
Without the feedback protocol, the probability of having a Stokes (an anti-Stokes) photon in one duty cycle is P′_S (P′_AS), and the coincidence probability between the Stokes and anti-Stokes channels in one duty cycle is P′_S,AS. From the definitions of the parameters, these can be written as

P′_S = χη_S, P′_AS = χγη_AS, P′_S,AS = χη_S γη_AS,

where χ is the excitation probability, η_S is the total detection efficiency for detecting the Stokes photons, η_AS is the total detection efficiency for detecting the anti-Stokes photons, and γ is the retrieval efficiency. We do not consider the background noise in each channel. With the feedback protocol, only upon detecting a Stokes photon do we perform the detection of the anti-Stokes photons. The probability of having a Stokes (an anti-Stokes) photon, P_S (P_AS), and the coincidence probability between the Stokes and anti-Stokes channels, P_S,AS, should then be written as

P_S = χη_S, P_AS = χη_S γη_AS, P_S,AS = χη_S γη_AS.

In this paper, we focus on the generation rates of the Stokes (anti-Stokes) photons and on the coincidence count rate, which are the quantities most important for practical quantum communication. The rate of detected Stokes photons (coincidence counts) in the H-V polarization setting can be used to evaluate the preparation rate of the atom-photon (photon-photon) entanglement pairs. However, the generation of Stokes photons is probabilistic. The feedback control saves the time of releasing a read pulse when no Stokes photon is detected, which greatly increases the generation rate of Stokes photons.
With feedback control, the average numbers of write pulses N_S and read pulses N_AS released in one second can be expressed accordingly, and based on Eqs. (3) and (5) the corresponding rate equations follow. It should be pointed out that the MOT duration time is not considered in the rate equations [Eqs. (4)-(10)]. Using the parameters of our atom-photon entanglement generation system (the coupling efficiency of the single-mode fiber, 81%; the total transmission of the three etalons, 80%; the coupling efficiency of the multimode fiber, 95%; the quantum efficiency of the SPD, 50%), the generation rates of the Stokes/anti-Stokes photons can be calculated theoretically. The single atom-photon entanglement source is an important building block of the experimental implementation of multiplexed interfaces, and enhancing its generation rate is at the core of improving the performance of multiplexed interfaces. FPGAs, owing to their high speed, scalability, and flexibility, are an attractive platform for implementing the feedback-loop algorithm.
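As a back-of-envelope illustration, the sketch below multiplies the quoted optical efficiencies into a total detection efficiency and evaluates the per-trial probabilities from the relations above. Treating η_AS as equal to η_S is our assumption, since only one set of efficiencies is quoted.

```python
# Parameters quoted in the text
chi      = 0.0165   # excitation probability
fiber_sm = 0.81     # single-mode fiber coupling
etalons  = 0.80     # total transmission of the three etalons
fiber_mm = 0.95     # multimode fiber coupling
spd_qe   = 0.50     # SPD quantum efficiency
gamma    = 0.155    # retrieval efficiency at 1 us (from the text)

eta_S = fiber_sm * etalons * fiber_mm * spd_qe   # ~0.31 total Stokes detection efficiency
eta_AS = eta_S                                   # assumption: identical optics in both arms

P_S    = chi * eta_S                   # per-trial Stokes detection probability
P_S_AS = chi * eta_S * gamma * eta_AS  # per-trial coincidence probability
print(f"eta_S = {eta_S:.3f}, P_S = {P_S:.5f}, P_S,AS = {P_S_AS:.2e}")
```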
FPGA implementation of feedback
Our FPGA-based feedback system hardware comprises an NI PXIe-7966R FPGA module and an NI-6581 digital I/O adapter module. The FPGA has 48 input/output channels that can send and receive 48 digital signals simultaneously. The system can be extended in multiples of 48 input/output channels by synchronizing additional FPGA modules. The FPGA-based feedback system is therefore sufficient to perform experiments such as temporally multiplexed [19,20], spatially multiplexed [22,24,25,34], and spectrally multiplexed interfaces [27].
The logic programmed onto the FPGA is compiled directly into hardware circuitry. The logic operations are generated by a self-designed software program. In order to increase the operating efficiency, we designed a multithreaded parallel data-processing scheme to perform the feedback-loop algorithm. The detailed block diagram is shown in Fig. 3. There are two threads, T_A and T_B, in our FPGA feedback-loop algorithm, running in parallel. The FPGA receives a trigger signal from the analog output module PXI-6713 after the atoms are well prepared, and the two threads start to run and last for 10 ms, corresponding to one measurement cycle. At the beginning of every measurement cycle, the thread T_A releases the cleaning and write pulses and acquires the signal of the Stokes photons. The collected Stokes-photon data are then written to a first-in-first-out (FIFO1) buffer. Meanwhile, the thread T_A judges whether a Stokes photon has been detected based on the acquired signal. If a Stokes photon is detected, the read pulse is released and the collected anti-Stokes-photon data are written to another first-in-first-out (FIFO2) buffer. Otherwise, the read pulse is omitted and the cleaning and write pulses are released again; this potentially saves a great deal of time and increases the generation rate. While the thread T_A is still running, the thread T_B runs independently, reading the data stored in the FIFO1 and FIFO2 buffers. The two threads run in parallel, which further saves operation time. The thread T_B consists of judging the buffer status, reading the data from the two FIFO buffers, performing the coincidence measurement, and writing the results to the host computer. A software analogue of this two-thread design is sketched below.
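The sketch is a conventional-software stand-in for the hardware design, not NI FPGA code: queue.Queue objects model the FIFO buffers, and release_read_pulse with its detection probabilities is a hypothetical helper we introduce for illustration.

```python
import queue
import random
import threading

# Stand-ins for the two FPGA FIFO buffers
fifo1 = queue.Queue()   # Stokes-channel records
fifo2 = queue.Queue()   # anti-Stokes-channel records

def release_read_pulse():
    """Hypothetical stand-in: 'fires' the read pulse and reports whether an
    anti-Stokes photon is detected (probability ~ gamma * eta_AS)."""
    return random.random() < 0.048

def thread_a(n_sequences, done):
    """Acquisition/feedback thread: store Stokes data, judge, execute feedback."""
    for seq in range(n_sequences):
        stokes_hit = random.random() < 0.005   # per-sequence Stokes detection
        fifo1.put((seq, stokes_hit))
        if stokes_hit:                         # feedback: read only on detection
            fifo2.put((seq, release_read_pulse()))
    done.set()

def thread_b(done, out):
    """Processing thread: drain FIFOs, count coincidences, report to host."""
    stokes = {}

    def drain_fifo1():
        while True:
            try:
                seq, hit = fifo1.get_nowait()
                stokes[seq] = hit
            except queue.Empty:
                return

    while not (done.is_set() and fifo1.empty() and fifo2.empty()):
        drain_fifo1()
        try:
            seq, as_hit = fifo2.get(timeout=0.001)
        except queue.Empty:
            continue
        drain_fifo1()   # the matching Stokes record was enqueued first
        if as_hit and stokes.get(seq):
            out["coincidences"] += 1

random.seed(0)
done, out = threading.Event(), {"coincidences": 0}
tb = threading.Thread(target=thread_b, args=(done, out))
ta = threading.Thread(target=thread_a, args=(200_000, done))
tb.start(); ta.start(); ta.join(); tb.join()
print("coincidences:", out["coincidences"])
```

The key design point carried over from the paper is the decoupling: the acquisition/feedback path never blocks on coincidence processing, because the two threads communicate only through the FIFOs.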
Experimental results
The excitation probability in our experiment is χ≈1.65%. According to Eqs. (4) and (6), we calculate the theoretical generation rate of Stokes photons as a function of storage time with the feedback protocol (black curve in Fig. 4) and without the feedback protocol (red curve in Fig. 4), respectively. The discrete points are the experimental results. It can be seen from Fig. 4 that the generation rates with and without the feedback protocol both decrease monotonically with increasing storage time. Without feedback, in particular, the generation rate decreases sharply as the storage time increases from 0 to 10 μs and is then kept at a very low level, whereas the storage time has only a small influence on the generation rate with the feedback protocol. In order to display clearly the growth of the generation rate with the feedback protocol, the ratio R_S as a function of storage time is also shown in Fig. 4. At storage times of 1 μs, 26 μs, and 51 μs, the generation rates with the feedback protocol are 1.2-, 13.3-, and 21.6-fold those without the feedback protocol, respectively. These results indicate that the generation rate of atom-photon entanglement, in particular for long storage times, can be greatly enhanced by using a real-time FPGA-based feedback protocol. A simplified trial-budget model of this enhancement is sketched below.
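The model below is our own approximation, not the paper's Eqs. (4)-(10): without feedback, every trial waits out the storage time before the read, whereas with feedback that cost is paid only on the rare successful writes. Because pulse widths, dead times, and the MOT period are ignored, it somewhat overestimates the enhancement at long storage times, but it reproduces the qualitative trend in Fig. 4.

```python
SEQ = 1.5e-6   # write-sequence period (s)
T2 = 0.5e-6    # post-read gap (s)
P_S = 0.005    # per-trial Stokes detection probability (chi * eta_S, assumed)

def stokes_rate(tau, feedback, window=10e-3, rep=30.0):
    """Detected Stokes photons per second for storage time tau (s)."""
    if feedback:
        trial = SEQ + P_S * (tau + T2)   # read delay paid only after a detection
    else:
        trial = SEQ + tau + T2           # every trial includes the read sequence
    return rep * (window / trial) * P_S

for tau in (1e-6, 26e-6, 51e-6):
    ratio = stokes_rate(tau, True) / stokes_rate(tau, False)
    print(f"tau = {tau * 1e6:4.0f} us: enhancement ~ {ratio:.1f}-fold")
```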
The retrieval efficiency can be obtained from γ = P_S,AS / (P_S · η_AS), where P_S,AS is the coincidence probability between the Stokes and anti-Stokes photons, P_S is the probability of having a Stokes photon, and η_AS is the total detection efficiency of the anti-Stokes photon. From these experimentally measured parameters, the retrieval efficiency is calculated to be γ = 15.5% at the storage time τ = 1 μs.
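The same relation can be evaluated numerically; the probabilities below are hypothetical values chosen only to reproduce the quoted γ = 15.5%.

```python
# gamma = P_S,AS / (P_S * eta_AS), with illustrative (not measured) inputs
eta_AS = 0.31
P_S, P_S_AS = 5.1e-3, 2.45e-4   # hypothetical measured probabilities
gamma = P_S_AS / (P_S * eta_AS)
print(f"gamma = {gamma:.1%}")    # ~15.5%
```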
The retrieval efficiency can be increased by using high-optical-depth cold atoms [40] or by coupling the atoms into an optical cavity [41,42]. Figure 6 shows the PRs and the total coincidence count rate versus excitation probability χ for the H-V, D-A, and R-L polarization settings, respectively. The black squares represent the measured PR, the red circles represent the measured coincidence count rate, and the solid lines are the fitted results. The vertical blue dashed line across all the plots at χ≈1.65% indicates that PR_HV, PR_DA, and PR_RL are 23.8, 8.5, and 8.1, respectively. The results show that PR_HV is larger than PR_DA and PR_RL, which can be explained by the imperfect phase compensation of the SMF. When the power of the write pulse is increased, PR_HV(DA,RL) degrades due to the increased probability of multiexcitation noise. It is therefore infeasible to enhance the generation rate of the atom-photon entanglement by largely increasing the power of the write pulse.
The Bell parameter of the Clauser-Horne-Shimony-Holt (Bell-CHSH) inequality is used to evaluate the quality of the atom-photon entanglement. The Bell parameter S is defined as

S = |E(θ_S, θ_AS) − E(θ_S, θ′_AS) + E(θ′_S, θ_AS) + E(θ′_S, θ′_AS)|,

where the correlation function is computed from the coincidence count rates as

E(θ_S, θ_AS) = [N(θ_S, θ_AS) + N(θ_S + π/2, θ_AS + π/2) − N(θ_S + π/2, θ_AS) − N(θ_S, θ_AS + π/2)] / [N(θ_S, θ_AS) + N(θ_S + π/2, θ_AS + π/2) + N(θ_S + π/2, θ_AS) + N(θ_S, θ_AS + π/2)],

with θ_S and θ_AS the orientations of the polarizers WP_S and WP_AS and N(θ_S, θ_AS) the coincidence count rate. For any local realistic theory, S cannot be larger than 2. The excitation probability χ is fixed at ~1.65% for measuring the Bell parameter. By adjusting the angles of WP_S and WP_AS, the settings (θ_S, θ′_S; θ_AS, θ′_AS) are chosen to obtain the maximum violation for Bell states. As shown in Fig. 7, we obtain S = 2.40 ± 0.02 at the storage time of 1 μs, which violates the Bell-CHSH inequality by 20 standard deviations. The Bell parameter decreases with increasing storage time. At the storage time τ′_0 = 51 μs, the Bell parameter reduces to 2.23 ± 0.12, which still confirms the atom-photon entanglement. However, at a storage time of 61 μs the Bell parameter reduces to 2.00 ± 0.10, indicating that the memory is no longer fully quantum. We therefore treat τ′_0 = 51 μs as the memory lifetime of the atom-photon entanglement, rather than the fitted lifetime τ_0 = 60 μs in Fig. 5. The horizontal line at the level of S = 2 in Fig. 7 marks the bound of the quantum region.
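The sketch below evaluates S from coincidence count rates using this standard CHSH construction. The counts are illustrative stand-ins, chosen so that S matches the quoted 2.40; they are not the paper's raw data.

```python
def E(n):
    """Correlation from the four rates [N(a,b), N(a,b+90), N(a+90,b), N(a+90,b+90)]."""
    n_ab, n_abp, n_apb, n_apbp = n
    return (n_ab + n_apbp - n_abp - n_apb) / (n_ab + n_abp + n_apb + n_apbp)

def chsh(N1, N2, N3, N4):
    """S = |E(a,b) - E(a,b') + E(a',b) + E(a',b')|."""
    return abs(E(N1) - E(N2) + E(N3) + E(N4))

# Hypothetical coincidence rates for the four angle settings
settings = ([400, 100, 100, 400],   # E(a, b)   = +0.6
            [100, 400, 400, 100],   # E(a, b')  = -0.6
            [400, 100, 100, 400],   # E(a', b)  = +0.6
            [400, 100, 100, 400])   # E(a', b') = +0.6

print(f"S = {chsh(*settings):.2f}")  # 2.40; S > 2 signals a Bell violation
```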
To further evaluate the generated entangled state, the fidelity is calculated by the maximum-likelihood estimation method, which requires measuring the coincidence count rates of 36 independent projection states [43,44]. In this experiment, these projection states are obtained by adjusting the WPs placed additionally in front of the PBSs. The error bar of the fidelity is calculated by the Monte Carlo method.
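A minimal sketch of this Monte Carlo error estimation follows, applied for concreteness to the CHSH parameter from the previous sketch (the paper applies the same resampling idea to the fidelity obtained from maximum-likelihood tomography): resample the raw counts with Poisson noise, recompute the derived quantity, and take the spread.

```python
import numpy as np

rng = np.random.default_rng(0)

def E(n):
    n_ab, n_abp, n_apb, n_apbp = n
    return (n_ab + n_apbp - n_abp - n_apb) / sum(n)

def chsh(N1, N2, N3, N4):
    return abs(E(N1) - E(N2) + E(N3) + E(N4))

settings = ([400, 100, 100, 400],
            [100, 400, 400, 100],
            [400, 100, 100, 400],
            [400, 100, 100, 400])

# Poisson-resample each raw count, recompute S, and report mean +/- std
samples = [chsh(*[rng.poisson(N).tolist() for N in settings]) for _ in range(2000)]
print(f"S = {np.mean(samples):.2f} +/- {np.std(samples):.2f}")
```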
The density matrix of the atom-photon entanglement at the storage time of 1 μs is reconstructed from these measurements. In our FPGA feedback system, the FPGA manipulates the AOMs and releases ~644,000 write pulses and ~3190 read pulses every second for an excitation probability of ~1.65% at the storage time of 1 μs, with a sampling rate of 100 Mb/s. The number of data points obtained from every Stokes photon channel is 10.37 × 10^6 in one second. The feedback signal can be released within ~50 ns after the FPGA acquires the multi-channel signals. The FPGA (NI PXIe-7966R) adopted here has 48 I/O channels, which provide a processing capacity of at most 20 ports simultaneously. By synchronizing more FPGA modules, a significant improvement in processing capacity can be expected.
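As a consistency check on these quoted figures: read pulses are released only upon Stokes detections, so the read-pulse rate should be roughly the write-pulse rate times the per-trial Stokes detection probability. Using the efficiencies assumed earlier:

```python
write_rate = 644_000        # write pulses per second (quoted above)
chi, eta_S = 0.0165, 0.31   # excitation probability; assumed Stokes detection efficiency
read_rate = write_rate * chi * eta_S
print(f"expected read-pulse rate ~ {read_rate:,.0f}/s")  # ~3300/s vs quoted ~3190/s
```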
Conclusion
In conclusion, we present a feedback-loop algorithm based on an FPGA to increase the generation rate of controllable atom-photon entanglement pairs. The algorithm performs multi-channel data acquisition and multi-threaded parallel processing, and can easily be extended to the implementation of multiplexed interfaces. The multi-channel data acquisition and the multi-threaded parallel data processing are realized by the FPGA hardware and a self-designed software program, respectively. The program is composed of two threads: one is responsible for the acquisition, storage, judgement, and execution of the feedback signal, while the other judges the buffer status, reads the data from the two FIFO buffers, performs the coincidence measurements, and writes the results to the host computer. The feedback protocol not only increases the generation rate of the single atom-photon entanglement source but also conveniently performs the router control of multiplexed interfaces. By using the feedback-loop algorithm, we achieve a 21.6-fold increase in generating atom-photon entanglement at the storage time of 51 μs compared with the protocol without real-time feedback. When the excitation probability is 1.65%, the generation rate of the atom-photon entanglement pairs is ~3190/s, the measured Bell parameter is 2.40 with an uncertainty of ± 0.02, and the fidelity of the entangled state is 85.5% with an uncertainty of ± 0.6% at the storage time of 1 μs. When the storage time approaches the lifetime (51 μs), the generation rate, Bell parameter, and fidelity of the entangled state are 2100/s, 2.23 ± 0.12, and 83.7% ± 0.8%, respectively.
By increasing the retrieval efficiency and the lifetime, the generation rate of atom-photon entanglement can be further increased. It is also worth noting that the scheme can be easily extended to the multiplexing of atom-photon entanglement, which is expected to increase the generation rate still further. | 2018-08-19T21:18:32.365Z | 2018-07-25T00:00:00.000 | {
"year": 2018,
"sha1": "8215991bf65b5a0d9b89ae0e2244be262eb458c2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.26.020160",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "69daab9b5c4ce10ae77e807285b488730b8fecec",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
249760195 | pes2o/s2orc | v3-fos-license | Assessing the Potency of the Novel Tocolytics 2-APB, Glycyl-H-1152, and HC-067047 in Pregnant Human Myometrium
The intracellular signaling pathways that regulate myometrial contractions can be targeted by drugs for tocolysis. The agents, 2-APB, glycyl-H-1152, and HC-067047, have been identified as inhibitors of uterine contractility and may have tocolytic potential. However, the contraction-blocking potency of these novel tocolytics was yet to be comprehensively assessed and compared to agents that have seen greater scrutiny, such as the phosphodiesterase inhibitors, aminophylline and rolipram, or the clinically used tocolytics, nifedipine and indomethacin. We determined the IC50 concentrations (inhibit 50% of baseline contractility) for 2-APB, glycyl-H-1152, HC-067047, aminophylline, rolipram, nifedipine, and indomethacin against spontaneous ex vivo contractions in pregnant human myometrium, and then compared their tocolytic potency. Myometrial strips obtained from term, not-in-labor women, were treated with cumulative concentrations of the contraction-blocking agents. Comprehensive dose–response curves were generated. The IC50 concentrations were 53 µM for 2-APB, 18.2 µM for glycyl-H-1152, 48 µM for HC-067047, 318.5 µM for aminophylline, 4.3 µM for rolipram, 10 nM for nifedipine, and 59.5 µM for indomethacin. A single treatment with each drug at the determined IC50 concentration was confirmed to reduce contraction performance (AUC) by approximately 50%. Of the three novel tocolytics examined, glycyl-H-1152 was the most potent inhibitor. However, of all the drugs examined, the overall order of contraction-blocking potency in decreasing order was nifedipine > rolipram > glycyl-H-1152 > HC-067047 > 2-APB > indomethacin > aminophylline. These data provide greater insight into the contraction-blocking properties of some novel tocolytics, with glycyl-H-1152, in particular, emerging as a potential novel tocolytic for preventing preterm birth. Supplementary Information The online version contains supplementary material available at 10.1007/s43032-022-01000-2.
Introduction
Preterm birth (PTB), defined as birth before 37 completed weeks of gestation, is a significant determinant of neonatal mortality, disease, and disability in surviving children [1]. Spontaneous premature labor (PTL) is the leading cause of PTB, accounting for approximately 45% of cases [1]. The risk of PTL is increased by prior preterm birth, preterm premature rupture of the membranes (PPROM), uterine overdistension, stress, and immunologically mediated processes [1,2]. In attempts to prevent PTB, various tocolytics have been trialed for blocking spontaneous PTL, with the goal of extending pregnancy long enough for corticosteroid administration to mature the fetal lungs and improve birth outcomes. Tocolytic compounds examined include betamimetics, such as salbutamol and ritodrine, calcium ion (Ca2+) channel blockers (CCBs), such as nifedipine (NIF), Ca2+ competitors, such as magnesium sulfate, inhibitors of prostaglandin-endoperoxide synthase 2 (PTGS2), such as indomethacin (IND), oxytocin (OT) receptor antagonists (OTRA), such as atosiban, and nitric oxide (NO) donors, such as nitroglycerine. These tocolytics are now well characterized and work through either inhibiting pro-contraction signaling pathways, in particular by suppressing the elevation of intracellular Ca2+ levels and blocking the availability and actions of uterotonins, or through activating pro-relaxation signaling pathways, in particular by raising intracellular cyclic adenosine monophosphate (cAMP) levels. Several systematic reviews and meta-analyses have been conducted in different parts of the world to assess the relative effectiveness of different tocolytic agents [3-8]. As a consequence, the guidelines for tocolytic management differ internationally [3,9], and tocolytics licensed in one region of the world may or may not be licensed elsewhere. Where certain classes of tocolytics are not licensed, clinicians may utilize them off-label as second-line therapy. Approximately 75% of drugs used in obstetrics for tocolysis are unlicensed [10]; this is because there are no specific therapeutic agents explicitly developed for tocolytic management except atosiban. Most tocolytics currently in use were developed for other medical indications but were found to have tocolytic actions [10]. Moreover, there is no single agent currently available as a first-line tocolytic that is not associated with risks of side-effects [11]. Atosiban is associated with fewer side-effects than other tocolytics, but there is little evidence of efficacy [12]. All currently available tocolytic agents have relatively limited efficacy in postponing PTL and improving neonatal outcomes. As such, there remains a pressing need to evaluate the myometrial contraction-blocking capabilities of novel drugs to improve tocolytic therapy.
Here, we report a comprehensive ex vivo analysis of three novel contraction-blocking agents: 2-aminoethoxydiphenyl borate (2-APB), glycyl-H-1152 dihydrochloride (GH), and HC-067047. We examined the potency of these "novel tocolytics" against spontaneous pregnant human myometrial contractions ex vivo and compared them to previously studied tocolytics, including NIF and IND, which have been used clinically for preventing PTB, as well as rolipram (ROL) and aminophylline (AMP), which have been previously investigated for uterine contraction inhibition.
2-APB was originally introduced as an inhibitor of inositol trisphosphate (IP3) receptors (IP3R) [13], which are a family of closely related Ca2+ channels embedded within the sarcoplasmic reticulum (SR). However, subsequent studies in intact cells have revealed that 2-APB also inhibits Ca2+ entry via store-operated channels (SOC); an effect that is independent of IP3R inhibition [14-16]. Thus, 2-APB has non-specific inhibitory effects on both IP3R and SOC, as well as on other Ca2+ transporters, e.g., sarcoplasmic Ca2+ ATPase (SRCA) pumps and transient receptor potential (TRP) channels of the TRPC family [17-20]. In maintaining Ca2+ homeostasis, G-protein coupled receptors, via activation of phospholipase C (PLC), generate intracellular IP3. IP3 then diffuses rapidly within the sarcoplasm (cytoplasm) to bind with IP3Rs and release intracellular Ca2+ stores from the SR into the sarcoplasm [21-23]. The resulting increase in intracellular Ca2+ triggers the slow activation of SOCs in the sarcolemma (plasma membrane), which mediates store-operated Ca2+ entry (SOCE) into the sarcoplasm. Entry of Ca2+ through the sarcolemma contributes to the refilling of the SR Ca2+ stores and to the total intracellular pool of free Ca2+ in the sarcoplasm. Sarcoplasmic Ca2+ then binds to calmodulin, which activates the canonical signal transduction pathway that culminates in myosin light chain (MLC) phosphorylation and the actin-myosin cross-bridge cycling that generates contractility [24,25]. By inhibiting the activity of both SOC and IP3Rs, as well as other Ca2+ transporters, 2-APB prevents the elevation of sarcoplasmic Ca2+ levels and, in turn, inhibits myometrial contractility. Since its discovery [13], 2-APB has been tested on myometrial strips of rodents in several studies, where it was found to inhibit both agonist-stimulated (OT, pennogenin tetraglycoside, Lannea acida plant extract, Ficus deltoidea plant extract) and spontaneous myometrial contractions [26-32]. However, despite potently inhibiting pregnant rodent uterine contractility, the effects of 2-APB on spontaneous pregnant human myometrial contractions were yet to be examined.
GH is a selective inhibitor of rho-kinase (ROCK). ROCK increases the sensitivity of uterine myocytes to Ca2+ through increased phosphorylation of MLC [33]. ROCK expression (mRNA abundance and protein levels) is upregulated at the end of pregnancy and is likely involved in the processes that underpin the increased myometrial contractility at term [34]. Thus, by inhibiting ROCK, GH reduces the sensitivity of uterine myocytes to Ca2+. GH has been found to inhibit both spontaneous and OT-stimulated pregnant human myometrial contractions ex vivo [35,36]; however, comprehensive dose-response analyses were yet to be conducted to determine the potency of GH compared to other tocolytics.
HC-067047 is an inhibitor of transient receptor potential subfamily V, member 4 (TRPV4), a nonselective cation channel that is permeable to extracellular Ca2+ [37-40]. TRPV4 is activated by physiological stimuli, including stretch, swelling, heat, and pressure, that may be relevant to human labor [41,42]. By inhibiting TRPV4 channels, HC-067047 prevents the influx of extracellular Ca2+ through these channels, thus preventing the elevation of intracellular Ca2+ levels and myometrial contractility. TRPV4 is highly expressed in pregnant human myometrium and TRPV4 protein levels increase as gestation progresses [43].
These findings suggest that TRPV4 inhibition could be a potential novel tocolytic strategy [43,44]; however, the effects of TRPV4 inhibitors, such as HC-067047, on spontaneous pregnant human myometrial contractions were yet to be investigated.
We also examined the non-selective phosphodiesterase (PDE) inhibitor, AMP, and the selective PDE4 inhibitor, ROL. By inhibiting PDEs, which are responsible for the breakdown of cAMP, both AMP and ROL raise intracellular cAMP levels, which promotes uterine relaxation. The tocolytic effects of theophylline, the active ingredient of AMP, and ROL have been reported in pregnant rodent and human myometrium [45][46][47][48][49][50][51]. However, their potency in suppressing spontaneous pregnant human myometrial contractions was yet to be comprehensively assessed. We also determined the potency of NIF and IND, which have well-documented tocolytic effects.
We conducted comprehensive dose-response analyses for 2-APB, GH, HC-067047, AMP, ROL, NIF, and IND using strips of pregnant human myometrium undergoing spontaneous contractions ex vivo. The location at which each of these agents affects uterine myocyte contraction signaling pathways is shown in Fig. 1. We then compared the contraction-blocking potency of the agents to assess the tocolytic potential of 2-APB, GH, and HC-067047 as novel tocolytics.
Human Myometrial Specimens
Biopsy specimens of human myometrial tissue were obtained from women undergoing elective cesarean section at the John Hunter Hospital, NSW, Australia. Biopsies were collected with the approval of the Hunter and New England Area Human Research Ethics Committees (2019/ETH12330) and all participants gave informed written consent. Myometrial biopsies were obtained from the upper lip of the incision in the lower uterine segment. All myometrial biopsies were obtained from term pregnancies (37-40 weeks of gestation) where the woman was not-in-labor (NIL). The clinical indications for elective cesarean section were breech presentation or a previous cesarean. All women were examined clinically, and those with signs of infection, with diabetes mellitus, or treated with any medication other than prenatal vitamins were excluded from the study. Patient demographic data are shown in Table 1. Upon collection, the myometrial biopsies were placed in pre-chilled phosphate buffered saline (PBS) on ice for transportation and were used within 60 min to commence myometrial dose-response contraction assays.
Myometrial Contraction Assays
Myometrial contraction assays were performed as previously described [52-54] using an 8-channel Radnoti Tissue-Organ Bath System (Radnoti Glass Technology Inc., Monrovia, CA, USA) equipped with MLT0201 force transducers (ADInstruments, Bella Vista, NSW, Australia) and eight temperature-controlled organ baths. Human myometrial specimens were dissected into 8 × 1.5 × 1.5 mm tissue strips then connected to the force transducers using nylon thread and stainless-steel tissue clips (ADInstruments). Each strip was lowered into a separate organ bath containing 15 mL modified Krebs-Henseleit buffer solution (KREBS) (no Ca2+, no NaHCO3) (Sigma-Aldrich, cat no K3753-10X1L) supplemented to 2.5 mM CaCl2 and 25 mM NaHCO3. Organ baths were maintained at 37 °C and KREBS continuously gassed with 95% O2 and 5% CO2. The transducer position was adjusted to apply 1 g of tension to each strip. The strips were then equilibrated whereby every 10 min for a total of 30 min, the organ baths were drained then refilled with 15 mL of KREBS (tissue strips washed). Due to tissue creep during stabilization, tissue length was increased to return tension to 1 g. Washing and re-tensioning to 1 g was repeated twice more (each strip tensioned to 1 g a total of 3 times). Thereafter, tension stabilized between 0.5 and 0.9 g and strips were left to develop spontaneous rhythmic contractions ex vivo. Under the described conditions, the myometrial strips took approximately 2 h to establish spontaneous contractions with consistent amplitude and frequency. Testing protocols were then begun, and strips were then maintained under isometric conditions for the remainder of the experimental run. In control experiments, tension was maintained near baseline tension for 6-7 h (Fig. S1). In experiments involving serial drug additions, some treatments resulted in changes of baseline tension. To accommodate these baseline changes, AUC were calculated from the tension between contractions observed immediately prior to drug addition. Contraction data were captured and visualized in real-time using a PowerLab 8/35 data-acquisition system and LabChart software (ADInstruments). Contraction traces were analyzed for key contraction parameters, including amplitude (g) and frequency (contractions/h), and integration of these values to determine the area under the curve (AUC) (g tension × sec). AUC was considered an index of contraction performance and was calculated based on the total area for all contractions generated during each 30 min treatment window (Fig. S2).
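As an illustration of this AUC calculation, the sketch below integrates a synthetic tension trace over a 30 min window relative to the inter-contraction baseline tension; the toy trace and the clipping logic are our assumptions, not the LabChart implementation.

```python
import numpy as np

def window_auc(t_sec, tension_g, t_start, baseline_g, window_s=1800):
    """AUC (g.s) of a trace segment, counting tension above the baseline."""
    mask = (t_sec >= t_start) & (t_sec < t_start + window_s)
    signal = np.clip(tension_g[mask] - baseline_g, 0, None)
    return np.trapz(signal, t_sec[mask])

# Toy trace: 0.7 g resting tension with a rhythmic contraction every ~2 min
t = np.arange(0, 1800, 0.1)
trace = 0.7 + 1.5 * np.clip(np.sin(2 * np.pi * t / 120), 0, None) ** 4
print(f"AUC = {window_auc(t, trace, 0, baseline_g=0.7):,.0f} g.s")
```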
Longevity of Spontaneous Contractions in Pregnant Human Myometrium Ex Vivo
To ensure that myometrial strips were able to contract for the duration of the tocolytic studies, strips were allowed to generate spontaneous contractions ex vivo for 7 h. Traces were then analyzed (in 60 min blocks) to confirm that there was no significant change in the resting tension, contraction amplitude, contraction frequency, or AUC across the 7 h period of spontaneous ex vivo contractions.
Dose-Response Study
Before administering drug treatments, a contraction baseline was established for each tissue strip during which 1 h of contractions of consistent amplitude and frequency was recorded. Following the establishment of the baseline, treatments were added to the organ baths and the effects on contractility recorded. For each tissue strip, cumulative concentrations of drugs were administered at 30 min intervals. The effect of each drug was assessed against each strip's contraction baseline (each strip has an internal control). Seven drugs (2-APB, GH, AMP, HC-067047, ROL, NIF, IND) were analyzed against myometrium from different term pregnant NIL women (replicate numbers as indicated). To control for any effects of the drug vehicles (dimethyl sulfoxide (DMSO), Milli-Q water or KREBS buffer), equivalent cumulative volumes of vehicles were assessed against separate tissue strips during each contraction assay.
Data Analysis
Analysis of AUC was performed using LabChart 8.0 Pro with the dose-response module (ADInstruments). For each strip, the last 30 min of contractions immediately prior to commencing treatments was used as the baseline (100%). Effects of treatments were normalized against the baseline and data expressed as percent (%) of baseline contractility. Dose-response curves for AUC were generated using the non-linear regression model of GraphPad Prism 8.0 (GraphPad Software Inc., San Diego, CA, USA) and fitted with the "log(inhibitor) vs. normalized response - variable slope" equation, Y = 100/(1 + 10^((LogIC50 - X)*HillSlope)). The concentration of each drug required to inhibit ex vivo myometrial contractility by 50% (IC50) was determined as a 50% reduction in total AUC relative to the contraction baseline. An ordinary one-way ANOVA followed by Dunnett's multiple comparisons test was used to determine significant differences between the baseline and the mean AUC of each dose used in the dose-response curve. A probability (P) value of < 0.05 was considered statistically significant.
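For readers without Prism, the same four-parameter logistic fit can be reproduced with SciPy; the concentrations and AUC values below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(logC, logIC50, hillslope):
    """Y = 100 / (1 + 10^((LogIC50 - X) * HillSlope)), as quoted in the text."""
    return 100.0 / (1.0 + 10.0 ** ((logIC50 - logC) * hillslope))

# Hypothetical AUC data (% of baseline) at cumulative drug concentrations (M)
conc = np.array([1e-6, 3e-6, 1e-5, 3e-5, 1e-4, 3e-4])
auc = np.array([98.0, 91.0, 72.0, 41.0, 12.0, 2.0])

# A negative HillSlope gives a descending (inhibitory) curve with this form
popt, _ = curve_fit(hill, np.log10(conc), auc, p0=[-4.5, -1.0])
print(f"IC50 = {10 ** popt[0] * 1e6:.1f} uM, HillSlope = {popt[1]:.2f}")
```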
Confirmation of IC50
We sought to confirm the accuracy of the IC50 values determined for each drug. Baseline contractility for tissue strips was recorded for 1 h. Each drug was then applied to individual contracting strips as a single treatment at the IC50 determined for that drug (with the exception of HC-067047, which could not be solubilized at the concentration predicted to be its IC50). Contractility was recorded for a further 1 h. The effect of administering each drug at its IC50 on AUC was then determined.
Longevity of Spontaneous Contractions and Assessment of Drug Vehicles
We first sought to confirm that term NIL myometrial strips were able to maintain consistent spontaneous rhythmic contractions ex vivo for 7 h, which was the maximum duration of the cumulative tocolytic treatments. The strips exhibited spontaneous rhythmic contractions within 2 h of the final equilibration wash/re-tension (Fig. S2, panel A). Contractions remained stable for over 7 h, in that comparison of the 60 min periods revealed no significant changes in resting tension, contraction amplitude, frequency, or AUC (n = 5) across the 7 h period (Fig. S2, panels B-E). During the treatment time courses (2.5-3.5 h), the administration of cumulative doses of DMSO (maximum of 0.42%), Milli-Q water, or KREBS (drug vehicles) had no effect on the contraction amplitude, frequency, or AUC (Fig. S3, panels A-C and Fig. 3, panels A-G).
Dose-Response Analyses
Each of the drugs analyzed dose-dependently inhibited contractions in spontaneously contracting strips of pregnant human myometrium ex vivo. The dosing regimen for each drug was optimized in prior organ bath contraction studies such that the lowest drug concentration had no significant effect and the highest concentration exerted the maximal inhibitory effect on contractions (ICmax). For all drugs except HC-067047, the ICmax was the abolition of contractions (0% of baseline AUC) (data not shown). All drugs, except HC-067047, were therefore equally effective at abolishing spontaneous pregnant human myometrial contractions ex vivo but exhibited different potencies (different IC50 values) (Fig. 2).
Tocolytic ICmax
2-APB and GH abolished spontaneous ex vivo contractions at 120 and 80 µM, respectively, whereas HC-067047 failed to completely abolish contractions even at the highest cumulative concentration tested (300 µM). At concentrations of ≥ 100 µM, HC-067047 precipitated out of solution within the organ baths. As such, the contractility recorded at the 100, 200, and 300 µM concentrations does not accurately reflect the effects of HC-067047 against spontaneous pregnant human myometrial contractions ex vivo. AMP and ROL abolished spontaneous contractions (ICmax) at 800 and 150 µM, respectively. The traditional tocolytics, NIF and IND, abolished contractions at 50 nM and 120 µM, respectively.
The Tocolytic Effect of the Drugs Is Reversible
To validate that the contraction inhibition was mediated by drugs and not due to diminished cellular viability and/or metabolic restriction of the tissue, the myometrial strips were washed with KREBS solution after tocolytic treatment to ascertain whether spontaneous contractions resumed. For all drugs analyzed, the strips resumed contracting after the washout procedure following a brief (< 30 min) recovery period (Fig. 5, panels A-G).
Discussion
Evidence suggests that PTL is a syndrome attributable to multiple pathological processes, including infection or inflammation, uterine overdistension, stress, ischemia or hemorrhage, endocrine disorders, immunologically mediated processes [55], and a gene expression pattern distinct from term labor [56]. Thus, PTL is a heterogeneous condition of multiple dysfunctions of preterm tissue that may lead to myometrial contractions, membrane/decidual activation, and/or cervical ripening. These three attributes constitute the common terminal pathway of both preterm and term birth. Hence, it is unclear whether spontaneous PTL results from premature activation of the term labor process, or from pathological insults initiating uterine transformation from quiescence to overt labor [57]. Although evidence indicates that preterm and term labor are distinct processes [56], preterm and term labor ultimately converge at the level of the myometrial contractile proteins. In this regard, there are no data available reporting that the levels of the contractile proteins relevant to this study (IP3R, TRPV4, L-type Ca2+ channels, and ROCK) change between preterm and term human myometrium. Therefore, in the absence of such data, tocolytic agents that target these proteins should not be dismissed, as they may yet have relevance to tocolysis during PTL. In this study, we performed comprehensive dose-response analyses to examine the contraction-blocking potency of three potential new tocolytics, 2-APB, GH, and HC-067047. We compared the IC50 of these new tocolytics to those of the PDE inhibitors, AMP and ROL, and of the clinically deployed tocolytics, NIF and IND. In terms of inhibiting spontaneous pregnant human myometrial contractions ex vivo (measured as AUC), the order of potency of the tocolytics from highest to lowest was NIF > ROL > GH > HC-067047 > 2-APB > IND > AMP (Table 3).
As a non-specific inhibitor of IP3Rs and SOC, 2-APB blocks Ca2+ entry from both intracellular stores [13] and the extracellular space [14-16], which prevents the elevation of cytosolic Ca2+ levels that drives contractions. Suppression of myometrial contractions in vitro by 2-APB was first reported by Ascher-Landsberg et al., who showed inhibition of both OT-stimulated and spontaneous contractions in rat myometrium, with contractions ultimately abolished at 100 µM [26]. This generally aligns with other subsequent studies in rodents, where 2-APB inhibited uterine contractions stimulated by different agonists, including OT and pennogenin tetraglycoside [26-32]. In our study, 2-APB abolished spontaneous pregnant human myometrial contractions at 120 µM, which is largely consistent with Ascher-Landsberg et al. [26], and we identified an IC50 (50% reduction in baseline AUC) for 2-APB of 53 µM. However, unlike Gravina et al., who reported a reduction in mouse myometrial contraction frequency following treatment [29], we observed that in pregnant human myometrium, contraction frequency increased as amplitude decreased in response to cumulative 2-APB treatments. Moreover, the existing literature indicates that 2-APB exerts concentration-dependent biphasic effects on both SOC and IP3Rs. Patch-clamp studies on intact cell lines have shown that 2-APB has a stimulatory effect on Ca2+ entry via SOC and the IP3R gating system at lower concentrations (< 10 µM), causing a transient increase in the amplitude of the Ca2+ rise, whereas higher concentrations (> 10 µM) of 2-APB inhibit Ca2+ entry [15,58,59]. However, a biphasic effect of 2-APB was not observed in our myometrial tissue strip studies. We generated concentration-response curves based on assessment of both AUC (Fig. 3) and amplitude alone (data not shown) and in both cases, we detected no stimulatory effect of 2-APB on myometrial contractility at low concentrations (1 µM). The inconsistency may be attributable to the different experimental models, in that the biphasic effect of 2-APB was observed in non-uterine myocyte cells in monoculture using patch-clamp analysis, whereas our study utilized strips of pregnant human myometrium in a contraction bioassay system [15,58,59]. In support of this, the existing literature demonstrates that the isoform expression of STIM (1-2), an SR membrane protein that induces the opening of the SOCs, and ORAI (1-3), a plasma membrane protein that forms the pore of SOCs, differs between myometrial cells in culture and myometrial tissue [60-62]. 2-APB has been shown to have differential effects on different STIM and ORAI isoforms [63,64], which together mediate SOCE. Thus, the absence of a biphasic effect in the present study may be attributable to differences in the expression of STIM and/or ORAI isoforms compared to cell lines.
(Figure caption: Contractility was measured as AUC and expressed relative to the contraction baseline. Data are presented as mean ± SEM. There was a significant reduction in AUC in response to the cumulative doses of drugs. Comparisons were made between baseline and mean AUC of each dose using ordinary one-way ANOVA followed by Dunnett's multiple comparisons test. A probability (P) value of < 0.05 was considered statistically significant. The asterisks indicate a significant difference, where 4 asterisks indicate P < 0.0001.)
By inhibiting the isoenzyme ROCK [65], GH reduces the Ca2+ sensitivity of uterine myocytes. The myometrial contraction and relaxation cycle depends on the equilibrium between phosphorylation and dephosphorylation of MLC, where myosin light chain kinase (MLCK) phosphorylates MLC to promote contraction and myosin light chain phosphatase (MLCP) dephosphorylates MLC to promote relaxation. MLCK is a Ca2+-dependent enzyme that is activated by the formation of the Ca2+-calmodulin complex in response to an intracellular Ca2+ surge [66-69]. In contrast, MLCP is negatively regulated by a Ca2+-independent mechanism in which a regulatory subunit of MLCP is phosphorylated by ROCK, blocking the action of MLCP and potentiating the effect of MLCK. This mechanism requires less Ca2+ to regulate MLC phosphorylation; a phenomenon called Ca2+ sensitization [33]. It has recently been demonstrated that ROCK-mediated Ca2+ sensitization of contractility can be induced by muscarinic or OT receptor stimulation in rat and human myometrium [70]. Hudson et al. and Aguilar et al. reported that cumulative concentrations and a single concentration (1 µM) of GH inhibited both OT-stimulated and spontaneous contractions of human myometrium ex vivo [35,36]. Our comprehensive dose-response analyses add to these data, as we have shown that in spontaneously contracting pregnant human myometrium ex vivo, the IC50 for GH is 18.2 µM and contractions are abolished at 80 µM. Our analyses also demonstrate that as GH inhibits contraction amplitude, contraction frequency increases, similar to 2-APB. The third novel tocolytic we examined was HC-067047, a selective inhibitor of TRPV4 channels. TRPV4 plays a role in extracellular Ca2+ influx in response to various stimuli (stretch, swelling, heat, or pressure). There are limited studies examining the tocolytic effect of HC-067047. In mouse and rat uterine tissue strips, a single treatment with 1 µM HC-067047 inhibited contractions stimulated by OT and GSK1016790A [43,71]. However, when used to treat pregnant human myometrium at the same concentration (1 µM), HC-067047 had only a slight inhibitory effect against OT-stimulated contractions [72].
Interestingly, Villegas et al. reported that activation of TRPV4 channels by the TRPV4 agonists, GSK1016790A and 4αPDD, resulted in the inhibition of OT-stimulated contractions in pregnant human myometrium [73]. This was attributed to an indirect effect whereby Ca2+ influx through TRPV4 channels causes K+ efflux via BKCa-channel activation, which, in turn, causes membrane hyperpolarization that inhibits L-type Ca2+ channels. This contrasts with the findings of Ying et al., who reported that GSK1016790A increased the contractility of mouse uterine tissue, which was then inhibited by HC-067047 [43]. Ying et al. also reported that HC-067047 inhibited OT-stimulated mouse uterine contractions, as well as delayed parturition in both RU486- and inflammation-induced mouse models of PTL [43]. Moreover, Singh et al. reported that HC-067047 inhibited GSK1016790A-induced contractility in murine strips ex vivo [71], while another ex vivo study with rat myometrial strips reported that the TRPV4 antagonist, RN1734, significantly decreased uterine contractility, whereas the TRPV4 agonist, RN1747, increased contractility [74]. The reason for the discrepancy in relation to the effects of TRPV4 agonism is unclear but may be related to the temporal and physical Ca2+ compartmentalization that plays a role in fine-tuning contractility [72]. Nonetheless, our findings are consistent with prior studies in the mouse and rat, in that TRPV4 antagonism by HC-067047 unequivocally inhibited ex vivo spontaneous pregnant human myometrial contractions in a concentration-dependent manner (1, 10, 100 µM). Precipitation of HC-067047 at higher concentrations (200, 300 µM) means that the potency of HC-067047 is likely higher than that determined during this study. HC-067047 may therefore be a novel avenue for tocolysis; however, an effective delivery strategy may be required to overcome its low aqueous solubility, such as delivery via uterine-targeted nanoliposomes [53] or vaginally administered mucus-penetrating nanoparticles [75].
As a non-selective PDE inhibitor, AMP leads to the intracellular accumulation of cAMP, which operates through various mechanisms to promote uterine myocyte relaxation (see Fig. 1). Prior studies have demonstrated smooth muscle and myometrial contraction inhibition by AMP in rodents [48,49,76] and humans [51,77-81]. In an ex vivo study with pregnant human uterine strips, Bird et al. reported that AMP (40 and 100 µM) produced concentration-dependent inhibition of OT-stimulated contractions [77]. In another study, Verli et al. reported that increasing AMP concentrations (0.01 nM-10 µM) reduced OT-stimulated human myometrial contractions by 25% of baseline contractility [51], and concentration-dependent inhibition of human myometrial contractions by AMP has also been reported by Leroy et al. [81]. In the present study, AMP also inhibited spontaneous pregnant human myometrial contractions in a concentration-dependent manner; however, a high concentration (800 µM) of AMP was required to abolish contractility. Moreover, our determined IC50 for AMP of 318.5 µM is higher than that reported by Leroy et al. [81]. While AMP is a non-selective PDE inhibitor, ROL selectively inhibits PDE4, which is highly expressed in pregnant human myometrium at term [82]. Verli et al. reported that increasing concentrations of ROL (0.01 nM-10 µM) inhibited OT-stimulated contractions in pregnant human myometrial strips, and at the highest concentration tested, 10 µM, the authors found that ROL inhibited 62% of baseline contractility. Leroy et al., Bardou et al., and Martinez et al. reported that ROL inhibited spontaneous pregnant human myometrial contractions in a concentration-dependent manner, with 50% inhibition of contractility observed at 100 nM, 158 nM, and 22.7 µM, respectively [45,78,83]. In our analyses, ROL abolished ex vivo spontaneous pregnant human myometrial contractions at 150 µM and we determined the IC50 to be 4.3 µM. We also noted that with both AMP and ROL treatment, as contraction amplitude decreased, contraction frequency also decreased; this contrasts with the effects of 2-APB, GH, and HC-067047, where we observed that contraction frequency increased as amplitude was inhibited.
As an inhibitor of voltage-gated L-type Ca2+ channels, NIF blocks the influx of extracellular Ca2+ that underpins uterine myocyte contractility [84,85]. The effects of NIF are well established [86,87] and it is currently used clinically for tocolysis in various countries [88]. In the present study, NIF abolished spontaneous ex vivo contractions at 50 nM and was the most potent of the agents examined, with a determined IC50 of 10 nM. Within pregnant myometrium, IND blocks the synthesis of the prostaglandins that promote uterine contractions [92] and has been widely used as a tocolytic for many years. The tocolytic effect of IND was first reported by Vane et al. [93] and has been subsequently examined many times [94-98]. Across these studies, IND was reported to inhibit myometrial contractions at different concentrations and there is no emergent consensus as to an IC50. In our analyses, contractions were abolished by 120 µM IND and we determined the IND IC50 to be 59.5 µM. In prior studies on spontaneously contracting pregnant human myometrium, Arrowsmith et al. [94] and Johnson et al. [96] reported IND IC50 values of 35.4 and 278 µM, respectively, placing our findings within the range of these prior studies.
Strengths and Limitations
This study was the first to conduct a comprehensive dose-response study for the novel tocolytics, 2-APB, GH, and HC-067047, to determine their IC50 concentrations and compare their potency with clinically used tocolytics. A strength of this study was our confirmation of the determined IC50 for each drug, where we confirmed that a ~50% reduction in baseline AUC was achieved (with the exception of HC-067047, due to solubility limitations) when each drug was applied to contracting myometrial strips as a single treatment. These data support the accuracy of our determined IC50 values within our experimental setting and may provide insight into the relative tocolytic potency that could be expected from the different drugs in a clinical setting.
The study has only examined tocolytic potency against unstimulated (spontaneous) ex vivo contractions in term, NIL pregnant human myometrium. The authors acknowledge that there is increasing evidence of distinct differences between preterm and term labor [56], which may call into question the relevance of these data gleaned from term NIL myometrium. However, as previously mentioned, there are no data available indicating that levels of IP3R [99-101], TRPV4 [43], and ROCK [34,102,103] change between preterm and term pregnant human myometrium. Additionally, changes in the expression of L-type Ca2+ channels [104-107], PTGS1 [108,109], PTGS2 [109], and PDE4 [51,110,111] have been reported between preterm and term myometrium; however, each of these proteins was reported to exhibit higher expression in preterm myometrium than at term, suggesting that tocolytics targeting these proteins may actually have greater relevance during PTL than labor at term.
Lastly, this unstimulated model is a standard technique for elucidating contraction pathways; however, further valuable insight would be garnered by examining agonist-stimulated contractions, as well as term IL myometrium and preterm NIL and IL myometrium.
Final Remarks
This study represents a comprehensive analysis of the myometrial contraction-blocking potency of the novel tocolytics, 2-APB, glycyl-H-1152, and HC-067047, and a comparison of their potency against the traditional tocolytics, nifedipine and indomethacin, as well as the other potential candidates, rolipram and aminophylline (Table 2). Among the novel tocolytics, glycyl-H-1152 was the most potent, followed by HC-067047 and 2-APB. Glycyl-H-1152 was also found to be a more potent inhibitor of ex vivo myometrial contractions than indomethacin and aminophylline, but less potent than nifedipine and rolipram, making glycyl-H-1152 the third most potent contraction blocker assessed (Table 3). These data provide us with greater insight into the contraction-blocking potency of these drugs, with glycyl-H-1152 in particular emerging as a potential novel tocolytic due to its substantial potency. Glycyl-H-1152 may be an excellent candidate for encapsulation into uterine-targeted nanoliposomes [53] or vaginally administered mucus-penetrating nanoparticles [75] as novel tocolytic strategies for preventing preterm birth. Such platforms may also facilitate the administration of hydrophobic drugs, such as HC-067047. Further studies are warranted to assess the tocolytic efficacy and safety of these agents in vivo using preterm birth models.
Fig. 5 Reversibility of the tocolytics. Representative traces showing that after washout of the drugs, contractions spontaneously resumed in pregnant human myometrial strips treated with A 2-APB (n = 5), B GH (n = 5), C HC-067047 (n = 5), D AMP (n = 5), E ROL (n = 5), F NIF (n = 5), and G IND (n = 5). Red dotted lines indicate the points at which organ baths were drained then refilled with fresh KREBS buffer.
Table 3 Ranking of tocolytic agents according to their contraction-blocking potency
Nigeria Public Debt 1999 – 2017: An Analysis of Trend and Impact on Economic Growth and Development
This paper examines the trend of Nigeria's public debt between 1999 and 2017, 1999 being the year of return to democratic governance. It critically explores the economic history of borrowing by Nigeria over that period, taking an overview of both the local and foreign public debts vis-a-vis the desired and actual economic development. It considers the reasons for the trend of the debt, its types, sources, growth, management and challenges, and the efforts and contributions made by the various successive administrations to resolve the public debt crisis. It also touches on the nation's fortune in the agricultural sector, which began dwindling from 1970 upon the discovery of "black gold" in the Eastern part of the country. The study adopted an ex-post-facto research design. Secondary data obtained from the Central Bank of Nigeria (CBN) and the Debt Management Office (DMO) were analyzed using descriptive and inferential statistics in SPSS version 22. The paper concludes that the economic growth and development of Nigeria is not commensurate with its level of public debt during the period of study, and that borrowing was thus ineffective relative to expectation. It therefore recommends a restructuring of the national economic plan and development parameters so that dependence on public borrowing is minimized and, where borrowing is necessary, the borrowing objective is economic, developmental, futuristic and pursued to a logical and beneficial conclusion in the interest of the national economy.
the nation's budgeting system and forcing the successive governments to borrow uneconomically from within and outside the economy.
Ujuju & Oboro (2017) observed that the economy of a nation would be better off if domestic and foreign loans/debts are adequately mixed and if the value of goods and services produced with such debts is in excess of the cost of the loan/debt. The cost here refers to interest charges and administrative expenses. Macroeconomically, public debt is beneficial if it promotes the economic growth of the nation as well as increases the welfare of the citizens, which are key functions of the government. While economists encourage borrowing by developing or developed countries of the world, it is necessary that the objective of borrowing be well defined, aggressively pursued, constantly reviewed and economically progressive in the interest of the borrowing nation. It is this writer's candid opinion that no lending nation or organization will assist a borrowing nation without implied benefits to herself. Therefore, a borrower is expected to be rational and economical in borrowing.
A key objective of macroeconomic policy is striving to attain growth and development, particularly in a developing country like Nigeria with low levels of domestic savings and investment (Matthew & Mordecai, 2016). It is not practicable for an economy to survive in isolation. With scarcity of capital, it is expected that Nigeria would resort to borrowing, both within and outside its own economy, to supplement its domestic savings. Therefore, the next alternative to capital formation in a situation of economic depression is arguably borrowing. Each time the country borrowed, it was hoped that a turnaround would take place in the international oil market in no distant time and that each loan obtained would be used in achieving a turnaround in the domestic economy, but such hopes did not materialize. Rather, at a point in the nation's life, borrowing was greater than the national income (Aminu, Ahmadu & Salihu, 2013). Nigeria's public debt finally became a source of worry to the government and the citizenry. In 2006 and 2007, the country (through the effort of the government led by former President Olusegun Obasanjo) was able to successfully secure relief and cancellation of its debts from the London and Paris Clubs. This relief was temporary, as the debts later began to accumulate again in the name of economic growth and development of necessary sectors of the economy.
Statement of the Problem
The public debt of Nigeria has political, economic, developmental and international implications. It behooves the national planners to ensure positive implications in all aspects, to the overall advantage of the country. Authors, researchers, planners, statisticians, economists and accountants alike consider and propose the way forward, almost on a daily basis.
When a public debt escalates, the cost of servicing the debt becomes difficult to cope with, making it difficult for the country to achieve its desired economic, monetary and fiscal objectives; it has directly or indirectly created serious obstacles to the growth of the nation (Adebiyi & Olowookere, 2013). Many authors and writers have come up with studies and conclusions on the public debts of Nigeria. None of the studies can be regarded as less relevant or ineffective, especially if the ensuing recommendations had been effectively applied. Jhigan (2008), in a study, posits that Nigeria may purposely acquire debt to accelerate its economic development through the importation of capital goods, raw materials and spare parts, as well as to finance certain strategic requirements meant for economic growth and development. In another study by Akujuobi (2007), it was concluded that Nigeria's economic growth decline was a result of its external debt rather than its domestic debt: domestic debt contributed positively, while external debt did not, and the cumulative effect of such contribution was not felt in the economy. The nation's successive governments have continued to consciously manage its debt without much success (Udo & Antai, 2014). It should be noted that servicing large debt has adversely affected investment and resulted in serious illiquidity. Resources have been grossly underutilized, and Nigeria now has a high incidence of poverty and infrastructural decay. This paper attempts to deviate a little from previous studies by considering the historical trend of borrowing between 1999 and 2017, studying both the local and foreign public debts of the country in relation to her expected and actual economic development. The year 1999, it should be noted, was the year of return to democracy and civilian administration in the nation.
Objectives of the Paper
The main objective of the paper is to determine the correlation between the public debts of Nigeria and her economic growth and development between 1999 and 2017. Other objectives specifically include:
(i) To examine whether external debt was beneficial to the Nigerian economy between 1999 and 2017.
(ii) To determine whether Nigeria's domestic debt is commensurate with the growth and development achieved between 1999 and 2017.
(iii) To evaluate the effectiveness of total public debt on the growth and development of Nigeria between 1999 and 2017.
Hypotheses
H1: There is no significant relationship between Nigeria's external debt and its economic growth and development between 1999 and 2017.
H2: Domestic debt is not significantly beneficial to Nigeria's economic growth and development between 1999 and 2017.
H3: Nigeria's total public debt is not commensurate with the growth and development achieved between 1999 and 2017.
Research Questions
Is there a significant correlation between Nigeria's external debt and its economic growth and development between 1999 and 2017?
Is domestic debt significantly beneficial to the Nigerian economy between 1999 and 2017?
Is Nigeria's total public debt significantly commensurate with the growth and development achieved between 1999 and 2017?
Review of Literature
As pointed out earlier, several efforts have been made in the past aimed at critically looking at the recurring problem of Nigeria's debt profile and applicable solutions. This paper considers the earlier works of a few scholarly authors, making reference to their studies and conclusions. In a recent study by Pettinger (2017), a conclusive attempt was made at answering the question "Why do governments borrow?" In providing a solution to the question, he highlighted the following:
(i) To meet a temporary shortfall, rather than having to immediately cut back on spending when actual tax revenues are less than predicted tax figures.
(ii) To act as automatic fiscal stabilizers, especially in a recession, when borrowing becomes inevitable.
(iii) To invest in public goods like building schools, hospitals and better roads, which may later boost productive capacity.
(iv) Increases in spending commitments, particularly those made during electioneering campaigns.
(v) Political reasons, for instance where the government does not want to increase taxes.
(vi) War situations, during which government spending is stretched, leading to higher borrowing.
(vii) Cheapness of borrowing.
(viii) When the economy experiences growth, which tends to reduce the real debt burden.
Practically, the position of Pettinger (2017) is applicable to Nigeria. Aside from reason (vi) above, the other reasons stated could arguably be regarded as justifiable grounds for Nigeria to borrow both locally and externally. Therefore, it can safely be posited that one or more of these factors could have contributed to the growth in Nigeria's debt burden since the 1990s, and especially since 1999. The nation's former Minister of Finance, Okonjo-Iweala (2011), asserted that the domestic debt had seriously affected the Gross Domestic Product (GDP), which is the aggregate measure of the overall contribution of goods and services produced within the economy. She pointed out that failure to control it could unfavourably crowd out the private sector, leading to poorer GDP.
Ujuju & Oboro (2017), in their study on the relationship between the structure of Nigerian public debt and the nation's economic performance over 1990-2015, applying data from various issues of the CBN Statistical Bulletin and using a simple regression method of data analysis, concluded that Nigeria's public debts are valuable in partially predicting variations in her economic performance. In other words, the writers were of the opinion that it was possible to study the relationship between the public debts and the country's growth and development at a particular period in time. In a similar study conducted a year earlier by Matthew & Mordecai (2016), employing the Augmented Dickey-Fuller test, Johansen co-integration test, Error Correction Method (ECM) and Granger causality, it was revealed that there existed a positive relationship between economic development (proxied with GDP per capita) and variables like external debt, domestic debt and debt servicing. It was further revealed that external debt had an insignificant relationship while domestic debt had a significant relationship with the economic development of the nation. This position corroborates the earlier position of Akujuobi (2007). In another study, Managing Nigerian Debt: The Practical Solutions, undertaken by Adebiyi & Olowookere (2013) to investigate the implication of public debt for economic growth and development, using the Ordinary Least Squares (OLS) method in analyzing data obtained from the Debt Management Office (DMO) and the Central Bank of Nigeria (CBN) between 1990 and 2011, it was revealed that the public debt of the country was far above a healthy threshold, having negative correlation with economic growth. The fear was that the growth in domestic debt/GDP may not sustain the existing economic policy of the nation. Economically, sustenance of policy is essential in pursuing growth in GDP; otherwise, the objective of indebtedness would sooner or later be defeated and diseconomy achieved.
From a slightly different angle, Udoka & Ogege (2012) examined the consequences of the public debt crisis for Nigeria's economic development between 1970 and 2010. The study employed an error correction framework and co-integration techniques in testing the correlation between macroeconomic variables and GDP, and concluded that political instability (which is similar to a war situation, as deduced earlier) had a positive correlation with Nigeria's growth and development. The study recommended a reduction in public debt in order to avoid economic crisis. In an attempt to reveal the extent of damage done to the economy and growth of Nigeria, Udude, Itumo & Egwu (2015) examined the correlation between the failure of the country's debt management planners and the weak economy. Using explanatory, descriptive and analytical methods of analysis, the study concluded that a strong positive correlation existed between Nigeria's weak economy and the failure of its debt management planners. In other words, the weak economy, or the decline in economic growth, is attributable to the failure of Nigeria's economic planners. Udude et al.'s (2015) conclusion is supported by this writer, as it has always been the belief of most elites that the result of a lack of (or inadequate) planning is failure. Economic failure or lack of growth does not ordinarily reflect poor loans or sources, but poor or inappropriate planning and application of otherwise good loans to the economic advantage of the borrower. Earlier studies on this topic, as considered above, show that several factors can be linked to the economic position of Nigeria as a developing nation. Apart from the factors itemized in (i) to (viii) earlier, others like debt servicing (domestic and external), the magnitude of debt, existing economic policy, political instability and the failure of debt management planners could contribute to the volume and value of public debt and hence to economic growth and development.
Theoretical Review
Ideally, measuring the effectiveness of a government's domestic or external debt requires an understanding of key macroeconomic objectives and variables, especially as related to the government's fiscal policy (Essien, Agboegbulem, Mba & Onumonu, 2016). Keynesian theorists suggest that in government's effort to achieve price stability, balance of payments, full employment and so on, fiscal policies are applied to appropriately influence the economy's aggregate demand. During a recession, for instance, governments usually increase their spending while decreasing tax rates to ensure that aggregate demand is stimulated. Theoretically, the economic boom after a recession would take care of the deficits incurred during the recession. Essien et al. (2016) observed that governments can either slow down the pace of economic growth or stabilize prices during high inflation by using surplus budgets. Where a surplus budget is not achieved, a deficit budget could be augmented by borrowing in order to ensure the achievement of budget objectives.
According to Neoclassical theorists, debt and economic growth are correlated, as an optimally applied borrowed loan boosts investment, reduces instability and also encourages debt repayment (Matthew & Mordecai, 2016). Adversely, public debt reduces the resources available within the economy because of the requirement to service loans. Other negative effects of public debt include a reduction in income flow and the creation of an economic burden on future generations through reduced capital accumulation. Dependency theorists assume a flow of resources from underdeveloped nations to developed ones, making the latter wealthier and the former poorer. They posit that the backwardness of underdeveloped nations does not result from non-integration into the world system but from the manner of their integration. A logical point from this school of thought is that underdeveloped and developing nations cause their own domestic economic problems through capital diffusion, technological backwardness, bad leadership, economic and resource mismanagement, official corruption, as well as relaxed and poor integration (Abdullahi, Aliero & Abdullahi, 2013).
In the view of profligacy theorists, one of the causes of public debt is institutional weakness, whereby resources are wasted through a damaged standard of living as well as official corruption. In the process, prices are distorted, capital flight is encouraged and foreign economies are developed to the detriment of national economies. Adejumo & Adejumo (2014), in their studies, made reference to debt/growth model theorists, who emphasize foreign borrowing for the purpose of investment, to cushion any gap between savings and investment at the macroeconomic level. This model considers the cost and benefit effects of debt while pursuing economic growth. Arguably, Adejumo & Adejumo (2014) maintained that a nation will effectively service its public debt if its cumulative debts over time result positively in economic growth.
Empirical Review
Udoka & Ogege (2012) discovered that in Nigeria there was a long-run link between the total debt stock, political instability and debt service payments. The study revealed that, in the country's situation, an unstable polity and the quantum and consistency of debt servicing add to the total debt stock, thereby making the gross domestic product (GDP) and the debt stock exhibit a positive relationship. The foreign investors through whose contributions the GDP would have improved are scared away by political upheavals, and the much-talked-about foreign direct investment (FDI) has not been felt in the economy. The debt crisis, its causes, its effects and the efforts made by past governments have always been topical discussions (Udude, Itumo & Egwu, 2015). The success recorded so far by successive administrations has become insignificant in view of the collaboration with creditors who directly or indirectly bank on the nation's abundant human and natural resources to exploit Nigeria further. It has been observed that exploitative reform policies suggested by the creditors have all along been applied to the economy without regard to the generality of interests or the future effects of such policies. Creditors exploitatively advance loans to debtor nations like Nigeria in order to tap the abundantly available but uncherished resources within the economy. According to Babatunde, Omotosho, Sani Bawa & Doguwa (2016), several arguments have been put forward to show that public debts have exhibited unimpressive growth and developmental effects. Empirical literature, particularly on the relationship between Nigeria's growth and its public debt, shows that no optimal threshold level of public debt exists and that, if any, such a threshold must have been disregarded by successive administrations.
Methodology
The study applies explanatory, descriptive and analytical methods, using regression analysis via SPSS version 22 to determine the relationship between GDP as the dependent variable and External Debt (ED), Domestic Debt (DD) and Total Debt (TD) as independent variables, in relation to the economic development expected and achieved. The result of the regression equation revealed that external debt (ED) exerts a positive and strong effect on the gross domestic product (GDP) of the Nigerian economy, with a probability value of .023. The correlation, r, for the regression represents the strength of the linear relationship between ED and GDP. This means that there is significant evidence to infer that the explanatory variable (ED) is linearly related to GDP and that the model has some validity. The significance level (p-value = .023) for the test is less than 0.05, so the null hypothesis was rejected. Therefore, the alternative hypothesis, that Nigeria's external debt has a relationship with its economic growth and development between 1999 and 2017, was accepted. The values of R and R² are 0.736 and 0.541 respectively. The R value represents the correlation between Domestic Debt (DD) and GDP. The R², which indicates the explanatory power of the independent variable, is 0.541, meaning that 54.1% of the variation in GDP is explained by the independent variable, while about 45.9% of the variation in the dependent variable is unexplained by the model; this nonetheless denotes a strong relationship between the explanatory variable and GDP. There is therefore significant evidence to infer that the explanatory variable (DD) is linearly related to GDP. The significance level (p-value = .000) for the test is less than 0.05, so the null hypothesis was rejected. Therefore, the alternative hypothesis, that domestic debt is significantly beneficial to Nigeria's economic growth and development between 1999 and 2017, was accepted. The result of the regression equation also revealed that Total Debt (TD) exerts a positive and strong effect on GDP, with a probability value of .000. The correlation, r, for the regression represents the strength of the linear relationship between TD and GDP. It also gives R², which indicates how much of the variation in the response variable Y is explained by the fitted regression line; the coefficient of determination is 0.531, so about 53.1% of the variation in GDP is explained by TD. The regression equation appears useful for making predictions since the value of R² represents a fair degree of correlation. The fitness of the model can also be explained by the F-ratio (F) in the ANOVA table. According to Andy (2000), "a good model should have a large F-ratio (greater than one at least)". The F-ratio in the model is 19.218, which is significant at p < 0.000. This means that there is significant evidence to infer that the explanatory variable (TD) is linearly related to GDP and that the model has some validity. The significance level (p-value = .000) for the test is less than 0.05, so the null hypothesis was rejected. Therefore, the alternative hypothesis, that Nigeria's total public debt is commensurate with the growth and development achieved between 1999 and 2017, was accepted. From the above, the regression equation is: GDP = -148575832.208 + 67.749TD + e
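For concreteness, the same OLS setup can be sketched outside SPSS; the snippet below reproduces the regression structure in Python's statsmodels, with hypothetical placeholder series standing in for the actual CBN/DMO data.

```python
# A minimal sketch of the paper's regression (GDP on Total Debt) using
# statsmodels in place of SPSS. The series below are hypothetical
# placeholders, not the study's CBN/DMO figures.
import numpy as np
import statsmodels.api as sm

years = np.arange(1999, 2018)
gdp = np.linspace(5e6, 1.2e8, len(years))         # hypothetical GDP series
total_debt = np.linspace(3e6, 2.2e7, len(years))  # hypothetical total debt

X = sm.add_constant(total_debt)  # adds the intercept term
model = sm.OLS(gdp, X).fit()

print(model.params)    # intercept and TD coefficient (cf. GDP = a + b*TD + e)
print(model.rsquared)  # share of GDP variation explained by TD
print(model.f_pvalue)  # overall significance: reject the null if p < 0.05
```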
Conclusion
Resulting from the analysis, the three (3) independent variables, namely External Debt, Domestic Debt and Total Debt, exhibited positive correlation with GDP, which was used as a proxy for growth and development. In other words, the tested hypotheses revealed that the two major types of debt, as well as the combination of both, do not individually and
"year": 2019,
"sha1": "1d426a33039f671796cff81c1b5f4fabc38f1400",
"oa_license": null,
"oa_url": "http://www.internationaljournalcorner.com/index.php/theijbm/article/download/147882/103559",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "44c8a23d12918c46a278550ec76126c2a5660425",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
The Value of Knowing Your Enemy
Many auction settings implicitly or explicitly require that bidders are treated equally ex-ante. This may be because discrimination is philosophically or legally impermissible, or because it is practically difficult to implement or impossible to enforce. We study so-called anonymous auctions to understand the revenue tradeoffs and to develop simple anonymous auctions that are approximately optimal. We consider digital goods settings and show that the optimal anonymous, dominant strategy incentive compatible auction has an intuitive structure: imagine that bidders are randomly permuted before the auction, then infer a posterior belief about bidder i's valuation from the values of other bidders and set a posted price that maximizes revenue given this posterior. We prove that no anonymous mechanism can guarantee an approximation better than O(n) to the optimal revenue in the worst case (or O(log n) for regular distributions) and that even posted price mechanisms match those guarantees. Understanding that the real power of anonymous mechanisms comes when the auctioneer can infer the bidder identities accurately, we show a tight O(k) approximation guarantee when each bidder can be confused with at most k "higher types". Moreover, we introduce a simple mechanism based on n target prices that is asymptotically optimal and build on this mechanism to extend our results to m-unit auctions and sponsored search.
Introduction
So it is said that if you know your enemies and know yourself, you can win a hundred battles without a single loss. If you only know yourself, but not your opponent, you may win or may lose. If you know neither yourself nor your enemy, you will always endanger yourself.
(Sun Tzu, The Art of War)

In 1981, Myerson elegantly derived the revenue-optimal way to sell a single item [12]: each buyer's bid is transformed through a personalized virtual valuation function and then submitted to a standard second-price auction. Myerson's auction leverages precise prior beliefs in order to identify the bidder who generates the highest marginal expected revenue, allowing the seller to discriminate among bidders and extract more money from those with a higher willingness to pay.
For all its mathematical beauty, Myerson's optimal auction violates an inherently desirable property: fairness. One definition of fairness says that the auctioneer should not a priori discriminate among the auction's participants. It is a property that may be both desirable and necessary: it is undeniably philosophically important in many applications; moreover, many settings lack a strong notion of identity, precluding explicit discrimination.
Sponsored search illustrates the practical importance and limitations of treating bidders equally ex-ante. A typical sponsored search auction run by Google, Bing, or Yahoo matches bidders to ad slots on a page of search results: higher slots get more clicks, so higher bidders get higher slots. Suppose that the search engine identifies a group of queries where the market is thin, so the top bid is much higher than the second one. The search engine would like to enforce a premium price for the top slot; however, this effectively requires discriminating against the highest bidder. Unfortunately, ex-ante discrimination may not be possible. Advertisers who are large will desire and demand "fair" treatment; due to their size, they will have the negotiating power to get it. Advertisers who are small lack the clout to demand equality; however, they are plentiful and could copy their accounts, blending into the masses to avoid explicit discrimination. As a result, search platforms like Google, Bing, and Yahoo may be prohibited from such discrimination out of necessity.
In this paper, we study the value of discriminating among your opponents in advance. Myerson's optimal auction critically requires that the seller know the identities of bidders ex-ante so that he can price discriminate among them; our goal is to quantify the tradeoff inherent in requiring ex-ante fairness in dominant strategy incentive compatible auctions.
Anonymous Mechanism Design
An anonymous auction treats all bidders equally ex-ante. While the auctioneer may know information about the kinds of bidders who will participate (even knowing precise prior beliefs about bidders' values) this information cannot be used ex-ante to discriminate among them. Alternatively, one may say that the auctioneer knows precise priors but does not know which prior belongs to which bidder. Technically, an auction is anonymous if and only if it is symmetric in the sense that permuting bids will analogously permute allocations and prices.
To see the potential power of anonymous mechanisms, consider the following example: two bidders have values v_1 = $2 and v_2 = $1 for a digital good, and the auctioneer knows these values precisely. The optimal mechanism gives an item to each bidder, charges the first bidder p_1 = $2 and the second bidder p_2 = $1, for a total revenue of $3. What can anonymous mechanisms do? A simple posted price will make revenue at most $2, but the following mechanism will extract the optimal revenue: if one bidder bids $2 and the other bids $1, give items to both and charge each bidder her value, for a total revenue of $3; if both bidders bid $2, give items to both and charge them both $1, for a revenue of $2; otherwise, do not give anyone anything. It is easy to check that this mechanism is both symmetric and incentive compatible.
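As a sanity check, the following is a minimal sketch (our illustration, not code from the paper) of this two-bidder mechanism. Permuting the bid vector permutes the outcome, so it is anonymous, and one can verify that no bidder gains by misreporting.

```python
# Sketch of the two-bidder symmetric mechanism from the example above.
# Returns a list of (allocated, price) pairs, one per bidder.
def symmetric_two_bidder(bids):
    b = tuple(bids)
    if sorted(b) == [1, 2]:
        # One bid of $2 and one of $1: sell to both at their bids.
        return [(True, bid) for bid in b]   # revenue $3
    if b == (2, 2):
        # Both bid $2: sell to both at $1 each.
        return [(True, 1), (True, 1)]       # revenue $2
    # Otherwise allocate nothing and charge nothing.
    return [(False, 0), (False, 0)]

print(symmetric_two_bidder([2, 1]))  # truthful play: revenue $3
print(symmetric_two_bidder([1, 2]))  # permuted bids give the permuted outcome
```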
We begin by characterizing the optimal anonymous auction that is dominant strategy incentive compatible and ex-post individually rational. We show that it has a simple intuition in the digital goods setting: (1) imagine that bidders are relabeled uniformly at random before participating in the auction, then (2) use v_{-i} to infer a posterior belief about v_i, and (3) choose a posted price for bidder i that maximizes revenue given this posterior. This intuition generalizes beyond the digital goods setting when the inferred posterior is regular. Some simple cases bear mention here: if the auctioneer's prior is the same for all bidders (an IID setting) or if it is impossible to confuse bidders, the optimal anonymous auction will correctly deduce everyone's identity and coincide with the unconstrained optimal auction.
With a basic understanding of anonymous auctions in hand, we study the performance of anonymous digital-goods auctions; our results are not immediately encouraging. We begin with a single-price mechanism (a simple and naturally anonymous auction) and show that it offers only a Θ(n) approximation in general and a Θ(log n) approximation when priors are regular. Moreover, we show that the above results are tight even for the class of all anonymous mechanisms: prior beliefs exist so that no anonymous auction can guarantee a revenue approximation better than Θ(n) to the revenue of Myerson's optimal auction, while if bidders' values are known to be drawn from uniform distributions, we can prove a lower bound of Ω(log n). Together, these suggest that general anonymous mechanisms cannot achieve better asymptotic guarantees than pricing in general settings and can be very far from optimal.
Having shown that anonymity can hurt revenue substantially in the worst case, we ask whether there are particular conditions under which anonymous auctions perform well. Our characterization of the optimal mechanism gives us hope: if all bidders are almost identical or almost perfectly distinguishable, then the optimal anonymous mechanism should be close to the unconstrained optimal one. In order to formalize this observation, we consider k-ambiguous distributions, where each bidder can be confused with at most k bidders with "higher ranked distributions", and show that anonymous mechanisms can guarantee a Θ(k) approximation to the optimal revenue. Moreover, we introduce the decreasing price mechanism, a simple mechanism that naturally generalizes single price mechanisms and matches the asymptotic guarantees of the best anonymous auction. Intuitively, the mechanism is succinctly defined by a set of n prices p_1 ≥ ··· ≥ p_n, where p_i is the price that the i-th-highest bidder should pay. The decreasing price mechanism implements this idea with the minimal modifications required to maintain incentive compatibility. Notably, this auction has linear description complexity, whereas the description complexity of the true optimal anonymous mechanism may be exponential or even unbounded for continuous distributions, since it might offer a wide range of different prices to a bidder depending on what others bid.
Finally, we show how our decreasing price mechanism can be extended to anonymous mechanisms for m-unit auctions and sponsored search with the same Θ(k) guarantee for k-ambiguous distributions. As motivated above, a sponsored search platform may wish to charge a premium for certain slots based on the demand profile of a market. Without the ability to discriminate among bidders, the platform may be constrained to run an anonymous auction. [In practice, platforms can only be required to treat equally bidders with the same CTR or score; we follow Ashlagi and consider a simple model without such parameters to avoid these complexities.] A slight modification to our decreasing price mechanism offers a way to do this.
Related Work. Deb and Pai [7] also study the problem of designing a revenue-maximizing mechanism under the anonymity constraint. They devise a set of allocation and payment functions such that in equilibrium bidders pay the Myerson virtual values of their corresponding distributions, and the seller achieves revenue that matches the optimal revenue in the unrestricted case. Their results are only for a single item, their mechanisms are BIC and BIR, and their solution concept is Bayes-Nash equilibrium. In contrast, seeking more robust and practical results, we require our mechanisms to be dominant strategy IC and ex-post IR, with implementation in dominant strategies.
Ashlagi [2] characterizes anonymous truth-revealing position auctions. He shows that under two different notions of anonymity, namely anonymity of the allocation rule and utility symmetry, every truth-revealing position auction is a VCG position auction. His work applies to deterministic auctions and does not consider optimizing revenue.
A variety of problems in the optimal auction literature employ similar ideas to reach different ends. Hartline and Roughgarden [10] study simple mechanisms that maximize seller revenue for selling a single item. They show that when bidder distributions are regular, a second price auction with a single reserve (a simple anonymous mechanism) offers a constant fraction of the revenue achievable by Myerson's optimal auction [12]. Prior-independent mechanisms (e.g. [9,15]) assume values are drawn I.I.D. and infer a distribution from v_{-i} to approximate the optimal revenue when the prior is not known. In contrast, anonymity is only a significant constraint when values are non-I.I.D. and the optimal auction must discriminate among them. Optimal auctions for correlated bidders also use v_{-i} to infer a posterior over v_i (see e.g. [5,13]). We will see that the optimal anonymous auction is closely related to the optimal general auction for a particular correlated prior.
Model and Preliminaries
A seller has m identical items to sell to n bidders. Each bidder i has a private valuation v_i for getting one item. The profile of agent valuations is denoted by v = (v_1, ..., v_n). The valuations of the agents are drawn from a product distribution F = F_1 × ··· × F_n.
A mechanism M = (A, P) consists of an allocation function A and a pricing function P.
Definition 2.1 (Anonymous Mechanisms).
A mechanism is anonymous if permuting the arguments of v also permutes the resulting allocations and prices.

Every agent seeks to maximize his utility u_i(v) = v_i · A_i(v) − P_i(v). Throughout the paper, we are focused on mechanisms that are Dominant Strategy Incentive Compatible (DSIC) and ex-post individually rational (ex-post IR). DSIC means that an agent cannot improve his utility (expected valuation minus price) by bidding a different valuation, even if he knows all the valuations that other agents bid.
Definition 2.3 (Dominant Strategy Incentive Compatible). A mechanism is dominant strategy incentive compatible if for all profiles v and all alternative bids v′_i, we have that

v_i · A_i(v_i, v_{-i}) − P_i(v_i, v_{-i}) ≥ v_i · A_i(v′_i, v_{-i}) − P_i(v′_i, v_{-i}).
Ex-post IR means that a bidder is always better off participating in the mechanism: u_i(v) = v_i · A_i(v) − P_i(v) ≥ 0 for all profiles v.
Optimal Anonymous Auctions
First, we study optimal anonymous auctions and show that they have a natural structure: informally, the mechanism uses the values of others, v_{-i}, to infer a posterior belief h about bidder i's value, then maximizes revenue in the standard way subject to the posterior belief h (maximizing virtual value and charging the associated single-parameter payments [12,1]). In the special case of a digital goods auction, each bidder is offered the item at the optimal posted price for her inferred distribution h.
First, since anonymous mechanisms generate the same outcome when bidders are permuted, we observe the following: Observation 3.1. The optimal anonymous mechanism remains optimal if we randomly rename bidders before running the auction.
Moreover, if any mechanism's prior beliefs are symmetric, then bidders can be relabeled without affecting the mechanism's expected revenue: Observation 3.2. Suppose that prior beliefs F are symmetric (possibly correlated). Then there exists a symmetric mechanism that maximizes revenue.
These observations immediately lead to the following claim, which reduces the problem of finding the optimal symmetric auction for F to optimizing for the symmetric correlated distribution g induced by a uniformly random relabeling of the bidders: Claim 3.3. Any mechanism that is optimal among DSIC and ex-post IR mechanisms for the symmetric distribution g can be transformed into a mechanism that is optimal among symmetric, DSIC, and ex-post IR auctions for the beliefs F by relabeling bidders according to a uniformly random permutation.
Building on this claim, our characterization theorem for digital goods follows by characterizing the optimal auction for g: Theorem 3.4. The optimal anonymous digital goods auction offers bidder i a copy of the item at the revenue-maximizing price given h(v_i | v_{-i}), the posterior belief about v_i given v_{-i}.
For mechanisms beyond digital goods, we can apply a theorem of Roughgarden and Talgam-Cohen [13] to characterize the optimal auction for g as long as the inferred posterior h is regular: the resulting optimal mechanism will infer h and maximize virtual value with respect to h. For details, see Appendix A.
Proof. Following Claim 3.3, it is equivalent to study the optimal auction for the correlated distribution g. We know from Myerson and others [12,1] that a normalized mechanism M will be DSIC if and only if A_i is monotone in v_i and payments are given by

P_i(v) = v_i · A_i(v) − ∫_0^{v_i} A_i(z, v_{-i}) dz.

In addition, when the distributions f_i are independent, a clever change of variables lets us express payments in terms of the virtual value of the posterior h,

φ(v_i | v_{-i}) = v_i − (1 − H(v_i | v_{-i})) / h(v_i | v_{-i}),

where H is the cumulative distribution of h. We can thus write the expected revenue R_i from bidder i as R_i = E_v[P_i(v)] and can rearrange to get

R_i = E_{v_{-i}} [ ∫ h(v_i | v_{-i}) · A_i(v_{-i}, v_i) · φ(v_i | v_{-i}) dv_i ].

It remains to choose A, which can be done in an arbitrary (monotone) way for digital goods. The inner integral ∫ h · A_i · φ dv_i is precisely the revenue when bidder i has value distributed according to h(v_i | v_{-i}), so Myerson [12] tells us that the optimal allocation A_i(v_{-i}, v_i) is a posted price to bidder i that maximizes revenue given the distribution h.
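To make the characterization concrete, here is a minimal brute-force sketch (our illustration, not the paper's code) of the inference behind Theorem 3.4 for small discrete instances: average over the ways the priors could be assigned to bidders, weight each assignment by the likelihood of the observed bids v_{-i}, and post the monopoly price of the resulting posterior. The restriction to finite supports and the function names are assumptions made for the example.

```python
# Brute-force posterior inference for the optimal anonymous digital goods
# auction (Theorem 3.4). Only practical for toy instances: O(n!) work.
from itertools import permutations

def posterior(priors, v_minus_i, support):
    # priors: one dict {value: probability} per bidder; bidder i sits in
    # the last slot of each random relabeling.
    h = {x: 0.0 for x in support}
    for perm in permutations(range(len(priors))):
        w = 1.0
        for j, v in enumerate(v_minus_i):
            w *= priors[perm[j]].get(v, 0.0)  # likelihood of observed bids
        for x in support:
            h[x] += w * priors[perm[-1]].get(x, 0.0)
    total = sum(h.values())
    return {x: p / total for x, p in h.items()} if total > 0 else h

def optimal_posted_price(h):
    # Monopoly price: maximize x * Pr[v_i >= x] under the posterior h.
    return max(h, key=lambda x: x * sum(p for y, p in h.items() if y >= x))

# Point priors $2 and $1: seeing the other bid $1 reveals bidder i's prior.
h = posterior([{2: 1.0}, {1: 1.0}], v_minus_i=[1], support=[1, 2])
print(optimal_posted_price(h))  # -> 2, recovering the optimal revenue
```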
A few noteworthy extreme cases arise when the auctioneer can identify bidder i given only the bids v_{-i}: if the distributions f_i are point distributions (bidders' values are known precisely to the auctioneer), have non-overlapping support, or are the same for all bidders, then the optimal anonymous mechanism coincides with Myerson's optimal mechanism.

In all three cases, the posterior distribution inferred from v_{-i} is precisely f_i; therefore the auction precisely identifies each bidder and runs the optimal auction.
These results suggest that anonymous mechanisms perform best when we can differentiate among the bidders; indeed, we will see that this is necessary. In Section 4, we show that the anonymity constraint substantially limits revenue even when distributions are discrete over n points and that assumptions like the regularity of the f_i are insufficient. In Section 5, we show that the performance degrades continuously with the auctioneer's ability to differentiate among the bidders.
Worst-Case Approximations
We compare the revenue guarantees of single price and general anonymous mechanisms and find that anonymous mechanisms can do no better in the worst case.
Single Price Mechanisms
We first look at how well single price mechanisms for m-unit auctions perform compared to the optimal. A single price mechanism allocates items to the m highest bidders with values exceeding p and charges them the maximum of p and the (m+1)-st highest bid. It is easy to see that single price mechanisms can get at least a 1/n fraction of the optimal revenue by choosing as price the Myerson reserve price of a bidder's distribution chosen uniformly at random. However, such a linear approximation guarantee is unavoidable, as we can also show a matching linear lower bound.

Theorem 4.1. Single price mechanisms give a Θ(m) approximation to the optimal revenue for general distributions.

Proof. Consider the case where each bidder i has a value of 1/ε_i with probability ε_i and 0 otherwise. Then the optimal mechanism gets a revenue of at least m by posting a price to each bidder equal to his high value and selling to the m largest. On the other hand, charging a single price of 1/ε_i to everyone gives expected revenue at most (1/ε_i) · Σ_{j : ε_j ≤ ε_i} ε_j, which is O(1) when the ε_j decrease sufficiently fast (e.g., ε_j = 2^{−j} makes this at most 2 for every i).

However, when all agent distributions are regular, we can show that single price mechanisms perform much better.

Theorem 4.2. When all agent distributions are regular, single price mechanisms give a Θ(log m) approximation to the optimal revenue.

Proof. To prove the theorem we will apply Theorem 4.1 from [3], which states that running VCG with the median of each agent's distribution as a reserve price (VCG-m) gives a 4-approximation to the optimal revenue. Therefore, it suffices to prove that the revenue of single price mechanisms is a Θ(log m) approximation to that of VCG-m.
Let p_i be the median price of each bidder's distribution and assume that p_1 ≥ p_2 ≥ ··· ≥ p_n. The revenue of VCG-m comes from two different sources: reserve prices, where a bidder is charged his reserve price, and competition between bidders, where a bidder is charged the bid of someone else.
If more than half of the revenue comes from competition between bidders, setting a price of 0 for all bidders and running simple VCG gives a 2-approximation. This is because the revenue that comes from competition in VCG-m is at most m times the (m+1)-st largest bid, which is equal to the revenue of VCG with no reserve prices.
Otherwise, more than half of the revenue comes from charging the reserve prices to bidders, in which case the revenue is at most 2·Σ_{i=1}^{m} p_i. Consider the mechanism that charges each bidder the single price p_{i*}, where i* maximizes i·p_i: every bidder j ≤ i* has median at least p_{i*} and thus exceeds it with probability at least 1/2, so this single price earns at least i*·p_{i*}/2 in expectation, and since Σ_{i=1}^{m} p_i ≤ H_m · max_i i·p_i, this is at least (1/(2H_m))·Σ_{i=1}^{m} p_i. This bound is tight even for bidders coming from point distributions: suppose that each bidder i has a value of 1/i. The best single price gets revenue 1, while the optimal mechanism gets revenue H_m = Θ(log m).
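A quick numeric sanity check of this tightness example (a sketch, not from the paper): with point values v_i = 1/i, any single price 1/j sells to exactly j bidders for revenue 1, while per-bidder pricing collects the harmonic sum.

```python
# Single price vs. discriminatory pricing for point values v_i = 1/i.
m = 1000
values = [1.0 / i for i in range(1, m + 1)]

best_single = max(p * sum(1 for v in values if v >= p) for p in values)
optimal = sum(values)  # charge each bidder exactly 1/i

print(best_single)  # 1.0 -- every price 1/j earns j * (1/j) = 1
print(optimal)      # H_1000 ~ 7.49 = Theta(log m)
```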
Symmetric Mechanisms
For general anonymous mechanisms, we show that even if we pick the best symmetric mechanism, we cannot get any better asymptotic guarantees than a single price for general distributions.

Theorem 4.3. The optimal symmetric mechanism M gives a Θ(m) approximation to the optimal revenue for general distributions.
Proof. We revisit the construction from Theorem 4.1 but lower the probability that a bidder gets a high value even further. Each bidder i now has a value of 1/ε_i with probability δ·ε_i and 0 otherwise. The optimal asymmetric mechanism gets a revenue of nδ. The optimal symmetric mechanism must charge the same price whenever there is only one bidder with a high bid. Let E be the event that at least two bidders value the item highly. Given ¬E, the mechanism is identical to a single price mechanism, so the revenue of the optimal symmetric mechanism is upper bounded by the single-price revenue plus the contribution from E, whose ratio to the optimal revenue nδ goes to 0 as δ → 0.
Moreover, we can show that general symmetric mechanisms cannot beat the asymptotic guarantees that single price mechanisms achieve for regular distributions. In fact, we can show that this is true even for uniform distributions.
Theorem 4.4 (Uniform distributions counterexample). For uniform distributions, the best symmetric mechanism achieves at best a Θ(log m) approximation to the optimal revenue.
Proof of Theorem 4.4
We consider a digital goods case where there are N = (2^n − 1)·L agents, of whom 2^i·L agents have values distributed as U[0, 2^{−i}] for i ∈ {0, ..., n − 1}. We can see that the optimal asymmetric mechanism gets a revenue of Ln/4 by charging each agent a price at the midpoint of his distribution. We will now upper bound the revenue that the optimal symmetric mechanism achieves. To do this, we consider an instance where a vector of values v is reported.
Let b_i = #{j : v_j > 2^{−i}}, i.e., the number of agents with values greater than 2^{−i}. We will show that if all the b_i's are large, the optimal symmetric mechanism charges a very low price to each agent.

Lemma 4.5. If every b_i is at least half its expectation, then the optimal posted price for each agent's inferred distribution is less than 2^{−(n−1)}.
Proof. Since we are in a digital goods setting, we can apply Theorem 3.4 and consider the distribution that the mechanism infers for an agent k's value by looking at the bids of all other agents. The probability density h(x | v_{−k}) of agent k's value at a point x, given the bids v_{−k} of the other agents, is proportional to the number of ways to match agents to probability distributions for the bid vector v′ = (v_{−k}, x).
We can compute this number of matchings exactly in terms of the b′ parameters of the bid vector v′. We now show that 4h(x | v_{−k}) < h(y | v_{−k}) for x ∈ (2^{−t}, 2^{−(t−1)}), y ∈ (2^{−(t+1)}, 2^{−t}) and 1 ≤ t ≤ n − 1; that is, the probability density on the interval (2^{−t}, 2^{−(t−1)}) is at most a fourth of the probability density on the interval (2^{−(t+1)}, 2^{−t}). Let b′(x) and b′(y) be the corresponding b′ parameters for x and y respectively. It is easy to see that b′_i(x) = b′_i(y) for i ≠ t and that b′_t(x) = b′_t(y) + 1. Cancelling all identical terms in the two matching counts, and using that the b_i's are large, yields the claimed factor-4 gap. We now show that the optimal price for the inferred distribution is less than 2^{−(n−1)}. Assume that this is not the case and the optimal price is p > 2^{−(n−1)}. We will show that by charging p/2 we get strictly more revenue. Using the density bound above, we can prove by induction that Pr[x > p] < Pr[x > p/2]/2 whenever p > 2^{−(n−1)}, which implies that p·Pr[x > p] < (p/2)·Pr[x > p/2], i.e., the revenue we get by charging p is less than that from charging p/2 if p > 2^{−(n−1)}.
We now show that for large enough L the conditions of Lemma 4.5 are satisfied with extremely high probability.
Proof. Consider the expectation of b_i: an agent with distribution U[0, 2^{−j}], j ≤ i, exceeds 2^{−i} with probability 1 − 2^{j−i}, so E[b_i] = Σ_{j=0}^{i} 2^j L (1 − 2^{j−i}) = Ω(2^i L). By a Chernoff bound, each b_i falls below half its expectation with probability at most e^{−2^{n−2}} for large enough L, and by a union bound over all n possible values of i, we get that Pr[E] > 1 − n·e^{−2^{n−2}}, where E is the event that the conditions of Lemma 4.5 hold.
Therefore, the revenue of the optimal symmetric mechanism is at most N·2^{−(n−1)} = L(2^n − 1)·2^{−(n−1)} ≤ 2L when event E happens, and at most Ln otherwise. Thus the expected revenue is at most L(2 + n²·e^{−2^{n−2}}). Since the optimal asymmetric mechanism achieves revenue Ln/4, the approximation ratio is n/8 + o(1). Since the number of agents is at most N ≤ 2^{6n}, we have that n ≥ log N/6. Thus the approximation ratio in terms of N is log N/48 + o(1) = Θ(log N) = Θ(log m), since m = N in the digital goods setting.
Anonymous Auctions with Limited Ambiguity
In the previous section, we showed that the best anonymous auction cannot offer better worst-case revenue guarantees than single price mechanisms, even when distributions are regular or have a monotone hazard rate. In this section, we explore a key property called limited ambiguity that separates anonymous mechanisms from single price mechanisms and demonstrates their power.
Definition (k-ambiguous distributions). Let [a_i, b_i] be the support of the distribution of agent i, and assume without loss of generality that a_1 ≥ a_2 ≥ ... ≥ a_n. We say that the set of distributions is k-ambiguous if b_i < a_{i−1−k} for all i, i.e., a sample from the i-th distribution can be confused with at most k distributions ahead of it.
The extreme case where k = 0 (i.e., bidders' values are drawn from distributions with disjoint supports) gives our first separation between general anonymous auctions and single price mechanisms. It is easy to see that single price mechanisms cannot achieve an approximation ratio bounded by a function of k for 0-ambiguous distributions: consider the point distribution 1/i for each agent i; the approximation ratio of any single price is log n, which cannot be bounded by a function of k. In contrast, we showed in Section 3 that the optimal anonymous auction achieves the same revenue as the optimal non-anonymous auction.
In this section, we will show that anonymous mechanisms can guarantee an approximation ratio of O(k) for k-ambiguous distributions, and that this is tight. We focus first on the case of digital goods, where m = n, and then extend to m < n as well as to sponsored search auctions.
To show that anonymous mechanisms can achieve an O(k) approximation to the optimal revenue, we construct a simple mechanism called the Decreasing Price Mechanism (DPM) that is efficiently defined by n prices. We will begin with a slight variation that is not dominant strategy incentive compatible (DSIC) to motivate the choice of mechanism: sort the bids in decreasing order and offer the i-th highest bidder an item at price p_i. This mechanism is both simple and anonymous, but unfortunately it is not DSIC, since a bidder can lower the price she pays simply by ranking lower in the ordering of bids (indeed, she can always get an item at price p_n simply by placing the lowest bid). We add two key ingredients to define our DSIC decreasing price mechanism.
The first ingredient we add limits a bidder's ability to win the item at a lower price: the auction only sells an item at price p_i if it has successfully sold items at all higher prices. Consequently, for example, bidder i + 1 must be willing to pay p_i in order for bidder i to have a chance to win an item at a lower price. When the auction fails to sell an item at price p_i and therefore stops selling more items, we call this a "drop" event.

The second ingredient we add restores incentive compatibility: if a bidder could have won an item at a lower price by ranking lower in the bid order, then we automatically charge her the lower price instead. Observe that given our first modification, bidder i can win an item at a lower price p_l if and only if b_j ≥ p_{j−1} for all j ∈ {i + 1, ..., l}. We call this a "chain" effect since there is a chain of bidders with b_j ≥ p_{j−1}.
These two additional ingredients are the intuition for our decreasing price mechanism:
• Sort bids in decreasing order.
• Starting with i = 1, allocate items as long as b_i ≥ p_i, then stop allocating items.
• Each winner i is charged p_{j(i)}, where j(i) is the smallest j ≥ i such that exactly j bidders are bidding above p_j.
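The following is a minimal sketch of DPM read directly off the three bullets above; it is our illustration rather than the authors' code, and it resolves the "bidding above" tie convention with ≥, matching the usage in the incentive argument below.

```python
# Sketch of the Decreasing Price Mechanism. bids: raw bid list;
# prices: p_1 >= ... >= p_n. Returns {original bidder index: price paid}.
def decreasing_price_mechanism(bids, prices):
    n = len(bids)
    order = sorted(range(n), key=lambda i: -bids[i])  # relabel by bid, descending
    b = [bids[i] for i in order]

    # Allocate down the sorted list until the first bidder with b_i < p_i.
    winners = 0
    while winners < n and b[winners] >= prices[winners]:
        winners += 1

    payments = {}
    for rank in range(winners):
        # j(i): smallest j >= i with exactly j bidders bidding >= p_j
        # (indices are 0-based here, hence the j + 1).
        for j in range(rank, n):
            if sum(1 for x in b if x >= prices[j]) == j + 1:
                payments[order[rank]] = prices[j]
                break
    return payments

print(decreasing_price_mechanism([2, 1], [2, 1]))  # {0: 2, 1: 1}: revenue $3
print(decreasing_price_mechanism([2, 2], [2, 1]))  # chain effect: both pay $1
```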
We note that single price mechanisms are the special case of DPM where all the prices coincide, p_1 = ... = p_n = p. The following lemma shows several useful properties of DPM.

Lemma 5.1. DPM is anonymous, DSIC, and ex-post IR; moreover, its allocation is monotone in the sense that it allocates items to a prefix of the bidders in decreasing order of their bids.
Proof. It is clear that the mechanism is anonymous because it ignores any initial labeling and relabels bidders in decreasing order of their bids. The auction is individually rational because a bidder only wins if b_i ≥ p_i and pays a price p_{j(i)} ≤ p_i. The claimed monotonicity property is also easy to see, as the mechanism considers bids in decreasing order and allocates items only until it reaches the first bidder with b_i < p_i.

To see that the mechanism is DSIC, we look at an agent i and show that i cannot win an item at a lower price. Note that if i changes her bid to b′_i < p_{j(i)}, then there will be j(i) − 1 bids ≥ p_{j(i)} (there were exactly j(i) such bids before i changed her bid) and the auction must stop by the time it reaches bidder j(i). Thus, the auction will not sell an item for less than p_{j(i)}, so i will not get an item. On the other hand, keeping other bids fixed, if i bids b_i ≥ p_{j(i)}, there will be exactly j(i) bidders bidding ≥ p_{j(i)}, so i cannot win at a price less than p_{j(i)}.
We will now show that the decreasing price mechanism achieves an approximation ratio of O(k) for k-ambiguous distributions. To illustrate the significant ideas in the proof, we first prove the statement for k = 1 before proving the general case.
The case of k = 1
For 1-ambiguous distributions, we prove the following theorem: Theorem 5.2. The optimal Decreasing Price Mechanism approximates the revenue of the optimal auction within a factor of 5 for 1-ambiguous distributions.
Proof. The proof has two parts. First, we use a distribution over DPM pricing schemes to approximate the revenue contribution of agents 3 to n. This distribution will have expected revenue that is a 3-approximation to the welfare of those agents and therefore also to the revenue they contribute in the optimal auction. Second, we use our single price results to cover the revenue from the first two agents.
First, to cover the revenue contributions of agents 3 to n, DPM prices are chosen randomly as follows (the parameters r_i will be chosen later): independently for each i, we set p_i = a_{i−1} with probability r_i and p_i = a_i otherwise. Intuitively, choosing p_i = a_i is safe because v_i ≥ a_i, whereas p_i = a_{i−1} extracts more revenue at the risk of triggering a drop event that prevents selling items to bidders > i. We take r_1 = 0, so p_1 = a_1. Let q_i be the probability that v_i ≥ a_{i−1} and define q_1 = 0. We define c_i, the conditional likelihood of a chain effect, and d_i, the conditional likelihood of a drop event, as follows:

c_i = q_i(1 − r_i)    and    d_i = (1 − q_i)·r_i.

By definition of the auction, agent i pays at least a_t for some t ≥ i if and only if (a) all bidders j ≤ i have v_j ≥ p_j, so that bidder i wins an item, and (b) there exists a j ∈ {i+1, ..., t+1} such that exactly j bidders have bids ≥ p_j. Condition (a) is equivalent to saying that a drop event does not occur among the first i bidders, and happens with probability Π_{j=1}^{i} (1 − d_j). Condition (b), assuming truthfulness and using 1-ambiguity, happens if and only if there is some j ∈ {i+1, ..., t+1} such that either v_j < a_{j−1} or p_j = a_{j−1}, which happens precisely when j does not trigger a chain effect, so the likelihood that such a j exists is 1 − Π_{j=i+1}^{t+1} c_j. Let x_t denote the expected number of bidders who pay a_t and y_t = Σ_{i=1}^{t} x_i the expected number who pay at least a_t. We can now write y_t as

y_t = Σ_{i=1}^{t} [Π_{j=1}^{i} (1 − d_j)] · [1 − Π_{j=i+1}^{t+1} c_j].

To bound this sum, we relate the c_i's and d_i's with the following lemma.

Lemma 5.3. For any ρ ∈ (0, 1), there is a choice of r_i such that d_i ≤ ρ and c_i ≤ (1 − √ρ)².

Proof. For any such ρ, choose r_i = min(1, ρ/(1 − q_i)) (with r_i = 1 when q_i = 1). Then d_i = (1 − q_i)·r_i ≤ ρ, and c_i = q_i · max(0, 1 − ρ/(1 − q_i)), which is maximized over q_i at q_i = 1 − √ρ, where it equals (1 − √ρ)².

Applying this lemma with ρ = 1/i² gives r_i's such that d_i ≤ 1/i² and c_i ≤ (1 − 1/i)², so that Π_{j=1}^{i} (1 − d_j) ≥ 1/2 and Π_{j=i+1}^{t+1} c_j ≤ (i/(t+1))², which yields y_t ≥ t/3. The total expected revenue of the mechanism is Σ_{t=1}^{n} a_t·x_t = Σ_{t=1}^{n} (a_t − a_{t+1})·y_t ≥ Σ_{t=1}^{n} a_t/3 (taking a_{n+1} = 0). Since, by 1-ambiguity, Rev[Agent_t] ≤ b_t < a_{t−2} for every t ≥ 3, it follows that Σ_{t=1}^{n} a_t/3 ≥ Σ_{t=3}^{n} Rev[Agent_t]/3, i.e., the revenue is at least 1/3 of the optimal revenue generated by agents 3 to n.
It remains to handle the revenue contributed by the first two agents. To do so, we use the single price lemma, which says that a single price p is a 2-factor approximation for 2 distributions. If we choose prices p_1 = ... = p_n = p with probability 2/5 or the pricing scheme defined above with probability 3/5, we get an expected revenue of at least

(2/5)·(1/2)·Σ_{t=1}^{2} Rev[Agent_t] + (3/5)·(1/3)·Σ_{t=3}^{n} Rev[Agent_t] = OPT/5.

Since we are randomizing over DPM pricing schemes, there exists a single pricing scheme that achieves the necessary approximation. This completes the proof and shows a 5-approximation.
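To make the randomization concrete, here is a toy sketch of the coin choice in Lemma 5.3, using r_i = min(1, ρ_i/(1 − q_i)) with ρ_i = 1/i². The rule and constants follow the proof as reconstructed above, so treat them as assumptions rather than the authors' verbatim choices; the q values below are hypothetical.

```python
# Toy sketch of the k = 1 price randomization: price agent i "high"
# (a_{i-1}) with probability r_i, "low" (a_i) otherwise, and report the
# resulting chain/drop rates c_i, d_i.
def price_randomization(q, rho):
    # q[i] = Pr[v_i >= a_{i-1}]; rho[i] caps the drop probability d_i.
    r, c, d = [], [], []
    for qi, rhoi in zip(q, rho):
        ri = 1.0 if qi == 1.0 else min(1.0, rhoi / (1.0 - qi))
        r.append(ri)
        c.append(qi * (1.0 - ri))  # chain: v_i is high but p_i was set low
        d.append((1.0 - qi) * ri)  # drop: v_i is low but p_i was set high
    return r, c, d

q = [0.0, 0.9, 0.5, 0.2]                         # hypothetical; q_1 = 0
rho = [0.0] + [1.0 / i**2 for i in range(2, 5)]
print(list(zip(*price_randomization(q, rho))))   # each d_i <= rho_i
```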
The general case
For general k-ambiguous distributions, the following theorem shows an O(k) approximation.
Theorem 5.4. The Decreasing Price Mechanism achieves an approximation ratio of (3e² + 2)k for k-ambiguous distributions.
The proof of this theorem mimics the 1-ambiguous case. We split agents into blocks of size k such that an agent in block t cannot be confused with any agents in blocks < t − 1; a technical lemma analogous to Lemma 5.3 then bounds the drop and chain rates between blocks to achieve an O(k) approximation to the revenue from blocks 3 to n/k. Finally, a single price mechanism covers the revenue from the top two blocks.
Proof.
To begin, we split agents into N = ⌈n/k⌉ blocks, such that block 1 contains agents 1 through k, block 2 contains agents k + 1 to 2k, and so on. Notice that, as previously, agents in block i cannot be confused with agents in blocks < i − 1. Let A_i be the lowest value an agent in block i can take, i.e., A_i = a_{i·k}. We will first approximate the revenue contribution of blocks 3 to N. The main ideas follow the 1-ambiguous proof. For each block i, we randomly pick a number of items j to price "high": the top j items in block i are priced at A_{i−1}, and the remaining k − j items are priced at A_i. A block "drops" if we over-estimate the number of bidders who are willing to pay A_{i−1}; if block i drops, then the auction will not allocate to any bidders in blocks > i. Similarly, a block "chains" if we underestimate the number of bidders who are willing to pay A_{i−1}; if a block chains, then the auction will not be able to charge A_{i−1} to any bidder, since there will be too many bidders willing to pay A_{i−1}.
Formally, we set prices for each block as follows:
1. Sample $j$ according to the distribution $R_{i,j}$ (with $\sum_{j=0}^{k} R_{i,j} = 1$).
2. Set the prices for the first $j$ items in the block at $A_{i-1}$ and the prices for the remaining $k - j$ items at $A_i$.
We set $R_{1,\cdot} = (1, 0, 0, \ldots, 0)$ so that all agents of block 1 are assigned a price of $A_1$.
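A minimal sketch of this two-step pricing rule for a single block; the code and inputs are mine, for illustration only.

```python
import random

def block_prices(A_hi, A_lo, R):
    """Prices for one block: sample j ~ R_{i,.}, then charge A_{i-1}
    (A_hi) for the top j items and A_i (A_lo) for the other k - j."""
    k = len(R) - 1
    j = random.choices(range(k + 1), weights=R)[0]
    return [A_hi] * j + [A_lo] * (k - j)

# Block 1 uses R_{1,.} = (1, 0, ..., 0): j = 0, so every price is A_1.
print(block_prices(A_hi=7.0, A_lo=5.0, R=[1.0, 0.0, 0.0, 0.0]))
# -> [5.0, 5.0, 5.0]
```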
To define chain and drop probabilities, let $Q_{i,j}$ be the probability that exactly $j$ bidders in block $i$ have value greater than or equal to $A_{i-1}$, and let $\bar{Q}_{i,j} = \sum_{z=0}^{j} Q_{i,z}$. We define $Q_{1,\cdot} = (1, 0, 0, \ldots, 0)$. We define the associated chain probability $C_i$ as the likelihood that the number of agents in block $i$ who are willing to pay $A_{i-1}$ strictly exceeds the number of prices in the block that were set at $A_{i-1}$:
$$C_i = 1 - \sum_{j=0}^{k} R_{i,j}\, \bar{Q}_{i,j}.$$
Similarly, we define the associated drop probability $D_i$ as the likelihood that the number of agents in block $i$ who are willing to pay $A_{i-1}$ is strictly less than the number of prices that were set at $A_{i-1}$:
$$D_i = \sum_{j=1}^{k} R_{i,j}\, \bar{Q}_{i,j-1}.$$
We claim that agents in block $i$ pay at least $A_t$ for some $t \geq i$ as long as (a) no block $\leq i$ "drops," and (b) at least one block $j \in \{i+1, \ldots, t+1\}$ does not "chain." Note that if no block $\leq i$ drops, then all bidders in blocks $\leq i$ have $v \geq p$ and will therefore get allocated; this happens with probability $\prod_{j=1}^{i} (1 - D_j)$. If some block $j \in \{i+1, \ldots, t+1\}$ does not chain, then the number of bidders in block $j$ willing to pay $A_{j-1}$ cannot be higher than the number of prices set at $A_{j-1}$; consequently, $A_{j-1}$ will be a lower bound on the price paid by bidders in block $i$. The likelihood that at least one such block does not chain is $1 - \prod_{j=i+1}^{t+1} C_j$. Thus, if we define $X_t$ as the expected number of blocks whose agents pay $A_t$ and $Y_t = \sum_{i=1}^{t} X_i$ as the expected number of blocks where all agents pay at least $A_t$, then
$$Y_t = \sum_{i=1}^{t} \Big( \prod_{j=1}^{i} (1 - D_j) \Big) \Big( 1 - \prod_{j=i+1}^{t+1} C_j \Big).$$
We use the following lemma to relate the $C_i$'s and $D_i$'s.
Proof. Recall $\bar{Q}_{i,j} = \sum_{z=0}^{j} Q_{i,z}$ and set $\bar{Q}_{i,-1} = 0$. We consider distributions $R_{i,j}$ parameterized by a value $s$ that will be chosen later, and note that for any $s$ the resulting $R_{i,\cdot}$ is a valid probability distribution. We are now ready to prove that there exists a choice of $s$ such that $C_i \leq 1 - \rho^{1 - \frac{1}{k+1}}$. We will do this by assuming the contrary, namely that $C_i > 1 - \rho^{1 - \frac{1}{k+1}}$ for every choice of $s$, and reaching a contradiction. Note that this assumption immediately implies that $R_{i,j} \bar{Q}_{i,j} \leq 1 - C_i < \rho^{1 - \frac{1}{k+1}}$ for any $j$. Before we begin, note that $\bar{Q}_{i,-1} = 0$, $\bar{Q}_{i,k} = 1$, and $\bar{Q}_{i,j}$ is monotone in $j$.
We let $j^*$ be the smallest $j$ such that $\rho \leq \bar{Q}_{i,j}$, and show inductively that for any $z \geq j^*$, $\bar{Q}_{i,z} < \rho^{1 - \frac{z - j^* + 1}{k+1}}$.
Base case $z = j^*$: When $z = j^*$, we can choose $s = z$. Since $\bar{Q}_{i,z-1} < \rho$, we have $R_{i,z} = 1$, and thus we get $\bar{Q}_{i,z} = R_{i,z} \bar{Q}_{i,z} < \rho^{1 - \frac{1}{k+1}}$ (using our contrary assumption). Inductive step: Now, we assume the hypothesis holds for some $z$ and prove it holds for $z + 1$. We could choose $s = z + 1$, in which case $R_{i,z+1} \bar{Q}_{i,z+1} < \rho^{1 - \frac{1}{k+1}}$, and combining this with the inductive hypothesis yields $\bar{Q}_{i,z+1} < \rho^{1 - \frac{z - j^* + 2}{k+1}}$, which completes the proof of the induction.
We can now reach a contradiction by observing that $\bar{Q}_{i,k} = 1 < \rho^{1 - \frac{k - j^* + 1}{k+1}} \leq 1$. Applying the lemma with a suitable $\rho$, together with an application of Jensen's inequality, gives $R_{i,\cdot}$'s such that each $Y_t = \sum_{i=1}^{t} X_i \geq \frac{t}{3e^2 k}$. The expected revenue of the randomized pricing scheme will be at least $\sum_{t=1}^{N-1} k A_t X_t$; since each $Y_t \geq \frac{t}{3e^2 k}$, the expected revenue of the randomized pricing scheme is at least $\sum_{t=1}^{N-1} \frac{k A_t}{3 e^2 k}$. To bound the revenue of the first two blocks we use the single price lemma, which says that a single price $p$ is a $2k$-factor approximation for $2k$ distributions. If we choose prices $p_1 = \cdots = p_n = p$ with probability $\frac{2}{3e^2 + 2}$, or the pricing scheme defined above with probability $\frac{3e^2}{3e^2 + 2}$, we get an expected revenue of at least
$$\frac{2}{3e^2+2} \cdot \frac{1}{2k} \sum_{t=1}^{2k} \mathrm{Rev}[F_t] \;+\; \frac{3e^2}{3e^2+2} \cdot \frac{1}{3e^2 k} \sum_{t=2k+1}^{n} \mathrm{Rev}[F_t] \;=\; \frac{1}{(3e^2+2)k} \sum_{t=1}^{n} \mathrm{Rev}[F_t].$$
Since we are randomizing over pricing schemes, there exists a single pricing scheme that achieves the necessary approximation. This completes the proof and shows an $O(k)$ approximation.
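The chain and drop probabilities admit a direct computation from $Q_{i,\cdot}$ and $R_{i,\cdot}$. The formulas in this sketch follow the reconstructed definitions of $C_i$ and $D_i$ given above, so treat them as my reading of the garbled equations rather than the authors' code; the final lines re-check the $(3e^2+2)k$ mixture arithmetic.

```python
import math

def chain_drop(Q, R):
    """C_i and D_i from Q_{i,j} (prob. exactly j bidders in the block
    clear A_{i-1}) and R_{i,j} (prob. j prices are set at A_{i-1})."""
    k = len(Q) - 1
    Qbar = [sum(Q[: j + 1]) for j in range(k + 1)]        # cumulative Q
    no_chain = sum(R[j] * Qbar[j] for j in range(k + 1))  # willing <= priced high
    C = 1 - no_chain
    D = sum(R[j] * Qbar[j - 1] for j in range(1, k + 1))  # willing < priced high
    return C, D

print(chain_drop(Q=[0.2, 0.5, 0.3], R=[0.0, 1.0, 0.0]))  # -> (0.3, 0.2)

# Mixture arithmetic for the O(k) bound: both branches cover each
# revenue term with weight exactly 1 / ((3e^2 + 2) k).
k, e2 = 4, math.e ** 2
w_top = (2 / (3 * e2 + 2)) * (1 / (2 * k))              # single-price branch
w_rest = (3 * e2 / (3 * e2 + 2)) * (1 / (3 * e2 * k))   # DPM branch
assert math.isclose(w_top, 1 / ((3 * e2 + 2) * k))
assert math.isclose(w_rest, 1 / ((3 * e2 + 2) * k))
```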
Extension to m-goods and position auctions
We extend the results of the previous section from digital goods, where we have an unlimited supply of identical goods, to the $m$-unit setting, where we have $m$ copies of a good, and to position auctions. In sponsored search auctions, the items are slots on a page of search results, and the scale factors correspond to the click-through rates of the slots.
Theorem 5.6. In any m-good or position auction setting, there exists an anonymous mechanism that achieves an approximation of O(k) for k-ambiguous distributions.
Since $m$-good auctions are a special case of position auctions with $s_j = 1$ for all $j$, it suffices to prove this theorem for position auction settings. Moreover, we can assume w.l.o.g. that $m = n$, since we can always add additional items with $s_j = 0$.
The first step in proving the theorem is getting an upper bound on the revenue of the optimal mechanism. Lemma 5.7. For any position auction setting with $k$-ambiguous distributions, the maximum achievable revenue is at most
$$\sum_{i=1}^{k+1} s_1\, \mathrm{Rev}[F_i] \;+\; \sum_{i=1}^{n-k-1} s_i\, a_i.$$
Proof. Consider a setting where we have an additional copy of every item and we run 2 auctions instead of one: • Auction A: the first $k + 1$ agents participate.
• Auction B: the last $n - (k + 1)$ agents participate.
We claim that the revenue in this setting is not less than the revenue from the original auction by arguing that the following mechanism gets exactly the same revenue as before. Let M be the optimal mechanism in the original setting. Run M in each auction by sampling bids from the distributions of the missing agents. It is easy to see that the first auction achieves revenue equal to the revenue contribution of the first k + 1 agents in the original auction while the second one achieves revenue equal to the revenue contribution of the last n − (k + 1) agents in the original auction. So it suffices to bound the revenue of each of the two auctions separately.
For auction A, we can see that an auction A', in which there are unlimited copies of item 1 (with scale factor $s_1$), would give at least as much revenue, since the scale factors of the items could be artificially reduced to match those in auction A. Therefore the revenue of the first auction is upper bounded by $\sum_{i=1}^{k+1} s_1 \mathrm{Rev}[F_i]$. For auction B, we can see that an auction B', in which each agent comes from a point distribution with value $a_i$ instead of the original distribution supported in $[a_{i+k+1}, b_{i+k+1}]$, would achieve at least as much revenue. This is because, in auction B', the bids of the agents can be completely ignored: we resample a bid from the original distribution for every agent and then run the optimal mechanism for auction B. This mechanism is certainly DSIC, since it does not depend on the agents' bids at all, and it is IR since $a_i > b_{i+k+1}$, which means that an agent in auction B' can always afford to pay the asked price. The revenue in auction B' is exactly $\sum_{i=1}^{n-k-1} s_i a_i$, which gives us the upper bound for auction B.
We now construct a mechanism that achieves good approximation guarantees relative to this revenue bound, slightly altering the rules of the decreasing price mechanism: an agent who is asked to pay price $p_j$ in the original DPM instead gets item $j$ and pays $s_j p_j$.
Notice that although more than one agent may be assigned to item $j$, the effect of item $j$ can be simulated by giving an item with a higher scale factor, $j' < j$, to each additional agent, but only with probability $s_j / s_{j'}$. Since for every $j$ there are at most $j$ agents assigned to items 1 through $j$, this mechanism is feasible. The mechanism is also DSIC: if agent $i$ is priced $p_j$, any bid higher than $p_j$ yields exactly the same price and allocation, while any bid lower than $p_j$ drops him from the mechanism without any item. Therefore bidding his real value is preferable, and the mechanism is clearly IR.
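A small simulation (mine) of the substitution trick: giving the higher-scale item $j' < j$ with probability $s_j / s_{j'}$ reproduces the expected scale factor $s_j$.

```python
import random

def simulate_item(s, j, j_prime):
    """Serve an overflow agent assigned item j by giving item j_prime
    (s[j_prime] >= s[j]) with prob s[j]/s[j_prime], nothing otherwise."""
    assert j_prime < j and s[j_prime] >= s[j] > 0
    return j_prime if random.random() < s[j] / s[j_prime] else None

s = [1.0, 0.6, 0.3, 0.1]   # decreasing click-through rates (illustrative)
trials = 200_000
got = sum(s[2] for _ in range(trials)
          if simulate_item(s, j=3, j_prime=2) is not None)
print(got / trials, "~", s[3])   # empirical expected scale ~ s_j = 0.1
```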
We use the randomized construction of the previous section to create the DPM mechanism. Under this construction, a price of at least $A_t$ is assigned to $\frac{t}{3e^2 k}$ blocks in expectation, which gives revenue at least $\sum_{t=1}^{N-1} k A_t S_t / (3e^2 k)$, where $S_t = s_{tk}$ is the scale factor of the item received by the agents who are priced $A_t$. Since $k A_t S_t \geq \sum_{i=tk}^{tk+k-1} s_i a_i$, we get revenue of at least $\frac{1}{3e^2 k} \sum_{i=k}^{n-k-1} s_i a_i$. To bound the remaining terms of the revenue, we use a second price auction with a single reserve price $p$ to sell just the first item. With probability $1/2$ we set $p$ to be the Myerson reserve of the first agent, while with probability $1/(2k)$ we set $p$ to be the reserve price of the $i$-th agent, for each $i = 2, \ldots, k+1$. This is an anonymous mechanism and gets revenue at least $s_1 \big( k\, \mathrm{Rev}[F_1] + \sum_{i=2}^{k+1} \mathrm{Rev}[F_i] \big) / (2k)$. If we run the DPM mechanism with probability $\frac{3e^2}{3e^2+2}$ and the second price auction with probability $\frac{2}{3e^2+2}$, we get an expected revenue of at least $\frac{1}{(3e^2+2)k}$ times the upper bound of Lemma 5.7. This completes the proof and shows an $O(k)$ approximation.
Conclusion
Anonymity imposes real constraints on an auction and, as we have seen, on the revenue it can achieve. In the worst case, we have shown that anonymous mechanisms are quite limited, and that the best anonymous mechanism cannot substantially beat a simple single price. The real advantage of an anonymous mechanism is directly related to the auctioneer's ability to infer information about $f_i$ and $v_i$ from the bids of other advertisers, $v_{-i}$, in essence circumventing the ex-ante anonymity requirement.
Our work leaves a few immediate open questions about anonymous auctions with limited ambiguity. We showed that anonymous auctions can achieve a $\Theta(k)$ approximation for general $k$-ambiguous distributions. For single price mechanisms, we saw that the worst-case approximation improves from $\Theta(n)$ to $\Theta(\log n)$ when distributions are regular; can we show an analogous $\Theta(\log k)$ bound in the $k$-ambiguous setting when distributions are regular? Another interesting research direction is to identify alternative metrics for measuring ambiguity. For example, what can we say about the revenue from an anonymous auction when the differential entropy between $f_i$ and the inferred posterior $h$ is small?
More broadly, our work suggests many general questions about anonymous mechanisms. Can anonymous auctions achieve good approximations beyond the settings we have studied? Interesting dependencies arise outside the digital goods setting because one bidder's bid can affect the auctioneer's inference about another bidder, affecting the outcome of the auction in a complicated way. Another question is one of computational complexity: how difficult is it to compute the optimal anonymous auction?
Theorem A.4 (Roughgarden and Talgam-Cohen [13]). In a private, correlated values setting, the optimal mechanism M = (A, P) that is both DSIC and ex-post IR is the following, as long as φ is monotone (h is regular): 1. Elicit values v from the bidders.
Given this characterization and the preceding claim, our general characterization theorem is immediate: Theorem A.5. The optimal anonymous mechanism is the following, as long as φ is monotone (h is regular): 1. Elicit values v from the bidders.
The characterization we gave in Section 3 of the optimal symmetric auction for digital goods is then an immediate corollary: Corollary A.6. The optimal anonymous digital goods auction sets the optimal price for each bidder according to the posterior belief h.
Also, the extreme cases noted in Section 3 behave similarly: Corollary A.7. If the distributions $f_i$ are point distributions (bidders' values are known precisely to the auctioneer), have non-overlapping support, or are the same for all bidders, then the optimal anonymous mechanism coincides with Myerson's optimal mechanism.
In all three cases, the posterior distribution inferred from $v_{-i}$ is precisely $f_i$; therefore the auction precisely identifies each bidder and runs the optimal auction. | 2014-11-05T11:51:50.000Z | 2014-11-05T00:00:00.000 | {
"year": 2014,
"sha1": "1b5dedc4cb589bb95b1cd0b808a0f10674369efe",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1b5dedc4cb589bb95b1cd0b808a0f10674369efe",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Computer Science",
"Economics"
]
} |
5090552 | pes2o/s2orc | v3-fos-license | Foreign Aid and Economic Growth in Developing Countries : Evidence from Sub-Saharan Africa
This study aims at understanding the impact of foreign aid on the economic growth of the Sub Saharan African region. Despite being the largest foreign aid recipient in the world, the region is the poorest with the lowest Human Development Index (HDI) and Gross National Income (GNI) per capita. This raises serious questions about the effectiveness of foreign aid to the economic growth and development of the region. As such, we examine the relationship between foreign aid, determined by the official development assistance (ODA), and the economic growth rate of the Sub Saharan Africa’s ten largest recipients of foreign aid, for a 23-year period from 1990 to 2012. These ten countries include Ethiopia, the Democratic Republic of Congo, Tanzania, Kenya, Côte d’Ivoire, Mozambique, Nigeria, Ghana, Uganda and Malawi. We find that aid by itself does not have significant impact on economic growth. However, the variable aid interacted with the policy index was found to be statistically significant and positive, which means that aid tends to increase growth rate in a good policy environment. Subsequently, when we include the institutional quality index and its interaction term in the model, we find that institutional quality has a positive and significant impact on growth; however, none of the aid variables was significant. We also test the two-gap growth model which states that foreign aid enhances economic growth through investment and imports. The results show that foreign aid is a good ingredient for supplementing investment and imports requirements in these ten countries. We believe that given foreign aid is conditional on the economic, political and institutional environment of the recipient country, this can explain why aid effectiveness is insignificant in the Sub Saharan Africa region where bad governance is a core issue on the region. Therefore, respective governments, donor agencies, and policy makers should take into consideration these multiple aspects when undertaking aid-financing activities. How to cite this paper: Tang, K.-B. and Bundhoo, D. (2017) Foreign Aid and Economic Growth in Developing Countries: Evidence from Sub-Saharan Africa. Theoretical Economics Letters, 7, 1473-1491. https://doi.org/10.4236/tel.2017.75099 Received: July 19, 2017 Accepted: August 12, 2017 Published: August 15, 2017 Copyright © 2017 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/ Open Access K.-B. Tang, D. Bundhoo
Introduction
The role of foreign aid in developing countries has become a subject of heated debate among economists and development specialists over the past decade. This has been generated in large part by international attention towards the Millennium Development Goals (MDGs). The United Nations Millennium Declaration clearly recognises that foreign aid, better termed Official Development Assistance (ODA), is a necessary and complementary source of finance for better development and achieving the MDGs. The Organisation for Economic Co-operation and Development (OECD) defines ODA as government aid designed to promote the economic development and welfare of developing countries. This source of external finance comes in the form of bilateral grants, loans, food aid, emergency relief, technical assistance, financing for construction projects, as well as multilateral flows. Total aid since 1990 amounts to USD 58 billion in current terms, which works out to approximately USD 96 billion in real terms using 2011 prices. These huge amounts of financial assistance to developing countries amply justify the strong debate among scholars on the real contribution of foreign aid to economic growth, sometimes with claims that it is wasted. One way to try to get a handle on this issue is to look at the correlation between ODA and economic growth, which is the purpose of the present paper. A look at the geographical distribution of aid indicates that the Sub-Saharan Africa region has been the largest ODA recipient over the past years, accounting for approximately 35% in 2012, three times larger than the ODA provided to the South Asia region (World Development Indicators Database). With a large portion of foreign aid injected into Sub-Saharan Africa, we would expect to see much improvement in the aggregate growth and standards of living in the region.
The fundamental contribution that foreign aid can bring to the recipient country is economic growth and development, which in turn can reduce poverty. However, despite being the biggest beneficiary of aid, with the highest ODA per capita, it appears that foreign aid has not produced the expected effects in Sub-Saharan Africa. Data from the World Development Indicators Database and the Human Development Report (2013) show that the region's ten largest aid recipients, namely Ethiopia, the Democratic Republic of Congo, Tanzania, Kenya, Côte d'Ivoire, Mozambique, Nigeria, Ghana, Uganda, and Malawi, still face a high level of poverty and low income. In spite of being the ten largest aid recipients in Sub-Saharan Africa, these countries find themselves among the lowest-ranked nations on the HDI. Even with slight improvements in the HDI, there are no significant changes in the standards of living in these countries. For instance, the Democratic Republic of Congo is the second largest aid consumer in the region, but still 71% of its population lives below the poverty line, and it is penultimate in the HDI ranking. Tanzania, Mozambique, Nigeria, Uganda and Malawi also have more than half of their population living below $1.25 a day. This raises a serious question about the effectiveness of foreign aid on the aggregate growth rate of these countries. By narrowing our study to the region's ten largest ODA recipient countries and closely examining the relationship between economic growth and foreign aid for these ten countries, we will be able to shed some light on the extent to which aid has been effective in the economic development of the region as a whole.
Empirical studies of aid effectiveness on growth have shown mixed results.
While some studies, such as Hansen and Tarp [1], Mosley [2], Burnside and Dollar [3], and Collier and Dollar [4], find statistically significant links, some do not (Ram [5]; Boone [6]). The conclusion has been that there is no robust relationship between aid and aggregate growth. One important growth theory which explains the relationship between foreign aid and economic growth is the two-gap model, pioneered by Chenery and Strout [7], who advocate that foreign aid can make a positive contribution to the economic performance of recipient countries by supplementing domestic savings and export earnings through investment and imports respectively, both of which are complemented by foreign aid.
To the extent that foreign aid is an important source of development finance for these developing countries, it should be noted that external factors such as economic policies and institutional and political elements have a large role to play in explaining the effectiveness of aid on economic performance. Building on the work of Burnside and Dollar [3], several empirical studies, such as Collier and Dollar [4], Ram [5], Islam [8], and Boone [6], have incorporated variables such as an institutional quality index and policy indices interacted with the aid variable. Their findings showed that the economic policy, institutional and political environment of the recipient country has a crucial role to play in the aid-growth relationship.
Using a sample of Sub-Saharan Africa's ten largest ODA recipient countries over a 23-year period (1990 to 2012), the objective of this paper is to understand the extent to which aid is effective on the economic performance of the ten largest aid recipients of Sub-Saharan Africa. Specifically, it also aims to analyse the role of the economic, political and institutional factors of the recipient country in the aid-growth relationship. With this objective in mind, we will be able to get to the root of our research question: why, despite the Sub-Saharan Africa region being the largest ODA consumer, does the region remain among the least developed, with a high level of poverty and a poor standard of living?
Specification of Model
This section discusses the model specifications used to examine the relationship between foreign aid and economic growth. The model is derived from the basic neoclassical growth model developed by Solow [9], in which foreign aid is introduced as an input in addition to capital and labour.
The objective of the present paper is to take into account a range of factors, such as economic policies and institutional and political factors, that can help explain the growth performance of the ten chosen African countries, while at the same time ensuring that any inference about the relationship between aid and growth is robust. Inclusion of all the different factors mentioned above yields the following growth model:
$$\begin{aligned}\mathrm{GGDP}_{it} ={}& \beta_0 + \beta_1 \mathrm{LGGDP0}_{it} + \beta_2 \mathrm{INV}_{it} + \beta_3 \mathrm{LABOUR}_{it} + \beta_4 \mathrm{AID}_{it} + \beta_5 (\mathrm{AID}_{it})^2 + \textstyle\sum_n \gamma_n \mathrm{AID}_{i(t-n)} \\ &+ \beta_6 \mathrm{POLICY}_{it} + \textstyle\sum_n \delta_n \mathrm{POLICY}_{i(t-n)} + \beta_7 \mathrm{INSTITUTIONAL\;QUALITY}_{it} + \beta_8 (\mathrm{AID} \times \mathrm{POLICY})_{it} \\ &+ \beta_9 \mathrm{M2GDP}_{i,t-1} + \beta_{10} (\mathrm{AID} \times \mathrm{INSTITUTIONAL\;QUALITY})_{it} + \varepsilon_{it} \qquad (1)\end{aligned}$$
where $i$ indexes countries and $t$ indexes time. GGDP is the real GDP per capita growth rate and is the dependent variable. LGGDP0 is the logarithm of initial real GDP per capita and captures the conditional convergence effects of growth theory (Hansen and Tarp [1]; Collier and Dollar [4]; Ram [5]). INV represents the rate of growth of the capital stock, proxied by gross capital formation as a percentage of GDP. LABOUR represents the increase in the labour force as a percentage of the total population. AID is net official development assistance (ODA) as a percentage of GDP, while (AID)$^2$ measures any diminishing returns to aid. To analyse a longer-run effect of foreign aid on economic growth, we include lagged terms of AID and POLICY, represented by the variables AID$_{i(t-n)}$ and POLICY$_{i(t-n)}$ respectively, where $n$ is the number of lagged periods. POLICY is the policy index capturing the fiscal, monetary, and trade policy of the economy. INSTITUTIONAL QUALITY is a measure of the quality of governance. AID × POLICY is the aid-policy interaction term and measures whether aid effectiveness is conditional on the macroeconomic policies of the recipient country. The M2GDP$_{t-1}$ variable is the lagged M2 (money and quasi money) as a percentage of GDP and measures the financial depth of the economy. AID × INSTITUTIONAL QUALITY is the interaction term between institutional quality and aid; it determines whether aid effectiveness is conditional on the institutional quality of the recipient country. All data are from the World Development Indicators Database of the World Bank.
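As an illustration of how Equation (1) could be estimated, here is a sketch using statsmodels. The column names and the synthetic data are my own assumptions (the paper does not publish code); the random data exist only so the snippet runs end to end.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 230  # 10 countries x 23 years
df = pd.DataFrame({c: rng.normal(size=n) for c in
                   ["GGDP", "LGGDP0", "INV", "LABOUR", "AID",
                    "POLICY", "INSTITUTION", "M2GDP_lag1"]})

# Equation (1) without the lagged AID/POLICY terms, for brevity.
formula = ("GGDP ~ LGGDP0 + INV + LABOUR + AID + I(AID**2) + POLICY"
           " + INSTITUTION + AID:POLICY + AID:INSTITUTION + M2GDP_lag1")
fit = smf.ols(formula, data=df).fit(cov_type="HC1")  # robust standard errors
print(fit.params)
```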
Policy Index
Using a similar approach to Burnside and Dollar [3], we construct a policy index which captures the fiscal, monetary and trade policy environment of the recipient country. Using an equation with GGDP as the dependent variable, the weights of the macroeconomic terms are determined by a regression in which they are used as independent variables to predict growth, without using any terms for foreign aid.
The policy index for each country in each year is then obtained by substituting the estimated coefficients into (2):
$$\mathrm{POLICY}_{it} = \hat\theta_0 + \hat\theta_1\,\mathrm{INFLATION}_{it} + \hat\theta_2\,\mathrm{GOVCONS}_{it} + \hat\theta_3\,\mathrm{TRADE}_{it} \qquad (2)$$
INFLATION represents the logarithm of the inflation rate plus 1 of each country in each year, and is a measure of the monetary policy of the country (Burnside and Dollar [3]; Collier and Dollar [4]). Given that data on the budget surplus were not available, we use government consumption relative to GDP, GOVCONS, as a measure of fiscal policy, as used by Collier and Dollar [4]. TRADE, trade openness, is measured as exports plus imports relative to GDP, that is, trade as a percentage of GDP, as used by Frankel and Romer [10], Collier and Dollar [4], and Dollar and Kraay [11]. All data for constructing the policy index are from the World Bank World Development Indicators database.
Institutional Quality Index
To analyse the effect of institutional factors such as political stability, qualitative aspects of the government, and the level of corruption, we construct an institutional quality index comprised of six indicators from the World Bank World Governance Indicators Database. These six indicators are: "Control of Corruption," "Government Effectiveness," "Political Stability and Absence of Violence/Terrorism," "Regulatory Quality," "Voice and Accountability," and "Rule of Law." The INSTITUTIONAL QUALITY index is constructed using Equation (3):
$$\begin{aligned}\mathrm{INSTITUTIONAL\;QUALITY}_{it} ={}& \Theta_0 + \Theta_1\,\mathrm{STABILITY}_{it} + \Theta_2\,\mathrm{EFFECTIVENESS}_{it} + \Theta_3\,\mathrm{CORRUPTION}_{it} \\ &+ \Theta_4\,\mathrm{LAW}_{it} + \Theta_5\,\mathrm{ACCOUNTABILITY}_{it} + \Theta_6\,\mathrm{REGULATORYQUALITY}_{it} \qquad (3)\end{aligned}$$
where STABILITY measures the "Political Stability and Absence of Violence/Terrorism" index, EFFECTIVENESS captures the "Government Effectiveness" index, CORRUPTION measures the "Control of Corruption" index, LAW is the "Rule of Law" index, ACCOUNTABILITY captures the "Voice and Accountability" data, and REGULATORYQUALITY is the "Regulatory Quality" index. The Θs are the coefficients derived from regressing GGDP on these six indicators.
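In the same spirit, here is a sketch (mine, with assumed column names) of building the institutional quality index: regress growth on the six WGI indicators, then score each observation with the fitted coefficients, as in Equation (3).

```python
import statsmodels.formula.api as smf

INDICATORS = ["STABILITY", "EFFECTIVENESS", "CORRUPTION",
              "LAW", "ACCOUNTABILITY", "REGULATORYQUALITY"]

def institutional_quality(df):
    """Fit GGDP on the six WGI indicators and return the fitted index
    (Equation (3)); df is assumed to hold GGDP plus all six columns."""
    fit = smf.ols("GGDP ~ " + " + ".join(INDICATORS), data=df).fit()
    score = fit.params["Intercept"]
    for ind in INDICATORS:
        score = score + fit.params[ind] * df[ind]
    return score
```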
Estimation Methods
This study makes use of time-series cross-sectional (TSCS), or panel, data for 10 Sub-Saharan African countries over a period of 23 years (1990-2012), giving a total of 230 observations. The estimation period is chosen to use 23 years of time-series observations in each country in order to maximise the cross-sectional dimension of the panel at 10 countries. To achieve our objective of determining the relationship between foreign aid and economic growth, we make use of models and estimation methods richer than basic Ordinary Least Squares (OLS), such as pooled OLS, fixed-effects, random-effects, first-difference estimator, and two-stage least squares methods.
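The five estimators can be lined up side by side with the `linearmodels` package; the paper does not name its software, so the library choice, the column names and the synthetic panel below are all my assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import (PanelOLS, PooledOLS, RandomEffects,
                                FirstDifferenceOLS)
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(1)
idx = pd.MultiIndex.from_product(
    [[f"c{i}" for i in range(10)], range(1990, 2013)],
    names=["country", "year"])
cols = ["GGDP", "LGGDP0", "INV", "LABOUR", "AID", "POLICY"]
panel = pd.DataFrame(rng.normal(size=(len(idx), len(cols))),
                     index=idx, columns=cols)

y = panel["GGDP"]
X = panel[["INV", "LABOUR", "AID", "POLICY"]]
pooled = PooledOLS(y, sm.add_constant(X)).fit(cov_type="robust")
re = RandomEffects(y, sm.add_constant(X)).fit(cov_type="robust")
fe = PanelOLS(y, X, entity_effects=True).fit(cov_type="robust")
fd = FirstDifferenceOLS(y, X).fit()          # no constant allowed here
iv = IV2SLS(dependent=y,                     # AID instrumented, as in the paper
            exog=sm.add_constant(panel[["INV", "LABOUR"]]),
            endog=panel[["AID"]],
            instruments=panel[["LGGDP0", "POLICY"]]).fit()
```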
Policy Index Construction
We begin with a regression of our base specification, using Equation (4) below, but excluding any of the terms involving aid:
$$\mathrm{GGDP}_{it} = \beta_0 + \beta_1\,\mathrm{LGGDP}_{it} + \beta_2\,\mathrm{INV}_{it} + \beta_3\,\mathrm{LABOUR}_{it} + \beta_4\,\mathrm{M2GDP}_{i,t-1} + \beta_5\,\mathrm{INFLATION}_{it} + \beta_6\,\mathrm{GOVCONS}_{it} + \beta_7\,\mathrm{TRADE}_{it} + \varepsilon_{it} \qquad (4)$$
Given that data for the institutional factors was not available for the whole period, we did not include them in the present regression model.However, they will be considered at a later stage.
The regression output of Equation (4), as illustrated in Table 1, column (1), shows that the model is statistically significant at the 5% level. The most significant variables in the regression (model (1)) are INFLATION and GOVCONS, at the 1% level, where the GDP per capita growth rate would decrease with an increase in either variable. The logarithm of initial GDP per capita and investment are statistically significant at the 5% and 10% levels respectively. All the variables have the intuitive signs, except for LABOUR.
Using the regression coefficients from Table 1, model (1), we construct the policy index comprised of government consumption, inflation, and trade:
$$\mathrm{POLICY}_{it} = 7.038 - 2.842\,\mathrm{INFLATION}_{it} - 0.304\,\mathrm{GOVCONS}_{it} + 0.033\,\mathrm{TRADE}_{it} \qquad (5)$$
As mentioned above, INFLATION and GOVCONS are both statistically significant. Although the trade openness variable is not significant at the 5% level, we have reason to believe that there is considerable multicollinearity between the variables in the model. As such, we include all three variables in the policy index.
We let the growth regression determine the relative importance of the different policies in the policy index. The constant term, 7.038, is found by predicting the growth rate using the mean value of all the other variables in the regression. In this way, the policy index can be thought of as the predicted growth rate of the country for that time period (assuming mean values of all other variables).
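Reading the coefficients off Equation (5), the index reduces to a one-line function. Note that the signs on INFLATION and GOVCONS (negative) and TRADE (positive) are my inference from the reported directions of effect, not something stated symbolically in the extracted text.

```python
import math

def policy_index(inflation, govcons, trade):
    """Equation (5): predicted growth from the three policy variables.
    inflation is the raw rate; the regressor is log(1 + inflation)."""
    return (7.038
            - 2.842 * math.log(1 + inflation)   # monetary policy
            - 0.304 * govcons                   # fiscal policy, % of GDP
            + 0.033 * trade)                    # trade openness, % of GDP

print(policy_index(inflation=0.10, govcons=15.0, trade=60.0))
```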
Using the basic pooled OLS estimation method, we add the AID variable to produce model (2). The result, as illustrated in Table 1, column (2), shows that ODA as a percentage of GDP is not significant. We notice that the coefficients on the policy variables are almost unchanged, indicating that the partial correlation between aid and our policy variables is close to 0. Since the R-squared has remained at approximately 0.25 in both cases, which implies that the R-squared does not increase by much when AID is added to the model, this indicates that aid has no significant impact on growth. Nevertheless, the model is statistically significant at the 5% level.
Evaluation of Models
We now present the estimates obtained for five different models: pooled OLS, two-stage least squares (2SLS), first-difference (FD) estimator, fixed-effects (FE), and random-effects (RE) models (see Table 2). The policy index, calculated using Equation (5), is included as an explanatory variable in the models. To analyse the impact of lagged values of aid and policies on growth, we ran two regressions under each model: one with, and the other without, the lagged values. To determine which model is more reliable out of the five, we carry out the Hausman test and the Breusch-Pagan Lagrange Multiplier (BPLM) test. Out of the pooled OLS, random-effects, fixed-effects, and 2SLS models, we find that the pooled OLS is the best model. No comparison test is made with the first-difference model because none of its coefficients is common with the other models: the first-difference estimator makes use of the first difference of each variable. These tests therefore suggest that analysis of the aid-growth relationship should be based on the more consistent models: pooled OLS and the first-difference model. Based on the F or Wald statistics, all ten models are significant at the 5% level. Although the Hausman test prefers the pooled OLS over the random-effects model, we find that the results from both methods are exactly the same, with an R-squared of 0.2535, and 0.3335 when we include lagged values of aid and policy terms. The reason for this identical output is that there are no significant differences across these ten countries, and thus the random-effects estimates are consistent with the pooled OLS estimates. Under the assumption that AID may be endogenous, models (3) and (4) in Table 2 use the two-stage least squares method, where LGGDP and POLICY are used as instrumental variables for the endogenous variable, AID. In other words, we believe that initial GDP per capita and the economic policies of the country affect real GDP per capita growth indirectly through foreign aid. In their respective work, Burnside and Dollar [3] and Hansen and Tarp [1] also used the log of initial GDP and the policy index as instruments in evaluating aid effectiveness on economic growth. The first-stage regression with AID as the dependent variable is reliable and significant at the 5% level, with an R-squared of 0.9121 in model (3), and 0.6823 in model (4). The outcome therefore suggests that the 2SLS estimates are reliable. For each regression model, the standard errors were adjusted to correct for heteroscedasticity; we therefore used robust standard errors.
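The Hausman statistic used for the fixed-versus-random-effects choice can be computed directly from the two fitted results. This is a minimal sketch (mine), assuming `fe` and `re` are fitted linearmodels results sharing coefficient names.

```python
import numpy as np
from scipy import stats

def hausman(fe, re):
    """H = (b_FE - b_RE)' [V_FE - V_RE]^{-1} (b_FE - b_RE),
    chi-squared with dof = number of common coefficients."""
    common = [c for c in fe.params.index if c in re.params.index]
    b = (fe.params[common] - re.params[common]).values
    V = (fe.cov.loc[common, common] - re.cov.loc[common, common]).values
    H = float(b @ np.linalg.pinv(V) @ b)   # pinv guards near-singular V
    return H, stats.chi2.sf(H, df=len(common))
```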
Foreign Aid and Economic Growth
An analysis of the relationship between current foreign aid and economic growth showed mixed results. We find that in models (1), (3), (5), (7), and (9), where none of the lagged values of aid or policy are incorporated, the coefficient on aid itself is not significantly different from zero at the 5% level. Burnside and Dollar [3], Hansen and Tarp [1] and Boone [6] also came to a similar conclusion: in their respective empirical studies, they found that foreign aid has an insignificant impact on the economic growth rate. When we take into consideration the lagged values of aid, policy, and their interaction terms, as in models (2), (4), (6), (8), and (10), we find that AID is statistically significant at the 5% level using the pooled OLS, first-difference, and random-effects models, and the models exhibit a negative aid-growth relationship. The idea that aid undermines growth has been found in many studies (e.g. Bakare [12], Griffin and Enos [13], and Knack [14]). Knack [14] explains this negative link by stating that aid dependency is disadvantageous to the economy since it tends to undermine the quality of governance, by encouraging corruption and provoking conflicts over control of aid funds.
Considering lagged values of aid (models (2), (4), (6), (8) and (10)), we find that while current foreign aid depicts a negative, significant relationship with economic growth, as explained above, the one-period lagged value of aid shows a significant positive impact on economic growth under all five model estimators at the 1% level. This implies that this year's foreign aid would positively impact next year's economic growth by approximately 0.2%. A positive aid-growth relationship is always encouraging because more aid implies higher economic growth. A positive association between foreign aid and growth has been found in many empirical studies, such as Burnside and Dollar [3], Dalgaard et al.
[15], and Dowling and Hiemenz [16]. The insignificant or negative impact of current aid on economic growth and the positive impact of the one-period lagged value of aid can be explained by the argument given by Moreira [17]: one would not expect foreign aid to have an immediate effect on growth. Instead, lags may occur between aid-financed activities and their final impact on economic growth. This is especially the case with foreign assistance given for infrastructure, research and development, or education purposes, which may not show any impact on growth in the immediate or short term. Figures from the OECD database show that the purposes for which aid is provided by the DAC are mainly social and administrative, and economic infrastructure, which tend to take time to be fully reflected in growth.
The fact that aid inflows consist of a large component which tends to have a very gradual impact on growth may help explain the non-significant aid-growth relationship under the 2SLS and fixed-effects models. When we look at the impact of foreign aid in two and three years' time, we find no significant effect at the 5% level: the coefficients of AID$_{t-2}$ and AID$_{t-3}$ were non-significant across the five models. Rabin [18] explains that sustained and rapid population growth in the African region is one important condition making aid effectiveness harder. There is therefore a need to address this demographic challenge.
Economic Policies and the Aid-Growth Relationship
One important aspect of this study is the contribution of economic policies (fiscal, monetary, and trade) to the aid-growth relationship. By incorporating the policy index, constructed using Equation (5), into the growth models, we find that the policy index in models (1), (7), and (9) (see Table 2) is positive and statistically significant at the 1% level. Also, the coefficient is close to 1, which is similar to Burnside and Dollar's [3] findings. In models (2), (4), (6), (8), and (10), where lagged values of aid, policy, and aid-policy terms are incorporated, we find that the policy index in the current period is not statistically significant. However, the one-period lagged value of the policy index appears to have a highly positive, significant impact on the GDP per capita growth rate. The two- and three-period lagged values of policy, on the other hand, are insignificant in most cases, except under the first-difference estimator. Nevertheless, the most important variable in the model is the AID × POLICY interaction term: it is believed that the economic policies of an economy have a crucial role in determining the impact of aid on the growth rate.
Considering the aid-policy interaction term in the growth models, an interesting result emerges. The aid-policy interaction term has a significantly positive coefficient at the 1% level across all models, except in (1), (7) and (9) (see Table 2).
Burnside and Dollar [3] and Denkabe [19] also found similar results, where the relationship was positive. The positive, significant interaction term implies that the higher the policy level, the greater the effect of foreign aid on GDP per capita. The results from our study, which used the inflation rate, trade openness, and government consumption as measures in the policy index, tend to assert that these countries should aim at improving these three economic factors.
When we refer to the lagged aid-policy interaction term, the results show a significant negative coefficient for the variable AID$_{t-1}$ × POLICY$_{t-1}$ across all models. This is interpreted as follows: with a high level of economic policies last year, foreign aid provided last year will tend to have a lower impact on the present economic growth level, implying that aid works better in worse policy environments. Two- and three-period lagged values of the aid-policy interaction terms were, however, positive and statistically significant at the 5% level using the first-difference estimator. Nonetheless, these results clearly acknowledge that the economic policies of a country do have important implications for the aid-growth relationship, the impacts of which, however, manifest themselves differently at different lagged periods.
Institutional Factors and the Aid-Growth Relationship
Among other factors, very crucial elements that have arisen in understanding the relationship between aid and economic growth are the political and governance issues facing a country. Such a point is an important contribution to our study in the sense that Africa is considered to lag behind with respect to good governance and is a region where conflicts are very common. Many recent articles, such as Moyo [20] and Abuzeid [21], have claimed that the large infusion of foreign assistance into these African countries may not have served its true purpose due to poor governance and the high political instability prevailing there. The Worldwide Governance Indicators (WGI) database of the World Bank provides six measures of the institutional quality of the economy, which are: "Control of Corruption," "Government Effectiveness," "Political Stability and Absence of Violence/Terrorism," "Regulatory Quality," "Voice and Accountability," and "Rule of Law." Given that these data are not available for the whole period of 1990-2012, but only for 14 years, we believe that the results might not reflect the real situation in these countries and have therefore excluded them from the growth regressions in Table 2.
To analyse the role of the institutional qualities of an economy in the effectiveness of aid, we construct an institutional quality index comprised of all six governance indicators. By regressing GGDP on these six indicators, we obtain the following results.
Using the regression coefficients from Table 3, we obtain the institutional quality index. Using GGDP as the dependent variable, we let the growth regression determine the relative importance of the different institutional indicators in the index. The model is reliable and significant at the 5% level. STABILITY and REGULATORYQUALITY are significant at the 1% level. EFFECTIVENESS and ACCOUNTABILITY are statistically significant at the 5% and 10% levels respectively. Although LAW and CORRUPTION are not significant at the 5% level, we believe there is considerable multicollinearity between the variables in the model, and thus we include all six indicators in the institutional quality measure.
With reference to the work of Knack and Keefer [22], Burnside and Dollar [3] used a measure of institutional quality that captures the security of property rights and the efficiency of the government bureaucracy. We trust that the institutional quality measure constructed in this study is a better indicator since it covers a wider aspect of government quality. By replacing the six indicators with the institutional quality index, and using the same approach as in Table 2, we get the results illustrated in Table 4.
Referring to the F-statistics, we find that models (5), (6) and (7) are not reliable. Model (8) also does not have much significance: the model is reliable only
at the 10% level. Including the institutional quality index and its interaction with aid in the model has changed the results quite significantly. Compared to Table 2, the goodness-of-fit of each model is reduced. Aid was insignificant under all models. The aid-policy interaction terms were also non-significant, which implies that the economic conditions of an economy do not affect the extent to which aid impacts the GDP growth rate. Using Burnside and Dollar's [3] dataset and sample for a period of 24 years, from 1970 to 1993, Easterly [23] also found no significant relationship between the aid-policy interaction term and GDP per capita growth using the OLS and 2SLS models.
The main objective of Table 4 is to see whether institutional quality has any role to play in the aid-growth relationship. The results showed that INSTITUTIONAL QUALITY is statistically significant and positive in nearly all models, which implies that institutional factors such as government effectiveness, political stability and other qualitative factors have a significant impact on the GDP per capita growth rate. However, the institutional quality index interacted with foreign aid is not significant, except under the 2SLS model. The results reported in Table 4 appear to be less reliable, with a larger number of coefficients being non-significant compared to the model presented in Table 2. This can be partly explained by the smaller sample size, since data for these indicators were only available for a smaller time frame.
Given that the institutional qualities of an economy have a significant impact on the GDP per capita growth rate in the Sub-Saharan Africa region, we acknowledge that political stability, government effectiveness, corruption and the quality of the law have a very big role in explaining the aid-growth situation in the Sub-Saharan Africa region today. In an online article titled "Why foreign aid is hurting Africa" in the Wall Street Journal, economist Moyo [20] clearly explains that foreign aid is making Africa poorer. The most important reason put forward in her article is the high level of corruption and government inefficiency. Given that the region faces high levels of debt, with the infusion of large amounts of aid, debts are being repaid at the expense of improving the economic activity of the country. Also, foreign aid given to boost the economy usually ends up satisfying the personal gains of bureaucracies. In spite of the knowledge that recipient countries will misuse the foreign aid, donors fail to speak out against them because of the strategic or political importance of these regions as allies [24]. The corruption watchdog agency Transparency International found evidence of several cases where foreign aid money is being massively misused at the expense of the development of the economy. A 2002 report by the African Union estimated that corruption was costing the continent $150 billion a year.
This is crucial evidence that the high level of corruption and political instability has been a hindrance to aid effectiveness for economic development in the Sub-Saharan Africa region, reflected in the area's high level of poverty and very low HDI.
Testing the Two-Gap Model
As explained by the Harrod-Domar model, further developed by Chenery and Strout [7] as the two-gap model, foreign development assistance is an important ingredient for boosting economic activity in a country. The two-gap model states that foreign assistance can play a critical role in supplementing domestic resources in order to relieve savings or foreign-exchange bottlenecks (Todaro and Smith [25]). The basic argument here is that most developing countries face either a shortage of domestic savings to match their investment opportunities, or a shortage of foreign exchange to finance needed imports of capital and intermediate goods. With a need to increase investment and imports, these two gaps are mostly filled with foreign aid. By applying a similar approach to Easterly [26], we analyse the impact of foreign aid in the two-gap model.
Using the same approach, we find that, overall, foreign aid has a positive, significant impact on the level of investment at the 1% level. A 1% increase in ODA per GDP would raise the investment level by approximately 0.46%.
However, this is far from a one-to-one relationship. A country-by-country analysis shows that, out of the ten countries, only in the Democratic Republic of Congo, Ethiopia, Malawi, Mozambique, and Tanzania would the financing gap be improved by aid.
The two-gap model also presents a trade gap, where export earnings may not be sufficient to offset import requirements. If the trade gap is larger than the investment-savings gap, then foreign aid will automatically fill the investment-savings gap as well. Assuming aid requirements are calculated as the excess of imports over exports, we expect aid to go one-for-one into imports. An overall analysis shows that a 1% increase in aid would significantly increase imports by 0.48%. A country-by-country regression shows that imports are improved by foreign aid in eight of the ten countries, the exceptions being Côte d'Ivoire and Uganda.
The results of testing the two-gap model show that foreign aid has a more favourable effect on imports than on the investment levels of the countries.
Côte d'Ivoire's investment level and imports, however, do not appear to be improved through foreign aid. But overall, foreign aid can help improve economic growth through investment and imports in the Sub-Saharan region of Africa (Table 5).
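Here is a sketch (mine, with assumed column names; all variables as shares of GDP) of the regressions behind Table 5. If aid supplements the two gaps, the AID coefficients should be positive; the text reports roughly 0.46 for investment and 0.48 for imports.

```python
import statsmodels.formula.api as smf

def two_gap_tests(df):
    """Does ODA/GDP (column AID) move with investment and with imports?
    df is assumed to hold columns AID, investment, imports."""
    inv_fit = smf.ols("investment ~ AID", data=df).fit(cov_type="HC1")
    imp_fit = smf.ols("imports ~ AID", data=df).fit(cov_type="HC1")
    return inv_fit.params["AID"], imp_fit.params["AID"]
```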
Conclusions
To understand the impact of foreign aid on economic growth in the Sub-Saharan African region, this study makes use of a sample of ten countries over a period of 23 years, from 1990 to 2012. These countries were chosen on the basis that they are the ten largest recipients of aid in Sub-Saharan Africa, namely: Ethiopia, the Democratic Republic of Congo, Tanzania, Kenya, Côte d'Ivoire, Mozambique, Nigeria, Ghana, Uganda, and Malawi. Based on the important work of Burnside and Dollar [3], we have explored the impact of policy and institutional variables in our aid-growth analysis. Our results showed that aid by itself is not effective on the economic performance of the recipient country. The conclusion we could derive was that the policy and institutional environment of the country has important implications for aid effectiveness: aid tends to be more effective in countries with sound economic and institutional policies. An important contribution we made in this study is the analysis of lagged variables of aid, policy and the aid-policy interaction term. We could deduce that foreign
aid may not show any immediate impact on economic growth, since ODA intended for investment projects (social and administrative/economic infrastructure) will only show an effect on economic growth in the medium or longer term. A test of the two-gap model pioneered by Chenery and Strout [7] shows that foreign aid can help promote economic growth through supplementing imports and investment. The empirical results, which show that economic policies and institutional factors have important significance for aid effectiveness and economic growth in the ten largest recipients of aid in the Sub-Saharan African region, imply that governments and aid agencies should take these factors into consideration when it comes to improving aid efficiency. In this study, the economic policy index was constructed using the inflation rate, trade as a percentage of GDP, and government consumption. We can therefore fairly assert that recipient countries should aim to improve these three variables for a better economic policy environment. Effective and efficient use of foreign aid is, however, possible only in countries with good governance and less corruption, which does not seem to be the case for the countries studied in this paper. This could partly explain why a high percentage of people living in these countries are still living in extreme poverty. Measures, as explained by Collier [27], to better improve aid effectiveness may include the provision of aid on the basis of the attained level of policies rather than on promises of improvement, and foreign aid in the form of technical assistance and skills rather than money, which in return may help promote productivity. Moyo [20] lays emphasis on becoming aid-independent and exploiting natural resources such as oil, copper and gas reserves, which are in abundance on the African continent.
The findings would have been more precise if there were no limitations to the study, in particular due to data gaps. For instance, data for the institutional quality variables were available for only 14 years. We also believe a larger sample size representing the whole of Sub-Saharan Africa is advisable for future research. Additionally, aid effectiveness could be studied using different parameters such as income levels, donors' characteristics, etc. Incorporating a larger set of variables to capture the economic policy and institutional environment of the recipient country can provide better results. To analyse the impact of foreign aid on economic growth for more than one year, we included lagged variables of aid for three periods. We believe that developing a dataset with a sufficient quantity of data to allow analysis of aid effectiveness over a longer time period is highly recommended for better policy making. The above suggestions, if considered, can improve the findings of the present paper and hence better inform policymakers and aid agencies.
Notes: The dependent variable is the real per capita GDP growth in models (1) and (2). We use Equation (4) to construct model (1). In model (1), the independent variables are the logarithm of initial GDP per capita (LGGDP), investment (INV), change in the labour force participation rate (LABOUR), the one-period lagged ratio of M2 to GDP (M2GDPt−1), inflation (INFLATION), government consumption as a percentage of GDP (GOVCONS) and trade as a percentage of GDP (TRADE). INFLATION, GOVCONS and TRADE are the policy variables. In model (2), we add AID, ODA as a percentage of GDP, as an independent variable in addition to the variables presented in model (1). *Significant at the 10% level; **Significant at the 5% level; ***Significant at the 1% level; robust standard errors are in parentheses.
Table 2. Growth regressions using policy index and aid terms.
Notes: Using Equation (1), the dependent variable is the real per capita GDP growth in models (1) through (10). In models (1), (3), (5), (7), and (9), the independent variables are the logarithm of initial GDP per capita (LGGDP), investment (INV), increase in the labour force participation rate (LABOUR), ODA as a percentage of GDP (AID), the squared term of AID (AID2), the policy index (POLICY), the aid-policy interaction term (AID × POLICY), and the one-period lagged ratio of M2 to GDP (M2GDPt−1). Models (2), (4), (6), (8), and (10) include the three-period lagged values of AID, POLICY, and the AID × POLICY interaction term. The 2SLS models (3) and (4) use LGGDP and POLICY as instrumental variables. *Significant at the 10% level; **Significant at the 5% level; ***Significant at the 1% level; robust standard errors are in parentheses.
Table 3. Institutional quality index.
Note: Table 3 is used to construct the institutional quality index, comprised of measures of political stability (STABILITY), government effectiveness (EFFECTIVENESS), corruption (CORRUPTION), quality of law (LAW), voice and accountability (ACCOUNTABILITY), and quality of government policies (REGULATORYQUALITY). The dependent variable is the annual growth rate in GDP per capita (GGDP). *Significant at the 10% level; **Significant at the 5% level; ***Significant at the 1% level; robust standard errors are in parentheses.
Table 4. Growth regressions using institutional index and policy index.
Notes: The dependent variable is the real per capita GDP growth in models (1) through (10). In models (1), (3), (5), (7), and (9), the independent variables are the logarithm of initial GDP per capita (LGGDP), investment (INV), increase in the labour force participation rate (LABOUR), ODA as a percentage of GDP (AID), the squared term of AID (AID2), the policy index (POLICY), the aid-policy interaction term (AID × POLICY), a measure of the institutional quality of the country (INSTITUTION), the institution-aid interaction term (AID × INSTITUTION), and the one-period lagged ratio of M2 to GDP (M2GDPt−1). Models (2), (4), (6), (8), and (10) include the three-period lagged values of AID, POLICY, and the AID × POLICY interaction term. The 2SLS models (3) and (4) use LGGDP and POLICY as instrumental variables. *Significant at the 10% level; **Significant at the 5% level; ***Significant at the 1% level; robust standard errors are in parentheses.
Table 5. Testing the two-gap model.
Note: Table 5 shows the results of testing the two-gap model, as conducted by Easterly [26]. Column (1) is the result of regressing aid requirements, defined as the investment-savings gap, on the level of investment. Column (2) is the outcome of regressing aid requirements, defined as the import-export gap, on the level of imports. A country-by-country analysis is conducted, then an overall analysis is carried out. Data on savings, investment, exports and imports are obtained from the World Bank. *Significant at the 10% level; **Significant at the 5% level; ***Significant at the 1% level; a: aid is calculated as the investment-savings gap; b: aid is calculated as the export-import gap; c: the analysis conducted on all ten countries together. | 2018-04-24T22:28:13.715Z | 2017-06-30T00:00:00.000 | {
"year": 2017,
"sha1": "f31fde5ccd5b372c2661d95b40d2b1475fb10108",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=78442",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "f31fde5ccd5b372c2661d95b40d2b1475fb10108",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
54665142 | pes2o/s2orc | v3-fos-license | Nutrigenomics Coupling with other OMICS Platform Enhance Personalized Health Care in Metabolic Disorders
Diabetes is a multifactorial group of diseases characterized by high blood glucose levels, which occur as a result of the body's inability to produce and/or use insulin. Both type 1 and type 2 diabetes are thought to be complex diseases which develop under the influence of many susceptibility and protective genes, in interaction with negative and positive environmental factors. Type 1 diabetes is distinguished by beta-cell loss mediated by an autoimmune process, to the extent that all patients with overt type 1 diabetes will essentially need insulin. Multiple genetic factors have been connected to type 1 diabetes, which can inform individualized plans for type 1 prevention. This review focuses on type 2 diabetes (T2D), which has become an increasingly challenging health burden as a result of its degree of morbidity and mortality and its heightened prevalence worldwide. According to the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC), T2D is among the top ten leading causes of death in the USA and the world at large, while prediabetes is prevalent among children and young adults. T2D, also known as hyperglycemia, results from compromised insulin utilization (insulin resistance, IR) linked with insufficient compensatory insulin production. Long-term consequences and comorbidities of T2D are nephropathy, neuropathy, retinopathy, hypertension, cardiovascular disease, dyslipidemia, and cerebrovascular and peripheral vascular disease [1-3].
Short Communication
Recently, dietary and nutritional imbalances have become recognized as key risk factors for T2D, but the underlying mechanism remains ill-defined. Since diets and genes change one's health and susceptibility to diseases, identifying genes that are regulated by diet and can cause or contribute to metabolic/chronic diseases could bring about the development of diagnostic tools, individualized interventions and strategies for maintaining health. The genetic makeup individuals inherit from their parents is responsible for variation in response to food and in susceptibility to chronic diseases such as T2D. Familiar variations in gene sequence, including single nucleotide polymorphisms (SNPs), bring about differences in complex traits such as food-gene interaction, height or weight potential, and food metabolism [2,4,5]. There is a need to understand the systemic disease-monitoring platforms involved in nutrigenomics, such as risk definition, risk detection and disease validation; how the omics can be applied to health monitoring, including disease prevention and treatment; how the correlation between gene expression and metabolic processes at the cellular level influences an individual's health; and whether understanding the interaction between genes and nutrients will lead to personalized/individualized nutrition. To address those questions, integration of diverse omics as a disease-care, prevention and monitoring module is essential to understanding the role nutrients play in health and disease.
T2D, as a polygenic, multifactorial disease, can serve as a model for cancer, obesity, cardiovascular disease and other chronic diseases that are influenced by diet and environmental factors. Epidemiological studies showed that about 90% of T2D cases result from five major lifestyle factors: diet, physical activity, smoking, obesity and alcohol consumption [1-3]. Among these, diet is essential, given that T2D is a disease rooted in dysfunctional metabolism and energy fuel utilization.
An imbalanced diet, in both quality and quantity, is an established risk factor for obesity, which is closely linked to T2D. Transcriptomics, epigenomics, proteomics, metabolomics and microbiomics could enhance the health-care surveillance systems implemented for people suffering from metabolic disorders [2,4,6]. Transcriptomics is the most successful technology applied to nutrigenomics. It covers the step of passing information from DNA to RNA and allows the simultaneous measurement of almost all genes expressed in a given cell, tissue or organism. Altered gene expression has been linked to malignancies such as prostate cancer and hepatocellular carcinoma [4,5].
Proteome studies
Proteome studies in nutrigenomics detected both well-
The study of metabolomics
The study of metabolomics makes it possible to conduct high-resolution characterization of thousands of metabolites.
Metabolites play a major role in IR and T2D, since diabetes is a metabolic disorder. Metabolomics studies the changes in metabolites with the aim of isolating and characterizing them. The two major analytical methods used in metabolomics analyses are MS and NMR. Each analytical method has its own inherent advantages and disadvantages; for example, NMR-based methods offer high reproducibility but low sensitivity compared to MS-based techniques. These metabolomics tools help to comprehensively measure key metabolites in signaling, receptor binding, translocation and biochemical reaction pathways. In general, various metabolomics approaches can detect known biomarkers of diabetes such as sugar metabolites (1,5-anhydroglucitol), ketone bodies (3-hydroxybutyrate) and the branched-chain amino acids. Recent metabolomics studies have revealed diet-specific changes in metabolites. An example is that a high-fat diet (HFD) increases lipid metabolites (phosphatidylcholines and fatty acids) but decreases lipid metabolism intermediates (several acyl carnitines) and the NAD+/NADH ratio, which indicates a decrease in beta oxidation and abnormal lipid and energy metabolism. In addition, the levels of metabolites associated with obesity-related diseases reflect energy harvest and various metabolic pathways in the organism [9]. For instance, it has been discovered that the microbiome ... In conclusion, the overview of the application of omics platforms in metabolic diseases such as diabetes (T2D) testifies to the ability of molecular-based detection technologies to determine novel biomarkers which can be used to diagnose, predict and monitor the progress of diabetes and to develop preventive and therapeutic strategies along with relevant bioinformatics. Omics studies in T2D have shown the broad impact of dietary imbalance on molecular systems. Studies of the high-fat diet (HFD) revealed a detrimental shift in dietary components leading to critical metabolomic changes and promoting gut microbiomic dysbiosis, which to a great extent aggravates metabolomic dysfunction. These changes in key metabolites can modify the epigenome and perturb circadian rhythm to promote reprogramming of the transcriptome and proteome, which can eventually lead to disruption in the diversity, amount and also the movement pattern of genes and proteins involved in major metabolic pathways and immune processes necessary for T2D development. | 2019-04-02T13:04:56.079Z | 2017-02-03T00:00:00.000 | {
"year": 2017,
"sha1": "ecc1a2162a67c0561e5d91cc618494d7ff32e919",
"oa_license": "CCBYNC",
"oa_url": "https://www.peertechz.com/articles/GJODMS-4-116.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f3d6b0e410cf6664175493c04838cfc9fcd2e5f7",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
14823957 | pes2o/s2orc | v3-fos-license | Alcohol Enhances Acinetobacter baumannii-Associated Pneumonia and Systemic Dissemination by Impairing Neutrophil Antimicrobial Activity in a Murine Model of Infection
Acinetobacter baumannii (Ab) is a common cause of community-acquired pneumonia (CAP) in chronic alcoholics in tropical and sub-tropical climates and associated with a >50% mortality rate. Using a murine model of alcohol (EtOH) administration, we demonstrated that EtOH enhances Ab-mediated pneumonia leading to systemic infection. Although EtOH did not affect neutrophil recruitment to the lungs of treated mice, it decreased phagocytosis and killing of bacteria by these leukocytes leading to increased microbial burden and severity of disease. Moreover, we determined that mice that received EtOH prior to Ab infection were immunologically impaired, which was reflected in increased pulmonary inflammation, sequential dissemination to the liver and kidneys, and decreased survival. Furthermore, immunosuppression by EtOH was associated with deregulation of cytokine production in the organs of infected mice. This study establishes that EtOH impairs immunity in vivo exacerbating Ab infection and disease progression. The ability of Ab to cause disease in alcoholics warrants the study of its virulence mechanisms and host interactions.
Introduction
Acinetobacter baumannii (Ab) is a Gram-negative bacterium that has gained particular notoriety as one of the leading causes of nosocomial infections, principally amongst immunocompromised individuals [1]. Similarly, Ab is a primary agent of community-acquired pneumonia (CAP), particularly in individuals with a history of alcohol (EtOH) abuse who characteristically present a fulminant clinical course with secondary bloodstream infection and a >50% mortality rate in tropical and sub-tropical climates [2]. Approximately 10% of alcoholics transiently carry this bacterium in the nasopharynx which may act as the source of infection [3]. To exacerbate the problem, Ab has an intrinsically extraordinary ability to develop resistance to commonly used antibiotics [4,5,6].
Excess EtOH consumption may lead to deficiencies in the host's innate and acquired immunity, causing increased susceptibility to infections [7,8]. For instance, pneumonia mediated by Klebsiella or Streptococcus pneumoniae augmented mortality in EtOH-treated rodents [9,10]. Acute and chronic EtOH abuse alters the capacity of monocytes to present antigens to T-cells [11] as well as adversely modifying the levels of pro-inflammatory cytokines produced by phagocytes [12]. Notably, natural killer cell proliferation and activity are considerably reduced after EtOH consumption [13]. Moreover, EtOH exposure elevates serum IgG and IgA antibody levels, a clear indication of B-cell dysfunction [14]. Furthermore, inflammatory mediators are negatively regulated by immune cells following EtOH exposure [15].
Neutrophils circulate in the blood and are among the first cells to arrive at the site of infection; thus, neutrophils play an important role in early modulation of the immune response to multidrug-resistant Ab infection and tissue injury [16,17]. We have recently shown the detrimental effects of EtOH on macrophages in Ab infection [18]. However, there is a lack of information about EtOH effects on specific effector functions of neutrophils in Ab-mediated pneumonia. In this study, we hypothesized that physiological EtOH levels impair neutrophil antimicrobial responses, enhancing Ab pathogenesis. We demonstrated that EtOH damages neutrophil effector functions, an important factor that might enhance the severity of community-acquired Ab pneumonia in alcoholic patients, resulting in high mortality.
To our knowledge, this is the first study describing the role of EtOH specifically in the setting of Ab infection using a mammalian experimental infection model. Furthermore, we explore the impact of EtOH on neutrophils which play a critical role in host resistance to respiratory Ab infection.
Bacteria
Ab 0057, a clinical isolate acquired from Mark D. Adams (Cleveland, OH), was chosen for this study because it has been sequenced and it is resistant to β-lactam antibiotics including carbapenems but remains susceptible to tigecycline and colistin [19,20]. The strain was collected from the bloodstream of a soldier in 2004 at Walter Reed Army Medical Center, Washington DC. The isolate was stored at −80°C in Tryptic Soy Broth (TSB; Difco Laboratories, Detroit, MI) with 50% glycerol. Frozen stocks were grown in TSB with rotary shaking at 150 rpm overnight at 37°C. Optical density (OD) measurements were taken at 600 nm (Bio-Tek, Winooski, VT) to monitor growth.
Ethics statement
All animal studies were conducted according to the experimental practices and standards approved by the Institutional Animal Care and Use Committee (IACUC) at Long Island University (Protocol #: . The IACUC at Long Island University approved this study.
Colony forming units (CFU) determinations
At indicated time points (4, 24, 48, and 72 h) post-infection, mouse tissues (lungs, liver, and kidney) were excised, perfused and externally washed with sterile PBS, before being finally homogenized in PBS. Serial dilutions of homogenates were performed, with 100 µL of each sample plated on TS agar (TSA; Difco) plates and incubated at 37°C for 24 h. Subsequently, bacterial colonies were counted and the results were normalized by tissue weights.
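For clarity, the back-calculation from colony counts to tissue burden can be written out as below. This is a minimal illustrative sketch, not code from the study, and every numeric input is hypothetical; the protocol itself only specifies serial dilutions, 100 µL plated per plate, and normalization by tissue weight.

```python
# Minimal sketch of the CFU-per-gram arithmetic used in plate counting.
# All input numbers are hypothetical; the study reports only the protocol.

def cfu_per_gram(colonies: int, dilution_factor: float, plated_volume_ml: float,
                 homogenate_volume_ml: float, tissue_weight_g: float) -> float:
    """Back-calculate viable bacteria per gram of homogenized tissue."""
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * homogenate_volume_ml / tissue_weight_g

# Example: 150 colonies on the 10^-3 dilution plate, 0.1 mL (100 µL) plated,
# a 0.25 g lung sample homogenized in 1 mL PBS.
print(f"{cfu_per_gram(150, 1e3, 0.1, 1.0, 0.25):.2e} CFU/g")  # 6.00e+06 CFU/g
```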
Cytokine and myeloperoxidase (MPO) determinations
Three mice per group were sacrificed at 4, 24, 48, and 72 h post-infection. The organs of each mouse were excised, perfused and externally washed with PBS, and finally homogenized in PBS with protease inhibitors (Complete Mini; Roche, Ridgefield, CT). Cell debris was removed from homogenates by centrifugation at 6,000 × g for 10 min. Samples were stored at −80°C until tested.
Histological processing
At indicated time-points (4, 24, 48, and 72 h) post-infection, organs (lung, liver, and kidney) were excised and fixed in 4% paraformaldehyde for 24 h. Tissues were processed, embedded in paraffin, and 4 µm vertical sections were fixed to glass slides. Tissue sections were stained with Hematoxylin and Eosin (H&E), Gram or MPO to assess morphology, bacteria, or neutrophil infiltration, respectively. Microscopic examinations of tissues were performed by light microscopy with an Axiovert 40 CFL inverted microscope (Carl Zeiss, Thornwood, NY) and photographed with an AxioCam MrC digital camera using the Zen 2011 digital imaging software (Carl Zeiss).
Isolation of peripheral blood human neutrophils
Whole human blood was purchased from the Interstate Blood Bank, Inc. (Memphis, TN). Upon arrival, blood cells were pelleted, and erythrocytes removed by hypotonic lysis. Neutrophils were separated from the remaining cells by centrifugation over discontinuous Percoll gradients, consisting of 75% (vol/vol) Percoll in PBS, at 500 × g for 30 min at 4°C. Neutrophils were >95% viable as determined by Wright-Giemsa staining. Recovered neutrophils (~98% pure, as determined by fluorescence-activated cell sorting (FACS) using Ly-6G as a marker) were cultured (~30 min; 37°C, 5% CO₂) in RPMI 1640 (Cellgro, Manassas, VA) supplemented with 10 mM HEPES (pH 7.4) and 10% fetal calf serum (FCS) (Atlanta Biologicals, Lawrenceville, GA) prior to use.
Neutrophil counts in blood
Seven days after EtOH administration, neutrophil counts were performed by differential leukocyte count in all experimental animals using a Hema 3 Stat Pack kit (Fisher Scientific, Hanover Park, IL) and light microscopy.
Chemotaxis assays
Chemotaxis was measured using a transwell chamber with 6.5 mm diameter polycarbonate filters (3 µm pore size; Corning, Tewksbury, MA). Immediately after isolation, cells were incubated in RPMI 1640 supplemented with FCS in the absence or presence of EtOH (6.25 and 12.5 mM, corresponding to 3 and 6% of EtOH in human blood, respectively) for 2 h. Then, cells were transferred to RPMI 1640 supplemented with FCS, cultivated on filters and allowed to migrate toward the chemoattractant fMLF (formyl-methionyl-leucyl-phenylalanine, 10⁻⁶ M; Sigma) or medium alone at 37°C, 5% CO₂. After 1 h, the filters were removed, and the cells that migrated through the membrane were fixed, stained and counted using light microscopy.
Phagocytosis assay
Phagocytosis was determined by FACS analysis. Human neutrophils (10⁶ cells) were incubated in 6-well plates with feeding medium supplemented with EtOH (6.25 or 12.5 mM; Sigma) or PBS for 2 h at 37°C and 5% CO₂. FITC (Molecular Probes, Grand Island, NY)-labeled Ab cells were incubated with 25% human serum for 30 min to allow complement proteins to opsonize Ab. Bacterial cells were washed, and then 10⁷ bacterial cells were added to the 10⁶ neutrophils for 2 h. Extracellular bacteria were quenched with trypan blue to prevent interference with the assay. Samples were processed (10,000 events per condition) on a LSRII flow cytometer (Becton Dickinson Biosciences, San Diego, CA) and were analyzed using FlowJo software.
Ab killing assay
Since EtOH reduces Ab phagocytosis by neutrophils, leukocytes were first allowed to phagocytize Ab cells for 0.5 h to determine the initial uptake. Each well containing interacting cells was gently washed with feeding media and incubated with feeding media supplemented with amikacin (200 µg/mL; to kill extracellular bacteria) and either PBS or EtOH (6.25 and 12.5 mM) for 4 h. Viable bacteria were released from neutrophils following 0.5 and 4 h of host-cell interaction by forcibly subjecting the culture to a 27-gauge needle passage 5-7 times for efficient lysis [21]. Four microtiter wells per condition were used to ascertain CFU. For each well, serial dilutions were plated in triplicate onto TSA plates, which were incubated at 37°C for 24 h prior to CFU tallying.
Nitric oxide (NO) determinations
Nitric oxide (NO) produced in supernatants by untreated or EtOH-treated neutrophils was quantified after exposure to Ab using a Griess method kit (Promega, Fitchburg, WI).
Luminol chemiluminescence assay
Reactive oxygen species (ROS) signals were made chemiluminescent (CL) by luminol (1 mM). CL was monitored for 30 min using the automatic luminescence analyzer SpectraMax L (Molecular Devices, Sunnyvale, CA) at 37°C.
Statistical Analysis
Data were analyzed using Prism (GraphPad, La Jolla, CA). Differences in survival rates were analyzed by the log-rank (Mantel-Cox) test. Analysis of cytokine, MPO, chemotaxis, CFU, NO, and chemiluminescence data was done using analysis of variance (ANOVA) adjusted by use of the Bonferroni correction. Analysis of neutrophil count data was performed using Student's t-test. P values of <0.05 were considered significant.
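As an illustration of this pipeline, the sketch below shows how the same battery of tests could be run in Python with scipy and the third-party lifelines package. This is a hypothetical re-implementation, not the authors' Prism workflow, and all group values are invented.

```python
# Hypothetical re-implementation of the statistical workflow described above.
# All data are invented; only the choice of tests follows the text.
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind
from lifelines.statistics import logrank_test

groups = {  # e.g., lung MPO levels (arbitrary units) per condition
    "untreated": [1.1, 1.3, 0.9],
    "untr + Ab": [2.8, 3.1, 2.6],
    "EtOH + Ab": [4.0, 4.4, 3.9],
}

# One-way ANOVA across all groups.
F, p_anova = f_oneway(*groups.values())
print(f"ANOVA: F={F:.2f}, p={p_anova:.4f}")

# Pairwise t-tests with a Bonferroni adjustment (p times number of comparisons).
pairs = list(combinations(groups, 2))
for a, b in pairs:
    p = ttest_ind(groups[a], groups[b]).pvalue
    print(f"{a} vs {b}: Bonferroni-adjusted p={min(1.0, p * len(pairs)):.4f}")

# Log-rank (Mantel-Cox) test for survival: days to death, 1 = death observed.
res = logrank_test([2, 2, 3, 3], [9, 9, 10, 12],
                   event_observed_A=[1, 1, 1, 1], event_observed_B=[1, 1, 1, 0])
print(f"log-rank p={res.p_value:.4f}")
```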
EtOH-treated mice display reduced survival in Ab infection
Administration of EtOH accelerated the death of Ab-infected mice relative to control mice (P < 0.01; Fig. 1). On day 3 post-infection, 100% of EtOH-treated mice had died in comparison to 20% of the untreated mice. On average, EtOH-treated mice died of Ab-mediated pneumonia 2 days post-infection compared to 9 days for untreated mice (Fig. 1). None of the animals in the EtOH-uninfected group died (Fig. 1). Furthermore, similar to humans who abuse EtOH, treated animals displayed marked stereotypic behaviors such as loss of motor coordination and distress. These behaviors were most apparent 5-10 min after EtOH injection and continued for several hours.
EtOH exacerbates Ab-mediated pneumonia
In sub-lethally infected mice, the pulmonary bacterial burden of EtOH-treated animals was significantly higher than that of untreated mice (P < 0.05; Fig. 2). In summary, these studies demonstrated that EtOH administration enhances disease progression, as shown by the greater pulmonary Ab burden in the CFU data compared to the untreated group. We measured the pro-inflammatory cytokine response (TNF-α, IFN-γ, IL-1β, and IL-6) in the lungs of untreated or EtOH-treated mice, uninfected or infected with Ab, at 4, 24, 48, and 72 h post-infection (Fig. 2C). The pulmonary tissue of mice treated with EtOH and infected with Ab contained significantly higher quantities of IL-1β (P < 0.05) and reduced levels of TNF-α (P < 0.05) in contrast to the other group conditions. The untreated-Ab-infected group displayed significantly lower levels of TNF-α (P < 0.05) than untreated controls 24 h post-infection. EtOH-treated animals showed the highest levels of TNF-α (P < 0.05) at 24-72 h. Similarly, EtOH-treated animals exhibited the highest levels of IFN-γ (P < 0.05) at 48 and 72 h, whereas this cytokine was only significantly elevated in EtOH-treated-Ab-infected mice 72 h post-infection. Lastly, the untreated-Ab-infected group evinced the highest levels of IL-6 production (24-48 h; P < 0.05), although the EtOH-treated group showed a similar increase in this cytokine at 72 h.
EtOH accelerates Ab dissemination from the lungs to other organs
We investigated whether EtOH enhances Ab dissemination from the lungs to other organs. Ab significantly disseminated from the lungs of EtOH-treated mice to the liver in 24 h (P < 0.05; Fig. 3A) and to the kidneys in 72 h (P < 0.05; Fig. 4A) after intranasal infection. Histological analysis revealed hepatocellular atrophy (Fig. 3B) and considerable presence of bacteria in the livers of animals treated with EtOH (Fig. 3B; insets; upper panel). We evaluated the levels of TNF-α, IFN-γ, IL-1β, and IL-6 in the hepatic tissue of untreated or EtOH-treated mice, uninfected or infected with Ab, at 24, 48, and 72 h post-infection (Fig. 3C). The liver of animals treated with EtOH and infected with Ab showed significantly lower levels of TNF-α (P < 0.05), IFN-γ (P < 0.05), and IL-1β (P < 0.05) than those of the other experimental conditions. Untreated-Ab-infected and EtOH-treated animals showed a significant increase in TNF-α (P < 0.05), IFN-γ (P < 0.05), and IL-1β (P < 0.05) levels compared to untreated controls. Also, these three cytokines were highly elevated 24 h after Ab infection in the untreated-Ab-infected and EtOH-treated groups but gradually decreased at 48 and 72 h. Untreated-Ab-infected mice showed the highest levels of IL-6 production (P < 0.05). EtOH-treated and Ab-infected animals demonstrated lower IL-6 levels than EtOH-treated animals (P < 0.05) 24 h post-infection.
Kidneys excised from EtOH-treated animals presented nephromegaly with an abnormally thickened glomerular basement membrane and cell proliferation (Fig. 4B). Additionally, a high bacterial burden was observed in EtOH-treated mice, mostly accumulated in the glomeruli of the kidneys (Fig. 4B; insets; upper panel). Untreated mice displayed uniform bacterial cell distribution throughout the cortex and medulla of this excretory organ (Fig. 4B; inset; upper panel). We examined the levels of TNF-α, IFN-γ, IL-1β, and IL-6 in the kidneys of untreated or EtOH-treated mice, uninfected or infected with Ab, at 72 h post-infection (Fig. 4C). The renal tissue of mice treated with EtOH and infected with Ab contained significantly reduced levels of TNF-α (P < 0.05), IFN-γ (P < 0.05), and IL-1β (P < 0.05) compared to the other experimental conditions. Untreated-Ab-infected and EtOH-treated animals exhibited a significant increase in TNF-α (P < 0.05) and IL-1β (P < 0.05) levels compared to untreated controls. In addition, significant increases in IL-6 production (P < 0.05) were observed in untreated-Ab-infected mice compared to the other conditions.
EtOH enhances neutrophil infiltration
We examined whether EtOH administration affected the number of circulating neutrophils in the blood of C57BL/6 mice using differential leukocyte staining. Cell count analysis showed that EtOH-treated animals had no difference in blood circulating phagocytes when compared to controls (Fig. 5A).
MPO is highly produced by neutrophils; therefore, detection of this enzyme is commonly used as a surrogate to quantify neutrophil recruitment to different tissues. MPO yields hypochlorous acid from hydrogen peroxide and chloride anion during the neutrophil's respiratory burst. Therefore, we assessed the effect of EtOH on neutrophil recruitment in the lungs of C57BL/6 mice. Lungs of mice treated with EtOH (P < 0.001) or untreated (P < 0.001) showed significantly higher levels of MPO after Ab infection than those of uninfected controls (Fig. 5B). Similarly, we identified neutrophil infiltration using immunohistochemistry (IHC) by measuring the expression of MPO in pulmonary tissue. Tissue sections from EtOH-treated murine lungs infected with Ab exhibited early (4 h) and massive neutrophil infiltrations when compared to lungs excised from untreated-infected animals (Fig. 5C). Additionally, uninfected lungs demonstrated minimal neutrophil infiltration (data not shown). To confirm the IHC findings, EtOH was tested for its ability to promote human neutrophil chemotaxis in vitro. EtOH exposure (6.25 mM; P < 0.001 and 12.5 mM; P < 0.001) significantly stimulated higher leukocyte migration than untreated or fMLF-treated controls (Fig. 5D).
EtOH reduces neutrophil phagocytosis and killing of Ab
We analyzed the effects of physiological EtOH on Ab phagocytosis and killing by neutrophils using FACS analysis. EtOH reduced phagocytosis of Ab by neutrophils compared with the control (Fig. 6A). Our results showed 17.3 and 73.5% phagocytosis inhibition in cells treated with 6.25 and 12.5 mM EtOH, respectively, when compared to control cells. We examined whether EtOH interferes with neutrophil-mediated killing of Ab cells. EtOH significantly reduced bacterial killing by human neutrophils (P < 0.05) (Fig. 6B). Consequently, we investigated the impact of EtOH on extracellular NO production by these leukocytes after co-incubation with Ab. Our results indicate that NO levels were significantly reduced in the supernatants of EtOH-treated cells (6.25 mM; P < 0.05 and 12.5 mM; P < 0.001) when compared to controls (Fig. 6C). Finally, neutrophils mostly kill bacteria via NADPH oxidase-derived ROS. Hence, we evaluated the impact of EtOH on neutrophils' oxidative burst by measuring luminol chemiluminescence intensity. Increasing concentrations of EtOH significantly decreased ROS production compared to untreated-Ab neutrophils (P < 0.05; 15 to 60 min). Minimal production of ROS was observed in untreated and unstimulated control cells.
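The percent-inhibition figures above follow directly from the fraction of FITC-positive neutrophils in each condition. The sketch below reproduces the arithmetic; the gating percentages used as inputs are hypothetical, since the paper reports only the resulting inhibition values.

```python
# Percent inhibition of phagocytosis relative to the untreated control.
# The FITC-positive gating fractions below are invented for illustration;
# only the resulting inhibition values (17.3% and 73.5%) appear in the paper.

def percent_inhibition(treated_positive: float, control_positive: float) -> float:
    return (1 - treated_positive / control_positive) * 100

control = 49.0                       # % FITC-positive, untreated (hypothetical)
etoh_6_25, etoh_12_5 = 40.5, 13.0    # % positive after EtOH (hypothetical)

print(f"6.25 mM EtOH: {percent_inhibition(etoh_6_25, control):.1f}% inhibition")  # ~17.3%
print(f"12.5 mM EtOH: {percent_inhibition(etoh_12_5, control):.1f}% inhibition")  # ~73.5%
```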
Discussion
EtOH abuse has been previously shown to predispose the host to CAP, particularly to multi-drug resistant Ab, resulting in significant illness and mortality [2]. In this study, we demonstrated that EtOH administration had a profound effect on survival in mice i.n. challenged with Ab. The increased mortality in EtOH-treated mice was attributable to the inability of immune cells to clear infection. We observed a significant increase in bacterial burden in the lungs of EtOH-treated animals, compared to controls. After infection and in contrast to untreated animals, EtOH-treated mice displayed high inflammation and numbers of inflammatory cells within the alveoli, suggesting that increased Ab burden and reduced animal survival were not attributable to a diminished recruitment of immune cells to the lungs, but to decreased cellular microbicidal capacity. In this regard, high levels of IL-1β present in the pulmonary tissue of EtOH-treated mice suggest that this cytokine compensates for the late production of TNF-α and IFN-γ. In contrast, untreated-Ab-infected mice displayed early high levels of TNF-α followed by a time-dependent reduction of this cytokine and elevated levels of IL-1β and IL-6, which may explain the reduced bacterial burden in these animals. Another contributing factor to the inability of treated mice to reduce bacterial numbers is that EtOH changes pulmonary surfactant production, which may result in lessened antibacterial activity [22].
MPO IHC demonstrated that EtOH-treated animals experienced earlier and higher recruitment of neutrophils to the lungs post-infection than the untreated-Ab-infected group. To confirm the IHC results, quantitative analysis exhibited a considerable, time-dependent increase and sustained MPO production in pulmonary tissue of EtOH-treated animals. Perhaps EtOH stimulates early and uncontrolled massive recruitment of neutrophils to the infection site, increasing the probability of tissue damage by activated leukocytes. This observation is also supported by the dysregulated levels of pro-inflammatory cytokines found in lung homogenates of EtOH-treated mice. For instance, neutrophil apoptosis might be impaired, extending the presence of these phagocytic cells in lung tissue, which might be detrimental for the host's tissue architecture [23,24]. Consistent with previous studies, our findings indicated that EtOH administration did not reduce neutrophil infiltration to the infection site [25], which might suggest that this substance of abuse impairs the ability of phagocytic cells to engulf and kill Ab within the lungs [26,27], resulting in early dissemination of infection to the bloodstream and other organs. For example, human neutrophils treated with physiological levels of EtOH showed increased chemotaxis in vitro and decreased Ab phagocytosis and killing. Similarly, we have recently shown that EtOH-mediated phagocytosis dysfunction may be associated with reduced expression of GTPase-RhoA, a key regulator of the actin polymerization signaling cascade, using a murine J774.16 macrophage-like cell line [18].
After phagocytosis, bacteria are rapidly exposed to the microbicidal armamentarium conferred by neutrophils, which consists of toxic reactive species, such as NO, and lysosomal hydrolases. Our data show that NO generation is significantly decreased in neutrophils that are exposed to EtOH. This phenomenon might be explained by the reduction in TNF-α levels. Impaired production of NO and ROS might create an ideal environment for microbial survival, facilitating intracellular replication and dysregulation of the phagolysosomal milieu. For instance, EtOH reduced NO production by alveolar macrophages after challenge with Mycobacterium tuberculosis [28]. Recent work in our laboratory demonstrated that inducible NO synthase expression is reduced after macrophage-like cells were exposed to EtOH [18].

[Figure 4 caption, continued: (C) Cytokine levels (TNF-α, IFN-γ, IL-1β, and IL-6; pg/mL) in the kidneys of C57BL/6 mice; bars denote means and standard deviations; significance (P < 0.05) by ANOVA with Bonferroni correction; experiments were performed twice with similar results. doi:10.1371/journal.pone.0095707.g004]
Liver damage is common in EtOH users, often leading to hepatitis, cirrhosis, and fatty liver. The accumulation of EtOH in the liver may be responsible for the hepatocellular atrophy we observed in EtOH-treated animals. One of the potential mechanisms by which EtOH can cause exacerbation of Ab infection is, in part, linked to the deleterious interaction of EtOH and bacterial lipopolysaccharide (LPS) on the immune response [29]. Thus, EtOH induced immunosuppression may be promoting this gram negative organism's dissemination and replication within hepatic tissue. Surprisingly, we observed reduced levels of TNF-a, which have been previously implicated with cell death and liver damage in alcoholics when overproduced by Kupffer cells [30,31,32]. Our findings provide fundamental insights into how EtOH may play an important role in Ab infection-related morbidity and mortality in a vertebrate model of infection.
Chronic alcohol consumption can cause kidney dysfunction, mainly in conjunction with established liver disease. Excised kidneys from EtOH-treated mice exhibited a higher Ab burden and more swelling than controls at 72 h post-infection. Bacterial distribution in the renal tissue was also different, with Ab accumulation in the glomeruli of EtOH-treated mice as opposed to the uniformly dispersed bacteria throughout the organ observed in the control animals. This result suggests that EtOH predisposes alcoholics to urinary tract infections and consequent kidney failure, which may result in death [33,34]. This clinical scenario is complicated by the multidrug resistance capacity of Ab and reduced pro-inflammatory cytokine production, increasing the host's susceptibility. Cheung et al. have previously shown that the differential count for neutrophils in young and mature rats treated with alcohol is lower than in untreated animals [35]. In humans, various abnormalities in circulating neutrophils have also been described with alcohol consumption, ranging from an increase in the number of these cells in the peripheral blood to neutropenia in those with the most severe form of infection or severe underlying hepatic disease [36]. We found that EtOH-treated animals had no differences in blood circulating neutrophils when compared to untreated controls, further supporting the notion that Ab infection could not be controlled in EtOH-treated animals due to a reduction of the antimicrobial effector functions of these leukocytes.
Previous studies have shown that EtOH exposure may increase Ab virulence. For instance, EtOH promotes secretion of the outer membrane protein OmpA, which is important in Ab biofilm formation and induces epithelial cell apoptosis [37,38,39]. Similarly, co-culture of Ab with the baker's yeast Saccharomyces cerevisiae promoted bacterial growth, primarily due to fungal-mediated EtOH production [40]. Perhaps EtOH provides Ab with the capacity to tolerate salt stress, making Ab an extremely successful opportunistic pathogen for persons abusing alcohol who copiously sweat due to increased body heat. Ab genomic and proteomic analyses revealed that EtOH regulates genes responsible for stress responses, drug resistance, iron transport, and biofilm formation [41,42]. Furthermore, EtOH enhances Ab pathogenesis in invertebrate models of infection [42,43,44].
The majority of CAP-Ab infections occur in individuals with underlying comorbidities who reside in tropical and subtropical climates [45]. For instance, Australian aborigines in the Northern Territory are overrepresented relative to the general population in rates of CAP caused by Ab [46]. This disparity has been attributed to the interaction of both climate and a high prevalence of comorbidities in the indigenous Australian population, including alcoholism, diabetes mellitus, chronic obstructive pulmonary disease and cigarette smoking [46,47]. Although it is not clear whether immune cell dysfunction directly contributes to CAP Ab infection in humans, our study revealed that EtOH administration is positively associated with the progression of this infection in an animal model. Moreover, alcohol consumption has been previously correlated with impaired immune responses, including alveolar macrophage dysfunction in phagocytosis, killing of bacteria, and cytokine secretion [46].

[Figure 6 caption: EtOH reduces human neutrophil phagocytosis, nitric oxide production, and killing of Ab. Neutrophils were untreated or exposed to EtOH for 2 h, then incubated with Ab. (A) Phagocytosis of FITC-labeled Ab determined by FACS; representative histograms shown. (B) Killing of Ab by neutrophils determined by CFU analysis. (C) NO production quantified by the Griess method. (D) Oxidative burst quantified for 60 min by luminol chemiluminescence. Bars and symbols denote means and standard deviations; significance (P < 0.05) by ANOVA with Bonferroni correction; experiments were performed twice with similar results. doi:10.1371/journal.pone.0095707.g006]

It is important to mention that CAP-Ab is almost never detected in countries outside the tropics despite the fact that alcohol abuse is common in many countries around the world. It is possible that tropical regions are favorable and optimal for the growth of Ab in the environment. For instance, Ab is a very resilient microbe, resistant to high temperature (≥47°C) and desiccation [48]. Nevertheless, correlating climate and infection is out of the scope of this study and a very interesting question to pursue in future studies.
In conclusion, this is the first report that experimentally demonstrates that EtOH intensifies Ab-associated pneumonia and disease progression in vivo by deregulating neutrophil antimicrobial functions. The synthesis of these findings, including recent advances in Ab virulence studies, should raise awareness on the negative impact of EtOH abuse in alcoholics, specifically in prevalent regions. The ability of Ab to cause disease in alcoholics underscores how the study of its virulence mechanisms and host interactions is necessary. | 2016-05-12T22:15:10.714Z | 2014-04-21T00:00:00.000 | {
"year": 2014,
"sha1": "09109b290552f348e1ce473790381a5cb21cc946",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0095707&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "09109b290552f348e1ce473790381a5cb21cc946",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
212423607 | pes2o/s2orc | v3-fos-license | EU’S MULTIANNUAL FINANCIAL FRAMEWORK POST-2020: BREXIT IMPLICATIONS, WITH A FOCUS ON POLAND
The aim of the paper is to critically analyse the main elements proposed in the EU’s Multiannual Financial Framework (MFF) for 2021–2027 presented by the European Commission in May 2018 and the ways to solve the problem of the Brexit gap. The assessment of the effects of budgetary changes is focused on Poland. In order to achieve the research goals, we conduct a critical analysis of EU documents and a review of the literature. Britain’s exit from the EU may speed up the reform of EU budget revenue. The Brexit gap is so large that EU Member States, despite a general dislike of taxes at the EU level, may accept some of the EU proposals in order to bridge that gap. An increase in GNI-based contributions to the EU budget is also a very possible scenario. On the expenditure side of the budget, the new MFF provides for cuts in spending on agricultural and cohesion policies. As a very large beneficiary of such support at present, Poland will lose relatively the most. The compromise on funding the Brexit gap will significantly affect the EU’s ability to finance its priority expenditure after 2021 and thus the possibility to cope with present and future integration challenges.
Introduction
The EU's Multiannual Financial Framework (MFF) constitutes a key document specifying the maximum amounts of revenue and expenditure from the EU budget in a period of several years. Therefore, it determines the scale of measures funded at the EU level. The new MFF, to be effective from 2021, must take account of the possibility of the United Kingdom's exit from the EU (known as Brexit), which was postponed from 29 March 2019 until 31 October 2019 and then again until 31 January 2020 1 . Brexit will bring about a reduction in both the UK's contributions to the EU budget and transfers to the UK economy. Since the country in question is the second biggest net payer to the EU budget, the decrease in EU budget revenue will be much greater than the decline in spending.
On 2 May 2018, the European Commission presented a package of documents containing a draft MFF and accompanying legislative proposals (http://ec.europa.eu/budget/mff/index2021-2027_en.cfm). It was preceded by a series of analytical documents presented in 2017 regarding various aspects of the EU budget and the future of the EU (https://ec.europa.eu/commission/publications/reflection-paper-future-eu-finances_en).
This article aims to critically analyse the main elements proposed in the MFF and to determine the European Commission's approach to adjusting the EU funding system for 2021-2027 to the United Kingdom's exit from the EU. The emphasis is on the scale of reductions in EU budget revenue after 2021 and on the ways of financing it, as that part of the budget will be the most affected by Brexit. We also indicate selected proposals for reducing EU budget spending. The Commission's rationale is (partly) that savings need to be made because of Brexit. Furthermore, these proposals concern two areas which are currently the most significant sources of EU transfers to Poland, i.e. cohesion policy and the common agricultural policy (CAP), and their consequences will be very important for the Polish economy.

1 There is no guarantee, however, that Brexit will actually happen. The withdrawal agreement of the UK from the EU was endorsed by EU leaders on 25 November 2018. The UK was due to leave on 29 March 2019, two years after it started the exit process by invoking Article 50 of the Lisbon Treaty. The agreement was not, however, accepted by the British Parliament, whose approval is necessary for ratification. In fact, it was rejected several times by the House of Commons. On 11 April 2019, the European Council agreed - at the request of the British Prime Minister Theresa May - to an extension of the UK's exit from the EU until 31 October. However, at the end of 2019 there are still huge controversies over Brexit among British politicians and in British society.
In order to achieve the research goals, we conduct a critical analysis of EU documents and a review of the literature.
The starting point for the analysis is an assessment of the importance of the multiannual budget to the implementation of EU priorities. This is followed by an estimation of the United Kingdom's current position in that budget and of the scale of funds necessary to finance the gap stemming from the UK's exit from the EU. In that context, the paper presents the Commission's proposals concerning the new MFF for 2021-2027. The findings refer to the implications of the discussed changes, mostly from the point of view of Poland.
Importance of the Multiannual Financial Framework to the Process of European Integration
Heated discussions between the EU institutions in connection with the adoption of annual budgets tend to attract significant interest from the public. But the fundamental role in EU actions is played by the Multiannual Financial Framework (MFF). In accordance with the Treaty on the Functioning of the European Union (TFEU), the MFF determines the size of the EU's annual budgets (Articles 310 to 320 of the TFEU). The current own resources ceiling in the budget (the appropriations with the reserve, referred to as the margin) is 1.23% of the EU-28 GNI (Council Regulation No 1311/2013). However, the ceilings on commitment appropriations and on payment appropriations as may be spent by the EU in the period covered by the MFF are lower. As regards the ceiling on commitment appropriations (i.e. funds for the implementation of EU policies, usually in a period longer than one year, after meeting certain conditions), in 2014-2020 it is an average of 1% of the EU-28 GNI, whereas the limit on payment appropriations (to be spent in a given year) is even lower - a mere 0.95% of the EU-28 GNI. The EU budget submitted for adoption must be in balance, i.e. show no deficit.
The Multiannual Financial Framework translates the EU policy priorities into budgetary amounts. Simultaneously, it is an instrument for maintaining budgetary discipline, since expenditures in annual budgets must be consistent with the MFF ceilings. Thanks to covering a period of several years (since the early 1990s - 7 years, whereas the TFEU provides for MFFs adopted for a minimum of 5 years), the MFF also ensures stability in the financing of EU actions: beneficiaries are able to project the level of such spending in subsequent years.
The new MFF should enter into force at the beginning of 2021 as the current MFF expires at the end of 2020. Reaching a new financial compromise will be much more difficult than in the case of the current MFF for 2014-2020, e.g. due to the large revenue gap stemming from the anticipated exit of the United Kingdom from the EU, the new challenges faced by the EU, such as an enormous inflow of immigrants and refugees, the digital revolution, globalisation, demographic changes, socio-economic inequalities, climate change, etc. (European Commission 2017, p. 8). The talks were also slowed down by the European Parliament elections (in May 2019) and the resulting change in the composition of the European Commission, which started its work with a one-month delay, i.e. on 1 November 2019.
The legal basis for an MFF is a regulation adopted by the Council unanimously after obtaining the consent of the European Parliament (given by a simple majority of its component members). Negotiations on the whole package of financial provisions involve - according to the TFEU - three institutions: the Council, the European Parliament and the European Commission. In practice, however, the key elements of the MFF are first established by the European Council.
Threats to the MFF Post-2020 Resulting from Brexit
The UK exit from the EU will result in a significant decline in EU budget revenue after 2020. In the agreement with the EU-27 of November 2018 (Agreement 2019, Art. 135) the United Kingdom agreed to continue to honour its financial obligations under the MFF for 2014-2020, even though it would probably earlier cease to be a Member State of the EU. In the case of a "no deal" exit from the EU (without an agreement), which cannot be excluded because of huge Brexit turmoil, the UK may decide to stop contributing to the EU budgets under the present MFF (i.e. in 2020 if Brexit is effective as of this year).
Due to the fact that the United Kingdom is now a major (the second largest) net payer, after its exit the decrease in EU budget revenue (in respect of the UK's payments) will be much greater than the decline in spending (transfers to the United Kingdom). Therefore, there is a risk that funds for EU-27 actions will be reduced from 2021 onwards.
In the literature there are varying estimations of the "Brexit gap" beyond 2020. The differences in these findings are primarily due to the adoption, as the basis for estimation, of different concepts of the EU annual budget, different years for estimation, and different calculation methodologies (specifically, the inclusion or exclusion of the UK rebate). For example, J. Haas and E. Rubio (2017, p. 1) estimate the yearly net gap amount (net of UK contributions to the EU budget) at EUR 10 billion; E. Kawecka-Wyrzykowska (2018, p. 5) - at EUR 16.5 billion (as an average calculated on the basis of data for 2014-2015); and I. Begg (2017, p. 2) - at EUR 17 billion (an annual average for the period 2013-2015), i.e. ca. 12% of EU budget revenue. Each of the above-mentioned approaches shows a significant amount of funds missing from the budget after the United Kingdom's exit 2 . An obvious consequence of such a situation would be reducing appropriations for the financing of EU-27 actions in comparison with current spending. Therefore, an important question is whether the EU Member States will be able to agree on bridging the gap arising after 2020 or whether the EU budget will be reduced.
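A simplified way to see why these estimates diverge is to compute the net gap under the two accounting choices mentioned above (rebate included or excluded). The sketch below does this with purely illustrative round figures, not data from any of the cited studies:

```python
# Illustrative arithmetic behind the "Brexit gap" estimates. All input
# figures are hypothetical round numbers (EUR billion per year), chosen
# only to show why treating the UK rebate differently changes the result.

uk_gross_contribution = 19.0  # UK payments before the rebate (hypothetical)
uk_rebate = 6.0               # correction currently financed by other MS (hypothetical)
eu_spending_in_uk = 7.0       # EU budget transfers to the UK (hypothetical)

# Narrow view: the UK's net contribution after deducting its rebate.
gap_excluding_rebate = (uk_gross_contribution - uk_rebate) - eu_spending_in_uk

# Broad view: treats the rebate correction as part of the lost revenue base.
gap_including_rebate = uk_gross_contribution - eu_spending_in_uk

print(f"Gap, rebate excluded: EUR {gap_excluding_rebate:.1f} bn/year")  # 6.0
print(f"Gap, rebate included: EUR {gap_including_rebate:.1f} bn/year")  # 12.0
```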
Possible Financing of the Brexit Gap after 2021
The proposal for a Council Decision on the system of own resources of the EU of 2 May 2018 provides for 1.29% of the EU-27 GNI ceiling (in terms of payments; European Commission 2018d, p. 2). This is an increase compared with the present financial period and reflects the higher payment needs of the EU integration process on the one hand, and the proposal to finance the "Brexit gap" on the other. Without raising the ceiling on own resources (set as a percentage of the EU GNI), the absolute size of the EU-27 budget would fall after the withdrawal from the EU of the United Kingdom, a very significant Member State in terms of income (around 15% of the EU GNI). In other words, leaving the own resources ceiling at 1.23% of GNI determined for the EU-28 for 2014-2020, after a decrease in the number of EU Member States and, therefore, a considerable fall in GNI, would result in a decline in the absolute value of the budget.
However, in 2021-2027, as at present, the commitment and payment appropriations will be lower than the ceiling on own resources. Those will be, respectively, 1.11% and 1.08% of GNI (current prices; European Commission 2018b, p. 25). According to the Commission, the above levels are comparable to the size of the current Financial Framework in real terms.
Obviously, every growth in the EU budget, or even only maintaining its level from the current period, after the revenue reduction in respect of UK payments, must be reflected in an increase in revenue. As already mentioned, the EU budget must be in balance.
Aware of the reluctance of various Member States to accept any new burdens in the form of additional contributions to the EU budget, the European Commission proposed significant modifications in the financing of the budget (European Commission 2018b, p. 27). The main new elements, presented by the European Commission on 2 May 2018, provide for the introduction of a basket of the following three new own resources: a) 20% of the Emissions Trading System (ETS) revenues: the ETS (set up in 2005) is a key tool of EU climate policy, conducted for years in order to reduce greenhouse gas emissions. Within the framework of this policy, a number of "allowances" are auctioned by Member States and purchased by companies to cover their greenhouse gas emissions. The system is already significantly harmonised at the EU level.
b) A 3% EU call rate to be applied to the new Common Consolidated Corporate Tax Base (CCCTB) to calculate companies' taxable profits in the EU, including the digital sector. The call rate would be phased in once the tax and necessary legislation has been adopted. This solution would link the financing of the EU budget directly to the benefits enjoyed by companies operating in the Single Market. Each Member State would be free to tax its share of the profits at its own national tax rate. c) A national contribution calculated on the amount of non-recycled plastic packaging waste (a call rate of EUR 0.80 per kilo). The assumption is that this will create an incentive for Member States to reduce packaging waste and stimulate Europe's transition towards a circular economy by implementing the European plastics strategy.
When assessing the above proposals, it must be stated that the Commission chose such sources of revenue as would make it possible to better connect the payments of specific entities with their benefits from the EU's single market. In some cases (proposals a) and c)), the new resources would not only generate receipts for the budget but also foster the achievement of EU climate and environmental policy objectives, which are increasingly important. However, the effects on Member States would vary widely. For example, Poland's ETS-based payment would be relatively high (and likely to significantly increase the country's total contribution) owing to the Polish economy's considerable dependence on CO₂ emissions and the high cost of purchasing additional greenhouse gas emission allowances by undertakings emitting CO₂.
Altogether, the three new own resources could contribute EUR 22 billion per year, which corresponds to 12% of EU budget revenue.
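To make the mechanics of the proposed basket concrete, the following sketch applies the three call rates described above to one hypothetical Member State. Only the rates themselves (20% of ETS revenues, a 3% CCCTB call rate, and EUR 0.80 per kilo of non-recycled plastic packaging waste) come from the Commission's proposal; every input value is invented:

```python
# Sketch of the proposed basket of new own resources applied to one
# hypothetical Member State. Only the three call rates come from the
# Commission's May 2018 proposal; every input value below is invented.

ets_auction_revenue_eur = 2.0e9   # national ETS auction proceeds (hypothetical)
ccctb_base_eur = 60.0e9           # consolidated corporate tax base (hypothetical)
plastic_waste_kg = 5.0e8          # non-recycled plastic packaging waste (hypothetical)

contribution = (
    0.20 * ets_auction_revenue_eur   # 20% of ETS revenues
    + 0.03 * ccctb_base_eur          # 3% call rate on the CCCTB
    + 0.80 * plastic_waste_kg        # EUR 0.80 per kilo of plastic waste
)
print(f"New own-resources contribution: EUR {contribution / 1e9:.2f} bn")
# 0.4 + 1.8 + 0.4 = EUR 2.60 bn for this hypothetical country
```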
Moreover, simplification of the contributions based on the current Value Added Tax is envisaged - they will be based on standard rates only 3 .
According to the Commission's proposal, the widely criticised rebates will disappear. On the United Kingdom's exit from the EU, there will be no more reason for the existence of the UK rebate and related rebates (i.e. reductions in its financing for Austria, Germany, the Netherlands and Sweden). As regards rebates connected with call rates for the VAT-based own resource and the lump sum reductions for contributions based on GNI, these will automatically expire at the end of 2020. Let us note that such changes would bring about a significant increase in payments from the Member States currently benefiting from reductions 4 . Therefore, the Commission proposed the phasing out of the rebates over a period of 5 years.
According to the proposal for the MFF for 2021-2027, there will also be a reduction in the collection costs retained by Member States from traditional own resources (mainly from customs duties) from 20% to 10%.
The Commission also emphasised that a swift political agreement on a new EU budget would be essential to demonstrate "that, following the withdrawal of the United Kingdom in 2019, the Europe of 27 is unified, has a clear sense of purpose and direction, and is ready to deliver. And it would give the best possible chance for new programmes to hit the ground running on schedule on 1 January 2021, turning political objectives into quick results on the ground" (European Commission 2018c, p. 18). In addition, as stressed by the Commission, an early agreement is important not only from the political but also from the practical point of view, as the EU funding will directly affect many beneficiaries and all of them need legal and financial certainty. Any delay in the adoption of the MFF will have negative implications for the launch of the new programmes and consequently to the achievement of funding priorities (European Commission 2018c, p. 18).
The Commission's position is naturally justified and correct but it will not be easy to achieve the adopted goals, not to mention a swift agreement. In practice, the proposal for a basket of new own resources of the budget means accepting new taxes at the European level. At first glance it seems that it should be positively assessed by EU Member States as it offers bigger financing of the EU budget, without an increased burden on national budgets. The costs of additional funding would mostly be borne by enterprises (the CCCTB and ETS proposals) and consumers (the ETS and plastic packaging waste-based payments). However, many countries have "always" fought against any European tax, treating it as the strengthening of the powers of the Commission (as an institution over which the citizens have no control) and the weakening of national fiscal sovereignty and thus of political sovereignty as well. In previous years the Commission submitted various proposals for the introduction of a tax as a source of co-financing for the EU budget, but it was never successful in obtaining the Member States' consent. The difficulty in arriving at an agreement is that deciding on the system of own resources of the EU budget requires the Council to act unanimously and all the EU Member States to ratify such a decision (Article 311 of the TFEU). The chances are, however, that at least some of the Commission's tax proposals (or yet another tax) 5 will be accepted, since this time the situation is different - a revenue gap of more than ten billion euros caused by Brexit and new challenges requiring extra financing.
Another option to cover the Brexit gap is to increase GNI-based payments. That would be the simplest solution in technical terms. This payment is a somewhat automatic mechanism of national contributions (due to its residual character) 6 . Moreover, the method for calculating it is easy and transparent. The main problem is that the increase in GNI-based contributions would mean a very uneven financial burden on individual Member States. The countries to be hit hardest would be the present largest net contributors as they would become even bigger net payers to the EU budget. Such a solution would be politically unacceptable for those countries. A solution to mitigate this problem might be the introduction of new rebates (see: Kawecka-Wyrzykowska 2018, p. 6).
Failure to find appropriations for financing the gap would necessarily involve dramatic cuts in current budget items, including expenditure on cohesion and agriculture. Such reductions would have to be even sharper if the EU Member States intended to simultaneously increase spending on new priorities such as border protection and migration, youth mobility, environmental and climate protection, i.e. areas where the most significant growth in expenditure was proposed by the Commission. However, deep cuts in expenditure would give rise to strong objections by a number of countries which considerably benefit from the cohesion and agricultural policies.

5 The Commission itself presented the possibility of adding other sources of revenue in the form of seigniorage (revenue from the production of the euro that exceeded the cost of production of the euro) or revenues from the new European Travel Information and Authorization System (European Commission 2018a).

6 The residual character of the GNI-based resource means that it supplements revenue when the proceeds from traditional own resources and the VAT-based resource are not sufficient. National contributions of the GNI resource are calculated according to the share of Member States in the EU GNI.
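Footnote 6 describes a simple residual formula, which the sketch below spells out. This is a minimal illustration with invented figures; only the allocation rule itself (residual financing needs shared in proportion to national GNI) follows the text:

```python
# Minimal sketch of the residual GNI-based resource described in footnote 6.
# All monetary figures (EUR billion) are invented; only the allocation rule
# (residual needs shared in proportion to national GNI) follows the text.

total_expenditure_bn = 160.0  # annual EU budget needs (hypothetical)
traditional_own_bn = 22.0     # customs duties etc. (hypothetical)
vat_resource_bn = 18.0        # VAT-based resource (hypothetical)

# GNI resource covers whatever the other resources do not.
residual_bn = total_expenditure_bn - traditional_own_bn - vat_resource_bn

gni_bn = {"Germany": 3500.0, "Poland": 500.0, "Other EU-27": 9000.0}  # hypothetical
eu_gni_bn = sum(gni_bn.values())

for state, gni in gni_bn.items():
    share = gni / eu_gni_bn
    print(f"{state}: EUR {residual_bn * share:.1f} bn ({share:.1%} of EU GNI)")
```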
Even before the submission of specific proposals by the European Commission in May 2018, the European Parliament took a position on the new MFF. This opinion is important as the Parliament must approve the MFF after its adoption by the Council, although it is not entitled to negotiate on the MFF or to modify the Council's arrangements. In its resolution of 14 March 2018, the EP stated as follows: "ahead of a decision on the post-2020 MFF, the 'Brexit gap' should be bridged while guaranteeing that EU resources are not reduced and that EU programmes are not affected negatively" (European Parliament 2018b, point 17). In practice, this means that the Parliament is not inclined to accept any deeper reductions in expenditure on the cohesion and agricultural policies.
The Commission's Proposals for Savings in the EU Budget after 2020
The financial package for 2021-2027 provides not only for new revenue resources (taxes) but also for savings. These apply to the two biggest types of expenditures from the EU budget: the common agricultural policy and cohesion policy.
In its Communication of February 2018, the Commission pointed to the positive role played by rural development programmes (European Commission 2018c, p. 12). With regard to direct payments, currently representing 70% of the CAP budget (with rural development and market intervention measures accounting for 25% and 5%, respectively), the Commission stated that "Discussions are ongoing as to how to make best use of direct payments. Today, 80% of direct payments go to 20% of farmers".
Characteristically (certainly not incidentally), the Commission pointed out in its previous document, from 2017, that "Apart from the rural development measures financed under the second pillar of the CAP, this is the only policy area managed together with the Member States without national co-financing" (European Commission 2017, p. 19). This may be read as a sign that national co-financing of direct payments is being considered for the new MFF. Such an option was explicitly mentioned by certain scholars and agricultural experts (e.g. Darvas & Wolff 2018, p. 3; Begg 2017, p. 6).
Speaking in support of national contributions to direct payments, regional policy chief Corina Cretu stated that "National co-financing could be considered an option for direct payments" and added that "farmers don't mind whether CAP money comes from Brussels or the national coffers" (https://www.independent.ie/business/farming/eu/cap-under-pressure-as-most-memberstates-reject-cofinancing-of-direct-payments-35942698.html). However, Agriculture Commissioner Phil Hogan said that the vast majority of Member States opposed the idea of co-financing pillar I of the CAP.
Therefore, the idea of introducing the co-financing of direct payments is not purely theoretical. Poland is the sixth-largest beneficiary of direct payments in 2014-2020 (Regulation (EU) No 1307/2013). Obviously, any decision to reduce the expenditure in question would worsen the income position of Polish farmers. At the same time, national co-financing of those payments would necessarily entail cuts in Polish budgetary spending on other important development objectives. However, we must emphasise that the Communication of 2 May 2018, i.e. the Commission's official proposal to be negotiated among the EU Member States, does not mention any national co-financing of direct payments.
According to the Commission's proposal, the reformed CAP will, with EUR 365 billion (European Commission 2018b, pp. 13, 29), account for 28.5% of the MFF commitments scheduled for 2021-2027. This means a reduction of around 5% for the CAP budget in current prices (equivalent to a reduction of around 12% in constant 2018 prices) (http://europa.eu/rapid/press-release_MEMO-18-3974_en.htm). Such cuts in CAP spending will substantially limit income support for farmers and funds aimed at improving the competitiveness of agricultural products.
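The gap between the two percentage figures is simply the effect of deflating nominal amounts. A minimal sketch (Python; the 2% annual deflator is an assumption chosen only to reproduce the order of magnitude, not the Commission's actual deflator):

```python
# Why a ~5% cut in current prices can be a ~12% cut in constant prices:
# deflate the nominal allocation by an assumed annual deflator.
deflator = 0.02          # hypothetical 2% per year
nominal_cut = 0.05

# Roughly: real_cut = 1 - (1 - nominal_cut) / (1 + deflator)**k for a
# representative k years of cumulative inflation relative to the base year.
for k in range(1, 8):
    real_cut = 1 - (1 - nominal_cut) / (1 + deflator) ** k
    print(f"{k} years of deflation: real cut ~ {real_cut:.1%}")
```

With a 2% deflator, roughly four years of cumulative deflation turn a 5% nominal cut into a cut of about 12% in constant prices, matching the reported figures.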
As regards Poland, the proposal provides for EUR 30.5 billion (8.5% of total spending on the common agricultural policy for the EU-27), of which nearly 70% will be for direct payments and 30% for rural development.
The Communication from the Commission assumes greater flexibility in the utilisation of appropriations at the disposal of Member States, as they will have the option to transfer up to 15% of their CAP allocations between direct payments and rural development and vice versa, to ensure that national priorities and measures can be funded (http://europa.eu/rapid/press-release_IP-18-3985_en.htm). The Commission also proposed - undoubtedly under the influence of criticism from Member States, particularly those that joined the EU after 2004 - to reduce the differences in direct payments per hectare 7.
The new CAP will require farmers to better address environmental and climate goals. A portion of the direct payments will be conditional on enhanced environmental and climate requirements. Moreover, at least 30% of the rural development budget of each Member State will have to be dedicated to environmental and climate measures.
According to the Commission, the EU budget plays a crucial role in contributing to sustainable growth and social cohesion. In recent years, however, some regions have actually diverged, even in relatively richer countries 8. To better address the new situation, the Commission decided to extend the eligibility criteria for support to include new factors: the labour market situation, education and demographics (15% of the allocation of all funds); climate protection, covering greenhouse gas emissions (1%); and migration factors, meaning net migration of non-EU citizens (3%). The traditional gross domestic product (GDP) per capita level (GNI for the Cohesion Fund) will be responsible for 81% of the allocation of cohesion policy funds. Moreover, the national co-financing rates will be increased, which - in the Commission's opinion - will better reflect today's economic realities.
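As a rough illustration of how such a multi-criteria allocation key could operate, the sketch below (Python) combines the four factors into a single allocation share; only the 81/15/1/3 weights come from the text, while the country data and the scoring of each criterion are invented for illustration:

```python
# Illustrative multi-criteria allocation key for cohesion funds.
# Weights follow the proportions cited in the text; the per-criterion
# "need scores" and country values are entirely hypothetical.

WEIGHTS = {"gdp_pc": 0.81, "social": 0.15, "climate": 0.01, "migration": 0.03}

# Hypothetical normalized need scores per criterion (higher = more need).
countries = {
    "A": {"gdp_pc": 0.7, "social": 0.5, "climate": 0.4, "migration": 0.2},
    "B": {"gdp_pc": 0.2, "social": 0.3, "climate": 0.5, "migration": 0.6},
    "C": {"gdp_pc": 0.1, "social": 0.2, "climate": 0.1, "migration": 0.2},
}

def allocation_shares(countries, weights):
    raw = {c: sum(weights[k] * v[k] for k in weights)
           for c, v in countries.items()}
    total = sum(raw.values())
    return {c: score / total for c, score in raw.items()}

envelope = 373.0  # bn EUR, total cohesion appropriations cited in the text
for country, share in allocation_shares(countries, WEIGHTS).items():
    print(f"{country}: share {share:.1%}, allocation {share * envelope:.1f} bn EUR")
```

The dominant 81% weight on GDP per capita means the new factors reshuffle allocations only at the margin, which matches the Commission's framing of them as refinements rather than a redesign.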
Out of EUR 373 billion (current prices, commitments) of cohesion policy appropriations in 2021-2027, Poland is supposed to receive EUR 72.7 billion, i.e. 19.5% of the sum total (http://europa.eu/rapid/press-release_IP-18-3885_en.htm). In contrast, in the 2014-2020 period, Poland has at its disposal EUR 77.6 billion (current prices) for reducing disparities in socio-economic development, i.e. 22% of the overall amount from the EU budget for that purpose (https://ec.europa.eu/regional_policy/en/information/publications/factsheets/2014/cohesion-policy-and-poland). Therefore, the sum proposed is lower, especially in real terms (taking account of inflation). Nevertheless, in absolute terms, Poland will remain the largest beneficiary of cohesion policy in the EU.
Brexit may have yet another adverse effect on cohesion policy: certain regions will lose support. As a result of the United Kingdom's exit from the EU, there will be a fall in GDP per capita, which will decrease the eligibility threshold for support for the least wealthy regions. N. J. Brehon (2017) estimates this decline at ca. 3.6%, i.e. around EUR 1,000. According to his calculations, this statistical effect will cost 12 EU regions their support entitlements. In that group, he also identified the Polish region of Wielkopolska (Brehon 2017, p. 18). Obviously, such regions are likely to get transitional solutions (the phasing-out of support), as was the case before when such situations occurred (e.g. as a result of previous EU enlargements). However, much will depend on the final decisions made, including on the scale of appropriations for that objective.

8 Opinions among economists on the effectiveness of cohesion policy differ, but a number of empirical studies confirm the positive effect of this policy on real convergence in the EU. Such convergence (in terms of GDP per capita in PPP, i.e. purchasing power parity) is visible at the country level, while divergence between regions has been increasing since the deep recession of 2008 (for a review of the academic literature on the effectiveness of cohesion policy see: Creel 2018).
As cohesion policy plays an increasingly important role in supporting economic reforms in the Member States, the Commission proposed to strengthen the link between the EU budget and the European Semester of economic policy coordination. Let us note that the European Semester is about the enhanced coordination of national economic policies. Therefore, one can expect that the EU Member States will not easily accept the new proposal for making funding under cohesion policy conditional on the implementation of the European Semester priorities imposed by the Commission. But the Commission promised to prepare a "dedicated investment-related guidance alongside the annual Country-Specific Recommendations, both ahead of the programming process and at mid-term to provide a clear roadmap for investment in reforms that hold the key to a prosperous future" (European Commission 2018b, p. 9). However, there is still a risk that the proposed "guidance" will reduce the flexibility of cohesion policy spending in individual Member States.
Under the heading "Cohesion and values", the Commission also proposed increasing the stability and efficiency of the Economic and Monetary Union (EMU) and certain funds to pursue those goals. The rationale is evident. As the Commission argues: "Under the Treaties, the euro is the currency of the EU, and economic convergence and stability are objectives of the Union as a whole. This is why the tools to strengthen the Economic and Monetary Union must not be separate but part and parcel of the overall financial architecture of the Union" (European Commission 2018b, p. 10). For reasons of space, we shall not discuss this issue further here. Let us merely point out that those tools, albeit justified, will not be fully available to Poland as some of them are targeted at euro-area members only.
Proposed Inclusion of the Conditionality Principle
The Commission's proposal for the new post-2020 financial rules also included a suggestion as regards conditionality. This concerns the possibility of linking the payment of budget appropriations to respect for the values referred to in Article 2 of the TEU, in particular with regard to the rule of law in Member States (European Commission 2018e). As indicated by the Commission, "under the current Multiannual Financial Framework, all Member States and beneficiaries are required to show that the regulatory framework for financial management is robust, that the relevant EU regulation is being implemented correctly and that the necessary administrative and institutional capacity exists to make EU funding a success". Simultaneously, the new MFF offers an opportunity to evaluate the implementation as well as "the moment to consider how the link between EU funding and the respect for the EU's fundamental values can be strengthened" (European Commission 2018c, p. 16). As a rule, such a mechanism could apply to all policies involving expenditure from the EU budget. The legal basis of the proposed Regulation is Article 322 of the Treaty on the Functioning of the EU, through which financial management rules are set 9.
Under the proposal, the Union could suspend, reduce or restrict access to EU funding in a manner proportionate to the nature, gravity and scope of the deficiencies. This regulation could be invoked when a generalised deficiency as regards the rule of law in a Member State threatens, for instance, the proper functioning of the national authorities implementing the Union budget, effective judicial review by independent courts, the prevention and sanctioning of fraud, corruption or other breaches of EU law relating to the budget, or the recovery of funds unduly paid; examples of such deficiencies include endangering the independence of the judiciary, failing to prevent, correct and sanction arbitrary or unlawful decisions by public authorities, and the lack of implementation of judgements 10. Thus, the coverage of the proposal is very broad. The proposed mechanism would not affect individual beneficiaries of EU funding under the budget, e.g. Erasmus students, researchers, etc. The argument is that they cannot be held responsible for breaches of law.
Findings
The decision on the next MFF will determine not only whether the Member States wish to at least maintain the real size of the budget at the present level (which will require increasing revenue after the withdrawal of the United Kingdom) but, primarily, their choice of a scenario for the EU's development in the near future. As aptly pointed out by J. Barcz, "in recent years, the internal differentiation of the Union has become a fact, a risk of fragmentation of the process of European integration, and a permanent characteristic of the process of European integration" (Barcz 2018, p. 31).
The analysis conducted above has demonstrated how much the future of an internally diverse EU now depends on reaching a compromise on increasing the budget for 2021-2027, at least by the amount of the Brexit gap. Without such a compromise, there will be insufficient funds to continue the current integration process, not to mention the new and ambitious priorities of the EU. A larger budget would mean readiness to jointly resolve existing and new problems and to enhance the benefits of integration. Limiting the budget to the size resulting from Brexit would mean having to reduce appropriations for currently implemented policies, especially the agricultural and cohesion policies, which represent important pillars of the process of European integration. The need to increase the budget is all the stronger given that there are new objectives vital to all the EU Member States whose effective implementation requires greater funds (e.g. counteracting climate change, the digitalisation revolution, the stabilisation of the economic and monetary union, and external border protection).
The analysis has revealed that the United Kingdom's exit may speed up the reform of EU budget revenue. The Brexit gap is so large that net payers will object to financing it in technically the simplest but politically the hardest way -i.e. through a GNI increase. Therefore, they are likely to agree on new, additional sources, although not necessarily to approve all three of the Commission's proposals. It is also conceivable that a new rebate will be introduced as a compromise in the adoption of new solutions.
In 2021-2027, expenditure on cohesion policy and agriculture will be reduced. Such cuts would probably be inevitable anyway, but Brexit has made it easier for the Commission to justify them with the need for budgetary "savings" in conditions of lower revenue after 2020.
As in the case of other countries, Poland will receive less money from the EU budget after 2021 compared with 2014-2020. Cuts in funds for Poland (as well as for other Member States) will also result from other proposals of the Commission, only briefly mentioned here or excluded due to lack of space. For instance, those include the option to apply the conditionality principle (the reduction or suspension of EU funding in the event of a violation of EU values) in practice. Invoking such a provision is likely in situations where the Commission raises objections to Poland's deficiencies as regards the rule of law. Certainly, such a decision would be unfavourable for the country. Other conditions for possible cuts in EU funds for beneficiaries include decreasing the EU co-financing rate for projects funded under cohesion policy, the lack of access to all appropriations proposed for enhancing the stability of the euro area (some items are only targeted at euro-area members), etc. In other words, the sums resulting from the formal division of appropriations among Member States do not adequately reflect the scale of funds expected within the MFF for 2021-2027. The actual amounts will depend on meeting a number of detailed conditions. To achieve the research objectives, the study applied the method of analysing EU documents and reviewing the relevant literature.
"year": 2019,
"sha1": "f13cd9a1d8e254021440480518b74dc276709966",
"oa_license": null,
"oa_url": "https://aoc.uek.krakow.pl/article/download/1829/1443",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e905fa14660362b8ee3ebe891a76ddc0c07a642e",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Monty Hall three door 'anomaly' revisited: a note on deferment in an extensive form game
The Monty Hall game is one of the most discussed decision problems, yet a convincing behavioral explanation of the systematic deviations from probability theory is still lacking. That most people do not change their initial choice, even when switching is beneficial under information updating, demands further explanation. This behavior can be explained not only by trust or by the host's incentive to prolong the game in an interesting way for the audience; the strategic setting itself can be modeled more sophisticatedly. When the contestant aims to increase the odds of winning while Monty's incentives are unknown, not switching doors can be considered the most secure strategy, and it avoids a sure loss when Monty's guiding aim is not to give away the prize. Understanding and modeling the Monty Hall game can be regarded as an ideal teaching example for fundamental statistical concepts.
Introduction
Since Friedman (1998), the Monty Hall decision problem 1 has been intensively discussed. While the experimental observations appear interesting, their behavioral explanation still remains disappointing. The investigated decision frame is modeled after a television show with three doors, only one of which conceals the winning prize, while the other two yield nothing. After you, as the contestant, have picked a door of your choice, the show master Monty opens one of the unchosen doors, which does not reveal the prize. The question then is: do you want to switch to the remaining door, or do you want to stick with your original choice? In other words, what are the winning probabilities for changing and not changing doors? In the standard construction, Monty always opens one of the unchosen doors (the one without the prize if the prize has not been chosen, and otherwise one of the two at random) and offers the contestant the option to change doors. Under these simplifying specifications the undisputed consensus is to always change doors, as the remaining door's probability of being the winning door is (at least) larger than 1/3, 2 while the probability for the initially chosen door has not changed. Why is it, then, that so many of us stay with our initial choice and do not want to change to the other door with the higher winning probability?
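Before turning to the strategic analysis, the textbook probabilities are easy to verify numerically. The following Monte Carlo sketch (Python) simulates the standard variant in which Monty always opens a non-winning, unchosen door; the simulation is an illustration added here, not part of the original analysis:

```python
import random

def play(switch: bool) -> bool:
    """One round of the standard Monty Hall game; returns True on a win."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither the pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

n = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(n))
    print(f"switch={switch}: win rate {wins / n:.3f}")  # ~1/3 vs ~2/3
```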
Is it then necessary to resort to things like reverse psychology, a possibility raised by Kevin Spacey in the role of MIT Professor Micky Rosa (in the movie "21" released 2008 by Columbia Pictures)? 3 Interestingly, not only is the first intuition to stick with the initially chosen door, but experimental investigations show that many participants remain reluctant to change and do not switch to the other unopened door. Playing the Monty game repeatedly, though, documents a robust learning effect toward increased switching, close to or slightly above 50% (i.e. Friedman 1998). Palacios-Huerta (2003) shows that incentives, ability, and social interaction can further strengthen learning effects in the repeated game. In a similar vein, Slembeck and Tyran (2004) conclude that communication and competition between participants support learning towards increased switching, especially over the first rounds. Repetitions seem to help, although they do not lead to optimal behavior. Granberg (1999a) shows in a cross-cultural comparison study that sticking with the initial choice in the Monty game is a rather universal phenomenon. Cognitive illusions (i.e. of control) or cognitive biases (i.e. status quo) have been proposed as possible explanatory concepts for such kinds of behavior (compare Granberg and Brown 1995; Granberg 2014). Can game theory provide alternative solutions besides explanatory concepts and post-hoc rationalizations?

1 The Monty Hall Show was a television broadcast where participants chose between different doors, with only one bearing the winning prize (i.e. a sports car) and the others nothing (i.e. goats). Frequently (but definitely not always) the host opened one of the unchosen doors, (always) showing that it did not contain the prize. Participants were then asked whether to change from the originally chosen door to the unchosen but closed door. The emotional difficulty of changing the door or not was a key feature of the show. Defining the optimal choice appeared to be an interesting puzzle (i.e. Nalebuff 1987). After Vos Savant (1990) proclaimed that in the so-called Monty Hall dilemma the probabilities are actually two to one in favor of the change iff Monty opened the other door, an academic discussion of the decision problem began (see for example Morgan et al. 1991; Gill 2011). This went so far as to develop models or simulations for people to better understand the probabilities, for example with decision tree illustrations or by increasing the number of opened doors (see for example Shaughnessy and Dick 1991; Page 1998; Franco-Watkins et al. 2003; Krauss and Wang 2003).

2 Various arguments were provided for the range between 2/3 and 1/2 winning probability for switching (for a broader overview of the decision problem and the corresponding literature see Rosenhouse 2009), and most contributions agree that it is profitable to switch from the initial choice to the other unopened door.

3 The following dialog is transcribed from a scene where the Monty Hall problem is taught in class. Prof. Rosa (Kevin Spacey): "Is it in your interest to switch your choice?" Ben (Jim Sturgess): "Ja." Prof. Rosa: "Wait! Remember the host knows where the car is. So how are you knowing he is not playing a trick on you? Trying to use reverse psychology to get you to pick a goat?"
Definitions and Solutions
The Monty game can be defined as a sequential two-player constant-sum game with asymmetric information and the following specific characteristics.
(i) Player 1 (i.e. you) chooses between three options, of which only one holds the winning prize, but you do not know which. Therefore, the probability of having chosen the winning option (W) is 1/3 and the probability of having chosen the losing option (L) is 2/3.
(ii) Player 2 (i.e. Monty) has the possibility to expose (e) or not expose (e′) one of the unchosen options which is not holding the prize.
(iii) Player 2 knows, before deciding between e and e′, whether W or L. The prize is never exposed and is revealed to player 1 only in the final stage of the game.
(iv) Iff e, player 1 decides between changing to the unexposed and unchosen option (c) or staying with the initial choice (c′).
(v) The incentive for player 1 is to win the prize and for player 2 not to give away the prize.
Furthermore, assume fully rational players completely abiding by these rules and always acting according to purpose, without error. Simplified Monty, as player 2, decides only between e and e′. Sophisticated Monty fully takes the information under (iii) into account and, as player 2, chooses separately for e_W and for e_L, or the respective odds. First pure and then mixed strategies are investigated. The utility structure is strongly simplified under (v). The easiest representation of individual utility is in monetary terms, here as winning or not winning the prize. Monetary rewards are not necessarily the only outcome taken into account. Social considerations or anticipated feelings can determine the resulting utility as well. Plausible utility extensions for player 2 and player 1 are investigated under Monty game expansions. These additional interdependent components are introduced by stepwise adding complexity.
Simplified Monty game
The simplest representation of the Monty game as a strategic game is in normal form. This defines the full strategy space for every player and all possible strategy combinations with the resulting payoff for each player. The representation of all possible strategy combinations takes the form of a static matrix, which can be a contingent representation of a sequential game. Without considering the information whether the winning or losing option (W or L) was chosen, the Monty game can be considered a simultaneous-move game, as shown in Table 1. The solution concept here is the Nash equilibrium, where in a given situation no player would be better off by switching to an alternative strategy. With two players and two strategies each, this simply means that a player could not increase his/her payoff by choosing the other strategy, given the current strategy of the other player. This must hold for both players.
Proposition 1
The only equilibria in pure strategies are those with player 2 not exposing: (e′, c) and (e′, c′).
As a sequential game in extensive form, the simplified Monty game reduces through backwards induction to one subgame perfect equilibrium at (e′, c) (see Fig. 1). Given that player 2 decides without knowing whether W or L, there is no mixed strategy equilibrium, as player 2 can only improve by increasing the proportion of e′: e′ weakly dominates e (if c then e′ is better, and if c′ then e′ is not worse). The maximum gain for player 1 is the increase of the winning probability from 1/3 to 2/3 by playing c given e. This gain is taken as given in the literature when e is assumed, although without further assumptions player 2 would prefer e′ (i.e. never open a door to expose that it is not the winning prize).
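A minimal sketch of this pure-strategy analysis (Python; the payoff entries are the winning probabilities implied by the text, with player 2 receiving the complement in the constant-sum game) enumerates the four profiles and confirms Proposition 1:

```python
# Pure-strategy analysis of the simplified Monty game.
# Payoffs are player 1's winning probabilities; player 2 gets 1 - p1
# (constant-sum). c only matters when a door is exposed (e).

P1 = {("e", "c"): 2/3, ("e", "c'"): 1/3, ("e'", "c"): 1/3, ("e'", "c'"): 1/3}

def is_nash(s2: str, s1: str) -> bool:
    p1, p2 = P1[(s2, s1)], 1 - P1[(s2, s1)]
    best1 = all(p1 >= P1[(s2, a)] for a in ("c", "c'"))
    best2 = all(p2 >= 1 - P1[(b, s1)] for b in ("e", "e'"))
    return best1 and best2

for s2 in ("e", "e'"):
    for s1 in ("c", "c'"):
        if is_nash(s2, s1):
            print(f"Nash equilibrium: ({s2}, {s1})")
# Prints (e', c) and (e', c') only, matching Proposition 1.
```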
Sophisticated Monty game
In addition, previous investigations stress that player 2 knows whether the winning door was chosen (W or L), and this knowledge can be acknowledged in a formal representation of the Monty game. Monty as player 2 knows whether player 1 has initially picked the winning option (i.e. the door with the prize behind it) or not, and it is reasonable to assume in the sequential form game two variants for e: one if it was the winning choice, e_W (or e′_W), and another one if it was the losing choice, e_L (or e′_L). Furthermore, these can be chosen with different probabilities in a mixed strategy equilibrium. A comparable differentiation between probabilities for e has been made by Morgan et al. (1991), page 286, Mueser et al. (1999), pages 43-46, and Whitmeyer (2017), pages 5-7. Schuller (2012) more generally stresses that with unknown expose probabilities in winning versus losing cases, the safe strategy for player 1 is not to change, securing a 1/3 winning probability. As a consequence, all sophisticated Monty game equilibria restrict player 1 to c′.
Proposition 2
The only Nash equilibria in pure strategies are with player 1 not changing: ((e_W, e′_L), c′) and ((e′_W, e′_L), c′).
Proof Player 2 is indifferent (e = e′) iff player 1 does not change (c′); otherwise player 2 prefers e_W and e′_L, given which player 1 prefers c′ over c. Only for ((e_W, e′_L), c′) and ((e′_W, e′_L), c′) does no player have a profitable deviation.

Player 2 exposing doors dependent on the initial choice of player 1 (e conditional on W or L) is an informational advantage and does change the equilibria. With asymmetric information the game is represented in extensive form. In pure strategies it makes player 1 choose c′, which is consistent with most people's intuition. Mixed strategies can be derived for player 1 with p for c and 1 − p for c′. Player 2 can mix, exposing with probability r in the winning case (e_W) and with probability s in the losing case (e_L).

Proof Indifference for player 2 between e_W and e′_W as well as between e_L and e′_L requires p = 0, as otherwise r = 1 and s = 0. Determining r and s so that player 1 is indifferent between c and c′ requires r/(r + 2s) = 2s/(r + 2s), i.e. r = 2s. All combinations of e_W and e_L with r = 2s (and c′) are equilibria. It pays for player 1 to choose c only when 2s > r, but this again would contradict player 2's interests. Player 2 maintains this combination only for c′, as otherwise increasing r and decreasing s would be beneficial. Naturally, player 2 can have different incentives in this game, deriving for example from extending the game or from receiving something back if the prize is won.
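The indifference condition r = 2s is easy to check numerically. In the sketch below (Python; the parameterization by r, s, and p follows the text), player 1's overall winning probability is computed from the game tree; staying and switching yield the same probability exactly when r = 2s:

```python
def p1_win(r: float, s: float, p: float) -> float:
    """Player 1's winning probability in the sophisticated Monty game.
    r = P(expose | winning door chosen), s = P(expose | losing door chosen),
    p = P(player 1 switches after an exposure)."""
    win_if_W = r * (1 - p) + (1 - r)   # staying (or no exposure) wins
    win_if_L = s * p                   # only exposure + switch wins
    return (1/3) * win_if_W + (2/3) * win_if_L

for r, s in [(0.8, 0.4), (1.0, 0.5), (1.0, 0.2), (0.2, 0.4)]:
    stay, switch = p1_win(r, s, 0.0), p1_win(r, s, 1.0)
    print(f"r={r}, s={s}: P(win|stay)={stay:.3f}, P(win|switch)={switch:.3f}")
# With r = 2s the two probabilities coincide (indifference); with r > 2s
# staying is strictly better, and only with 2s > r does switching pay.
```

Note that staying always secures an unconditional winning probability of 1/3, regardless of r and s, which is precisely the "safe strategy" argument cited above.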
Monty game expansions
Additional assumptions can be introduced as explanatory concepts for the observed behavior. Two game expansions are proposed here for illustration purposes. First, the process of opening a door (e) is beneficial for the host, and the derived utility needs to be added for player 2. Second, social concerns like reciprocity might play a role and can be taken into account.

It appears reasonable that the host is fickle, alternating between e and e′. Furthermore, these frequencies can be chosen purposefully when the host enjoys the prolongation of the game per se. 4 This is represented in Fig. 3a by adding a constant utility for player 2 when reaching the second stage. The only equilibrium in pure strategies would then be ((e_W, e_L), c), as e weakly dominates e′ and given e player 1 prefers c. Note that this only holds when the value of prolonging equals the prize. This value can be expected to be lower, and then only one mixed strategy equilibrium remains. As the payoffs for player 1 are unchanged, the indifference condition r = 2s remains. e_W is strictly preferred (i.e. r = 1), and indifference between e_L and e′_L requires v + (1 − p) = 1, i.e. p = v, where v denotes the value of prolonging measured relative to the prize. More generally, for prolonging being smaller in value than the prize, p equals their relation (i.e. p = 0.5 if the value of prolonging is half the value of the prize).

Fig. 3 Monty game expansions
Only if the values are equal does the pure strategy equilibrium result. Otherwise, the question for player 1 to answer in order to determine p is "what is prolonging worth to the host". Interestingly, the proclaimed advantage of c can result, but the value of simply prolonging the show can be comparably small. Another game expansion is to assume social motives in the form of reciprocity. In the setting of the Monty Hall game show, this could take the form of showing extra joy for winning after having had to reconsider the choice (which is valuable for the show master by increasing the number of viewers). The expanded game in Fig. 3b acknowledges this, but without taking negative reciprocity into account. Concerning pure strategy equilibria nothing changes, and mixed strategy equilibria still require p = 0 for player 2 to be indifferent. The only difference concerns the relation between s and r, which now need to be equal for a payback of 0.5, as shown in Fig. 3b. For a payback reasonably lower than 50% of the prize, r > s (since 2s(1 − payback) = r). The higher the payback, the lower the proportion of r. The question for player 2 then shifts towards the question of reciprocity ("how much can I expect back") when exposing the door without the prize behind it (i.e. in terms of show value). Both expansions together provide a more specific characterisation of the Monty Hall problem than its simplified representation in the literature, one which is more in line with the natural understanding of this strongly framed choice task under uncertainty.
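Both equilibrium conditions of the expansions can be verified in a few lines. The sketch below (Python; the prize is normalized to 1, and the values of v and the payback b are hypothetical) checks that p = v makes player 2 indifferent in the prolonging expansion and that r = 2s(1 − b) restores player 1's indifference under payback:

```python
# Prolonging expansion: player 2 gains v (relative to the prize, set to 1)
# whenever a door is exposed. Indifference between e_L and e'_L at p = v.
def p2_payoff_L(expose: bool, p: float, v: float) -> float:
    return (v + (1 - p)) if expose else 1.0  # prize kept unless a switch wins

v = 0.5  # hypothetical value of prolonging, half the prize
p = v    # candidate equilibrium mixing probability for player 1
print(p2_payoff_L(True, p, v), p2_payoff_L(False, p, v))  # 1.0 1.0

# Reciprocity expansion: a winning switch pays back a share b to the host,
# so player 1's indifference becomes r = 2s(1 - b).
def indifferent(r: float, s: float, b: float) -> bool:
    stay, switch = r * 1.0, 2 * s * (1 - b)  # conditional on exposure
    return abs(stay - switch) < 1e-12

b, s = 0.5, 0.4
print(indifferent(2 * s * (1 - b), s, b))  # True: r = s when b = 0.5
```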
Conclusion and discussion
Psychological expansions can rationalize the popular solution, although mixed strategy equilibria and conditional probabilities suffice here. An interesting psychological aspect is to take first associations, or the initial intuition, into account. This need not only apply to the equilibrium selection problem (i.e. focal points or prominence), but could also enrich the understanding of other behavioral regularities. Perceived risk is the fundamental characteristic investigated by the Monty Hall game. The derived results describe the (persistent) behavior of many for whom switching doors is more risky. This is true not only under the bounded rationality of not knowing the odds, but also in a strategic setting where the host prefers not to give away the prize. Only for simplified Monty, who is always opening, or if Monty is assumed to make lots of errors while revealing a losing door (i.e. opening the doors in winning and losing cases more equally), does switching doors become the more successful strategy.
Most controversies of the Monty Hall problem might be due to unclear player incentives (see Mueser et al. 1999). The experimental evidence of many participants not switching is robust even under the experimenter's explicit claim of always opening the unchosen door with no prize behind it (compare Granberg 1999b). Uncertainty might prevail, as this experimental promise is non-binding and the choice situation can be represented as a normal form game with two players, both having two strategies, as in the simplified Monty game. The sequential game representation, as in the sophisticated Monty game, illustrates this uncertainty as an information set for the contestant, who does not know in which state of the world, W or L, (s)he is. Furthermore, bounded rationality could argue for the complexity of the task making not switching the more robust strategy, and we do not need to refer to reverse psychology or other forms of psychological tricks to influence the other player's behavior. Only if there is an additional utility from prolonging the game, and this crucial utility of the host is acknowledged by the contestant, should switching be preferred to not switching. An alternative explanation is social preferences. In the form of sequential reciprocity, this can work similarly to forwards induction in the trust game (compare Kohlberg and Mertens 1986; Dufwenberg and Kirchsteiger 2004; Battigalli and Dufwenberg 2009). The (anticipated) effect of trusting or not can be seen as a serious competitor to mixed strategy equilibria, but Monty's motivation mostly remains unclear. For this, various Monty types have been proposed (i.e. mean, altruistic, etc.), but the general grounds for cooperative versus uncooperative behavior remain dubious. The Monty game is usually specified as a one-shot game (though investigated experimentally as a repeated game). Signaling the Monty type by opening a door does not work either (compare common priors, Whitmeyer 2017). Also, that joy will be shown by the contestant cannot be taken for granted and would demand another decision stage. Note that not all possible incentive structures of the game are covered here and that the chosen game tree expansions are mainly introduced to illustrate corresponding shortcomings in the discussion of this choice task under uncertainty. When the specific structural component of a simultaneous choice is stressed for switching to be the dominant strategy, as if deciding whether to switch or not before the reveal, this too does not seem to properly represent the strategic situation in the game. If Monty always reveals a losing door, he does not represent a free agent in a strict economic sense (i.e. for game theory an awkward definition of a social problem as one player against chance). Furthermore, the experimental results of increasing switching decisions over repetitions might as well result from experimenter demand or a reconsidering effect, and improving behavior over repetitions does not necessarily incorporate learning of the underlying odds.
Still, the Monty Hall game illustrates the clash between statistical thinking and observed choice behavior. Taking this discrepancy seriously calls for descriptive models that can cope with the complexity of the problem. Already the different standard representations help to illustrate the problem. A formalization of choices in social settings is given by game theory, which captures the strategic dependencies between players. The provided exercise of differently representing the choice situation should sharpen the understanding of the problem's diversity and illustrates how the representation of a choice problem can theoretically lead to distinct outcomes. Which expansions are useful to improve the general understanding of the problem can only be answered empirically. The provided expansions for the Monty Hall problem clearly need to be investigated experimentally. The theoretical approach here is to stress the importance of developing sound foundations for experimental investigations, and to help understand the behavioral facets of social settings. Behavior can be manifold. Formalizing, and thereby clearly defining, the decision problem at hand is important in all social sciences, and teaching conditional probabilities and aspects of game theory serves as a nice illustrative example here.
Sometimes the initial intuition can be right. Usually the audience of the Monty Hall show perceives changing doors as more risky under unknown probabilities. This can be seen as a kind of uncertainty avoidance (similar to the Allais paradox) by people simply playing it safe. For the Monty game, uncertainty avoidance has been investigated as anticipated regret (Gilovich et al. 1995) or a minimax strategy (Schuller 2012), and not switching doors does not need another explanatory heuristic. If a person changes his/her initial choice, this behavior demands distributional assumptions about Monty's behavior, preferences for prolonging the game, or some form of forwards induction with specific social preferences. Usually social situations can be rather complex, but they can also be grasped by various theoretical concepts. Grasping the statistical dependencies within the Monty Hall game is representative of the understanding of various decision problems in the social sciences.
Funding Open Access funding enabled and organized by Projekt DEAL.
Conflicts of interest No conflicts to report.
Availability of data and material Not applicable.
Code availability Not applicable.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2021,
"sha1": "3f86d16512cffb7b59a7a8e10de989cafe8e1cd2",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11299-021-00277-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "d54f0ddbf504ff71e57bf70e7552677a2fcf4cec",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Hyperpolarization-Activated Current (Ih) Is Reduced in Hippocampal Neurons from Gabra5−/− Mice
Changes in the expression of γ-aminobutyric acid type A (GABAA) receptors can either drive or mediate homeostatic alterations in neuronal excitability. A homeostatic relationship between α5 subunit-containing GABAA (α5GABAA) receptors that generate a tonic inhibitory conductance, and HCN channels that generate a hyperpolarization-activated cation current (Ih) was recently described for cortical neurons, where a reduction in Ih was accompanied by a reciprocal increase in the expression of α5GABAA receptors resulting in the preservation of dendritosomatic synaptic function. Here, we report that in mice that lack the α5 subunit gene (Gabra5−/−), cultured embryonic hippocampal pyramidal neurons and ex vivo CA1 hippocampal neurons unexpectedly exhibited a decrease in Ih current density (by 40% and 28%, respectively), compared with neurons from wild-type (WT) mice. The resting membrane potential and membrane hyperpolarization induced by blockade of Ih with ZD-7288 were similar in cultured WT and Gabra5−/− neurons. In contrast, membrane hyperpolarization measured after a train of action potentials was lower in Gabra5−/− neurons than in WT neurons. Also, membrane impedance measured in response to low frequency stimulation was greater in cultured Gabra5−/− neurons. Finally, the expression of HCN1 protein that generates Ih was reduced by 41% in the hippocampus of Gabra5−/− mice. These data indicate that loss of a tonic GABAergic inhibitory conductance was followed by a compensatory reduction in Ih. The results further suggest that the maintenance of resting membrane potential is preferentially maintained in mature and immature hippocampal neurons through the homeostatic co-regulation of structurally and biophysically distinct cation and anion channels.
Introduction
Proper functioning of the central nervous system depends on the delicate control of neuronal excitability through a balance of excitation and inhibition. The homeostatic regulation of ion channels that regulate membrane conductance contributes to the maintenance of this balance [1,2]. Pathological brain states can result when this balance is disrupted, such as the development of seizures following the loss of neuronal inhibition [3,4]. Ample evidence suggests that homeostatic mechanisms exist to compensate for the loss of neuronal inhibition to maintain normal brain function [5,6].
The neurotransmitter γ-aminobutyric acid (GABA) largely mediates inhibitory neurotransmission in the mammalian brain [7]. Activation of synaptically-localized type A GABA (GABAA) receptors results in rapid transient inhibition of postsynaptic neurons, whereas activation of extrasynaptic GABAA receptors by low concentrations of ambient GABA generates a tonic inhibitory conductance [8]. A tonic GABAergic conductance in the hippocampus is predominantly generated by GABAA receptors that contain either the α5 subunit (α5GABAA) or the δ subunit (δGABAA) [9,10]. Tonic GABAergic inhibition can exert powerful regulatory constraints on neuronal firing, excitability, and plasticity of excitatory synapses of hippocampal pyramidal neurons [11][12][13].
Loss of tonic inhibition can induce compensatory changes in the expression of other ion channels that maintain normal neuronal function. For example, in cerebellar granule cells of α6GABAA receptor-null mutant mice, the loss of tonic inhibition mediated by putative extrasynaptic δGABAA receptors was accompanied by a homeostatic increase in the expression of two-pore domain K+ TASK-1 channels that generate a tonic inhibitory K+ current [14]. This increase in TASK-1 channel expression maintained neuronal excitability at levels observed in wild-type (WT) neurons.
Genetic deletion of voltage-dependent ion channels can also induce homeostatic changes in tonic GABAergic inhibition [15]. In particular, the genetic deletion of the hyperpolarization-activated cyclic nucleotide-gated type 1 (HCN1) channel, which generates a hyperpolarization-activated cation current (Ih), increased the expression of α5GABAA receptors in cortical pyramidal neurons [15]. HCN channels are encoded by four genes (HCN1-HCN4) and are activated at hyperpolarized membrane potentials. HCN channels are permeable to both Na+ and K+ ions and mediate an inward current [16]. These non-inactivating ion channels exert complex effects on neuronal function by providing a tonic depolarizing current which contributes to the resting membrane potential and opposes deviations away from the prevailing membrane potential. In hippocampal and neocortical pyramidal neurons, these biophysical properties of Ih, together with a preferential distribution of the channels in distal dendrites, limit the influence of excitatory synaptic input on membrane potential [17].
Pyramidal neurons of the hippocampus and cortex predominantly express the type-1 isoform of HCN (HCN1), and deletion of HCN1 strongly decreases Ih in these neurons [18,19]. Surprisingly, the summation of evoked excitatory post-synaptic potentials (EPSPs) in cortical neurons was unchanged following genetic deletion of HCN1 [15]. A homeostatic upregulation of α5GABAA receptors in the cortex maintained the sublinear somatic summation of EPSPs following deletion of HCN1 [15]. As such, the increase in tonic inhibition compensated for the loss of Ih and constrained dendritosomatic efficacy. Notably, there was no upregulation of α5GABAA receptors in hippocampal pyramidal neurons of HCN1−/− mice, perhaps due to a saturation of α5GABAA receptor expression in these neurons [15].
α5GABAA receptors and HCN1 channels have several common biophysical and functional properties that suggest they may mutually co-regulate neuronal excitability. For example, both channels can remain persistently activated following a hyperpolarization of the membrane to regulate resting membrane potential and conductance [11,16,20]. Additionally, HCN1 channels are expressed at high levels in the distal dendrites of hippocampal pyramidal neurons [21], where α5GABAA receptors are also clustered [22]. Tonic inhibition and Ih both regulate the induction of long-term synaptic plasticity of hippocampal pyramidal neurons and limit sublinear EPSP summation in neocortical pyramidal neurons [15]. Finally, both α5GABAA receptors and HCN1 channels constrain hippocampus-dependent memory performance [13,19].
The functional commonalities between α5GABAA receptors and HCN1 channels suggest that a reciprocal homeostatic co-regulation of these proteins is plausible. However, it is unknown whether the expression of α5GABAA receptors regulates Ih. In this study, we tested the hypothesis that a reduction in the expression of α5GABAA receptors causes a reciprocal upregulation of Ih in hippocampal pyramidal neurons. Unexpectedly, we found the opposite: a reduction in the expression of α5GABAA receptors was associated with a reduction of Ih that contributes to the homeostatic maintenance of the resting membrane potential in these cells.
Electrophysiology
Hippocampal cell culture. The experiments reported here were approved by the Animal Care Committee of the University of Toronto. All experiments were conducted with hippocampal tissue harvested from WT (Gabra5+/+) or α5GABAA null mutant (Gabra5−/−) mice. Generation of the Gabra5−/− mice has been described previously [23]. Briefly, all mice were of mixed genetic background (50:50 C57BL/6 and 129SvEv), and WT and Gabra5−/− mice were generated by crossing heterozygous Gabra5+/− mice. Cultures of hippocampal neurons were prepared as previously described [11] from Gabra5−/− and WT littermates on postnatal day 1. Cells were maintained in culture for 14 to 21 days before experimentation. Hippocampal brain slices. Slices were prepared from WT and Gabra5−/− mice that ranged in age from postnatal day 17 to 21. After administration of isoflurane anesthesia, the mice were decapitated and their brains quickly removed and placed in ice-cold, oxygenated (95% O2, 5% CO2) artificial cerebrospinal fluid (aCSF), containing (in mM): NaCl 124, KCl 3, MgCl2 1.3, CaCl2 2.6, NaH2PO4 1.25, NaHCO3 26, D-glucose 10, with osmolarity adjusted to 300-310 mOsm. Brain slices (350 μm) containing coronal sections of the hippocampus were prepared with a VT1200 tissue slicer (Leica, IL, USA).
Data Acquisition. Data were acquired with a Multiclamp 700B amplifier (Molecular Devices Corporation, Sunnyvale, CA, USA) controlled with pClamp 9.0 software (Molecular Devices Corporation) via a Digidata 1322 interface (Molecular Devices Corporation). Membrane current and voltage were filtered at 2 kHz and sampled at 10 kHz for all electrophysiological experiments. Membrane capacitance was measured with the membrane test protocol in pClamp 9.0. Access resistance was monitored periodically throughout the experiments by a brief 10-mV or 10-pA hyperpolarizing step during voltage-clamp and current-clamp experiments, respectively. Cells were eliminated from further analysis if the access resistance changed by more than 20% over the recording period. Liquid junction potential and pipette capacitance were corrected using the pClamp 9.0 software before the whole-cell configuration was established.
Patch pipettes, pulled from thin-walled borosilicate glass capillary tubes, had open-tip resistances of 4 to 6 MΩ when filled with an intracellular solution that contained (in mM) 145 K+-gluconate, 5 Na+-gluconate, 2 KCl, 10 HEPES, 11 EGTA, 4 Mg2+-ATP, and 1 CaCl2, with an osmolarity of 300 to 320 mOsm and the pH adjusted to 7.3 with KOH. Extracellular solutions for all experiments contained (in mM) 140 NaCl, 1.3 CaCl2, 2.0 KCl, 25 HEPES, and 33 glucose; the osmolarity was adjusted to 290 to 300 mOsm with sucrose, and the pH was adjusted to 7.4 with 10 N NaOH. The extracellular solution was applied directly to neurons at a rate of 1 ml/min by a computer-controlled, multibarreled perfusion system (SF-77B; Warner Instruments, Hamden, CT, USA). Whole-cell current was recorded with the holding potential clamped at −60 mV except where indicated otherwise.
Experiments in cultured pyramidal neurons were performed as previously described [11]. For experiments in hippocampal slices, whole-cell recordings were obtained from the pyramidal cell layer using a blind-patch technique. Neurons with small membrane capacitances suggestive of non-pyramidal neurons in this preparation (<60 pF) were excluded from study (3 WT, 1 Gabra5−/− neuron). The composition of the intracellular solution and the recording procedures were identical to those described for the recordings from cultured neurons.
In all experiments, the ionotropic glutamate receptor antagonists 6-cyano-7-nitroquinoxaline-2,3-dione (10 μM) and 2-amino-4-phosphonovaleric acid (40 μM) were added to the extracellular solution. In experiments designed to measure Ih and membrane impedance, the Na+ channel blocker tetrodotoxin (0.3 μM; Alomone Labs, Jerusalem, Israel) was added to the extracellular solution. Aqueous stock solutions of all drugs were prepared with distilled water. All drugs and chemicals were purchased from Sigma-Aldrich (Oakville, Ontario, Canada) except where indicated otherwise.
Measurement of Ih. Ih was activated by changing the holding potential from −60 mV through a range of test potentials (from −120 mV to −30 mV) in 10-mV steps. Each test potential was maintained for 500 ms. The net Ih conductance was measured as the difference between the steady-state current at the end of the test potential and the minimum current measured within 100 ms of the start of the test potential (Fig 1A). The Ih tail current was measured as the peak amplitude of the residual current measured at the end of each test potential, immediately after the return of the holding potential to −60 mV. The membrane potential that evoked half-maximal activation (V50) of Ih was determined by fitting the tail current activation data to a Boltzmann sigmoidal function using Graphpad 4 (Graphpad, San Diego, CA, USA). The kinetics of Ih activation, measured at holding potentials between −120 mV and −70 mV, were determined by fitting the onset of the current with a single exponential curve, I(t) = A·exp(−t/τ) + C, using Clampfit 10 (Molecular Devices Corporation). The net Ih was measured at the end of the test holding potential, and the Ih conductance was estimated by fitting the net Ih measured between −120 mV and −90 mV with a linear regression line.
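A typical implementation of such a Boltzmann fit is sketched below (Python with NumPy/SciPy on synthetic tail-current data; the parameter values are illustrative, and the actual fits in this study were performed in GraphPad and Clampfit):

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, i_max, v50, k):
    """Boltzmann sigmoid for tail-current activation versus voltage."""
    return i_max / (1.0 + np.exp((v - v50) / k))

# Synthetic tail-current amplitudes (pA) over the test potentials (mV).
volts = np.arange(-120, -20, 10, dtype=float)
true = boltzmann(volts, 100.0, -92.0, 8.0)   # hypothetical "true" values
rng = np.random.default_rng(0)
tails = true + rng.normal(0.0, 3.0, volts.size)  # add recording noise

popt, _ = curve_fit(boltzmann, volts, tails, p0=(100.0, -90.0, 10.0))
i_max, v50, k = popt
print(f"V50 = {v50:.1f} mV, slope k = {k:.1f} mV")
```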
Measurement of after-hyperpolarization. An after-hyperpolarization of the membrane was induced by stimulating neurons with a train of action potentials in current-clamp mode. A depolarizing current sufficient to stimulate action potential firing at a frequency of 5 Hz for 2 s was applied, and the after-hyperpolarization was measured as the area under the curve, relative to the resting membrane potential, of the membrane potential over the period of hyperpolarization following the train of action potentials. The decay time constant (τ) of the after-hyperpolarization was measured with Clampfit 10 by fitting the decay with a standard single exponential curve.
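For illustration, the area and decay measurements could be implemented as follows (Python with NumPy/SciPy on a synthetic trace; all values are hypothetical, and the study itself used Clampfit):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic after-hyperpolarization: membrane potential relative to rest (mV)
# decaying back to baseline after a spike train.
t = np.linspace(0.0, 2.0, 2000)          # s
tau_true = 0.4                            # s, hypothetical decay constant
v_rel = -4.0 * np.exp(-t / tau_true)      # mV below resting potential

# Area under the curve relative to rest (mV*s), as described in the text.
ahp_area = np.trapz(v_rel, t)
print(f"AHP area = {ahp_area:.2f} mV*s")

# Single-exponential fit of the decay to estimate tau.
decay = lambda t, a, tau: a * np.exp(-t / tau)
(a, tau), _ = curve_fit(decay, t, v_rel, p0=(-4.0, 0.5))
print(f"fitted tau = {tau:.3f} s")
```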
Figure 1. Reduced Ih in cultured Gabra5−/− neurons. A) Schematic illustrating the method of Ih measurement. B) Ih was activated in cultured hippocampal pyramidal neurons of wild-type (WT) and Gabra5−/− mice by changing the membrane potential from −120 mV to −30 mV in 10-mV increments. C) Estimation of Ih conductance from the linear portion of the current-voltage curve generated by hyperpolarizing the resting membrane potential revealed a 43% reduction of Ih conductance in Gabra5−/− neurons. D) Quantification of the Ih tail currents that remained after the membrane potential was returned to −60 mV revealed significantly lower Ih density in Gabra5−/− neurons (n = 16) than in WT neurons (n = 9). Neither the kinetics of Ih activation (E) nor sensitivity to Ba2+ (0.5 mM; n = 5) or Cs+ (0.5 mM; n = 4) (F) were changed in Gabra5−/− neurons, which suggested no change in the subtypes of HCN channels generating Ih. G) Enhancing or reducing the tonic current in WT neurons with 1 μM GABA (n = 6) or 1 mM picrotoxin (PTX; n = 6), respectively, did not change Ih measured at −120 mV, demonstrating that the lower level of Ih in Gabra5−/− neurons is independent of changes in tonic inhibition. doi:10.1371/journal.pone.0058679.g001

Determination of membrane impedance. The neuronal frequency-dependent membrane impedance was studied using the impedance (Z) amplitude profile (ZAP), as described previously [24]. In brief, in whole-cell current-clamp mode, neurons were injected with a sinusoidal current of constant amplitude and linearly increasing frequency (0-40 Hz over 30 s). The amplitude of the ZAP current was adjusted to maintain a peak depolarization of the membrane potential of approximately 10 mV positive to the resting potential. The frequency-dependent membrane impedance was determined by transforming the membrane voltage and input current recordings with a fast Fourier transform over the range of frequencies from 0.5 to 40 Hz with Clampfit 10 and dividing the transformed voltage by the current. The peak resonance frequency was determined as the input frequency at which membrane resistance was greatest.
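A minimal sketch of the ZAP impedance computation just described (Python with NumPy; the passive membrane parameters and chirp amplitude are hypothetical, and a purely passive RC cell shows no true resonance, so its impedance peak sits at the low edge of the band, whereas Ih shifts the peak upward):

```python
import numpy as np

# Sketch of the ZAP impedance analysis on a synthetic recording.
# Hypothetical passive membrane (R, C) and chirp amplitude; the real
# analysis was performed on recorded traces in Clampfit.
fs = 1000.0                              # Hz, sampling rate
t = np.arange(0.0, 30.0, 1.0 / fs)       # 30-s sweep
f0, f1, T = 0.0, 40.0, 30.0              # linear chirp from 0 to 40 Hz
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * T))
i_inj = 20e-12 * np.sin(phase)           # A, constant-amplitude ZAP current

R, C = 300e6, 100e-12                    # ohm, F
v = np.zeros_like(t)
for k in range(1, t.size):               # forward-Euler RC membrane response
    v[k] = v[k-1] + (i_inj[k-1] * R - v[k-1]) / (R * C) / fs

# Impedance profile: |FFT(V)| / |FFT(I)| over 0.5-40 Hz.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
Z = np.abs(np.fft.rfft(v)) / np.abs(np.fft.rfft(i_inj))
band = (freqs >= 0.5) & (freqs <= 40.0)
f_peak = freqs[band][np.argmax(Z[band])]
print(f"impedance peak at ~{f_peak:.1f} Hz (low edge for a passive cell)")
```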
Hippocampal protein (15 μg) was loaded on 10% Bis-Tris gels, separated by SDS-PAGE, and transferred onto nitrocellulose membranes (Pall Life Sciences, NY, USA). The membranes were rinsed in TBS-Tween, which contained 50 mM Tris-HCl, 150 mM NaCl, and 0.05% (v/v) Tween 20, and then incubated in 5% (w/v) milk in TBS-Tween at room temperature for 1 hr. Primary and secondary antibodies were diluted in 3% (w/v) bovine serum albumin in TBS-Tween. The membranes were incubated with 1:1000 anti-HCN1 antibody (clone N70/28; NeuroMab, UC Davis NeuroMab facility, CA, USA) overnight at 4°C, washed with TBS-Tween, and incubated in 1:1000 anti-mouse antibody (Cell Signaling, MA, USA) at room temperature for 1 hr. The membranes were treated with enhanced chemiluminescence western blotting substrate (Thermo Scientific, IL, USA) for protein band visualization. HCN1 primary and secondary antibodies were stripped from the membranes by incubating in stripping buffer (Thermo Scientific, IL, USA) at room temperature for 20 min, followed by 4 washes in TBS-Tween. To allow normalization of HCN1 blot densities, β-actin blots were then performed using the western blotting procedure described above with 1:1000 anti-β-actin antibody (Millipore, MA, USA), followed by 1:1000 anti-rabbit antibody (Cell Signaling, MA, USA).
All membranes were exposed and quantified using the Kodak Image Station 2000R (Kodak, USA). Because HCN1 is known to exist in a glycosylated (108 kDa) and an unglycosylated (100 kDa) form, both of which are recognized by the anti-HCN1 antibody used (clone N70/28; NeuroMab), the densities of both bands were pooled for analysis, as described elsewhere [25]. The density of the HCN1 bands was normalized to that of β-actin, a prototypical loading control.
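For illustration only, the quantification step reduces to a few lines (Python; the density values are hypothetical arbitrary units, not the study's measurements):

```python
# Illustrative quantification of the blots: pool the glycosylated and
# unglycosylated HCN1 band densities and normalize to beta-actin.
# Density values are hypothetical arbitrary units.
samples = {
    "WT":        {"hcn1_108": 1200.0, "hcn1_100": 800.0, "actin": 1500.0},
    "Gabra5-/-": {"hcn1_108":  700.0, "hcn1_100": 480.0, "actin": 1510.0},
}
for name, d in samples.items():
    ratio = (d["hcn1_108"] + d["hcn1_100"]) / d["actin"]
    print(f"{name}: HCN1/actin = {ratio:.2f}")
```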
Statistical analyses
Statistical analyses were performed using Graphpad Prism 4. Membrane impedance and Ih tail current and activation kinetics were analyzed with two-way repeated-measures ANOVA followed by a Bonferroni post hoc test. The remaining comparisons were performed with one-way ANOVA or Student t-tests, as appropriate. Any p value less than 0.05 was considered significant. All data are shown as mean ± standard error of the mean.
Reduced Ih in cultured Gabra5−/− neurons

Next, the amplitude of the Ih current was measured in WT and Gabra5−/− neurons (Fig 1A). The net Ih was measured as the time-dependent inward current activated by the voltage step (Fig 1B). The Ih conductance was estimated from the near-linear current-voltage relationship of Ih measured between −120 mV and −90 mV (Fig 1C). From these analyses, the total Ih conductance was estimated to be 43% smaller in Gabra5−/− neurons compared with WT neurons (WT: 6.0 ± 0.2 nS, n = 9; Gabra5−/−: 3.4 ± 0.1 nS, n = 16; p < 0.0001). The maximum amplitude of the tail current measured following the hyperpolarizing voltage steps was smaller in Gabra5−/− neurons (n = 16) than in WT neurons (n = 9; Fig 1D; voltage × genotype: F(9,198) = 4.09; p < 0.0001), consistent with a reduced Ih in these neurons. The HCN channel blocker ZD-7288 caused a complete block of Ih in both WT and Gabra5−/− neurons (data not shown).
The reduced Ih in Gabra5−/− neurons may result from the substitution of HCN1 with another HCN isoform. The subtype of an HCN channel determines its sensitivity to cAMP and its voltage-dependent activation and kinetics [16]. Thus, a substitution of HCN subtype is predicted to be accompanied by changes in the activation kinetics and voltage-dependent activation of Ih. However, we observed that the time course of current activation (τIh) was similar between WT and Gabra5−/− neurons (Fig 1E) (genotype × voltage: F(5,99) = 0.05, p > 0.05). In addition, the voltage-sensitivity of Ih, measured as the half-maximal activation voltage (V50) of the tail currents (Fig 1C), was similar between WT and Gabra5−/− mice (WT: −91.5 ± 5.0 mV, n = 9; Gabra5−/−: −93.3 ± 7.3 mV, n = 16, p > 0.05). These results suggest that the lower Ih in Gabra5−/− neurons is not likely due to a change in the subpopulation of HCN channels that generate Ih.
A pharmacological characteristic of Ih generated by HCN channels is insensitivity to low concentrations of extracellular barium and potent inhibition by low concentrations of extracellular cesium [26]. To confirm that the reduction in Ih in Gabra5−/− neurons resulted from a decrease in HCN-generated current, we applied low concentrations of either BaCl2 (0.5 mM) or CsCl (0.5 mM). Consistent with HCN pharmacology, BaCl2 (0.5 mM) did not block Ih in WT (n = 5) or Gabra5−/− (n = 5) neurons, whereas CsCl (0.5 mM) caused near-complete inhibition of Ih in both WT (n = 4) and Gabra5−/− (n = 4) neurons when Ih was activated at −120 mV (Fig 1F).
We next sought to determine whether acute enhancement or inhibition of the α5GABAA receptor-mediated current changed Ih, similar to the reduction of Ih observed following genetic deletion of α5GABAA receptors. The tonic current was either enhanced by applying 1 mM GABA (n = 6) or inhibited by applying 1 mM picrotoxin (n = 6), as described previously [11]; Ih was then activated in WT neurons by hyperpolarizing the membrane potential to −120 mV. Neither enhancement nor inhibition of the tonic current changed the amplitude of Ih (Fig 1G; one-way ANOVA, F(2,18) = 0.08, p > 0.05).
Ih can exert a powerful regulatory effect on the resting membrane potential of neurons [16]. Furthermore, the dynamic voltage-dependent activity of the depolarizing Ih opposes changes in membrane potential away from the resting membrane potential. We next sought to determine whether the lower Ih in Gabra5−/− neurons would exert less control over the resting membrane potential than in WT neurons. Application of the HCN antagonist ZD-7288 (20 µM) induced a similar hyperpolarization of the resting membrane potential, of approximately 5.5 mV, in both WT and Gabra5−/− neurons (WT + ZD-7288: −72.8 ± 2.0 mV, n = 8; Gabra5−/− + ZD-7288: −73.6 ± 1.8 mV, n = 7; p > 0.05). These results suggest that the baseline level of Ih depolarized the resting membrane potential to a similar degree in both WT and Gabra5−/− neurons.
Increased low-frequency membrane impedance in Gabra5−/− neurons
Previous studies have demonstrated that Ih contributes to the frequency-dependent membrane impedance of neurons [19,24]. Specifically, Ih generated by HCN1 in hippocampal pyramidal neurons selectively attenuates changes in membrane potential resulting from low-frequency input (< 5 Hz), which in turn reduces the subthreshold membrane resonance of neurons in response to input in this frequency range [19,24]. We examined the membrane impedance properties of cultured WT (n = 7) and Gabra5−/− (n = 10) neurons by injecting an oscillating current of linearly increasing frequency and then measuring the impedance (Fig 3A). Gabra5−/− neurons had a higher frequency-dependent impedance than WT neurons in response to stimulation in the frequency range of 0 to 4 Hz (Fig 3B) (genotype × frequency: F(80,1215) = 1.39; p = 0.016). Post hoc analysis revealed a significantly greater membrane impedance in Gabra5−/− than in WT neurons over most of the frequency range from 1 to 4 Hz (Fig 3B).
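Impedance from such a chirp stimulus is commonly computed as the ratio of the Fourier transforms of the voltage response and the injected current; the sketch below assumes that convention and substitutes a passive RC membrane for the neuron, so all values are illustrative rather than taken from the study:

```python
import numpy as np

fs = 1000.0                       # sampling rate, Hz
t = np.arange(0.0, 20.0, 1 / fs)  # 20-s sweep
f1 = 40.0                         # chirp sweeps linearly from 0 to 40 Hz

i_t = 20.0 * np.sin(2 * np.pi * (f1 / (2 * t[-1])) * t ** 2)  # chirp current, pA

# Hypothetical response: a passive RC membrane stands in for the neuron
tau, r_m = 0.05, 0.2              # 50-ms time constant; 0.2 GOhm, so pA -> mV
v_t = np.zeros_like(t)
for n in range(1, t.size):        # forward-Euler integration of dV/dt = (R*I - V)/tau
    v_t[n] = v_t[n - 1] + (1 / fs) * (r_m * i_t[n - 1] - v_t[n - 1]) / tau

freqs = np.fft.rfftfreq(t.size, 1 / fs)
z_GOhm = np.abs(np.fft.rfft(v_t) / np.fft.rfft(i_t))  # |Z(f)| = |FFT(V)/FFT(I)|
for f, z in zip(freqs[10:201:40], z_GOhm[10:201:40]):
    print(f"{f:5.2f} Hz: |Z| = {z * 1000:6.1f} MOhm")  # impedance falls with frequency
```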
To ascertain whether this difference in membrane impedance between WT and Gabra5−/− neurons resulted from the lower Ih, we tested for changes in membrane impedance following the application of ZD-7288 (20 µM). In WT neurons (n = 4), blockade of Ih by ZD-7288 resulted in a robust frequency-dependent increase in membrane impedance (Fig 3C) (genotype × frequency: F(80,729) = 3.35; p < 0.0001). Post hoc analysis of this interaction revealed a significant increase in membrane impedance in the frequency range 1 to 6 Hz (Fig 3C). In contrast, ZD-7288 caused only a modest frequency-dependent increase in membrane impedance in Gabra5−/− neurons (n = 5), which was not significantly different from control at any specific frequency (Fig 3D) (main effect of drug: F(1,1053) = 32.05, p < 0.0001). Notably, the membrane impedance of WT and Gabra5−/− neurons was similar following application of ZD-7288 (Fig 3E).

Figure 2. [...] The membrane hyperpolarization that occurred following depolarization was measured relative to the resting membrane potential to reveal a reduced after-hyperpolarization in Gabra5−/− neurons (example traces enlarged to emphasize the after-hyperpolarization are shown in the lower traces). B) The area of the after-hyperpolarization was smaller in Gabra5−/− neurons (n = 9) than in WT neurons (n = 12). Application of the HCN antagonist ZD-7288 (20 µM) blocked the after-hyperpolarization in neurons of both genotypes (n = 5), confirming the contribution of Ih to the after-hyperpolarization. C) The peak after-hyperpolarization, measured relative to the resting membrane potential, was also smaller in Gabra5−/− neurons compared with WT. D) The decay kinetics of the after-hyperpolarization were similar between WT and Gabra5−/− neurons. doi:10.1371/journal.pone.0058679.g002

Figure 3. Increased membrane impedance in response to low-frequency input in cultured Gabra5−/− neurons. A) The membrane impedance properties of WT and Gabra5−/− neurons were determined by quantifying membrane resistance during the injection of a sinusoidal current ranging in frequency from 0 to 40 Hz. Example traces show larger changes in membrane potential in the Gabra5−/− neuron at low frequencies, indicative of an increased membrane impedance. B) The membrane impedance of Gabra5−/− neurons (n = 10) was greater than that of WT neurons (n = 7) in response to low-frequency input from 1 to 4 Hz (p1.0 Hz < 0.01, p1.5 Hz < 0.001, p2.0 Hz < 0.001, p2.4 Hz < 0.01, p2.9 Hz < 0.05, p3.4 Hz > 0.05, p3.9 Hz < 0.01). The inset shows the membrane impedance ratio of Gabra5−/− to WT neurons. C) Blockade of Ih in WT neurons with ZD-7288 (n = 4) increases membrane impedance in response to input from 1 to 6 Hz (p1.0 Hz < 0.001, p1.5 Hz < 0.001, p2.0 Hz < 0.001, p2.4 Hz < 0.001, p2.9 Hz < 0.001, p3.4 Hz < 0.001, p3.9 Hz < 0.001, p4.4 Hz < 0.001, p4.9 Hz < 0.001, p5.4 Hz < 0.05, p5.9 Hz < 0.05). D) Blockade of Ih in Gabra5−/− neurons with ZD-7288 (n = 5) caused a modest but significant increase in membrane impedance. Post hoc analysis did not reveal significant differences within any specific frequency range. E) No differences were observed in the impedance of Gabra5−/− and WT neurons in the presence of ZD-7288. Asterisks indicating significant differences within specific frequency ranges have been omitted for clarity. doi:10.1371/journal.pone.0058679.g003

Reduced Ih and HCN1 in Gabra5−/− hippocampal neurons in brain slices

We next sought to determine whether the reduced Ih observed in cultured Gabra5−/− hippocampal neurons was also present in neurons of the hippocampal CA1 pyramidal layer recorded in brain slices (Fig 4A). Similar to cultured neurons, we observed an increased membrane resistance in Gabra5−/− neurons compared with WT (Gabra5−/−: 212 ± 14 MΩ, n = 13; WT: 158 ± 19 MΩ, n = 12; p = 0.027). Ih current density was again reduced in Gabra5−/− neurons (n = 12) compared with WT (n = 8) (Fig 4B) (voltage × genotype: F(8,144) = 9.64; p < 0.0001). Relative to WT neurons, the total Ih conductance was estimated to be 28% lower in Gabra5−/− neurons (WT: 4.5 ± 0.3 nS, n = 8; Gabra5−/−: 3.2 ± 0.4 nS, n = 12; p = 0.030). The Ih tail current was also reduced in Gabra5−/− neurons compared with WT (voltage × genotype: F(8,144) = 3.03; p = 0.004), although post hoc analysis did not reveal a significant reduction at any specific potential (Fig 4C). The difference in Ih current density was not attributable to differences in cell size (WT: 166 ± 23 pF; Gabra5−/−: 196 ± 14 pF; p = 0.30). Additionally, the V50 of Ih was similar between WT and Gabra5−/− mice (WT: −84.2 ± 3.4 mV, n = 8; Gabra5−/−: −84.6 ± 3.0 mV, n = 12; p > 0.05). These data suggest that the reduction of Ih in Gabra5−/− neurons is robust and occurs at different stages of development and in different neuronal environments.
Protein levels of HCN1 are reduced in Gabra5−/− neurons

One likely explanation for the reduction of Ih in Gabra5−/− neurons, in the absence of changes in Ih kinetics, is a decrease in the expression of HCN1 protein. This hypothesis was tested by measuring levels of HCN1 protein in hippocampal tissue samples from adult WT (n = 6) and Gabra5−/− (n = 6) mice (Fig 4D). HCN1 was selected for measurement since it is the most highly expressed isoform in hippocampal CA1 [21]. Densitometric analysis showed that, compared with WT mice, total protein expression of HCN1 in the hippocampus of Gabra5−/− mice was decreased by 40.8% ± 9.1% (Fig 4E) (one-sample t-test, p = 0.002). Thus, the magnitude of the reduction of HCN1 protein in Gabra5−/− hippocampal neurons closely paralleled the reduction of Ih.
Discussion
Here, we tested the hypothesis that reduced expression of α5GABAA receptors would be accompanied by a reciprocal increase in Ih [15]. Unexpectedly, we observed a reduction in Ih in Gabra5−/− hippocampal neurons compared with WT neurons, as indicated by the lower hyperpolarization-activated current, the smaller after-hyperpolarization, and the greater low-frequency membrane impedance. The reduction in Ih was observed both in cultured neurons and in hippocampal pyramidal neurons recorded in brain slices. We observed no change in Ih activation kinetics in Gabra5−/− neurons, suggesting that changes in HCN channel isoform did not contribute to the reduced Ih in Gabra5−/− neurons. Finally, we observed a decrease in the protein levels of HCN1 in the Gabra5−/− hippocampus that paralleled the reduction of Ih observed in Gabra5−/− neurons.
Reduced Ih maintains a normal resting membrane potential in Gabra5−/− neurons

The resting membrane potential was not different in Gabra5−/− neurons, despite the fact that the tonic inhibitory conductance generated by α5GABAA receptors was absent in these neurons [11]. These data raise the possibility that a decrease in Ih, which normally provides a tonic depolarizing current, serves to homeostatically maintain the same resting membrane potential in Gabra5−/− and WT neurons. It is notable that the reduced Ih associated with deletion of the α5GABAA receptor was observed both in cultured hippocampal pyramidal neurons and in CA1 hippocampal neurons. This finding suggests that a robust relationship between α5GABAA receptor and HCN1 channel expression persists in very different neuronal environments and at different developmental stages.
The lack of change in resting membrane potential contrasted with the differences between WT and Gabra5−/− mice in after-hyperpolarization and membrane impedance. The after-hyperpolarization was reduced in Gabra5−/− neurons. Since ZD-7288 blocked the after-hyperpolarization in both WT and Gabra5−/− neurons, the after-hyperpolarization measured here was predominantly generated through the voltage-dependent deactivation of Ih during depolarization. Despite the differences in peak after-hyperpolarization, activation of Ih terminated the after-hyperpolarization similarly in WT and Gabra5−/− neurons. Because of the role Ih plays in regulating the firing of action potentials [28], a reduced after-hyperpolarization may disturb the firing frequency of Gabra5−/− neurons. Nonetheless, the reduced Ih in Gabra5−/− neurons appears to maintain the membrane potential, even at the expense of a reduced after-hyperpolarization and its potential consequences for firing activity.

Figure 4. [...] Gabra5−/− mice. Ih was activated and measured by changing the membrane potential from −120 mV to −30 mV in 10-mV increments. B) Estimation of Ih conductance from the linear portion of the current-voltage curve revealed a 28% reduction of Ih in Gabra5−/− neurons. C) A modest but significant reduction in the Ih tail current was also observed in Gabra5−/− neurons. Post hoc analysis did not reveal significant differences at any specific test potential. D) The expression of HCN1 protein and β-actin in hippocampal tissue from adult WT and Gabra5−/− mice. E) After normalization to β-actin, the expression of HCN1 was reduced in hippocampal tissue from Gabra5−/− mice by 41% relative to WT mice, paralleling the decrease in Ih current. doi:10.1371/journal.pone.0058679.g004
A reduction in Ih also increased the frequency-dependent membrane impedance in Gabra5−/− neurons. These findings are consistent with the established role of Ih in reducing membrane impedance to low-frequency, fluctuating input [19,24]. Similar to the after-hyperpolarization, we found that membrane impedance was not greatly influenced by tonic α5GABAA receptor activity, since WT and Gabra5−/− neurons exhibited similar membrane impedances when Ih was blocked by ZD-7288. Overall, our data suggest that the reduced Ih in Gabra5−/− hippocampal neurons homeostatically maintains the resting membrane potential, with consequential changes in other neuronal properties and behaviours that are regulated by Ih, such as the after-hyperpolarization and membrane impedance. Whether the reduced Ih also restores normal synaptic integration in Gabra5−/− neurons [15] remains to be determined.
Homeostasis of neuronal excitability following reduction of tonic GABAergic inhibition
Deletion of the GABAA receptors that contribute to tonic GABAergic inhibition causes changes in other conductances that regulate neuronal excitability. For example, the genetic deletion of α6GABAA receptors, which mediate a tonic current in cerebellar granule cells, causes the upregulation of the two-pore-domain leak K+ channel TASK-1 [14]. The converse relationship has also been found: genetic deletion of Kv4.2 K+ channels was associated with an increased tonic inhibitory current in hippocampal pyramidal neurons [30]. In both of these examples, the loss of one inhibitory current was offset by an increase in another inhibitory current to maintain normal neuronal excitability. We showed that the genetic deletion of the α5GABAA receptors that generate tonic outward currents in hippocampal neurons [9] was associated with a decrease in Ih, which provides a tonic inward current. As such, the normal relative levels of outward and inward current could be maintained, as reflected in the lack of difference in resting membrane potential between WT and Gabra5−/− neurons.
It is notable that in previous studies, an upregulation of α5GABAA receptors was not observed in hippocampal pyramidal neurons of HCN1−/− mice [15]. The expression of α5GABAA receptors in the hippocampus is among the highest in the mammalian brain [31]. This high basal level of expression may reduce or eliminate the capacity for further upregulation of these receptors [15]. Alternatively, HCN1 channels and α5GABAA receptors may serve different functional roles in hippocampal pyramidal neurons and may be homeostatically co-regulated in a manner different from that observed in cortical neurons. The cortex and hippocampus are distinct neuronal environments that may exert unique homeostatic pressures, such that either the resting membrane potential or EPSP summation is preferentially preserved through compensatory mechanisms [15]. Thus, the mechanisms of compensation may be diverse and likely vary depending on the primary contribution of the ionic currents to neuronal function and on the prevailing activity patterns of the neurons [32,33]. Finally, tonic inhibitory currents are subject to regulation by endogenous hormones, such as neuroactive steroids and insulin [34,35]. It would be of interest to ascertain whether the endogenous regulation of tonic inhibition also induces changes in Ih.
Lastly, HCN1 channels expressed in hippocampal CA1 pyramidal neurons play an important role in the regulation of hippocampus-dependent memory [19]. Specifically, deletion of HCN1 in forebrain neurons enhanced short- and long-term memory in mice [19]. Similarly, Gabra5−/− mice display better hippocampus-dependent memory performance [13,23]. Thus, it is possible that reduced Ih contributes to the enhanced memory performance of Gabra5−/− mice. Additionally, Gabra5−/− mice exhibit a reduced sensitivity to memory impairment by etomidate, which potently enhances the activity of α5GABAA receptors [36,37]. HCN channels are similarly inhibited by anesthetics, including propofol and isoflurane [18], and a reduction of Ih may also contribute to the reduced sensitivity of Gabra5−/− mice to the amnestic effects of anesthetics. Overall, the results of this study suggest a co-regulation of the α5GABAA receptors that generate a tonic GABAergic conductance and the HCN1 channels that generate Ih in hippocampal pyramidal neurons. It will be of future interest to determine whether alterations in Ih contribute to the behavioural phenotype of Gabra5−/− mice.
"year": 2013,
"sha1": "7b7e7830ffbcc9742b515e863f35d1181a04343c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0058679&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7b7e7830ffbcc9742b515e863f35d1181a04343c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
What should we report? Lessons learnt from the development and implementation of serious adverse event reporting procedures in non-pharmacological trials in palliative care
Serious adverse event reporting guidelines have largely been developed for pharmaceutical trials. There is evidence that serious adverse events, such as psychological distress, can also occur in non-pharmaceutical trials. Managing serious adverse event reporting and monitoring in palliative care non-pharmaceutical trials can be particularly challenging, because patients living with advanced malignant or non-malignant disease have a high risk of hospitalisation and/or death as a result of progression of their disease rather than the trial intervention or procedures. This paper presents a number of recommendations for managing serious adverse event reporting that are drawn from two palliative care non-pharmacological trials. The recommendations were developed iteratively across the exemplar trials. This included examining national and international safety reporting guidance, reviewing serious adverse event reporting procedures from other pharmacological and non-pharmacological trials, reviewing the literature, and collaboration between the ACTION study team and its Data Safety Monitoring Committee. These two groups included expertise in oncology, palliative care, statistics and medical ethics, and their collaboration led to the development of the serious adverse event reporting procedures. The recommendations include: allowing adequate time at the study planning stage to develop serious adverse event reporting procedures, especially in multi-national studies or research-naïve settings; reviewing the level of trial oversight required; defining what a serious adverse event is in your trial based on your study population; developing and implementing standard operating procedures and training; refining the reporting procedures during the trial if necessary; and publishing serious adverse events in findings papers. There is a need for researchers to share their experiences of managing this challenging aspect of trial conduct. This will ensure that the processes for managing serious adverse event reporting are continually refined and improved, thereby optimising patient safety. ACTION trial registration number: ISRCTN63110516 (date of registration 03/10/2014). Namaste trial registration number: ISRCTN14948133 (date of registration 04/10/2017).
Background
More research is needed in palliative care to improve the evidence base that underpins clinical practice [1], especially as the need for palliative care is predicted to increase substantially by 2060 [2]. There is a commensurate need to increase the number of high quality trials in palliative care as they are an optimal design for testing the effectiveness of treatments and therapeutic interventions [3,4]. Many interventions and treatments commonly used in palliative care have little supporting trial evidence [5]. Clinical trials, as well as testing effectiveness, also need to assess whether the novel treatment or intervention is in fact safe [6].
Safety reporting procedures aim to capture any adverse events that arise during a trial [7]. Trial protocols should contain details of how adverse events are to be identified, collected, assessed, reported and managed [8]. Findings papers should also report the adverse events that occurred during a trial [7,9,10]. There are internationally agreed definitions and reporting procedures for pharmacological trials [11]. In a clinical trial, an adverse event is any untoward medical occurrence experienced by a trial participant, which is not necessarily related to the intervention [12]. The adverse event is classified as serious when, at any dose, it results in death, is life-threatening, requires inpatient hospitalisation or prolongation of existing hospitalisation, results in persistent or significant disability/incapacity, or is a congenital anomaly/birth defect [12].
Monitoring of adverse events during a trial is key to ensuring patient safety, but structures and processes, including nomenclature, can vary depending on the funder, trial type and jurisdiction [13]. Generally, an internal study team or group is responsible for the day-to-day running of the trial, while a Trial Steering Committee, made up of largely independent members including patient representation, provides additional scrutiny [13]. An independent Data Safety Monitoring Committee may also be set up, more commonly in pharmaceutical trials, to monitor unblinded safety and efficacy data and, if required, recommend that the trial be stopped to safeguard the interests of participants [13,14]. Ethical approval processes can vary internationally [15], but a research ethics committee's role is to review the potential risks of a study [11]. Requirements for reporting adverse events to research ethics committees can vary between nations [16], but international guidance recommends that suspected unexpected serious adverse reactions related to the intervention (SUSARs) be promptly reported [11].
There is evidence that serious adverse events, such as psychological distress, can occur in non-pharmacological trials [17]. This paper focuses on serious adverse event reporting in palliative care non-pharmaceutical trials, as there is a lack of guidance for researchers. This is also an issue outside palliative care: one review of psychological trials highlighted an over-reliance on the definition used in pharmacological trials and found that researchers did not identify which serious adverse events were likely to arise from a specific intervention in a particular population [18].
The definition of a palliative care population can vary [19,20] but in this paper a palliative care trial focuses on those patients living with advanced malignant or non-malignant disease and their family carers. This group of patients are viewed as vulnerable as they have complex physical, psychosocial and spiritual needs and can have a limited life expectancy [21]. They are cared for in diverse clinical settings and receive care from specialist and/or generalist palliative care professionals. Non-pharmacological palliative care interventions are heterogeneous. Typically, they are complex interventions that reflect a holistic and multi-disciplinary approach to care [22] with quality of life and/or symptom control being the primary outcome [22][23][24] rather than survival or disease response [25]. Interventions may be taken from other patient populations and applied to those living with advanced disease [26] or developed specifically to meet the needs of this patient group [27]. The characteristics of a non-pharmacological palliative care trial make implementing serious adverse event reporting procedures challenging.
The challenges of applying the standard serious adverse event definitions and reporting procedures were considered in two recent palliative care non-pharmacological trials. The ACTION study was a cluster randomised controlled trial assessing the effects of an advance care planning programme on the quality of life of patients with advanced lung or colorectal cancer. The trial took place in six European countries and recruited 1117 participants in the hospital setting [28]. The Namaste Care study was a feasibility cluster randomised controlled trial. The trial took place in nursing homes in the UK and focused on the psychosocial Namaste Care intervention for residents living with advanced dementia [29]. Research Ethics Committee approval was obtained in all six countries taking part in the ACTION study (NRES Committee North West - Liverpool East, 14/NW/1189) and in the UK for the Namaste study (Wales Research Ethics Committee 5 Bangor, 17/WA/0378). Written informed consent was obtained for all participants, with consent being provided by a proxy in the Namaste trial as residents lacked capacity.
Given the health status of participants in the ACTION trial and the Namaste trial, there was a relatively high risk of death and/or hospitalisation for participants during the studies. These events were not anticipated to be related to receipt of the intervention or to the trial procedures, and similar issues have been raised in critical care trials [30]. A challenge in both studies was how to ensure that serious adverse events related to the trial intervention or procedures were recorded while avoiding unnecessary and burdensome reporting processes for both study coordinating centre staff and clinicians. There was a risk that reporting all serious adverse events would result in those potentially related to the intervention being missed [31]. There was also a risk that clinical staff would not report serious adverse events because the studies were not pharmaceutical trials.
This paper outlines a number of recommendations (see Table 1) drawn from the lessons learnt in these two exemplar trials. The recommendations may be useful for others who are developing and implementing serious adverse event reporting procedures in palliative care non-pharmaceutical trials.
Methods
A number of strategies were used to develop the serious adverse event reporting procedures for the ACTION trial. Initially, the procedures of other pharmacological and non-pharmacological trials were reviewed for guidance, in addition to the national and international guidance available on serious adverse event reporting in clinical trials [7,8,10,11,32]. This formed the basis of the serious adverse event form used in the study. A Data Safety Monitoring Committee was set up, as this was a requirement in the UK, an approach then approved by all trial consortium members. The Data Safety Monitoring Committee recommended a proactive, rigorous approach to the monitoring of serious adverse events during the trial (see Table 3 for further details). Development of the serious adverse event reporting procedures was a collaborative process between the ACTION trial consortium and the trial's Data Safety Monitoring Committee. Both groups comprised a diverse group of clinical and academic professionals from across Europe, with expertise in oncology, palliative care clinical practice and research (including trials), statistics, and medical ethics. This collaboration led to the definition of a serious adverse event in this study (see Table 2).
During the trial, a review of the literature was carried out to explore how the serious adverse event reporting procedures of the ACTION study compared with those of other trials of palliative care psychological interventions (see Additional file 1). The review highlighted a lack of evidence on how serious adverse events should be monitored in these types of studies. How the study teams planned to manage psychological distress and deal with concerns raised by questionnaire responses was sometimes reported in the published trial protocols. There was also a lack of reporting of serious adverse events in the final reports of the included studies, which could suggest that no serious adverse events had occurred, that they were not recognised or recorded, or that they were recorded but not reported [18].
The recommendations outlined below were iteratively developed from the learning across both trials.
The recommendations
Factoring in adequate time at the study planning stage

Experience from both trials highlighted the need to factor in adequate time at the study planning stage to develop serious adverse event reporting procedures that reflect the study population and the intervention being tested and that align with international, national and local procedures. The Namaste trial also required additional time, as the nursing home sites had not taken part in a previous trial and, for some of the homes, this was their first experience of research.
Defining what a serious adverse event is in your trial
The importance of defining what a serious adverse event is in your trial based on your study population was identified. This definition should take account of the population's health status, the expected risks and the type of events that should be reported. How this process was operationalised in the two trials is described in Table 2. In the Namaste trial, patient and public involvement representatives provided advice on the wording of participant information [34] and questionnaires to try to reduce the risk of distress.

Table 1. Recommendations for managing serious adverse event reporting procedures in palliative care non-pharmacological trials
• Factor in adequate time at the study planning stage to develop serious adverse event reporting procedures, especially in a multinational study or for research-naïve settings such as a nursing home.
• Review the level of trial oversight required (see Fig. 1).
• Define what a serious adverse event is in your trial, based on your study population, including their health state, the expected risks and the type of events that should be reported.
• Develop documentation to support serious adverse event reporting.
• Implement serious adverse event reporting procedures.
• Monitor serious adverse events during the trial.
• Refine the reporting procedures during the trial if necessary.
• Report the serious adverse events that occur during the trial in the final report papers.
Documentation to support serious adverse event reporting
Serious adverse event standard operating procedures and reporting forms were developed for both trials. The Clinical Trial Unit that was managing the Namaste trial data had limited experience of supporting non-pharmaceutical trials. Their standard reporting procedures had to be adapted to fit the trial design and clinical setting, which added time to the study set-up process. In the ACTION trial, a form for documenting routine hospital admissions was produced that asked for the reason for and length of admission. In both trials, a form was created to document all deaths, including the date and cause of death; in the ACTION trial, the place of death was also documented.
Implementation of serious adverse event reporting procedures
In the ACTION trial, oncologists and research nurses were experienced in pharmacological trial serious adverse event reporting procedures but less so in non-pharmacological studies. Informal training was provided at the start of the study, and support was available throughout the trial and whenever a serious adverse event was suspected. In the Namaste trial, nursing home staff were, unsurprisingly, largely research-naïve, so a research manual was developed to explain the reporting procedures to non-research staff. Formal research training was provided at the start of the study, and support was available throughout the trial and whenever a serious adverse event was suspected.
Monitoring of serious adverse events during the trial
Multiple strategies were used to monitor serious adverse events in both trials, and reporting procedures were refined during the trial as necessary (see Table 3). As recommended by the CONSORT guidelines [7], both passive and active surveillance strategies were used. Passive surveillance involved the recording of serious adverse events spontaneously reported by patients, their proxies or health care professionals. In the ACTION trial, active surveillance involved the review, by the Data Safety Monitoring Committee, of the total number of patients screened for eligibility, eligible, asked for consent, and included, plus response rates per study arm and per tumour type, the primary outcome measure, and hospital admission and death data in both arms of the trial. In the Namaste trial, a review of baseline questionnaires highlighted the need to monitor patient pain scores, and guidance for raising concerns with the nursing home manager was developed.
Table 2. Defining what a serious adverse event is in your trial

What is the study population?
ACTION trial: Advanced colorectal or lung cancer patients with an approximate 50% one-year survival rate. It would not be unexpected that patients may die or be admitted to hospital while taking part in the trial.
Namaste trial: Nursing home residents living with advanced dementia (FAST score 6 or 7). In a previous study evaluating the Namaste Care programme, early deaths (< 2 months) were not uncommon in the advanced dementia population [33].

What are the expected risks?
ACTION trial: Patient and/or carer distress due to the intervention and/or completion of questionnaires. The risks were expected to be limited in those countries where advance care planning conversations are considered to be part of routine care, and mostly validated questionnaires were being used in the study.
Namaste trial: The anticipated risks for residents of taking part were viewed as low, as the core elements of the programme are sensory activities that involve music, massage, colour, taste and scents. These core elements are viewed as best practice in dementia and end of life care. A potential risk identified was a skin reaction to Namaste Care activities, e.g. massage oils or the actigraphy watch that was being used for data collection, with anaphylaxis being viewed as a potential serious adverse event. Nursing home staff completed proxy questionnaires on behalf of the residents taking part in the study, as the residents lacked capacity.

What events should be reported?
ACTION trial: 'We ask you to complete this form for every event in the study that takes a course that is significantly more unfavourable to study participants than foreseen in the normal course of the illness.' All hospital stays of at least one night and all deaths in both arms of the trial were included in reports for the Data Safety Monitoring Committee.
Namaste trial: Only deaths, hospitalisations, and life-threatening or medically significant/important events related to the intervention or data collection procedures were to be reported as serious adverse events.

Reporting of the serious adverse events that occur during the trial

The serious adverse events that occurred were reported in the final report papers. In the ACTION trial, three serious adverse events related to the intervention were reported; one patient became distressed after reading the study information materials and two after having participated in the advance care planning conversations. They were resolved through conversations with the patients [35]. In the Namaste trial, there were no serious adverse events reported, but one adverse event arose from use of the actigraph device used for data collection. Bruising was observed on one individual, with no lasting effect [36].
Discussion
The need to improve the quality of reporting of serious adverse events in trials has been recognised [7,9], but there is a lack of practical guidance on how to manage this process, particularly in palliative care non-pharmacological trials. This may be because published trial protocols and results papers have limited space to document these processes and/or because the processes are challenging to implement given the characteristics of a palliative care trial. This paper addresses this issue by presenting a number of recommendations based on the lessons learnt from managing serious adverse event reporting procedures in two non-pharmacological trials in palliative care. When designing a palliative care non-pharmaceutical trial, the possibility that serious adverse events may occur should not be dismissed and should be actively considered, including 'worst case scenarios'. In pharmaceutical trials, the potential for serious adverse events to occur is evaluated across four phases of trial development. Phase I trials, historically referred to as 'toxicity trials', test a new drug in a small number of participants to identify the dose range and the drug's safety profile [16,37]. Phase II trials evaluate safety in a larger group of participants and set the dosage schedule for further phases. Phase III trials are usually double-blind randomised controlled trials involving more participants, and they assess efficacy and serious adverse events between intervention and control arms. Phase IV studies are post-marketing studies and evaluate serious adverse events related to longer-term use [16,38]. The four stages of the Medical Research Council framework for developing complex interventions reflect the phases of drug development [39,40]. As discussed previously, palliative care non-pharmaceutical trials typically involve complex interventions, and the potential for serious adverse events to occur should be explicitly explored earlier in their development and conduct. For example, in the feasibility/piloting stage, one of the trial's objectives should be to determine the type and consequences of any serious adverse events related to the intervention or study procedures prior to a definitive trial [41]. Reviews of feasibility/pilot studies, however, show that this is not always the case [42,43].
This paper also contributes to the discussion regarding trial safety oversight in the context of palliative care non-pharmaceutical trials. Setting up a Data Safety Monitoring Committee or Trial Steering Committee with appropriate expertise can be time-consuming, an issue also raised in the general trial literature [44]. This can be more challenging for international studies, where a number of different local regulatory requirements may have to be incorporated into the process. The criteria for determining the need for a Data Safety Monitoring Committee are not well defined, even in pharmaceutical trials [44]. Research ethics committees, as in pharmaceutical trials, should review whether potential serious adverse events have been considered and how they are going to be monitored in these types of studies [11].
The MORECare recommendations for evaluating complex interventions in end of life care do not cover serious adverse event reporting or how safety should be monitored in this context, including the role of ethics committees and other monitoring committees [5]. This is an area of palliative care trial methodology that requires further research. In this context, a risk assessment matrix may help researchers determine the type of oversight committee required for their trial (see Fig. 1), although this approach itself needs further evaluation. In the palliative care context, the risks associated with introducing the trial may also need to be considered, as these will depend on the patient's level of awareness and the communication skills of the recruiter [45].
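As a purely illustrative sketch of how such a matrix might be encoded, the snippet below maps likelihood and severity categories to an oversight arrangement; Fig. 1 is not reproduced here, so the categories, thresholds and committee labels are hypothetical rather than those proposed in the paper:

```python
# Hypothetical risk-assessment lookup: (likelihood, severity) -> suggested oversight.
# The categories and mapping are illustrative only and are not taken from Fig. 1.
LEVELS = ("low", "medium", "high")

def suggested_oversight(likelihood: str, severity: str) -> str:
    score = LEVELS.index(likelihood) + LEVELS.index(severity)
    if score >= 3:
        return "independent Data Safety Monitoring Committee"
    if score >= 1:
        return "Trial Steering Committee with safety remit"
    return "internal study team monitoring"

print(suggested_oversight("low", "high"))  # -> Trial Steering Committee with safety remit
```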
Conclusions
There may be a greater level of risk associated with pharmaceutical trials but, as our experience has highlighted, non-pharmaceutical trials are not, as is sometimes assumed, risk free. There is a need for those involved in non-pharmaceutical trials to share their experiences of managing this challenging aspect of trial conduct. This will ensure that the procedures for managing serious adverse events are continually refined and improved, thereby optimising patient safety, with further research warranted.
"year": 2021,
"sha1": "e43c6c56125aab43ba6ce85928a0fc6a68694ea0",
"oa_license": "CCBY",
"oa_url": "https://bmcpalliatcare.biomedcentral.com/track/pdf/10.1186/s12904-021-00714-5",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e43c6c56125aab43ba6ce85928a0fc6a68694ea0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Role of Intestinal Bacteria in Gliadin-Induced Changes in Intestinal Mucosa: Study in Germ-Free Rats
Background and Aims: Celiac disease (CD) is a chronic inflammatory disorder of the small intestine that is induced by dietary wheat gluten proteins (gliadins) in genetically predisposed individuals. The overgrowth of potentially pathogenic bacteria and infections has been suggested to contribute to CD pathogenesis. We aimed to study the effects of gliadin and various intestinal bacterial strains on mucosal barrier integrity, gliadin translocation, and cytokine production.

Methodology/Principal Findings: Changes in gut mucosa were assessed in the intestinal loops of inbred Wistar-AVN rats that were reared under germ-free conditions in the presence of various intestinal bacteria (enterobacteria and bifidobacteria isolated from CD patients and healthy children, respectively) and CD-triggering agents (gliadin and IFN-γ) by histology, scanning electron microscopy, immunofluorescence, and a rat cytokine antibody array. Adhesion of the bacterial strains to the IEC-6 rat cell line was evaluated in vitro. Gliadin fragments alone or together with the proinflammatory cytokine interferon (IFN)-γ significantly decreased the number of goblet cells in the small intestine; this effect was more pronounced in the presence of Escherichia coli CBL2 and Shigella CBD8. Shigella CBD8 and IFN-γ induced the highest mucin secretion and the greatest impairment in tight junctions and, consequently, translocation of gliadin fragments into the lamina propria. Shigella CBD8 and E. coli CBL2 strongly adhered to IEC-6 epithelial cells. The number of goblet cells in the small intestine increased when Bifidobacterium bifidum IATA-ES2 was incubated simultaneously with gliadin, IFN-γ and enterobacteria. B. bifidum IATA-ES2 also enhanced the production of chemotactic factors and inhibitors of metalloproteinases, which can contribute to gut mucosal protection.

Conclusions: Our results suggest that the composition of the intestinal microbiota affects the permeability of the intestinal mucosa and, consequently, could be involved in the early stages of CD pathogenesis.
Introduction
Mucosal surfaces of the gastrointestinal tract are continuously exposed to environmental stimuli. The intestinal epithelium constitutes the largest and most important barrier against external environmental agents and has two critical functions: to prevent the entry of harmful intraluminal microorganisms, antigens, and toxins and to enable the selective translocation of dietary nutrients and electrolytes into circulation.
One of the basic properties of gut-associated lymphoid tissue (GALT) is oral tolerance (unresponsiveness) to harmless components of microbiota and diet. Inappropriate immunological reactions against food proteins, such as wheat components, can lead to the breakdown of oral tolerance and the development of intestinal immune disorders.
Celiac disease (CD) is a chronic immune-mediated enteropathy of the small intestine that is triggered by dietary wheat gluten, or related rye and barley proteins, in genetically susceptible individuals. More than 90% of patients carry HLA-DQ2/8 antigens. The expression of these high-risk haplotypes in the general population, however, is 20% to 30%, and only 3% to 5% of carriers develop CD. The involvement of genes for the cytokines interleukin (IL)-21 and IL-2 in CD pathogenesis has been reported recently [1][2][3][4][5]. The ingestion of gluten is the key environmental cause linked to the symptoms of CD, but infections and the composition of the intestinal microbiota might also play a role in CD pathogenesis [6][7][8][9][10]. Gluten proteins are partially hydrolyzed by peptidases in the gastrointestinal tract, so the gluten (gliadin)-derived peptides can cross the epithelium and be converted by tissue transglutaminase (TG) 2 into negatively charged peptides that have higher affinity for HLA-DQ2 and HLA-DQ8 molecules. Gliadin peptides are presented by dendritic cells (DC) to CD4+ α/β T lymphocytes in the jejunum. Activated gliadin-specific T cells up-regulate type 1 and 2 cytokines that activate other cell types. The substantial increase in interferon (IFN)-γ promotes a proinflammatory environment and the activation of tissue enzymes, including metalloproteinases and TG2, which are involved in CD pathogenesis [11][12][13][14][15][16].
The outermost barrier of the gut mucosa is formed by a single layer of epithelial cells covered by mucus, a thick, viscous and relatively impermeable gel layer produced by goblet cells. This mucus layer prevents direct contact between enteric pathogens and epithelial cell surfaces, contains binding sites for the resident microbiota, and maintains high concentrations of secretory IgA to prevent pathogens from attaching and entering. Moreover, Paneth cells producing various antimicrobial peptides or lysozymes strengthen the first line of defense against harmful agents [17][18][19].
The integrity and function of the intestinal epithelium depend on a protein network that joins epithelial cells and consists of transmembrane complexes: tight junctions (TJs), adherens junctions, and desmosomes. TJs are present in the most apical regions, where they selectively regulate the paracellular passage of ions and solutes and prevent the translocation of luminal antigens, microorganisms, and their toxins. TJs are formed by integral membrane proteins, primarily occludins and claudins. Claudins, a family of at least 24 proteins, are expressed in specific tissues; claudins 1-5 are expressed in the intestine. Occludins and claudins contain a binding domain for a complex of proteins, the zonula occludens (ZO-1, ZO-2, and ZO-3), which is linked to the actin cytoskeleton and signaling proteins. Increased permeability of the epithelial barrier has been proposed to predispose to intestinal inflammation and gastrointestinal diseases, including CD. Gluten and its component gliadin were shown to alter the expression of TJ proteins and TJ-associated ZO-1 and to stimulate the production of zonulin [20][21][22][23].
Recently, the potential role of the microbiota in CD pathogenesis has attracted attention. The indigenous commensal microbiota is involved in resistance to infection not only through its direct interaction with pathogenic bacteria but also through its influence on the host immune system. The microbiota of CD patients shows a different composition in feces and duodenal biopsy specimens compared with healthy controls, characterized by a preferential increase in the proportions of Bacteroides and of E. coli strains carrying virulence genes and by a reduction in Bifidobacterium proportions [6][7][8][9][10][24][25][26].
In this study, we examined the effect of gliadin and the proinflammatory cytokine IFN-γ on the intestinal barrier in rat intestinal loops in the presence of potentially pathogenic enteric bacteria isolated from CD patients or a Bifidobacterium strain isolated from healthy controls. The effects of these stimuli on the mucosal barrier (TJs), its architecture, the number of goblet cells, bacterial adhesion, gliadin translocation, and cytokine secretion were compared.
Ethics Statement
All animal experiments were approved by the Laboratory Animal Care and Use Committee of the Institute of Microbiology v.v.i., Academy of Sciences of the Czech Republic, approval ID: 244/2009.
Gliadin fragments
Peptic fragments of gliadin (Sigma, St Louis, MO) were prepared on a pepsin agarose gel (ICN Biomedicals, Ohio) as described [27,28]. Protein concentrations were measured by bicinchoninic acid assay (BCA Protein Assay, Pierce, Rockford, IL). All reagents were tested by the E-toxate test for lipopolysaccharide (LPS) (Sigma, St. Louis, MO) and were below the limit of detection (2 pg/ml).
Bacterial strains and culture conditions
The following strains were used: Bifidobacterium bifidum IATA-ES2 (CECT 7365), Shigella CBD8, and Escherichia coli CBL2. B. bifidum IATA-ES2 was isolated from the feces of healthy babies and identified as described previously [29,30]. E. coli CBL2 and Shigella CBD8 were isolated from celiac patients and identified as described by Sanchez et al. [26]. The Shigella strain was included to exemplify the possible effects of an actual intestinal pathogen in this disease context.
Bifidobacteria were grown routinely in de Man, Rogosa, and Sharpe (MRS) broth (Scharlau Chemie SA, Barcelona, Spain) with 0.05% (w/v) cysteine and incubated at 37 °C under anaerobic conditions (AnaeroGen; Oxoid, Basingstoke, UK) for 22 h. Enterobacteria were grown routinely on Violet Red Bile Dextrose (VRBD) agar (Scharlau Chemie SA, Barcelona, Spain) at 37 °C for 24 h under aerobic conditions. Cells were harvested by centrifugation (6000 × g for 15 min) at the stationary growth phase, washed 2 times with PBS, and resuspended in PBS that contained 20% glycerol. Aliquots of these suspensions were frozen in liquid nitrogen and stored at −80 °C until use. The number of live cells after storage was determined as colony-forming units (CFUs) on MRS-C, Schaedler, or VRBD agar after 48 h of incubation under optimal conditions. For all strains, more than 90% of cells were alive on thawing, and no significant differences were observed during storage (4 months). One fresh aliquot was thawed for each new experiment to avoid variability in live bacterial cell numbers between experiments.
Bacterial adhesion assay
Rat epithelial cells (IEC-6 line) were grown in 24-well plates in DMEM to confluence; the monolayers were washed twice with PBS, and 250 µl of labeled bacterial cell suspension (at an absorbance of 0.5 at 600 nm, corresponding to approximately 10⁶ CFU/ml) was added to each well.
Bacterial staining was performed with 10 mM 5-CFDA (5-carboxyfluorescein diacetate) (Sigma, St. Louis, MO) as described by Izquierdo et al. [29]. Briefly, labeled bacterial suspensions were added to IEC-6 cultures at an A600 of 0.50. The epithelial cells and labeled bacteria were incubated together at 37 °C for 1 h. IEC-6 cells were washed 2 times with PBS to remove nonadherent bacteria, and adherent cells were lysed in 200 µl of 1% SDS (Sigma, St. Louis, MO) in 0.1 M NaOH at 37 °C for 1 h [29].
Supernatants were collected in Costar black round-bottom 96-well plates (Corning Inc., Corning, NY, USA), and the fluorescence was measured on a microplate fluorometer (Fluoroskan Ascent, Labsystem, Oy, Finland) with excitation and emission wavelengths of 485 nm and 538 nm, respectively. Adhesion was expressed as the percentage of fluorescence that was recovered from adherent bacteria, relative to the initial fluorescence of the bacterial suspension per well.
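The adhesion measure reduces to a ratio of recovered to initial fluorescence; a minimal sketch with hypothetical readings (the blank subtraction shown is an assumption, as the exact background correction is not detailed above):

```python
def adhesion_percent(adherent_fluorescence, initial_fluorescence, blank=0.0):
    """Percentage of the initial bacterial fluorescence recovered from
    adherent (SDS/NaOH-lysed) cells in the same well; 'blank' is an assumed
    background reading from wells without bacteria."""
    return 100.0 * (adherent_fluorescence - blank) / (initial_fluorescence - blank)

# Illustrative values only, not data from the study
print(f"{adhesion_percent(1450.0, 21000.0, blank=120.0):.1f}% adhesion")  # ~6.4%
```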
Rat intestinal loops
The bacterial strains and gluten were tested in ligated ileal loops of GF rats. Two-month-old GF inbred AVN rats (approximately 200 grams) were deprived of food for the 24 h before surgery (with free access to water). The rats were premedicated intramuscularly with 1 ml of a mixture of ketamine (10 mg/ml) and xylazine (2 mg/ml).
The three ligated loops (each approximately 2 cm long) were created with nylon ligatures in the jejunum and proximal ileum, beginning approximately 3 cm from the ileocecal junction. Each loop was followed by a short intervening segment (2 cm) that was not inoculated [32]. Five hundred microliters of inoculum, containing 10⁶ CFU of bacteria alone or together with gliadin (250 µg) and/or IFN-γ (250 U, AbD Serotec), was injected into the intestinal loops. After inoculation, the jejunum was returned to the abdomen, and the laparotomy incision was closed. After 8-9 h, the rats were euthanized by severing of the carotid artery. Tissue samples and contents of the loops were collected for further analysis.
Immunohistology
Tissue from the loop was fixed immediately in 10% neutral buffered formalin or Carnoy's solution. The fixed tissues were cut and processed using routine methods. Paraffin sections (5 µm) were deparaffinized in xylene, rehydrated through an ethanol gradient to water, and stained with periodic acid-Schiff (PAS) to evaluate mucin-secreting goblet cells. The villi (10-15 per section) were examined by light microscopy to determine the number of PAS-positive goblet cells per 100 enterocytes in the intestinal tissue, expressed as medians and quartiles from 5-10 independent measurements.
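The quantification step can be sketched as follows; the counts below are hypothetical, and the summary mirrors the medians and quartiles reported in the study:

```python
import numpy as np

# Hypothetical paired counts per field: (PAS-positive goblet cells, enterocytes)
fields = [(18, 210), (22, 240), (15, 190), (20, 230), (17, 205), (21, 220)]
per_100 = np.array([100.0 * goblet / entero for goblet, entero in fields])

q1, median, q3 = np.percentile(per_100, [25, 50, 75])
print(f"Goblet cells per 100 enterocytes: median {median:.1f} (IQR {q1:.1f}-{q3:.1f})")
```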
Gliadin was detected in the intestinal loops by immunolocalization. Briefly, snap-frozen intestinal loop samples, embedded in OCT (Tissue-Tek, Sakura Finetek, Torrance, CA, USA), were cryosectioned at 6 µm, air-dried, fixed for 5 min in acetone, and stored at −20 °C. The sections were washed, and endogenous peroxidase was blocked with 1% H2O2. The sections were then incubated with peroxidase-labeled monoclonal anti-gliadin antibodies (Elisa Development Prague, Czech Republic) overnight at 4 °C, washed, and incubated with the Tyramide Signal Amplification (TSA Plus) Fluorescence system (PerkinElmer, USA) for 30 min. The samples were counterstained with Evans blue and Hoechst to visualise tissue cells and nuclei. The sections were then mounted in Vectashield mounting medium (Vector Laboratories, UK). All specimens were examined using an Olympus FV 1000 SIM confocal microscope.
Control sections were treated similarly, except that they were incubated with secondary antibodies only. Images of the specimens were viewed under an Olympus BX 40 microscope that was equipped with an Olympus DP 70 digital camera.
Western blot of tissue lysates
Intestinal tissue from the loops was homogenized on ice in protein extraction buffer (Pierce, Rockford, IL) with a protease inhibitor cocktail (Pierce) for 10 min and sonicated. Samples were centrifuged at 10,000 rpm for 10 min at 4 °C and stored at −80 °C until use. Protein concentrations were measured using the BCA Protein Assay Kit (Pierce).
Rat cytokine array
The cytokine spectra in the rat intestinal loop washes were measured using the semiquantitative RayBio™ Rat Cytokine Antibody Array 1 (RayBiotech, Norcross, GA, USA), which detects 19 growth factors, cytokines, and chemokines, following the manufacturer's recommendations. The signal intensity was measured on an LAS-1000 luminescence detector (Fujifilm), and the resulting images were analyzed using AIDA software (version 3.28; Raytest) to quantify spot densities. The background staining was subtracted, and the data were normalized as described [33].
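The densitometry preprocessing (background subtraction followed by normalization) can be sketched as below; the study normalized as described in reference [33], so scaling to the membrane's positive-control spots is assumed here purely for illustration:

```python
import numpy as np

def normalize_array(spot_densities, background, positive_controls):
    """Subtract a background reading from each spot and scale the membrane so
    that its positive-control spots average to 1.0, making membranes
    comparable. Normalizing to positive controls is one common convention
    for antibody arrays and is assumed here for illustration."""
    corrected = np.asarray(spot_densities) - background
    scale = np.mean(np.asarray(positive_controls) - background)
    return corrected / scale

# Illustrative densities for a few of the 19 analytes on one membrane
print(normalize_array([310.0, 95.0, 540.0], background=60.0,
                      positive_controls=[980.0, 1010.0]))
```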
Statistical analysis
Statistical analysis was performed using SPSS, version 17.0 (SPSS Inc., Chicago, IL, USA). The Levene test was run to establish the homogeneity of variances and the distribution of the data. Because of the non-normal distribution of the data and the non-homogeneity of the variances, the Mann-Whitney U-test was used to assess the effect of each variable. The data are expressed as medians and quartiles. Different letters (a-e) indicate statistically significant differences between stimuli; identical letters correspond to non-significant differences. P < 0.05 was considered statistically significant.
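An equivalent analysis can be sketched with SciPy (the study used SPSS 17.0; the group values below are hypothetical):

```python
from scipy import stats

# Hypothetical goblet-cell counts per 100 enterocytes for two stimuli
pbs     = [24.0, 22.5, 26.1, 23.4, 25.0, 21.8]
gliadin = [15.2, 17.8, 14.9, 18.1, 16.3, 13.7]

# Levene's test for homogeneity of variances guides the choice of test
_, lev_p = stats.levene(pbs, gliadin)

# Non-normal / heteroscedastic data -> Mann-Whitney U-test, as in the study
u_stat, p = stats.mannwhitneyu(pbs, gliadin, alternative="two-sided")
print(f"Levene p = {lev_p:.3f}; Mann-Whitney U = {u_stat:.1f}, p = {p:.4f}")
```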
Results

Goblet cell population in the jejunum is influenced by gliadin and intestinal bacteria
The effect of gliadin and the proinflammatory cytokine IFN-γ on epithelial cells in the presence or absence of various bacterial strains was examined in vivo using loops of small intestine that were ligated surgically in rats kept on a gluten-free diet and reared under germ-free (GF) conditions. As shown in Figure 1, the various stimuli led to changes in the number of PAS-positive goblet cells (examples A-F). To evaluate these changes, the number of PAS-positive goblet cells per 100 epithelial cells was counted (as summarized in Figure 1G). The addition of gliadin into the loops decreased the number of PAS-positive goblet cells compared with PBS used as a control. A similar effect was observed after applying IFN-γ alone or with gliadin (Figure 1B,G). The number of goblet cells after the combination of gliadin with E. coli CBL2 (Figure 1E,G) or Shigella CBD8 (Figure 1C,G) was even lower. The addition of IFN-γ to the above samples slightly increased the number of goblet cells (Figure 1D,F,G).
When gliadin was combined with B. bifidum IATA-ES2 (Figure 1A), the number of PAS-positive goblet cells increased, attaining the same value as in PBS-treated loops (Figure 1G). Moreover, the combination of B. bifidum IATA-ES2 with Shigella CBD8, gliadin, and IFN-γ increased the PAS-positive goblet cell population. The effect of B. bifidum IATA-ES2 was less evident when loops were exposed to E. coli CBL2 (Figure 1G). By scanning electron microscopy, the addition of B. bifidum IATA-ES2 did not affect mucin secretion and did not evoke any changes in intestinal loop architecture (Figure 2A). Mucin secretion was slightly higher after the addition of gliadin (data not shown). IFN-γ, however, induced mucin release (Figure 2B), and a greater effect was observed when gliadin and E. coli CBL2 were injected with IFN-γ (Figure 2D). The combination of Shigella CBD8, gliadin, and IFN-γ boosted mucin secretion into the lumen and affected the architecture of the epithelial layer, as shown in Figure 2F. Interestingly, the addition of B. bifidum IATA-ES2 to the 'harmful agents' (gliadin and IFN-γ with/without E. coli) slightly decreased mucin secretion compared with those agents alone (Figure 2C,E).
Translocation of gliadin into intestinal villi is influenced by intestinal bacteria
We determined whether intestinal epithelial layer permeability and gliadin peptide translocation were affected by bacterial strains. Using mouse anti-gliadin antibody, we monitored the transfer of gliadin peptides through the epithelial layer after exposure of the intestinal loops to IFN-γ and bacterial strains.
Gliadin, when applied with B. bifidum IATA-ES2 and IFN-γ, was observed only in low amounts inside the lamina propria, forming foci mainly in the apical section of certain villi (Figure 3A). In contrast, the combination of E. coli CBL2, gliadin, and IFN-γ induced small, local changes (crypt widening), and gliadin was detected primarily below the epithelial layer (Figure 3B). The combination of Shigella CBD8, gliadin, and IFN-γ increased gliadin translocation, and gliadin was detected primarily inside the lamina propria (Figure 3C). Furthermore, using differential interference contrast (Figure 3D-F, corresponding to the samples in the upper row), we confirmed the decrease in the number of goblet cells in loops treated with E. coli CBL2 and their loss after treatment with Shigella CBD8, documented in Figure 1.
These data are also consistent with our fluorescence microscopy results, which demonstrated the distribution of the TJ components claudin-1 and ZO-1 in intestinal loops that were treated with gliadin, IFN-γ, and/or various bacterial strains (Figure 4A-J). Gliadin alone or with IFN-γ downregulated ZO-1 expression (Figure 4A,C) compared to PBS-exposed loops (Figure 4I). On the other hand, simultaneous addition of B. bifidum IATA-ES2 with IFN-γ and gliadin upregulated ZO-1 expression (Figure 4E). When the loops were simultaneously exposed to E. coli CBL2, gliadin, and IFN-γ, ZO-1 fluorescence was reduced (Figure 4G).
In contrast, the typical pattern of claudin-1 expression at the periphery of intercellular (enterocyte) contacts was unaffected by the addition of gliadin alone or with IFN-γ, B. bifidum IATA-ES2, or E. coli CBL2 (Figure 4B,D,F,H) compared to PBS-treated loops (Figure 4J). Nevertheless, the combination of gliadin, IFN-γ, and Shigella CBD8 nearly extinguished the ZO-1 and claudin-1 signals (data not shown).
To support the fluorescence microscopy findings, intestinal tissue from the stimulated loops was extracted, and changes in TJ proteins were measured by western blot. As shown in Figure 4K, ZO-1 expression was more sensitive to the various stimuli than claudin-1. Gliadin, IFN-γ, and, particularly, their combination with E. coli CBL2 reduced ZO-1 levels in tissues. The addition of B. bifidum IATA-ES2 to this mixture increased ZO-1 levels, confirming the fluorescence microscopy data. When B. bifidum IATA-ES2 was added with gliadin, ZO-1 levels approximated those of the PBS control. When Shigella CBD8 was used, fragmentation of the TJ proteins was detected (data not shown).
Interaction of bacteria with the epithelial layer in vitro
The different effects of bacterial strains on gliadin translocation and expression might be a consequence of differences in the adhesion properties of individual bacterial strains that determine host-microbe interactions. The interaction of various bacterial strains from celiac patients or healthy subjects (which comprise potentially beneficial and pathogenic bacteria) with epithelial cells was analyzed in vitro using the IEC-6 rat cell line; the adherence of bacteria to IEC-6 cells and the impact of gliadin were measured. As shown in Figure 5, the percentage of adhered bacteria varied only slightly, and the differences between E. coli CBL2, Shigella CBD8, and B. bifidum IATA-ES2 were not statistically significant. The simultaneous addition of gliadin fragments and bacteria to cell cultures had an insignificant effect on bacterial adhesion.
Cytokine secretion into the gut lumen
Cytokine production in response to the administration of food and bacterial antigens and IFN-γ to rat intestinal loops was measured in intestinal washes by cytokine array (Figure 6A-H). The secretion of cytokines such as the chemotactic factor for monocytes and neutrophils (MCP)-1, tissue inhibitor of metalloproteinase (TIMP)-1, vascular endothelial growth factor (VEGF), and beta-nerve growth factor (β-NGF) increased.
The most abundant cytokines, MCP-1 and TIMP-1, which play a role in tissue protection, were induced by B. bifidum IATA-ES2 in a mixture of gliadin and IFN-γ. The addition of E. coli CBL2 to this mixture decreased MCP-1 and TIMP-1 release into the intestinal loops. VEGF secretion rose, particularly upon the addition of E. coli CBL2 to gliadin and IFN-γ, but was unaffected by the simultaneous addition of B. bifidum IATA-ES2 to this mixture.
The spontaneous production of β-NGF was independent of any stimulus. Further, cytokine-induced neutrophil chemoattractant (CINC)-3, IFN-γ, IL-10, IL-1α, IL-1β, IL-6, macrophage inflammatory protein (MIP)-3α, and TNF-α levels were low. Although it was difficult to determine the effect of the stimuli on low cytokine production, CINC-3 was detected only in loops that were inoculated with E. coli CBL2. In PBS-treated loops, the cytokines IL-10, IL-1α, IL-1β, and TNF-α were undetectable (as summarized in Figure 6H).
When Shigella CBD8 replaced E. coli CBL2, cytokine levels increased markedly. Nevertheless, the high background of the microarrays, reflecting the impact of Shigella CBD8 on intestinal tissue, rendered the precise evaluation of these data impossible.
Discussion
There are limited data on the effects of bacteria and their components on the intestinal barrier and the immune response to dietary proteins. In this study, we observed the effects of potentially pathogenic bacterial strains isolated from the feces of celiac patients, and of bifidobacteria, on gliadin- and IFN-γ-induced immune reactions.
Gliadin, when applied into the intestinal loops of germ-free rats with the Gram-negative bacterial strains E. coli CBL2 and Shigella CBD8, significantly reduced the number of PAS-positive goblet cells in the jejunum; the opposite effect was observed when B. bifidum IATA-ES2 was applied. The decrease caused by gliadin alone was nearly completely reversed by the addition of B. bifidum IATA-ES2. Moreover, the decline of the PAS-positive goblet cell population that was caused by gliadin and E. coli CBL2 or Shigella CBD8 was lower when they were combined with B. bifidum IATA-ES2 and/or IFN-γ. The decrease in the number of goblet cells appeared to be caused by massive mucin secretion or cell exhaustion, accompanied by changes in jejunal architecture, similar to the changes that occur in the early stages of CD [34].
The direct effect of the intestinal microbiota on the number of PAS-positive goblet cells and on the composition and secretion of mucins occurs upon colonization of GF animals, namely mice and rats. In GF rodents, goblet cells were shown to be fewer in number and smaller in size, and the mucus layer is thicker, compared with conventionally raised animals. In rats that are raised under GF conditions and inoculated with human fecal microbes (human microbiota-associated rats), the number of mucin-containing goblet cells in the small intestine is higher than in conventionally raised rats [35][36][37].
Commensal and pathogenic bacteria and bacterial LPS induce host goblet cells to produce glycosylated mucins that the bacteria can digest for the benefit of their own metabolism. An example is the monoassociation of GF mice with wild-type Bacteroides thetaiotaomicron (a gut commensal), which induces the production of fucosylated glycoconjugates, used by the bacterium as a nutrient source [38][39][40][41].
Studies have shown that dietary factors affect goblet cell numbers and modulate their secretory activity [37,42,43]. The activating property of gliadin was also demonstrated in vivo in GF rats, where repeated oral administration of gliadin to neonatal rats led to effects similar to those of colonization with SPF (specific pathogen-free) microbiota [32]. In earlier reports, increased glycoprotein synthesis in jejunal tissue was observed in untreated celiac patients [44,45].
Our finding of mucin secretion by goblet cells, as documented by scanning electron microscopy, suggests that IFN-γ-induced secretion is partially compensated by increased mucin synthesis. The markedly increased mucin secretion that is induced by enterobacteria with gliadin and IFN-γ, however, is accompanied by a decrease in the number of PAS-positive goblet cells, damage to tight junctions, and remodeling of the epithelial layer.
(Figure 5 caption: The highest percentage of adhered bacteria was observed for E. coli CBL2 and Shigella CBD8. The differences between the tested bacterial strains, as well as the effect of simultaneously added gliadin fragments, were non-significant as established by applying the Mann-Whitney U-test. Data are expressed as medians and interquartile ranges (25% to 75%) of adhesion from four independent experiments. None of the differences was found to be statistically significant (P < 0.05). The separate dot indicates an outlier. doi:10.1371/journal.pone.0016169.g005)
(Figure 4 caption, continued: Membranes were stained with anti-ZO-1 or anti-claudin-1 antibodies and re-probed with antibodies against β-actin to document the same protein concentration in all samples. doi:10.1371/journal.pone.0016169.g004)
Recently, the effect of gliadin on the epithelial layer was noted in in vitro studies using epithelial cell lines. Exposure to peptic-tryptic fragments of gluten or gliadin leads to increased permeability of Caco-2 monolayers, a human colon epithelial cell line, due to lower expression of TJ proteins [21,22,46,47]. Our experiments with rat intestinal loops confirmed, by immunofluorescence and western blot, the decreased expression of the TJ protein ZO-1 after in vivo stimulation with gliadin, IFN-γ, and/or enterobacteria from CD patients. The second protein band reacting with anti-ZO-1 antibodies in some samples, also shown by others [48][49][50], could be a consequence of partial aggregation, complex formation, or external stimuli. In addition, our results demonstrate that these adverse effects are partially reversed by B. bifidum IATA-ES2.
We noted a spectrum of cytokines in the intestinal washes after the various stimuli. Secretion of TIMP-1 (an inhibitor of metalloproteinases, enzymes of the endopeptidase family important in resorption and remodeling of the extracellular matrix) was decreased after gliadin treatment and increased after the addition of B. bifidum IATA-ES2 and IFN-γ. The effect of gliadin is consistent with the upregulation of intestinal metalloproteinases and changes in TIMPs in patients with celiac disease and dermatitis herpetiformis [51][52][53].
In a recent study, we observed that the two enterobacteria studied, E. coli CBL2 and Shigella CBD8, induced proinflammatory signals in PBMCs (peripheral blood mononuclear cells) through an intact epithelial barrier (Caco-2 cells). This property appeared to be associated with the pathogenic potential of the strains. Stimulation of Caco-2 cells with other Bifidobacterium strains did not exert similar effects, confirming that the intestinal epithelial cells provided a physical barrier, preventing overstimulation and inhibiting monocyte activation [54].
It has been suggested that the beneficial effects of bifidobacteria are related to their ability to adhere to the epithelial layer, preventing the adhesion of pathogenic bacteria. Yet, the potentially pathogenic strains that we tested have adhesion properties similar to those of B. bifidum IATA-ES2. The adhesion of pathogens to host tissues might be a potentially negative hallmark, especially adhesion to damaged tissue, which is often the first step in pathogenesis [55,56].
In conclusion, our data in GF rat intestinal loops highlight the potential for gliadin fragments and/or IFN-γ to reduce the number of PAS-positive goblet cells and increase mucin secretion, changes typical of the early stages of enteropathies in general. Interestingly, the changes induced by gluten and IFN-γ were more pronounced when these agents were combined with potentially pathogenic enterobacteria. The decrease in PAS-positive goblet cells caused by gliadin was reversed in the presence of B. bifidum IATA-ES2. Moreover, enterobacteria can contribute to the translocation of gliadin fragments into intestinal loops and to changes in ZO-1 expression. Interestingly, B. bifidum IATA-ES2 has beneficial effects on cytokine secretion into intestinal loops, upregulating chemotactic factors and inhibitors of metalloproteinases and thus contributing to gut mucosal protection. Therefore, we hypothesize that the composition of the intestinal microbiota and the presence or absence of specific bacteria could play a role in CD pathogenesis. | 2014-10-01T00:00:00.000Z | 2011-01-13T00:00:00.000 | {
"year": 2011,
"sha1": "110af1e05304c3bd0aafb5f26668b2b564669e7a",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0016169&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "110af1e05304c3bd0aafb5f26668b2b564669e7a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
216156004 | pes2o/s2orc | v3-fos-license | Atrioventricular Nodal Reentrant Tachycardia in Very Elderly Patients: A Single-center Experience
We present a series of elderly patients older than 80 years who had recurrent palpitations for decades and who were subsequently diagnosed with atrioventricular (AV) nodal reentrant tachycardia (AVNRT). Through a retrospective chart analysis, we identified 12 patients (nine females and three males) aged 88 years ± 3.7 years (range: 80–92 years) seen at our center from 2015 to 2016 for recurrent palpitations and supraventricular tachycardia (SVT) who were ultimately diagnosed with AVNRT. These patients had palpitations and had been treated for anxiety and panic attacks for decades. They underwent electrophysiology (EP) study and successful ablation of the slow pathway. The demographic data, symptoms, and EP characteristics during the EP studies of the patients were evaluated. All 12 patients experienced palpitations and all but three had documented SVT on a loop recorder or an event monitor. During EP study, all patients displayed slow-pathway conduction. Nine patients demonstrated discontinuous AV nodal conduction curves, while three showed continuous AV nodal conduction curves. The observed tachycardia cycle lengths were 496.7 ms ± 25.7 ms. Three patients had atrial fibrillation (AF), which was noted during monitoring with the implanted loop recorders. Tachycardia was induced with both burst atrial pacing and atrial extrastimuli in five patients and with extrastimuli only in two patients. In five patients, no tachycardia induction was noted, but these individuals showed evidence of dual AV node physiology. Successful elimination of residual slow-pathway conduction postablation and/or noninducibility of tachycardia in the postablation period were achieved in all patients. All patients remained symptom-free over a period of one year. The patients who had AF in addition to AVNRT also did not present any recurrent AF following AVNRT ablation but are being monitored for recurrence. AVNRT in elderly people is often confused with panic attacks; hence, reports of panic attacks in elderly people should be properly evaluated for an arrhythmic etiology.
Introduction
Atrioventricular (AV) node reentrant tachycardia (AVNRT) is the most common supraventricular tachycardia (SVT). It usually affects children and young adults. [1][2][3][4] We present a series of elderly patients who had recurrent palpitations but who had been incorrectly labeled as having panic attacks for decades. They subsequently were diagnosed with AVNRT and underwent ablation of the slow pathway. In this report, we present the clinical and electrophysiology (EP) characteristics of these patients.
Methods
In a retrospective chart analysis, we identified 12 patients aged 88 years ± 3.7 years who were diagnosed with AVNRT. These patients had palpitations and had been treated for anxiety and panic attacks for decades. They underwent long-term cardiac monitoring for better symptom-rhythm correlation, EP study, and successful ablation of the slow pathway. We collected data pertaining to the patients' demographics, symptoms, and EP characteristics during the EP studies as well as their symptoms after ablation.
Long-term event monitoring
A 30-day event monitor (MCOT™ Monitor; CardioNET, Malvern, PA, USA) was deployed in patients who continued to have palpitations. In case the event monitor did not capture any symptomatic episodes, implantable loop recorders (Reveal LINQ™; Medtronic, Minneapolis, MN, USA) were also inserted.
Electrophysiology study
Patients were selected for EP study if they had a documented episode of SVT and/or were experiencing recurrent palpitations. All patients were brought to the EP laboratory in a postabsorptive fasting state. Conscious sedation was initiated and maintained throughout the procedure. The patients were then prepared and draped in the usual sterile fashion. The right femoral site was locally anesthetized and four venous sheaths (one 6-French, two 5-French, and one SR-0; St. Jude Medical, St. Paul, MN, USA) were placed using a modified Seldinger technique with a 5-French micropuncture kit under ultrasound guidance. EP catheters were advanced under fluoroscopic guidance to the high right atrium, His bundle, right ventricle apex, and coronary sinus (CS).
Electrophysiology study protocol
Baseline intervals were measured. Atrial burst pacing was performed up to the AV block cycle length (AVBCL) or to induction of the tachycardia. Atrial extrastimuli were delivered to look for an A-H jump, the AV node effective refractory period (AVNERP), tachycardia induction, or the atrial effective refractory period. If patients were not inducible for any tachycardia, isoproterenol up to a maximum dose of 10 µg was initiated and the induction protocol was repeated.
Diagnosis of atrioventricular nodal reentrant tachycardia
Patients were diagnosed with AVNRT as the mechanism of their tachycardia if they had evidence of dual AV node physiology or an inducible AVNRT during the EP study. Dual AV node physiology was diagnosed if the patient demonstrated a 50-ms increase in the A-H interval with a 10-ms decrement in atrial extrastimuli. 4,5 Separately, if an SVT was induced, AVNRT was diagnosed using standard criteria. During tachycardia, the septal ventriculoatrial (VA) interval was less than 70 ms. Ventricular entrainment during tachycardia demonstrated a V-A-H-V response and a long difference between the postpacing interval and the tachycardia cycle length (>115 ms). If the tachycardia terminated during ventricular pacing, the transition zone criterion was used. [3][4][5][6] Some of the characteristics of typical AVNRT are shown in Figures 1 through 3.
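The numeric cutoffs above lend themselves to a compact check. The sketch below applies them to hypothetical measurements; it is a simplification for illustration only, since real diagnosis rests on interpreting the full EP tracings rather than isolated numbers.

```python
# Illustrative check of the AVNRT criteria quoted in the text: an A-H jump
# (>= 50 ms increase in A-H for a 10 ms decrement in the extrastimulus
# coupling interval), septal VA < 70 ms, and PPI - TCL > 115 ms.

def ah_jump(coupling_ms, ah_ms, decrement=10, jump=50):
    """True if any `decrement`-ms shortening of the coupling interval
    lengthens the A-H interval by >= `jump` ms (dual AV node physiology)."""
    pairs = sorted(zip(coupling_ms, ah_ms), reverse=True)
    return any(b_ah - a_ah >= jump
               for (a_cpl, a_ah), (b_cpl, b_ah) in zip(pairs, pairs[1:])
               if a_cpl - b_cpl <= decrement)

coupling = [400, 390, 380, 370, 360]  # S1-S2 coupling intervals (ms), hypothetical
ah = [120, 130, 140, 205, 215]        # measured A-H intervals (ms), hypothetical
septal_va, ppi, tcl = 45, 620, 495    # ms, hypothetical measurements

print("dual AV node physiology (A-H jump):", ah_jump(coupling, ah))   # True
print("AVNRT pattern (VA < 70 ms and PPI - TCL > 115 ms):",
      septal_va < 70 and (ppi - tcl) > 115)                            # True
```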
Ablation strategy
Slow-pathway ablation was performed by localizing the slow pathway anterior to the CS ostium. Radiofrequency (RF) ablation was conducted using a 4-mm nonirrigated-tip catheter (Biosense Webster, Diamond Bar, CA, USA). RF ablation was completed in a temperature-control mode up to a maximum of 45 W. Intermittent junctional beats were seen during ablation. If intermittent junctional beats were not observed, the catheter was moved superiorly to the midseptal level. If junctional beats were still not observed, programmed stimulation was repeated to assess for residual slow-pathway conduction or reinduction of the tachycardia. Programmed stimulation was performed both on and off isoproterenol. Patients were followed up in the arrhythmia clinic at six and 12 months for any recurrence of symptoms. 7
Results
Twelve patients (nine females and three males) were included in the analysis; their clinical characteristics are summarized in Table 1.
Symptoms
All included patients had been experiencing palpitations for decades at the time of this study. These palpitations initially lasted for a few minutes to hours. Most of these patients had previously been to the emergency room (ER) and, by the time they arrived in the ER, the palpitations had already subsided, with all of them found to be in sinus tachycardia while at the ER. All 12 patients had a history of more than five ER visits and, each time, they were found to be in sinus tachycardia. They were subsequently misdiagnosed as having panic attacks and were given antianxiety medications. They were ultimately seen in our arrhythmia clinic and underwent long-term cardiac monitoring, either using a 30-day event monitor or with insertion of a loop recorder.
Cardiac monitoring
Three patients were found during the study period to have a documented episode of a regular, narrow complex tachycardia. In nine patients, the 30-day event monitor could not capture a symptomatic episode of palpitations, and they subsequently underwent loop recorder insertion. Episodes of SVT were documented on these loop recorders in all nine patients within two to nine months of insertion. Three of these patients were also found to have atrial fibrillation (AF). These patients were started on oral anticoagulation.
Electrophysiology study results
Findings obtained during EP testing are summarized in Table 1. All patients demonstrated slow-pathway conduction. Nine patients had discontinuous AV node conduction curves with a clear jump and echo phenomenon, whereas three demonstrated continuous AV node conduction curves without A-H jump during decremental atrial pacing.
Tachycardia was induced with both burst atrial pacing and atrial extrastimuli in five patients and with extrastimuli only in two patients. In five patients, no induction of tachycardia was noted, although these individuals did show evidence of dual AV node physiology. Of the seven patients who were inducible for tachycardia, three required isoproterenol for tachycardia induction.
All 12 patients underwent slow-pathway ablation; the slow pathway was localized in the posteroseptal location just superior to the CS ostium in eight patients and in the midseptal location in four patients. Five patients had no intermittent junctional beats noted during the ablation of the slow pathway. All patients presented successful elimination of slow-pathway conduction postablation and/or noninducibility of tachycardia in the postablation period, both on and off isoproterenol, using burst pacing and atrial extrastimuli. No immediate complications were observed. Three patients had bruising noted at the site of catheter insertion one day after the procedure, but all cases resolved completely within a reasonable time frame.
Follow-up
All patients were followed in the arrhythmia clinic for a period of one year. They remained symptom-free and had no recurrence of any arrhythmia. Nine patients with insertable loop recorders had no evidence of any arrhythmia at follow-up. Three patients who had AF as noted on loop recorders had no recurrence of their AF following ablation of the slow pathway. All patients were taken off their antianxiety medications and have done well. We continued oral anticoagulation in the three patients with AF.
Discussion
SVT is a common arrhythmia and usually affects young people, with AVNRT being the most common SVT affecting young patients. [1][2][3][4] The diagnosis of such is straightforward in most patients. We report on a series of 12 patients who had a long-standing history of palpitations and were wrongly labeled as having panic attacks; AVNRT was subsequently found to be the reason for the palpitations.
Symptoms in our study population
The diagnosis of SVT and AVNRT, which is usually straightforward, can be elusive because episodes of the tachycardia may subside before the patient seeks medical help for the episode. Unfortunately, our study population had been experiencing palpitations for decades before a diagnosis was made. This could have been partly due to the nature of the arrhythmia in question, which is short-lived and may terminate before the patients are seen in the ER. A common theme in our study population was the abrupt onset of symptoms and termination before arriving at the ER.
The known female predominance in AVNRT persisted in older age as well. Almost all patients were found to have sinus tachycardia when they arrived in the ER in years past. This led to the incorrect attribution of their symptoms to either anxiety or panic attacks. Although SVT, and especially AVNRT, is usually seen in young adults, physicians need to be aware that elderly patients can experience such arrhythmias as well. The sinus tachycardia could reflect a component of anxiety secondary to the palpitations and rapid heart rates from AVNRT, persisting even after the episode had terminated, which further encouraged the labeling of these patients as having panic attacks. ER physicians need to be aware of these clinical scenarios.
Electrophysiology characteristics
There were certain interesting findings noted during EP study in our cohort. We found AVNRT with a heart rate as slow as 110 bpm. This could be due to the existence of a slower conduction system in this aged population, as was suggested by the longer AVBCL and AVNERP. Furthermore, tachycardia was not induced in five of 12 patients; however, they all demonstrated dual AV node physiology during EP study. Most of the patients required isoproterenol infusion for induction of the tachycardia, which was likely due to a slower baseline heart rate and long AVBCL. Five patients had no intermittent junctional beats noted during slow-pathway ablation, but a repeat programmed stimulation after each set of ablations failed to demonstrate any residual slow-pathway conduction or an inducible tachycardia in these patients. Previous reports have demonstrated that intermittent junctional beats, although a marker of slow-pathway ablation, are not essential for successful AVNRT ablation. 7 It is very important to repeat programmed stimulation after each series of ablations to demonstrate continued slow-pathway conduction or inducible tachycardia before attempting further ablation, especially in elderly patients with a prolonged A-H interval. This strategy is important to minimize the risk of complete heart block or a fast-pathway injury in such individuals.
Three patients had AF as noted using a loop recorder during the monitoring period. Elimination of slow-pathway conduction in these patients resulted in the resolution of their symptoms, and none of the three patients with AF showed any recurrence of AF during follow-up to one year. In these individuals, we continued monitoring as well as anticoagulation given the risk of stroke. SVT (AVNRT or AVRT) has been shown to degenerate into AF in some patients and, interestingly, the elimination of an accessory pathway or of slow-pathway conduction results in the resolution of AF in these individuals. In one study, a significant proportion of candidates who underwent AF ablation were inducible for SVT. SVT ablation showed a preventive effect on AF recurrence. The authors of this previous study suggested that these patients should be selected to undergo simpler ablation procedures tailored only toward the suppression of the triggering arrhythmia. 8
Role of long-term cardiac monitoring
Our patients had symptoms of palpitation for decades. Unfortunately, they were wrongly labeled as having anxiety and panic attacks. Prolonged monitoring with an event monitor or a loop recorder allowed for proper diagnosis and treatment in each patient. Patients with anxiety or panic attacks should be evaluated with a prolonged cardiac monitor if their symptoms do not respond to antianxiety medications. Due to the infrequent and short-lived nature of episodes of AVNRT, the diagnosis can often be challenging to make and possible only with prolonged monitoring with an implantable loop recorder. Successful ablation resulted in the complete elimination of symptoms of tachycardia and palpitations in our study group.
Of note, there is often a tendency to withhold invasive therapy in the elderly for fear of complications. RF catheter ablation for supraventricular and ventricular arrhythmias has been shown to be not only effective but safe as well. 9,10 In our study group, no major complications were noted. All patients had complete resolution of their palpitations. They were completely taken off their antianxiety medications as well.
Elderly patients should be evaluated for an arrhythmic etiology when reporting palpitations, as many may benefit from EP study and ablation. These procedures are safe and should be offered to all patients as one available management option regardless of patient age.
Limitations
There are certain important limitations in this study design and analysis that should be highlighted. This was a nonrandomized, descriptive analysis involving a small group of highly selected patients. However, we believe that certain important points were brought up in this study. All of our patients had a common story of long-standing palpitations that would terminate before arrival at the ER. All of these patients were found to have only sinus tachycardia and were thus wrongly labeled as having anxiety or panic attacks. Further, they were treated for these false panic attacks for decades; it was only after prolonged cardiac monitoring that they received a proper diagnosis. Therefore, the authors feel that this study supports the need to better evaluate patients who present with a similar clinical scenario.
Conclusion
AVNRT in the elderly is often confused with panic attacks.
Panic attacks in the elderly should be properly evaluated for an arrhythmic etiology with long-term cardiac monitoring either using an event monitor or a loop recorder, as these patients may benefit from subsequent RF ablation. | 2020-04-09T09:03:17.882Z | 2020-02-01T00:00:00.000 | {
"year": 2020,
"sha1": "ecd6d6c42e0c8e4272bd19daa218047569832300",
"oa_license": "CCBY",
"oa_url": "https://www.innovationsincrm.com/images/pdf/CRM1192_Kanjwal.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7da48c0535fa8c777ac8aa2b9cf745e293f74b8c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
85465215 | pes2o/s2orc | v3-fos-license | Mining with crush pillars
by M. du Plessis* and D.F. Malan†
This paper was first presented at the AfriRock 2017 International Symposium, 30 September–6 October 2017, Cape Town Convention Centre, Cape Town.
Crush pillars have been extensively applied on the Merensky Reef horizon since the late 1970s. Once in a crushed state, the residual strength of the pillar provides a local support function and must support the hangingwall to the height of the highest known parting. The design of crush pillars is mainly limited to specifying a width to height ratio (w:h) of approximately 2:1. It is also required that a pillar crushes close to the face, while the pillar is being formed. On many mines the crush pillar system is problematic owing to the difficulty of controlling pillar sizes. This is mainly caused by poor drilling and blasting practices. As a result, pillar crushing is not always achieved. Crush pillars are implemented at relatively shallow depth, the pillar dimensions have remained essentially unchanged over many years, and the impact of regional pillars and geological losses contributing to the regional behaviour of the rock mass is overlooked. In many cases the pillar system is the source of seismicity. In this paper, the influence of mining losses (potholes) and the use of sidings are discussed as possible contributors impacting on crush pillar behaviour. A limit equilibrium model implemented in a displacement discontinuity boundary element program is used to demonstrate crush pillar behaviour. The results are compared to the pillar behaviour at an underground investigation site, which supports the preliminary findings.
Keywords: crush pillar behaviour, limit equilibrium model, regional pillars, geological losses.
Crush pillars have been extensively used in Merensky Reef stopes. The key function of the pillar system is to prevent back-breaks (large-scale collapses in the back area of a stope) occurring as a result of hangingwall separation along parting planes or fractures. One such problematic parting is the Bastard Merensky Reef, situated between 5-45 m above the Merensky Reef. The pillar dimensions are selected such that the pillars should be fractured while being formed at the mining face. This is typically achieved when a pillar is cut at a width to height (w:h) ratio of approximately 2:1 (Ryder and Jager, 2002). Once crushed, the residual strength of the pillar provides the required support function.
Factors influencing pillar stress (i.e. mining depth, pillar width, mining height, percentage extraction) will impact on crush pillar behaviour. Du Plessis and Malan (2015) demonstrated how oversized pillars could potentially result in unpredictable pillar behaviour. The execution of a mining layout can impact on the size of the pillar cut at the mining face. This is demonstrated by the examples and case study presented in this paper. Similarly, the presence of potholes or blocks of unmined ground will influence pillar crushing. While these are common occurrences on most mines using crush pillars, these factors have historically not been associated with poor pillar crushing or pillar seismicity.
A limit equilibrium model (Napier and Malan, 2014) implemented in the TEXAN displacement discontinuity boundary element code was used to simulate the impact of both geological losses and sidings on crush pillar behaviour. The model used was representative of the behaviour of crush pillars in a typical layout. This provided insights into when pillars will crush, where they will crush relative to the mining face, and why some pillars can potentially burst.
The effect of unmined blocks of ground or geological losses on crush pillar behaviour has not previously been considered as a factor affecting pillar crushing or pillar seismicity. For this reason, it was investigated in this study. In the platinum mines, intact blocks of ground are left in situ where poor ground conditions are encountered, or where geological features such as potholes are intersected. A pothole can be described as a random, approximately circular area where the reef slumps and pinches to such an extent that regular mining cannot be conducted. In the Bushveld Complex (BC), potholes make up the largest component of 'mining and geological' losses. Potholes can contribute to an extraction loss of between 5-25%. Figure 1 shows the pothole distribution at a site along the western limb of the BC. The potholes vary in size from 5-420 m in diameter. Most of the pothole diameters range between 20-100 m and the spans between adjacent potholes are typically less than 100 m.
To quantify the effect of a pothole adjacent to a line of crush pillars, an idealized crush pillar layout (Figure 2) was simulated. The layout consists of a 30 m × 70 m stope panel with a second panel being mined in a sequential fashion adjacent to this first panel. The layout was simulated as eight mining steps, with seven crush pillars being formed during this process. The unmined block was simulated as a square block, the dimensions of which [(x m) × (y m)] were selected to simulate the percentage of reef locked up in the area defining mining step 1 [e.g. pothole area (10 m × 10 m)/(30 m × 70 m) ≈ 5%]. For the second panel, the size of each mining step was 10 m and the sizes of the crush pillars were 4 m × 6 m. A 2 m mining height was used (w:h = 2:1). The element sizes were 0.5 m.
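The bracketed percentage can be verified directly; a one-line check (in Python, purely illustrative):

```python
# Worked check of the mining-loss arithmetic: a 10 m x 10 m unmined block
# inside a 30 m x 70 m panel locks up roughly 5% of the reef.
pothole_area = 10 * 10          # m^2
panel_area = 30 * 70            # m^2
print(f"mining loss = {pothole_area / panel_area:.1%}")   # -> 4.8%, i.e. ~5%
```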
The parameters used for the simulations were as follows: Young's modulus 70 GPa, Poisson's ratio 0.25, contact friction angle 35°, intact and residual material strength 5 MPa, mining depth 600 metres below surface (mbs), and reef dip 0°. These values were chosen arbitrarily. The intent was to establish trends regarding the pillar behaviour, even though the parameters selected may not fully represent the underground environment. A sensitivity analysis was conducted to determine the effect of each input parameter on the behaviour of the model and the simulated pillars in the layout. The results indicated that the choice of parameters produced qualitative agreement with observed crush pillar behaviour and historical underground measurements.
The results highlighted the importance of taking the geological environment into consideration when implementing a crush pillar system. The pillar stress is affected by the additional stability provided by unmined blocks of ground. This causes a reduction in the pillar stress and prevents effective pillar crushing. To overcome this would require the cutting of smaller pillars, which could be impractical.
The preliminary modelling results indicated that:
• Crush pillars implemented at a depth of 600 mbs with a w:h = 2:1 will not crush if a 10% mining loss is present adjacent to the pillar line. Pillars with a reduced width will be required in the area to ensure that pillar crushing is achieved (e.g. w:h = 1.5 is required at 600 mbs).
• Pillars located as far as 20 m behind or ahead of an unmined block will also be affected, resulting in either partially crushed (core still solid) or intact pillars.
• Crush pillars implemented at depths of more than 800 m below surface are impacted to a lesser extent when in close proximity to a pothole. Large mining losses (>10%) and potholes situated closer than 10 m from the pillar line can nevertheless prevent pillar crushing.
The case study presented in the second half of the paper indicated that a crush pillar situated in close proximity to a pothole at a depth of 1300 mbs was not in a crushed state.
A siding is a 1-2.5 m wide ledge or heading carried on one side of an on-reef development end, adjacent to the panel being mined (Figures 3 and 4). These sidings are typically carried at between 3 and 6 m behind the panel face (depending on the standard applied by the particular mining company). The main function of the siding is to either modify the fracture patterns resulting from high face stress or to move the crush pillars away from the travelling way, to prevent failed rock from falling on people. The sidings, being approximately 2 m wide, are difficult to clean (hand-lashed) and support. For this reason, mining of the siding is frequently behind schedule.
In some cases, sidings lag the face by 20-30 m and are then developed as a single mining face. A lagging siding will impact the width of the pillar being formed at the mining face (Figure 4). Until now, the impact of a lagging siding on the pillar width has not been identified as a contributor to undesired pillar behaviour, or a source of pillar seismicity.
Once the siding of an advancing panel lags behind the adjacent lagging panel face, oversized pillars are created. The pillars will be reduced in size to the required dimension only when the siding is blasted. At this point the pillar might not be able to crush sufficiently, as it is already in the back area of the stope.
To investigate the impact of a lagging siding on crush pillar behaviour, the simulated mining sequence of the layout in Figure 2 was adjusted. The mining loss indicated in step 1 was excluded. A 2 m wide siding was added to the layout by initially simulating the pillars as being 6 m wide (w:h = 3). The siding was then mined by simulating the additional 2 m portion of pillar as being mined. The length of the oversized pillar resulting from a lagging siding was controlled by the delayed mining of the pillar holings. Initially, a 6 m wide by 20 m long pillar was formed. The siding was mined, reducing the pillar width (at the pillar position) to 4 m (w:h = 2:1). The final pillar (2 m × 4 m) was created only when the pillar holing was developed. This took place when the pillar was 20 m in the back area.
The layout was simulated at various depths, as shown in Figure 5. All the results presented are for pillar D. Mining depth does not appear to have any impact on the overall behaviour of the pillar (although the pillar is subjected to a higher stress level). These findings illustrate how important it is to achieve pillar crushing while the pillar is close to, or is being formed at, the mining face. The results in Figure 5 can be compared to results presented by du Plessis and Malan (2014), where pillar crushing causes load shedding if the pillars are cut to the correct width.
The results indicate that a lagging siding could impact on pillar crushing. These pillars could therefore become sources of seismicity when located in the back area of a stope.
An investigation was conducted at a mine applying crush pillars on the Merensky Reef at a depth of approximately 1300 mbs. The objective was to verify some of the numerical modelling results and to investigate the failure mechanism of the pillars.
The mining layout requires 2.5 m wide × 4 m long crush pillars, separated by a 2 m wide pillar holing. The stoping width is approximately 1.2 m and the reef dips at approximately 10° towards the north. Conventional breast mining is applied, with long panels (approximately 35 m inter-pillar spans) being mined adjacent to a gully. A 2 m wide siding is cut adjacent to the pillar line. The mine standard requires that the siding does not lag the mining face by more than 4 m.
As can be seen in Figure 6, actual pillar dimensions varied greatly as a result of poor mining practice. Pillar 19 is approximately the correct dimension (2.5 m × 3.8 m). Of the pillars cut, 63% had a width to height ratio greater than 2, and only four of the pillar holings were less than 2 m wide. The practice underground is to mine the pillar holings in an updip direction. The panel siding lag was kept at the 4 m standard. Accurate mining of the lagging panel was required to ensure that the pillars were cut to the correct dimension (pillar width). This was not done. The holings were also not always mined as required, impacting on the pillar length. This resulted in several significantly oversized pillars. Pillars 13 and 16 are examples of this. Pillars 17 and 18 were only split when the pillars were located some distance from the face; pillars 13 and 16 are further examples of this bad practice. As a result, pillar 16 experienced a magnitude ML 1.9 seismic event. At the time of the event, the pillar holing indicated by the black square (step 2) and the holing between pillars 16 and 17 were being mined. Pillar 16, at this point, was approximately 25-30 m in the back area.
The underground investigation revealed the following.
• Pillar 1, although adequately sized to ensure crushing (2 m × 4 m), was in an uncrushed state as a result of its proximity to the pothole (Figure 7).
• The edge of the pothole was severely fractured due to the abutment stress (point A in Figure 6).
• Similarly, the face abutment at point B was also severely fractured. This face was left unblasted for approximately 5 months to rectify the lead-lag sequence.
• Pillar 11 was left oversized to clamp a fault. The pillar was in an unfractured state (similar to the condition of pillar 1). There were signs of footwall punching along the downdip side of the pillar.
• Pillar 19, a newly cut pillar, was in a fractured state (Figure 8). The pillar displayed the same fracture profile as described by du Plessis and Malan (2016).
As can be seen from the figure, the majority of the fractures propagated towards the side of the pillar which was exposed first. The side of the pillar exposed by the lagging face displayed little to no fracturing.
Once the pillar is completely formed and the face advances, the fractures continue to dilate. Where fractures intersect at approximately the centre of the pillar, a wedge-like structure is formed.
• Pillar 16 was, at the time of the investigation, in a completely fractured state. This was most likely a result of the seismic event. The updip side of the pillar bulged as the fractured material was pushed out (Figure 9). The downdip side of the pillar showed signs of ejected material scattered into the panel below. The footwall experienced heave, and the timber support in the panel below the pillar was damaged as a result of the event (Figure 10).
The underground observations supported some of the modelling results described in the previous section. It was nevertheless important to understand the failure mechanism contributing to pillar instability (i.e. pillar 16). The limit equilibrium model was also used to simulate the behaviour of the crush pillars for this particular underground layout. Du Plessis and Malan (2016) demonstrated that, by applying this method, they successfully simulated the observed and measured behaviour of crush pillars for a large-scale underground trial site.
The layout for the underground investigation site (Figure 6) was approximated using straight line polygons to enable the area to be easily discretised using triangular elements. The mining steps considered were:
• Step 0: the layout with the face positions prior to the seismic event.
• Step 1: panel advance to determine the effect on the pillar stress.
• Step 2: mining of a 2 m × 2 m slot into pillar 16 at the holing position to determine the impact on the pillar stress.
The element sizes selected for the mining steps were 1 m, and for the pillars 0.5 m. Following several successive cycles of parameter testing, the selected modelling parameters provided results which closely resembled the observed underground pillar behaviour. A dip of zero degrees was used in the model to simplify the analysis. The vertical stress at this depth was 38.5 MPa. The horizontal stress was assumed to be the same in both directions (k-ratio = 1.8). The intact and residual strengths of the limit equilibrium material were set to 1225 MPa and 20 MPa respectively. The high value for the intact strength is associated with the onset of pillar failure and was required to ensure that the model could replicate the observed underground pillar behaviour. A friction angle of 50° was used.
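As a quick consistency check of the stress inputs quoted above (an illustration only, assuming the 38.5 MPa value represents the full overburden stress at 1300 m and that the k-ratio applies equally in both horizontal directions):

```python
# Derive the horizontal stress from the k-ratio and the implied overburden
# stress gradient from the quoted vertical stress and depth.
depth = 1300.0                        # m below surface
sigma_v = 38.5                        # MPa, vertical stress used in the model
k_ratio = 1.8

sigma_h = k_ratio * sigma_v           # MPa, horizontal stress (both directions)
gradient = sigma_v / depth * 1000.0   # kPa per metre of depth
print(f"sigma_h = {sigma_h:.1f} MPa; gradient = {gradient:.1f} kPa/m")
# -> sigma_h = 69.3 MPa; ~29.6 kPa/m, consistent with an overburden density
#    of roughly 3000 kg/m^3.
```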
The numerical model was useful to establish trends and comparisons. The general results indicated that:
• The pothole adjacent to pillar 1 prevented the pillar from crushing.
• High stresses were present along the solid abutments (Figure 11). This explains the significant scaling observed along the pothole edge and the stopped panel face.
• The average convergence across the entire mining region was approximately 36 mm. It increased by approximately 1.5 mm during the extraction of mining step 1 and by another 1.5 mm during step 2. The additional convergence experienced during step 2 (mining of the slot along the pillar holing) was a result of pillar 16 crushing. It is insightful to note the impact which late pillar crushing in the back area has on the overall rock mass behaviour. This finding should be explored in more detail.
• The oversized pillars (11, 13, 16) were intact (e.g. Figure 12). Pillar 16 only fails in step 2 when the pillar is partially mined by the slot defining the pillar holing.
• Pillar 19 was completely crushed and in a residual state.
The vertical stress across pillar 16 (section b-b' in Figure 6) indicated that the pillar had high stress levels on the edge and an intact core. Mining of step 1 caused some additional damage to the pillar edge, as can be seen in Figure 12. The outer 0.5 m of, specifically, the updip side of the pillar (the initially exposed side) assumes a residual state. As the outer edges of the pillar fail, the high edge stresses are transferred towards the core of the pillar. Once the slot along the planned pillar holing is mined (step 2), the pillar fails completely and enters a residual state.
A convergence profile across section b-b' is shown in Figure 13. A significant change in convergence is experienced across the pillar when the pillar fails (approx. 40 mm). Another convergence profile was constructed along section c-c', extending from the face position (including step 1) to 50 m in the back area of the mined-out panel, to also include the effect of pillar 16. The results show that the intact pillar has a significant impact on the convergence experienced in the panel in proximity to the pillar 16 position. Once the pillar fails, the convergence increases (step 2). However, there were other oversized intact pillars in the back area (i.e. pillar 13). As a result, a certain amount of convergence is prevented by the intact pillar. The system (pillar and rock mass) is therefore not at a state of equilibrium, and this can potentially result in violent pillar behaviour.
Du Plessis and Malan (2014) demonstrated the effect of oversized crush pillars in the back area of a stope. The findings indicated that if an oversized pillar did not crush at the face while being cut, as these pillars move into the back area as the mining face advances, they experience a higher stress level. The change in stress caused by a mining increment is lower than when the pillar is formed at the face. The pillar may therefore either not crush (especially when oversized) or fail violently. The stresses on these pillars in the back area are much higher and the loading environment has become much softer, as the pillar is no longer close to the face abutment. The results in Figure 14 provide an illustration of the increase in convergence as a result of possible violent crush pillar behaviour.
Du Plessis and Malan (2016) determined that the amount of convergence experienced in a crush pillar site could be directly related to pillar deformation (pillar fracturing or dilation along fracture planes). If the convergence (deformation) is restricted as a result of an intact pillar, it will impact on the amount of energy potentially available to cause violent pillar failure. Salamon (1970) showed that the equilibrium between a pillar being loaded and the post-peak behaviour is stable, irrespective of the convergence experienced by the pillar, if:

(k + λ) > 0    [1]

where k = stiffness of loading strata (rock mass) and λ = post-peak pillar stiffness.
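A quick numeric illustration of Equation [1] is given below; the stiffness magnitudes are invented for illustration. The point it captures is the one made in the text: a pillar left uncrushed until it is deep in the back area sits in a softer loading system (lower k), so a steeply softening pillar can violate the criterion and fail violently.

```python
# Minimal sketch of Salamon's local stability criterion, Equation [1]:
# a pillar on its post-peak (softening) branch is stable only if the
# loading strata are stiffer than the magnitude of the pillar's negative
# post-peak stiffness, i.e. k + lam > 0.

def is_stable(k_strata: float, lam_pillar: float) -> bool:
    """k_strata: loading-system stiffness (positive);
    lam_pillar: post-peak pillar stiffness (negative when softening).
    Units must match (e.g. MN/m)."""
    return k_strata + lam_pillar > 0

print(is_stable(k_strata=500.0, lam_pillar=-300.0))  # True: stable yielding
print(is_stable(k_strata=200.0, lam_pillar=-300.0))  # False: potential burst
```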
Various stiffness criteria to ensure 'stable' pillar behaviour for rigid and yielding pillar systems were presented by Ozbay (1989), Ozbay and Roberts (1988), and Ryder and Ozbay (1990). Unfortunately, the literature and methodologies described do not fully support the behaviour of crush pillars, and this needs to be further investigated. The preliminary analysis was therefore not included in the paper.

This paper illustrates the importance of crush pillars entering a residual stress state while being formed at the mining face. Factors such as geological (i.e. potholes) or mining losses in close proximity to the pillar will impact on the behaviour of a crush pillar. Furthermore, a lagging siding or delayed pillar holings will impact on the size of the pillar formed at the mining face. Early pillar crushing is therefore not achieved, and this can result in unpredictable pillar behaviour.
The underground case study verified the preliminary modelling results. The model was able to replicate the behaviour of the pillars observed underground. It was insightful to note the impact of late pillar crushing in the back area on the convergence behaviour in the mined region. The study indicated that a reduced amount of convergence, as a result of an intact pillar, may be indicative of potential violent pillar failure. This finding should be further explored.
Part of the work described in this paper formed part of Dr Michael du Plessis' PhD studies at the University of Pretoria. The contribution of Professor John Napier with regard to the development of the limit equilibrium model as well as the TEXAN code is acknowledged.
| 2019-03-13T05:02:01.381Z | 2018-03-01T00:00:00.000 | {
"year": 2018,
"sha1": "d44501d5daf8779248fe46e76c2c1001a7db56db",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.org.za/pdf/jsaimm/v118n3/06.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d44501d5daf8779248fe46e76c2c1001a7db56db",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"History"
]
} |
265490686 | pes2o/s2orc | v3-fos-license | The Effect of Sowing Date on the Growth and Yield of Soybeans Cultivated in North-Eastern Poland
Soybean yields are influenced by numerous factors, including environmental conditions, location, and agricultural practices. Sowing date affects plant growth, development, and yields, and it plays a particularly important role in soybean cultivation. The optimal sowing date should be selected based on soil temperature, precipitation, and rainfall distribution in a given region. The aim of this study was to determine the effect of various sowing dates (I—early, II—optimal, III—late) on the time from sowing to emergence of soybean seedlings, length of the growing season, morphological traits of soybean plants, yield components, and seed yields of soybeans grown in north-eastern Poland. Sowing date considerably affected the time from sowing to the emergence of soybean seedlings and seed yields. In north-eastern Poland, soybeans should be sown in the first half of May to minimize the risk of ground frost damage, which can occur even in late May. Sowing date also influenced soybean yields. In north-eastern Poland (Region of Warmia and Mazury), yields were maximized when soybeans were sown late (in mid-May), which was decisively influenced by climatic conditions, mainly temperature. The linear regression analysis revealed that the length of the growing season was correlated with the seed yields of soybeans sown on different dates.
Introduction
Legumes grown in Poland cover only 11% of the demand for feed protein in livestock farming [1]. Soybean (Glycine max (L.) Merr.) is a species of the legume family and one of the most important crops in the world. In terms of cultivated area, soybean is the fourth largest crop in the world after wheat, maize, and rice [2]. Soybeans are processed into food and feed because they are a rich source of protein with a balanced essential amino acid profile and are abundant in oil and soluble sugars [3][4][5]. Soybean cultivars with high protein content are the most promising legumes in animal nutrition. The high demand for feed protein has led to an increase in the area under soybeans in the European Union (EU), including Poland [3]. However, soybean production in the EU does not meet the current demand for feed protein [6]. Soybeans have been cultivated in Poland for around 140 years, but a considerable increase in soybean acreage was noted only in the second decade of the 21st century. In Poland, the area under soybeans is relatively low due to limited progress in breeding new cultivars that are adapted to the local climate [7]. Soybean yields are influenced by numerous factors, including the genetic traits of soybean cultivars, environmental conditions, location, and agricultural practices [8][9][10].
Sowing date plays a particularly important role in soybean production because it affects the development of vegetative and generative organs, as well as biomass yields [11][12][13]. The selection of the optimal sowing date is the most important and the least expensive agronomic practice that affects soybean yields [14]. Regional differences in precipitation levels and rainfall distribution should be considered in the choice of sowing date [14,15]. Early sowing is limited mainly by low soil temperature [16]. Early sowing can increase yields by prolonging the growing season, but only in years with sufficient precipitation [14]. Low soil temperature and high soil moisture content during sowing can delay germination and seedling emergence, compromise the development of soybean stands, and decrease seed yields [17,18]. Temperature and photoperiod are considered the main factors that affect the development of soybean plants. Higher temperature increases the rate of plant growth [19]. Early sowing can lead to delayed and uneven seed germination due to low soil temperature, whereas delayed sowing increases the risk of damage caused by drought and ground frost in late spring [7,[20][21][22]. According to Bastidas et al. [23], soybeans are sensitive to water deficit during germination. The thermal requirements of soybeans and their responses to daytime length are the main factors that limit soybean cultivation in northern latitudes. Poland is situated between the northern latitudes of 49° and 54°, and it does not have a favorable climate for soybean production [24]. However, breeding progress and the development of cultivars with a shorter growing season (approximately 120-130 days) have increased the area under soybeans in Poland. New soybean cultivars are better adapted to the Polish climate and tolerant to longer daytime lengths and lower temperatures. Polish soybean cultivars produce flowers and ripen earlier than, for example, Japanese cultivars [25].
The aim of this study was to determine the influence of sowing date on the time from sowing to emergence of soybean seedlings, length of the growing season, morphological traits of soybean plants, yield components, and seed yields of soybeans grown under climatic conditions in north-eastern Poland.
Field Experiment
A small-area field experiment was conducted in the Agricultural Experiment Station in Bałcyny in north-eastern Poland (53°35′49.7″ N, 19°51′17.3″ E) from 2016 to 2019. The station is operated by the University of Warmia and Mazury in Olsztyn. The experiment had a randomized block design with three replications and two experimental factors: A, soybean cultivar, and B, sowing date. The following soybean cultivars were analyzed: Merlin (Saatbau Linz eGen, Austria), an early cultivar (a medium-early cultivar according to the Polish Research Center for Cultivar Testing, COBORU), which is characterized by early seedling vigor and frost tolerance and can be grown in all Polish regions; Aldana (Hodowla Roślin Strzelce sp. z o.o., IHAR Group, Strzelce, Poland), an early cultivar that can be grown in all Polish regions; and Lissabon (Saatbau Linz, Austria), a medium-early cultivar (a late cultivar according to COBORU) recommended for central and southern Poland. Soybeans were sown on three dates: I, early (24-25 April); II, optimal (4-6 May); and III, late (15-20 May) (Table 1). Each year, the experiment was established on Haplic Luvisol originating from boulder clay (IUSS Working Group WRB, 2006) [26]. In each year of the study, the chemical properties of soil were determined in soil samples collected from each plot (at a depth of 30 cm) before fertilization and soybean sowing. Soil pH ranged from 5.9 to 6.6, and soil nutrient levels were determined at 85.1-129 mg P kg−1, 132.8-190.8 mg K kg−1, and 47.0-109.0 mg Mg kg−1 (Table 2). Soil pH was measured with a digital pH meter in deionized water with 1 mol dm−3 KCl (5:1). Phosphorus content was determined colorimetrically in the presence of vanadium and molybdenum (Shimadzu UV-1201 V spectrophotometer, Shimadzu Corporation, Kyoto, Japan). Potassium was determined by atomic emission spectroscopy (AES) (BWB flame photometer, BWB Technologies UK Ltd., Newbury, England). Magnesium was extracted with 0.01 M CaCl2 and quantified by atomic absorption spectroscopy (AAS) (AAS1N, Carl Zeiss, Jena, Germany). Nitrogen fertilizer (34% ammonium nitrate) was applied before sowing at 10.2 kg N ha−1. Phosphorus (enriched superphosphate, 17.4% P) and potassium (potash salt, 49.8% K) fertilizers were applied before sowing at 34.88 kg P ha−1 and 99.6 kg K ha−1, respectively. Winter wheat was the preceding crop in each year of the study. The experimental plots had a harvested area of 15 m2 each. The experimental plots were harrowed and plowed with a tillage unit. Soybean seeds were sown on the indicated dates (Table 1). Before sowing, seeds of soybean cv. Aldana were inoculated with Nitragina (IUNG-PIB Puławy, Poland), and seeds of soybean cvs. Lissabon and Merlin were inoculated using Fix Fertig technology. The seeding rate was 90 live seeds m−2, and seeds were sown at a depth of 3-4 cm, with 12.5 cm spacing between rows. The following crop protection agents were applied: herbicides, Stomp® Aqua 455 CS (pendimethalin, 455 g L−1) at 1.5 L ha−1 after sowing (BBCH 00-01) and Corum 502.4 SL (bentazon, 480 g L−1; imazamox, 22.4 g L−1) at 1.25 L ha−1 + Dasch HC at 1 L ha−1 at the first side shoot visible stage (BBCH 20-21); insecticide, Proteus 110 OD (thiacloprid, 100 g L−1; deltamethrin, 10 g L−1) at 0.7 L ha−1 in the pod development stage. In 2016 and 2017, Gwarant 500 SC fungicide (chlorothalonil, tetrachloroisophthalonitrile) was applied at 2 L ha−1 at the beginning of flowering (BBCH 61-63). Soybean plants were harvested at full maturity with a plot harvester. The following parameters were determined: plant height [cm], height of the first pod [cm], number of pods per plant, number of seeds per pod, thousand seed weight [g], and seed yield per hectare [t ha−1]. Seed yields from each plot were adjusted to 15% moisture content and expressed in tons per hectare. In the fully ripe stage, 25 representative plants were harvested from each plot for the measurements of morphological traits and yield components. Thousand seed weight was determined after harvest at 15% moisture content. The protein content of seeds was calculated by multiplying nitrogen content (nitrogen %) by a conversion factor of 6.25 (ISO 5983-1:2005) [27] and then converted to protein yield per hectare (kg ha−1).
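The protein calculation is simple enough to express directly. A minimal sketch follows; the 5.6% nitrogen value is a hypothetical placeholder, since per-sample N data are not reproduced here.

```python
# Converting seed nitrogen content to protein content and protein yield,
# following the conversion used above (protein % = N % x 6.25, ISO 5983-1).

def protein_yield_kg_ha(nitrogen_pct: float, seed_yield_t_ha: float) -> float:
    """Return protein yield [kg ha^-1] from seed N content [%] and yield [t ha^-1]."""
    protein_pct = nitrogen_pct * 6.25        # Kjeldahl conversion factor
    return seed_yield_t_ha * 1000.0 * protein_pct / 100.0  # t -> kg, % -> fraction

# Hypothetical example: 5.6% N and the 4.00 t ha^-1 average reported for cv. Merlin
print(protein_yield_kg_ha(5.6, 4.00))  # -> 1400.0 kg protein per hectare
```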
Statistical Analysis
Seed yields were analyzed statistically using Tukey's HSD test. The significance of differences between mean values was determined at α = 0.05. All analyses were conducted with the use of Statistica v. 13.3 software (Tibco Software Inc., Palo Alto, CA, USA) [28]. Spearman's rank correlation and linear regression were used to determine the relations between soybean yields, time from sowing to emergence, length of the growing season, and weather conditions. Correlations were considered significant at the p ≤ 0.05 and p ≤ 0.01 levels.
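Although the authors used Statistica, the same workflow can be reproduced with open-source tools. The sketch below, with invented placeholder data, shows a Tukey HSD comparison and a Spearman correlation of the kind reported in Table 3.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder seed yields [t ha^-1] for three cultivars, three replicates each
yields = np.array([3.9, 4.1, 4.0, 2.6, 2.7, 2.8, 3.2, 3.4, 3.3])
cultivar = ["Merlin"] * 3 + ["Aldana"] * 3 + ["Lissabon"] * 3

# Tukey's HSD test at alpha = 0.05, as used for yield comparisons
print(pairwise_tukeyhsd(yields, cultivar, alpha=0.05))

# Spearman's rank correlation, e.g. time from sowing to emergence vs. yield
days_to_emergence = np.array([12, 9, 10, 28, 21, 19, 14, 13, 12])
rho, p = spearmanr(days_to_emergence, yields)
print(f"Spearman R = {rho:.2f}, p = {p:.3f}")
```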
Weather Conditions
Weather conditions varied across the experimental years (Figure 1a,b). In 2018, the mean daily temperature during the growing season (May to August) was 16.6-20.5 °C, around 2.3 °C higher than the long-term average. Higher temperatures were noted in June and August of 2018 and 2019, whereas mean daily temperatures in May and July were below the long-term average. Considerable differences in the distribution of mean daily temperatures were observed in 2016 (Figure 1a,b). Precipitation levels in the Region of Warmia and Mazury were high in 2017, particularly in June (109.9 mm), July (106.1 mm), and September (211.1 mm), which delayed and hindered seed harvest (the growing season was prolonged to around 147-156 days). Rainfall distribution was optimal in June, July, and August of 2016 and 2018 and in May and July of 2019, which promoted plant growth and increased yields (Figure 1a,b). In each year of the study, daily minimum temperatures affected the time from sowing to the emergence of soybean seedlings and the beginning of the growing season. In 2016, 2017, and 2019, low daily minimum temperatures and frost episodes in late April and in the first half of May affected the emergence of soybeans sown in late April and early May. In 2018, daily minimum temperatures in late April and early May were higher, which accelerated seedling emergence. Total precipitation in June, July, and August of 2018 also promoted soybean growth.
Effect of Sowing Date on Soybean Emergence Length and Growing Season
The study demonstrated that sowing dates in each year affected the growth, development, and yield of the examined soybean cultivars, as well as the length of the growing season. The time from sowing to emergence of soybean seedlings differed across years and cultivars (Table 1). The most favorable conditions for seedling emergence and the shortest time between sowing and emergence were observed in 2018. In turn, 2017 was characterized by the least favorable conditions and delayed seedling emergence (Table 1). The time from sowing to emergence was determined at 12 to 28 days (19 days on average) in early sown plants (I), 9 to 21 days (14 days on average) in plants sown on the optimal date (II), and 9 to 14 days (12 days on average) in late-sown plants (III). In north-eastern Poland, frost episodes occur frequently in late April and in the first half of May. The average number of days with frost episodes was determined at 3.25 for sowing date I, 1.16 for sowing date II, and 0 for sowing date III. The number of days with frost episodes significantly affected the time from sowing to emergence of soybean seedlings (R = 0.74) and, to a lesser extent, the length of the growing season (R = 0.34) (Table 3). The mathematical analysis revealed that the time from sowing to emergence of soybean seedlings was more strongly influenced by the number of days with unfavorable temperature (R = 0.76). The number of days with minimum temperature below 6 °C ranged from 5 to 17 (12 days on average) for sowing date I, from 4 to 7 (6 days on average) for sowing date II, and from 0 to 3 (0.75 days on average) for sowing date III. The time from sowing to emergence of soybean seedlings was also determined by daily minimum temperature (R = −0.44) and, to a smaller degree, by mean daily temperature (R = −0.28) (Table 3), which indicates that low temperatures were largely responsible for the delayed emergence of soybean seedlings. In the mathematical analysis, the time from sowing to emergence of soybean seedlings was positively correlated with the length of the growing season (R = 0.40) (Table 3), which suggests that delayed emergence compromised plant health and delayed seed ripening. The length of the growing season varied across years and cultivars. The greatest differences were observed in 2016 and 2017. The time from sowing to harvest ranged from 118 to 163 days in 2016, and from 136 to 156 days in 2017. The growing season lasted 130-140 days in 2018, and it was shortest in 2019 at 115-133 days. The time from sowing to harvest was shortened by 13 days on average when the sowing date was delayed by 20 days relative to the early sowing date. Soybean cv. Aldana was characterized by the shortest time between sowing and harvest, in particular in 2016 and 2019. In turn, cv. Lissabon was characterized by the longest growing season in 2016 and 2017 (Table 1). Plant development and soybean yields are influenced by temperature and daytime length, which are directly associated with the sowing date. The number of days with low temperature (R = 0.52), minimum temperature during seedling emergence (R = −0.62), and mean daily temperature during seedling emergence (R = −0.79) strongly influenced the length of the growing season. In Spearman's rank correlation analysis, the time from sowing to emergence of soybean seedlings was negatively correlated with soybean yields (R = −0.55) (Table 3), which confirmed that delayed emergence decreased seed yields. The number of days with frost episodes was also correlated with soybean yields (R = −0.33) (Table 3),
which indicates that the occurrence of spring frost episodes in a given region should be considered when selecting the sowing date.
Effect of Sowing Date on Morphological Traits of Soybeans
Sowing date had a varied influence on plant height and yield components (Tables 4-8). In 2016, 2017, and 2018, the tallest plants emerged from late-sown seeds. In contrast, seeds sown early and on the optimal date produced the tallest plants in 2019. Regardless of the sowing date, soybean cv. Merlin was characterized by the tallest plants in all years of the experiment (Table 4). The height of the first pod was also greatest in cv. Merlin in 2017, 2018, and 2019, and the differences between the analyzed cultivars were not significant in 2016 (Table 5). The number of pods per plant was highest in late-sown plants in 2018 and 2019, in early sown plants in 2016, and in plants sown on the optimal date in 2017. Cultivars Merlin and Lissabon produced more pods per plant than cv. Aldana (Table 6). In 2018 and 2019, the number of seeds per pod was highest in early sown plants (sown on 24-25 April), whereas no significant differences in this parameter were observed in 2016 and 2017 between plants sown on different dates. The analyzed soybean cultivars did not differ significantly in the number of seeds per pod (Table 7). Sowing date did not exert a clear influence on thousand seed weight. Thousand seed weight was highest in cv. Lissabon (Table 8).
Effect of Sowing Date on Yield and Protein Content of Soybean Seeds
Sowing date and weather conditions exerted varied effects on seed yields. Seed yields were highest in early sown plants (sown on 24-25 April) in 2016 and 2019, in late-sown plants in 2017, and in plants sown on the optimal and late dates in 2018. The mean values for the four-year experiment indicate that sowing date influenced seed yields and that in north-eastern Poland (Region of Warmia and Mazury), soybeans should be sown late (in mid-May) to maximize seed yields (Figures 2 and 3). In addition, the linear regression analysis revealed a correlation between the length of the growing season and seed yields in late-sown plants (R = 0.47) (Figure 4a-c).
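The regression reported here (R = 0.47 for late-sown plants) can be illustrated with a least-squares fit; the season-length and yield pairs below are invented for demonstration, not the study's data.

```python
import numpy as np
from scipy.stats import linregress

season_days = np.array([118, 124, 130, 136, 141, 147, 156, 163])  # placeholder
yield_t_ha = np.array([2.8, 3.0, 3.3, 3.2, 3.6, 3.5, 3.9, 4.1])   # placeholder

fit = linregress(season_days, yield_t_ha)
print(f"yield = {fit.slope:.3f} * days + {fit.intercept:.2f}, "
      f"R = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
```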
Seed yields were highest in cv. Merlin (4.00 t ha−1 on average) and lowest in cv. Aldana (2.67 t ha−1) (Figures 3 and 4). Seed protein content was highest in late-sown plants. The highest protein content was determined in the seeds of cvs. Lissabon and Merlin (Figure 5A-D).
Discussion
Soybeans are among the leading agricultural crops that are produced on around 6% of the world's arable land. Soybeans are processed into various products, including high-protein meals, livestock feed, and edible oil [29]. In Poland, the area under soybeans increased from 7642 to 9210 ha, and soybean production increased from 14,747 to 20,970 tons between 2016 and 2021, and it continues to grow [2]. This increase resulted from a number of factors, including the development of high-yielding cultivars that are better adapted to the Polish climate. Sowing date is an important determinant of soybean yields [11,12,[30][31][32]. According to Kumar et al. [33], climate is the key factor that affects sowing date choices. In Poland, soybeans should be sown when the mean daily soil temperature exceeds 8 °C, i.e., at the turn of April and May [6,7]. Early sowing can lead to delayed and uneven seed germination due to low soil temperature, whereas delayed sowing increases the risk of damage caused by spring drought [22]. In the present study, sowing date influenced the growth, development, and yields of the analyzed soybean cultivars, as well as the length of the growing season. The time from sowing to emergence of soybean seedlings varied across years and cultivars. The most favorable weather conditions for the emergence of soybean seedlings were noted in 2018, which was characterized by the shortest time between sowing and emergence. In north-eastern Poland, frost episodes frequently occur in late April and in the first half of May when soybeans are sown and when seedlings emerge. During the study, the average number of days with frost episodes was determined to be 3.25 for sowing date I, 1.16 for sowing date II, and 0 for sowing date III, which affected the time from sowing to emergence of soybean seedlings. According to Dragańska et al. [34], late frost episodes at a height of 2 m above ground were noted in north-eastern Poland in the first days of May in the eastern part of the region and in mid-April in the remaining parts of the region between 1981 and 2010. Ground frost at a height of 5 cm above the soil occurred in late May in the east and in mid-May in other parts of the region. The cited authors also reported ground frost episodes in the last ten days of June in the studied period. The average number of days with spring frost episodes ranged from 9 to 16. In the current study, the time from sowing to the emergence of soybean seedlings was influenced by the number of days with unfavorable temperatures (R = 0.76). Other contributing factors were daily minimum temperature (R = −0.44) and, to a lesser extent, daily mean temperature (R = −0.28). Low temperatures considerably delayed seedling emergence, compromised plant health, and delayed seed ripening. Similar observations were made by Uslu and Esendal [22]. According to Kumagai [15], early sown soybeans are at greater risk of exposure to extremely low temperatures and late spring frost that can inhibit germination, seedling emergence, and early stand development. The optimal sowing date is determined by the local climate [35,36]. In the work of Serafin-Andrzejewska et al. [7], the growing season was shortened by 14 days when soybeans were sown 20 days past the earliest date, which corresponds with the present findings. Bateman et al. [37] also found that late-sown plants were unable to harness their growth potential fully. Delayed sowing shortens the growing season and could potentially affect plant height, stand density, and seed yields [23,37,38]. In the current study, the prolonged time from sowing to emergence of soybean seedlings was negatively correlated with seed yields (R = −0.55), which indicates that delayed seedling emergence decreases seed yields. The number of days with frost episodes was correlated with seed yields (R = −0.33), which suggests that frost risk should be considered when selecting sowing dates in a given region. According to Mandić et al. [39] and Shah et al. [40], the optimal sowing date and climate-adapted genotypes promote the uptake of soil nutrients and water, thus maximizing seed yields.
In this study, sowing dates exerted varied effects on plant height and yield components across years. In general, late-sown seeds produced the tallest plants. In turn, the mean values of yield components for the four years of the experiment were not significantly affected by sowing dates. Soybean cvs. Merlin and Lissabon produced more pods per plant than cv. Aldana, and thousand seed weight was highest in cv. Lissabon. In the work of Bateman et al. [37], plant height increased by 0.3 cm per day when soybeans were sown between 25 March and 2 June, but it decreased by 2.7 cm per day when soybeans were sown late, between 2 June and 16 July. Jarecki and Bobrecka-Jamro [20] found that early sowing increased the number of pods per plant and thousand seed weight relative to the optimal sowing date. Księżak and Bojarszczuk [41] also reported that yield components in the studied soybean cultivars were influenced by weather conditions during the growing season and sowing date. Between 2017 and 2019, soybeans sown on the optimal date were characterized by the highest seed weight per plant, whereas delayed sowing induced only minor differences in the number of seeds per pod [41]. In the work of Pedersen and Lauer [9] and Kumar et al. [42], the number of pods per plant and the number of seeds per pod were higher in early sown than in late-sown soybeans. In turn, Borowska and Prusiński [43] reported that the number of pods per plant was the only yield component that was significantly affected by sowing date. In other studies, the number of pods per plant, the number of seeds per pod, and seed weight per plant were lower in late-sown soybeans than in early sown soybeans [44,45]. Shah et al. [40] also found that late-sown soybean plants were shorter and produced fewer pods.
Sowing date and weather conditions exerted varied effects on seed yields. The average values in the four-year study indicate that sowing date influenced seed yields, and total seed yields in north-eastern Poland (Region of Warmia and Mazury) were highest when soybeans were sown late (in mid-May). The linear regression analysis revealed a correlation between the length of the growing season and seed yields in late-sown plants (R = 0.47). In a long-term study conducted by Borowska and Prusiński [43], seed yields peaked when soybeans were sown at the turn of April and May, which corroborates the findings of other authors [23,46]. Umburanas et al. [47] also concluded that optimal sowing dates and seeding rates promote plant growth and increase seed yields. In the cited study, delayed sowing compromised yields by decreasing above-ground biomass per unit area, leaf area index, plant height at harvest, height of the lowest pod, number of pods per unit area, number of seeds per unit area, and seed weight. A higher seeding rate increased seed yields, in particular in late-sown plants, by increasing above-ground biomass per unit area, leaf area index, plant height at harvest, height of the lowest pod, number of pods per unit area, and number of seeds per unit area. In a study by Kumagai and Takahashi [44], the number of seeds per pod was one of the key determinants of soybean yields. Delayed sowing reduced the number of seeds per pod, mainly due to low temperatures 20 days after the beginning of seed filling. In turn, Mandić et al. [39] observed a significant reduction in seed yields in all soybean plants that were not sown on the optimal date. Soybeans sown in late April were characterized by a smaller number of pods per plant, lower seed weight per plant, and lower thousand seed weight, which decreased seed yields. These observations were attributed to accelerated plant senescence and the adverse influence of high temperature and low precipitation during seed filling. The cited study was conducted in Serbia, where soybeans are sown at the beginning of April and harvested in September. Therefore, flowering, pod and seed development, and ripening take place in July and August when temperatures are high and precipitation is low [45]. These stressors can decrease soybean yields by up to 74% relative to unstressed plants [48]. In the present study, seed yields were highest in cv. Merlin (4.00 t ha−1 on average) and lowest in cv. Aldana (2.67 t ha−1 on average). Similar results were reported by Borowska and Prusiński [43], where Merlin was also the highest-yielding cultivar (3.17 t ha−1) and Aldana was the lowest-yielding cultivar (1.91 t ha−1). In the current study, seed protein content was highest in late-sown plants. Seeds of soybean cvs. Lissabon and Merlin were characterized by the highest protein yield. In a four-year study conducted by Borowska and Prusiński [43], average seed yields were highest in cv. Merlin. In south-eastern Poland, the average seed yield of soybean plants was determined at 4.18 t ha−1 by Jarecki and Bobrecka-Jamro [20]. In the cited study, sowing date had no significant influence on seed yields. In 2017, seed yields were significantly higher in late-sown than in early sown plants. Soybeans cv. Aldana were characterized by the lowest seed yields in all years of the study. In the present study, Aldana was also the lowest-yielding cultivar in north-eastern Poland. Seed yields were lowest in 2017 and highest in 2018, which is consistent with the findings of Jarecki and Bobrecka-Jamro [20] and Księżak and Bojarszczuk [41]. In the cited studies, seed protein content was significantly higher in late-sown than in early sown plants. In turn, protein and oil yields were not modified by sowing date. In the group of soybean cultivars analyzed by Jarecki et al. [6], cv. Aldana was characterized by low protein and oil yields. Numerous researchers reported higher protein concentrations in late-sown soybeans [14,23,30,46]. Mandić et al. [39] also found that sowing date significantly influenced the protein and oil content of soybeans, especially under water stress in the reproductive stage. Delayed sowing induces a significant decrease in protein content and an increase in the oil content of soybean seeds [49] because high temperature increases the protein content but has a marginal influence or no effect on oil content [50]. In a study conducted by Serafin-Andrzejewska et al. [7] in south-western Poland (Region of Lower Silesia), seed yields were lowest in late-sown soybeans. Therefore, in Lower Silesia, soybeans should be sown in the second or third week of April or at the beginning of May. Soybean cv. Lissabon was characterized by high seed yields [7]. Delayed sowing also negatively affected seed yields in the work of Bateman et al. [37], who found that seed yields decreased by more than 26 kg ha−1 when soybeans were sown past 20 April in the southern USA. Robinson et al. [14] reported higher seed yields in soybeans sown in April and early May and lower seed yields in soybeans sown in late March and early June. In a study by Kumagai and Takahashi [44], seed yields were reduced when soybeans were sown around three weeks past the optimal date. In north-eastern China, soybean yields were affected by variations in climatic factors associated with latitude, and in high-altitude regions, yields were positively correlated with temperature but negatively correlated with accumulated sunshine hours. Climate was responsible for −24% to 38% of the variation in seed yields, and temperature was the most significant climatic factor [51]. A study conducted by Kumagai and Takahashi [44] in the cool region of northern Japan demonstrated that delayed sowing and, consequently, lower temperature during the reproductive stage decreased seed yields and the values of yield components. Mean daily temperature was negatively and significantly correlated with the fraction of available soil water (FASW), which suggests that excess soil water caused by high precipitation was associated with cold weather. In turn, Borowska and Prusiński [43] found that total precipitation in June and July was significantly correlated with seed yields in early sown soybeans, whereas total precipitation in August was also significantly correlated with seed yields in soybeans sown on later dates. Seed yields were significantly highest when soybeans were sown at the turn of April and May, whereas seed and protein yields and seed protein content were highest in the medium-early cv. Merlin. Seed yields were also significantly correlated with total precipitation in other studies [45,52,53]. According to Kumagai [15], water supply plays a particularly important role in soybean production, and soybean yields in northern Japan were influenced by precipitation levels and distribution across years and experimental sites. Thomasz et al.
[54] analyzed the relationship between soil water content and soybean yields in 28 agricultural districts in Argentina.The data provided by local weather stations were used in correlation and regression analyses and to forecast soybean yields.Correlation and regression analyses revealed that, in most cases, soil water content explained at least 50% of the variation in soybean yields.
Conclusions
The study demonstrated that sowing date influenced seedling emergence, yield components, and soybean yields in north-eastern Poland. The most favorable weather conditions for the emergence of soybean seedlings were observed in 2018, characterized by the shortest time between sowing and emergence. Spring frosts are common in the studied region, and sowing dates should be optimized to minimize the risk of plant damage. Frost events were noted during the emergence of soybean plants sown on early and optimal dates (I and II), which significantly affected the time from sowing to emergence. On average, seed yields were highest in late-sown plants (14-20 May), although differences in this parameter were observed across years. In north-eastern Poland, soybeans should not be sown early due to a high number of days with a low temperature (below 6 °C) and frequent frost episodes in April and May, which can delay seedling emergence, prolong the time between sowing and harvest, and decrease yields. In the analyzed group of soybean cultivars, seed yields were highest in the medium-early cv. Merlin and lowest in the early cv. Aldana. Despite the above, Aldana was the earliest-ripening soybean cultivar. It should be stressed that northern and north-eastern Polish regions are characterized by the shortest growing season, lower temperature, and a lower risk of prolonged drought.
Figure 2 .
Figure 2. The effect of sowing date on the seed yield of soybean cultivars ((A-D) 2016-2019) by ANOVA with the Tukey test at p ≤ 0.05; different capital letters show statistical significance.
Figure 3 .
Figure 3.The effect of sowing date on the seed yield of soybean cultivars (average over the years of research) by ANOVA with the Tukey test at p ≤ 0.05; different capital letters show statistical significance.
Figure 4 .
Figure 4. Linear regression analysis of the relationship between the length of the growing season and seed yields in soybean cultivars sown on different dates: (a) sowing date I, (b) sowing date II, and (c) sowing date III.
Figure 5 .
Figure 5. The effect of sowing date on the protein yield of soybean cultivars ((A-D) 2016-2019) by ANOVA with the Tukey test at p ≤ 0.05; different capital letters show statistical significance.
Table 1 .
Sowing date and the number of days from sowing to emergence and from sowing to harvest in the analyzed soybean cultivars.
Table 2 .
Chemical properties of soil.
Table 3 .
Spearman's rank correlation between soybean yields, time from sowing to emergence, length of the growing season, and weather conditions.
Variables: time from sowing to emergence; length of the growing season; number of days with frost episodes; number of days with temperature below 6 °C; minimum temperature; mean daily temperature.
*-significant difference at p ≤ 0.05; **-significant difference at p ≤ 0.01.
Table 4 .
The effect of sowing date on plant height of soybean plants (2016-2019). By ANOVA with the Tukey test at p ≤ 0.05; different capital letters show statistical significance.
Table 5 .
The effect of sowing date on the height of the first pod of soybean plants (2016-2019). A, B, C, D, E, F: by ANOVA with the Tukey test at p ≤ 0.05; different capital letters show statistical significance.
Table 6 .
The effect of sowing date on number of pods per plant of soybean plants (2016-2019).
A, B, C, D, E, F, G: by ANOVA with the Tukey test at p ≤ 0.05; different capital letters show statistical significance.
Table 7 .
The effect of sowing date on number of seeds per pod of soybean plants (2016-2019).
A, B, C: by ANOVA with the Tukey test at p ≤ 0.05; different capital letters show statistical significance.
Table 8 .
The effect of sowing date on thousand seed weight of soybean plants (2016-2019).
A, B, C, D, a: by ANOVA with the Tukey test at p ≤ 0.05; different capital letters show statistical significance.
"year": 2023,
"sha1": "71817e11673e7b460a3ef36870dbbb141c49e883",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0472/13/12/2199/pdf?version=1700896703",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "eaed3101a862f8eda6430a36174a6c83a9361d5a",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
Haloferax volcanii N-Glycosylation: Delineating the Pathway of dTDP-rhamnose Biosynthesis
In the halophilic archaeon Haloferax volcanii, the surface (S)-layer glycoprotein can be modified by two distinct N-linked glycans. The tetrasaccharide attached to S-layer glycoprotein Asn-498 comprises a sulfated hexose, two hexoses and a rhamnose. While Agl11-14 have been implicated in the appearance of the terminal rhamnose subunit, the precise roles of these proteins have yet to be defined. Accordingly, a series of in vitro assays conducted with purified Agl11-Agl14 showed these proteins to catalyze the stepwise conversion of glucose-1-phosphate to dTDP-rhamnose, the final sugar of the tetrasaccharide glycan. Specifically, Agl11 is a glucose-1-phosphate thymidylyltransferase, Agl12 is a dTDP-glucose-4,6-dehydratase and Agl13 is a dTDP-4-dehydro-6-deoxy-glucose-3,5-epimerase, while Agl14 is a dTDP-4-dehydrorhamnose reductase. Archaea thus synthesize nucleotide-activated rhamnose by a pathway similar to that employed by Bacteria and distinct from that used by Eukarya and viruses. Moreover, a bioinformatics screen identified homologues of agl11-14 clustered in other archaeal genomes, often as part of an extended gene cluster also containing aglB, encoding the archaeal oligosaccharyltransferase. This points to rhamnose as being a component of N-linked glycans in Archaea other than Hfx. volcanii.
Introduction
N-glycosylation, the covalent attachment of oligosaccharides to select asparagine residues, is performed by members of all three domains of life [1][2][3][4][5]. Still, understanding of the archaeal version of this protein-processing event remains relatively limited. In the last decade, however, substantial progress has been realized in deciphering pathways of N-glycosylation in several archaeal species, including the halophile Haloferax volcanii [5].
In Hfx. volcanii, the surface (S)-layer glycoprotein, a well-studied glycoprotein and the sole component of the protein-based shell surrounding the cell, is modified by a pentasaccharide comprising a hexose, two hexuronic acids, a methyl ester of hexuronic acid and mannose. Through a series of genetic and biochemical studies, a series of Agl (archaeal glycosylation) proteins involved in the assembly and the attachment of this glycan to the S-layer glycoprotein Asn-13 and Asn-83 positions was described [6][7][8][9][10][11][12]. Most recently, a second glycan composed of a sulfated hexose, two hexoses and a rhamnose was shown to be N-linked to position Asn-498 of the S-layer glycoprotein [13]. Moreover, whereas the Asn-13- and Asn-83-linked pentasaccharide was identified when cells were grown across a range of NaCl concentrations, the novel Asn-498-bound tetrasaccharide was observed when cells were grown in 1.75 M but not 3.4 M NaCl-containing medium.
Relying on bioinformatics, gene deletions and mass spectrometry, Agl5-Agl15 have been identified as components of the pathway responsible for the assembly of the so-called 'low salt' tetrasaccharide N-linked to S-layer glycoprotein Asn-498 [14]. Based on these studies, Agl11-Agl14 were deemed to be involved in the appearance of the final sugar of the 'low salt' tetrasaccharide, rhamnose, on the dolichol-phosphate carrier upon which the glycan is initially assembled.
Rhamnose, a naturally occurring deoxy-hexose, is found in the L- rather than the D-configuration assumed by most other sugars. In Bacteria, plants and fungi, rhamnose is a common component of the cell wall [15][16][17], and it was also recently found in viruses [18]. At present, two pathways for synthesizing nucleotide-activated rhamnose are known. In Bacteria, RmlA, RmlB, RmlC and RmlD act sequentially to convert glucose-1-phosphate and deoxy-thymidine triphosphate (dTTP) into thymidine diphosphate (dTDP)-rhamnose [19,20]. Specifically, RmlA, the first enzyme of the pathway, is a glucose-1-phosphate thymidylyltransferase that combines thymidine monophosphate with glucose-1-phosphate to create dTDP-glucose. RmlB, a dTDP-glucose-4,6-dehydratase, then catalyzes the oxidation and dehydration of dTDP-glucose to form dTDP-4-keto-6-deoxy-glucose. RmlC, a dTDP-4-dehydro-6-deoxy-glucose-3,5-epimerase, next performs a double epimerization at the C3 and C5 positions of the sugar. Finally, RmlD, a dTDP-4-dehydrorhamnose reductase, catalyzes the last step of the pathway, namely reduction of the C4 keto group of the sugar to yield dTDP-rhamnose. In plants, uridine diphosphate (UDP)-rhamnose rather than dTDP-rhamnose is generated by RHM (UDP-L-rhamnose synthase), a single polypeptide that contains all of the enzymatic activities required [21]. Here, UDP-glucose is converted to UDP-4-keto-6-deoxy-glucose by an enzymatic activity similar to bacterial RmlB. Next, and in contrast to the bacterial process, whereby RmlC and RmlD operate sequentially to generate dTDP-rhamnose, plants instead rely on nucleotide-rhamnose synthase/epimerase-reductase, a bifunctional enzyme mediating both the epimerization and reduction reactions that lead to the biosynthesis of UDP-rhamnose [21][22][23]. More recently, the same pathway was shown to catalyze UDP-rhamnose biogenesis in large DNA viruses [18]. The two pathways for nucleotide-activated rhamnose biosynthesis are depicted in Fig 1.

Although rhamnose has been identified in several archaeal species [13,24,25], studies addressing rhamnose biosynthesis in Archaea are few. In Sulfolobus tokodaii, one of three RmlA homologues was shown to possess sugar-1-phosphate nucleotidyltransferase activity using either glucose-1-phosphate or N-acetylglucosamine-1-phosphate and all four deoxyribonucleoside triphosphates or UTP as substrates [26], while S. tokodaii RmlB and RmlD were reported to be functionally identical to their bacterial counterparts [27]. At the same time, the crystal structure of Methanothermobacter thermautotrophicus RmlC has been reported [28], as have those of S. tokodaii RmlC and RmlD (PDB 2B9U and 2GGS, respectively). Still, it remains to be determined whether rhamnose is used for glycosylation by these species. Thus, to better understand the biosynthesis of this deoxy-hexose in Archaea, the present study addressed the involvement of Hfx. volcanii Agl11-Agl14 in the biogenesis of nucleotide-activated rhamnose. In addition, the presence and genomic distribution of homologues of genes involved in such activity across the Archaea were considered.
Plasmid construction
To generate a plasmid encoding CBD-Agl11, the agl11 gene was PCR-amplified using primers designed to introduce NdeI and KpnI restriction sites at the 5′ and 3′ ends of the gene, respectively (primers listed in Table 1). The amplified fragment was digested with NdeI and KpnI and ligated into plasmid pWL-CBD, previously digested with the same restriction enzymes, to produce plasmid pWL-CBD-Agl11. Plasmid pWL-CBD-Agl11 was then introduced into Hfx. volcanii cells. Plasmids encoding CBD-Agl12, CBD-Agl13 and CBD-Agl14 were similarly generated, using the primers listed in Table 1, and also introduced into Hfx. volcanii parent strain cells.
Protein purification
To purify the CBD-tagged proteins, 1 ml aliquots of Hfx. volcanii cells transformed to express CBD-Agl11, CBD-Agl12, CBD-Agl13 or CBD-Agl14 were grown to mid-logarithmic phase, harvested and resuspended in 1 ml solubilization buffer (1% Triton X-100, 1.75 M NaCl, 50 mM Tris-HCl, pH 7.2) containing 3 mg/ml DNaseI and 0.5 mg/ml PMSF. The solubilized mixture was nutated for 20 min at 4 °C, after which time 50 μl of a 10% (w/v) solution of cellulose was added. After a 120 min nutation at 4 °C, the suspension was centrifuged (5,000 rpm for 5 min), the supernatant was discarded and the cellulose pellet was washed four times with wash buffer containing 1.75 M NaCl, 50 mM Tris-HCl, pH 7.2. After the final wash, the cellulose beads were centrifuged (5,000 rpm for 5 min), the supernatant was removed and the pellet, containing cellulose beads linked to CBD-tagged Agl11, Agl12, Agl13 or Agl14, was either subjected to further in vitro assays or resuspended in SDS-PAGE sample buffer, boiled for 5 min, centrifuged (5,000 rpm for 5 min) and subjected to SDS-PAGE and Coomassie Brilliant Blue staining.
Agl11 activity assay
Cellulose-bound CBD-Agl11 was resuspended in reaction buffer containing 1.75 M NaCl, 5 mM MgCl2, 50 mM Tris-HCl, pH 7.2 and incubated with 5 mM glucose-1-phosphate and 5 mM dTTP (or UTP) at 42 °C. As controls, glucose-1-phosphate, dTTP (or UTP) or both were omitted from the reaction. Aliquots were removed immediately following substrate addition and at several time points up to 40 min and incubated for 10 min at room temperature (RT) with 1 U/ml of pyrophosphatase. The extent of phosphate release was determined using a malachite green-based assay [29]. Briefly, 10 μl aliquots were incubated for 5 min at RT with 850 μl of a malachite green solution, followed by addition of 100 μl of 34% citric acid and incubation for an additional 40 min at RT. Phosphate concentration was calculated using a standard curve based on the 660 nm absorbance of 0-1000 μM phosphate solutions.
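The standard-curve step can be made concrete with a linear fit and its inversion; the A660 readings below are invented values for illustration.

```python
import numpy as np

# Hypothetical A660 readings for 0-1000 uM phosphate standards
std_conc_uM = np.array([0.0, 125.0, 250.0, 500.0, 750.0, 1000.0])
std_a660 = np.array([0.02, 0.11, 0.19, 0.37, 0.55, 0.72])

slope, intercept = np.polyfit(std_conc_uM, std_a660, 1)  # A660 = m*[Pi] + b

def phosphate_uM(a660: float) -> float:
    """Invert the linear standard curve to get phosphate concentration [uM]."""
    return (a660 - intercept) / slope

print(f"{phosphate_uM(0.30):.0f} uM phosphate released")  # sample with A660 = 0.30
```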
Thin layer chromatography
To perform TLC, 10 μl of the products generated in the Agl11 assay described above were spotted onto a Partisil K6 silica gel plate (Whatman, Maidstone, UK). In addition, 10 μl of 2 mM glucose-1-phosphate and dTDP-D-glucose solutions were applied to the same plate as standards. The plates were developed in 95% ethanol/1 M acetic acid (5:2, pH 7.5). The separated spots were detected by spraying the plate with orcinol monohydrate solution (0.1% in 5% H2SO4 in ethanol) and then heating the plate for 10 min at 120 °C.
Agl12 activity assay
The dTDP-D-glucose-4,6-dehydratase activity of Agl12 was assayed as described previously [30]. Briefly, cellulose-bound CBD-Agl12 was resuspended in reaction buffer containing 1.75 M NaCl, 5 mM MgCl2, 50 mM Tris-HCl, pH 7.2 and incubated at 42 °C with 4 mM dTDP-D-glucose or UDP-D-glucose. Aliquots were removed immediately following substrate addition and at several time points up to 40 min and mixed with 750 μl of 100 mM NaOH. Each mixture was incubated for 20 min at 42 °C, and absorbance at 320 nm was measured.
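Converting the A320 readout to a product concentration is a Beer-Lambert calculation; the extinction coefficient below is an assumed, literature-style value for the alkali-treated keto sugar, not one given in this paper.

```python
EPSILON_320 = 4800.0  # M^-1 cm^-1, assumed for dTDP-4-keto-6-deoxy-glucose in NaOH
PATH_CM = 1.0         # cuvette path length

def keto_product_uM(a320: float) -> float:
    """Concentration of the keto product in the cuvette, from A320 (Beer-Lambert)."""
    return a320 / (EPSILON_320 * PATH_CM) * 1e6  # M -> uM

print(f"{keto_product_uM(0.24):.0f} uM product in the cuvette")  # A320 = 0.24 -> ~50 uM
```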
Combined Agl13 and Agl14 activity assay
Cellulose-bound CBD-Agl13 and CBD-Agl14 were resuspended in reaction buffer containing 1.75 M NaCl, 50 mM Tris-HCl, pH 7.2 and incubated with 4 mM dTDP-4-keto-6-deoxy-glucose and 10 mM NADPH at 42 °C for 20 h. As controls, CBD-Agl13, CBD-Agl14 or NADPH was omitted. After incubation, the mixtures were centrifuged (5,000 rpm for 5 min), and the supernatant was examined by nano-ESI/MS analysis. For nano-ESI/MS analysis, a 10 μl aliquot was dried using a SpeedVac apparatus, resuspended in 10 μl methanol:water (1:1; v/v) containing 10 mM ammonium acetate and injected into a LTQ Orbitrap XL mass spectrometer using static medium NanoES Spray capillaries (Thermo Fisher Scientific, Bremen, Germany). Mass spectra were obtained in the negative mode.

Table 1. Primers used in this study.
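The peak assignments reported in the Results can be sanity-checked from monoisotopic masses. The molecular formulas and masses below are standard values stated here as assumptions, not taken from the paper.

```python
# Monoisotopic [M-H]- m/z values for the two nucleotide sugars followed by MS
PROTON = 1.00728  # mass of H+ (Da)

compounds = {
    "dTDP-rhamnose (C16H26N2O15P2)": 548.0810,
    "dTDP-4-keto-6-deoxy-glucose (C16H24N2O15P2)": 546.0654,
}
for name, mono_mass in compounds.items():
    print(f"{name}: [M-H]- m/z = {mono_mass - PROTON:.2f}")
# -> 547.07 and 545.06, matching the peaks described in the Results
```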
Reverse transcriptase polymerase chain reaction (RT-PCR)
RT-PCR was performed as previously described [31]. Briefly, RNA from Hfx. volcanii cells was isolated using TRIzol reagent (Invitrogen, Carlsbad, CA). cDNA was prepared for each sequence from the corresponding RNA (2 μg) using random hexamers (150 ng) in a SuperScript III First-Strand Synthesis System for RT-PCR (Invitrogen). The single-stranded cDNA was then used as PCR template in a reaction containing forward and reverse primers to sequences within agl13 and agl11, respectively (Table 1). In control reactions, genomic DNA or RNA served as template, or no nucleic acid was added to the reaction. The generation of PCR products was assessed by electrophoresis in 1% agarose followed by detection using ethidium bromide.
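A quick size check clarifies why a roughly 1500 bp product indicates co-transcription of agl13 and agl11: the gene lengths come from the Results section below, and the exact amplicon is somewhat shorter because both primers anneal within the genes.

```python
# Upper bound on the agl13-agl11 co-transcript amplicon (gene sizes from the Results)
agl13_bp = 471
agl11_bp = 1074
print(agl13_bp + agl11_bp)  # 1545 bp; a ~1500 bp band is consistent with this
```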
Bioinformatics analysis
Predicted archaeal RmlABCD proteins were identified using Hfx. volcanii Agl11, Agl12, Agl13 and Agl14 as query in a BLAST search for RmlA, RmlB, RmlC and RmlD homologues, respectively. Archaeal RmlA-, RmlB-, RmlC- and RmlD-encoding genes were deemed as being clustered with the oligosaccharyltransferase-encoding aglB gene based upon the presence of these genes within previously identified aglB-based glycosylation gene clusters [32], or when rmlA, rmlB, rmlC and rmlD were clustered and found 10 genes or less away from clusters containing aglB and other glycosylation- or sugar processing-related genes.

Figure 2 (legend). Cellulose-bound CBD-Agl11 or cellulose beads alone (blank) were resuspended in reaction buffer and incubated in the presence of dTTP and glucose-1-phosphate, with each substrate separately or without both substrates. Aliquots removed immediately after substrate addition and up to 40 min later were incubated with pyrophosphatase and the extent of phosphate release was measured [29]. The results represent the average of triplicates ± standard deviation for one of three repeats of the experiment. C. The assay products obtained after a 5 h incubation at 42 °C were separated by TLC, along with glucose-1-phosphate and dTDP-glucose standards, as described in Experimental Procedures.
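The genome screen can be sketched as two steps: a blastp run per query (assuming NCBI BLAST+ is installed; the file and database names are hypothetical) and a positional test implementing the "10 genes or less" clustering criterion.

```python
import subprocess

# Step 1: blastp with each Agl protein as query (file/database names are hypothetical)
for query in ["agl11.faa", "agl12.faa", "agl13.faa", "agl14.faa"]:
    subprocess.run(["blastp", "-query", query, "-db", "archaeal_proteins",
                    "-evalue", "1e-20", "-outfmt", "6",
                    "-out", query.replace(".faa", "_hits.tsv")], check=True)

# Step 2: flag rmlABCD genes lying within 10 genes of an aglB-based cluster,
# with gene positions given as ordinal indices along the chromosome
def near_aglb_cluster(rml_positions, aglb_cluster_positions, max_gap=10):
    return any(abs(r - a) <= max_gap
               for r in rml_positions for a in aglb_cluster_positions)

print(near_aglb_cluster([120, 121, 122, 123], [131, 132, 133]))  # -> True
```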
Results
Agl11 is a glucose-1-phosphate thymidylyltransferase/uridylyltransferase

As a first step towards defining the precise function of Agl11, a BLAST homology-based search was conducted using Hfx. volcanii Agl11 as query. This revealed the homology of Agl11 to RmlA, the bacterial glucose-1-phosphate thymidylyltransferase (EC 2.7.7.24) that catalyzes the formation of dTDP-glucose from dTTP and glucose-1-phosphate [33]. For instance, Agl11 shared 53% identity, with 100% coverage and an E-value of 8e-120, with RmlA from the bacterium Sulfobacillus acidophilus TPY. To biochemically confirm that Agl11 indeed acts as does RmlA, Hfx. volcanii cells were transformed with a plasmid encoding Agl11 bearing an N-terminally fused CBD tag [34]. The presence of the CBD tag allows for cellulose-based purification compatible with the hypersaline conditions in which Hfx. volcanii grows. PCR amplification using DNA extracted from the transformed strain as template, together with forward and reverse primers directed against regions within the CBD and agl11 sequences, respectively, confirmed uptake of the plasmid (not shown). Cellulose-based purification of an extract prepared from the transformed cells captured a single 55 kDa protein, corresponding to the predicted molecular mass of the 17 kDa CBD moiety and the 38 kDa Agl11 protein (Fig 2A).
The predicted glucose-1-phosphate thymidylyltransferase activity of purified Agl11 was next considered. Glucose-1-phosphate thymidylyltransferase, like RmlA, transfers the deoxy-thymidine monophosphate (dTMP) group of dTTP to glucose-1-phosphate to yield dTDP-glucose and pyrophosphate. Hence, the action of Agl11 as a glucose-1-phosphate thymidylyltransferase was tested using a malachite green-based assay to detect the formation of phosphate following the conversion of pyrophosphate into inorganic phosphate upon addition of pyrophosphatase [29]. The assay revealed that Agl11 was able to generate phosphate only when incubated with dTTP and glucose-1-phosphate, but not with either substrate alone or without both substrates (Fig 2B). Thin layer chromatography (TLC) was also employed to further confirm the glucose-1-phosphate thymidylyltransferase activity of Agl11. In these experiments, the product generated upon incubation of Agl11 with dTTP and glucose-1-phosphate migrated to the same position as a dTDP-glucose standard (Fig 2C). Similar results were obtained when UTP was used in place of dTTP (not shown). As such, Agl11 acts as a glucose-1-phosphate thymidylyltransferase and a glucose-1-phosphate uridylyltransferase, namely the first enzyme in the biosynthesis of nucleotide-activated rhamnose in Bacteria and in plants, respectively.
To test whether Agl12 indeed acts as a dTDP-glucose-4,6-dehydratase, working downstream of Agl11 in the biosynthesis of dTDP-rhamnose, Hfx. volcanii cells were transformed to express CBD-tagged Agl12. Again, successful transformation was verified by PCR amplification using DNA from the transformed strain as template, together with forward and reverse primers directed against regions within the CBD and agl12 sequences, respectively (not shown). Cellulose-based purification of an extract prepared from Hfx. volcanii cells transformed to express CBD-Agl12 captured a 51 kDa species, corresponding to the predicted molecular mass of the 17 kDa CBD moiety and the 34 kDa Agl12 protein (Fig 3A).
Cellulose-purified CBD-Agl12 was incubated in the absence or presence of dTDP-glucose, the product of the Agl11-catalyzed reaction, and the formation of dTDP-4-keto-6-deoxy-glucose was assessed spectrophotometrically by following the increase in absorption at 320 nm, indicative of the formation of the product keto group. dTDP-4-keto-6-deoxy-glucose was only generated when Agl12 was combined with dTDP-glucose, confirming that Agl12 is indeed a dTDP-glucose-4,6-dehydratase, like RmlB (Fig 3B). When CBD-Agl12 was instead combined with UDP-glucose, the substrate used when UDP-rhamnose is generated, no UDP-4-keto-6-deoxy-glucose was formed (Fig 3C).
To determine whether Agl13 and Agl14 indeed participate in the biosynthesis of dTDP-rhamnose by acting as RmlC and RmlD, respectively, Hfx. volcanii cells were transformed to express CBD-tagged versions of Agl13 and Agl14. Here as well, each transformation was verified by PCR amplification using DNA from the transformed strain as template, together with forward and reverse primers directed against regions within the CBD and agl13, or the CBD and agl14 sequences, respectively (not shown). Following transformation, cellulose-based purification of extracts prepared from Hfx. volcanii cells transformed to express either CBD-Agl13 or CBD-Agl14 was conducted. Following SDS-PAGE separation of cellulose-captured proteins, only bands corresponding to CBD-Agl13 or CBD-Agl14 were observed (Fig 4 inset, left and right panels, respectively).
The ability of Agl13 and Agl14 to act as RmlC and RmlD, respectively, in the production of dTDP-rhamnose was next considered in a combined assay. Briefly, dTDP-4-keto-6-deoxy-glucose was incubated together with CBD-tagged Agl13 and Agl14, along with NADPH, the substrate for the dehydrogenase reaction putatively catalyzed by Agl14. The appearance of dTDP-rhamnose was revealed by nano-electrospray ionization mass spectrometry (nano-ESI/MS) [18], since initial attempts to detect dTDP-rhamnose formation spectrophotometrically as previously described [38,39] were unsuccessful. Nano-ESI/MS analysis revealed the formation of an m/z 547.07 peak corresponding to dTDP-rhamnose (m/z 547.07 calculated [M-H]− mass) and a peak at m/z 569.05 corresponding to the sodium adduct (m/z 569.05 calculated [M-2H+Na]− mass) (Fig 4). In the absence of CBD-Agl13, CBD-Agl14 or NADPH, peaks corresponding to dTDP-4-keto-6-deoxy-glucose were observed (m/z 545.06 calculated [M-H]− mass); no peaks corresponding to dTDP-rhamnose were seen (Fig S1A-C, respectively). In the absence of dTDP-4-keto-6-deoxy-glucose, no peaks corresponding to either sugar were detected (Fig S1D).
agl11 and agl13 are co-transcribed
To obtain further insight into the actions of Agl11-Agl14, the transcription of each gene was addressed. Specifically, given that agl11 is found adjacent to agl13 in the Hfx. volcanii genome and that both are similarly oriented (Fig 5A), the co-transcription of these genes was considered. Accordingly, RT-PCR amplification was performed using primers directed at regions corresponding to the beginning of agl13 and the end of agl11, together with cDNA produced from RNA isolated from Hfx. volcanii cells. A PCR product of approximately 1500 bp, consistent with the genomic sizes of agl13 (471 bp) and agl11 (1074 bp), was observed (Fig 5B).

Given the identification of an rmlABCD gene cluster in Hfx. volcanii, similar clusters were sought in other available archaeal genomes. Towards this aim, the 166 completed archaeal genomes listed at the Joint Genome Institute Database for Integrated Microbial Genomes (January, 2014) were subjected to a BLAST search seeking homologues of Hfx. volcanii Agl11, Agl12, Agl13 and Agl14. In addition, these genomes were also scanned for genes encoding proteins listed as EC 2.7.7.24 (RmlA), EC 4.2.1.46 (RmlB), EC 5.1.3.13 (RmlC) or EC 1.1.1.133 (RmlD). In this manner, 69 genomes were shown to encode an rmlABCD gene cluster, including Hfx. volcanii. Of these, 16 included rmlABCD as part of a previously defined larger cluster anchored by aglB, encoding the archaeal oligosaccharyltransferase [32] (Table 2). In addition, 19 species were found to encode partial rmlABCD gene clusters, where two or three of these genes are clustered (Table S1). Of these species, four (Methanobrevibacter ruminantium, Methanosarcina acetivorans, Sulfolobus islandicus Y.G.57.14 and Sulfolobus solfataricus P2) also encode a complete rmlABCD gene cluster.
Discussion
In addition to the pentasaccharide linked to select Asn residues of the Hfx. volcanii S-layer glycoprotein, it was recently shown that at least one additional Asn can be modified by a novel tetrasaccharide [14]. While many of the enzymes involved in the assembly of the N-linked pentasaccharide have been characterized biochemically [9,10,12,40], virtually nothing is known of the enzymes responsible for the assembly of the N-linked tetrasaccharide. As such, this study reports the first biochemical analysis of enzymes contributing to this novel N-glycosylation pathway. The results reveal that Agl11 is a glucose-1-phosphate thymidylyltransferase, Agl12 is a dTDP-glucose-4,6-dehydratase, Agl13 is a dTDP-4-dehydro-6-deoxy-glucose-3,5-epimerase and Agl14 is a dTDP-4-dehydrorhamnose reductase.
While rhamnose is a common component of both the bacterial and the plant cell wall, different biosynthetic pathways are employed in each case, leading to the generation of differentially nucleotide-activated species. At the same time, it is not clear which of these strategies Archaea employ for nucleotide-activated rhamnose biogenesis. Indeed, numerous examples of Archaea relying on the same biochemical pathways as used by either their bacterial or eukaryal counterparts have been reported, as have examples of archaeal pathways comprising selected aspects of the parallel bacterial and eukaryal processes or even biosynthetic pathways unique to this form of life [41][42][43][44][45][46][47][48][49][50]. In the case of Hfx. volcanii, the current study revealed that Agl11-Agl14 are homologous to RmlA-D, enzymes that catalyze the conversion of glucose-1-phosphate to dTDP-rhamnose in Bacteria [19,20]. Indeed, examination of available archaeal genomes detected the presence of RmlA-D in numerous species, pointing to Archaea and Bacteria as relying on the same route for nucleotide-activated rhamnose generation. At the same time, no gene encoding a homologue of the bifunctional nucleotide-rhamnose synthase/epimerase-reductase used in eukaryal UDP-rhamnose biosynthesis was detected in Archaea. Still, the fact that several archaeal species encode only a partial rmlABCD cluster (Table S1) raises the possibility that those enzymes present are recruited for the synthesis of molecules other than dTDP-rhamnose.

Table 2 (fragment): rmlABCD locus tags listed for Thermococcus barophilus (TERMP_02079, TERMP_02080, TERMP_02084, TERMP_02089, TERMP_02078), Thermococcus onnurineus (TON_1842, TON_1843, TON_1848, TON_1851, TON_1820), Thermococcus sibiricus (TSIB_2044, TSIB_2045, TSIB_2047, TSIB_2048, TSIB_0007), Thermogladius cellulolyticus (TCELL_0180, TCELL_0179, TCELL_0177, TCELL_0178), Thermogladius shockii (Des1633_00001920, Des1633_00001910, Des1633_00001890, Des1633_00001900), Thermoproteus tenax (TTX_1336, TTX_1335, TTX_1333, TTX_1334) and Thermosphaera aggregans (Tagg_0563, Tagg_0562, Tagg_0560, Tagg_0561). 1 Clustering with aglB is defined as occurring when rmlABCD are part of a gene cluster containing aglB as described in ref. [39] or ≤10 genes away from such aglB-based clusters.
In addition to determining the route of nucleotide-activated rhamnose biosynthesis in Hfx. volcanii, the present study also represents the first biochemical characterization of components of a second N-glycosylation pathway recently identified in this species [14]. Based on earlier work revealing the presence of one or more genes encoding AglB, the oligosaccharyltransferase of the archaeal N-glycosylation machinery, in all but two of 168 genomes considered, it would appear that this protein-processing event is common in Archaea [40]. Yet, the diverse composition of the few N-linked archaeal glycans characterized to date points to archaeal N-glycosylation as largely relying on species-specific pathways [5]. The finding that some species contain rmlABCD homologues as part of a larger gene cluster containing aglB and other sugar-related genes implies that as in Hfx. volcanii, rhamnose is a component of N-linked glycans in these other Archaea as well. Continued investigation into archaeal protein glycosylation will test this prediction.
Finally, the simultaneous modification of the same protein by two completely different N-linked glycans has only been reported to date in two haloarchaeal species, namely Halobacterium salinarum and Hfx. volcanii [13,51]. Of these, it is only in Hfx. volcanii that two N-glycosylation pathways have been identified [14]. Moreover, it was shown that N-glycosylation by both pathways occurs as a function of salt levels in the growth medium [13]. At present, it is not clear why the Hfx. volcanii S-layer glycoprotein is modified by two distinct N-linked glycans in 1.75 M NaCl-containing medium but not when cells are grown at higher salinity, nor what advantages such differential N-glycosylation offers the cell. The results obtained in this study will help answer these and other outstanding questions related to Hfx. volcanii N-glycosylation.
Figure S1 Agl13, Agl14 and NADPH are required for the conversion of dTDP-4-keto-6-deoxy-glucose into dTDP-rhamnose. Reactions were conducted as described in the legend to Figure 4, albeit in the absence of cellulose-bound CBD-Agl13 (A), CBD-Agl14 (B) or NADPH (C). In each case, nano-ESI/MS analysis detected peaks corresponding to dTDP-4-keto-6-deoxy-glucose (m/z 545.06, calculated [M-H]− mass) but not peaks corresponding to dTDP-rhamnose (m/z 547.07, calculated [M-H]− mass). When the reaction was conducted in the absence of dTDP-4-keto-6-deoxy-glucose (D), no peaks corresponding to either sugar were detected. As standards, 10 ml of 1 mM dTDP-4-keto-6-deoxy-glucose (E) and dTDP-rhamnose (F) solutions were examined by nano-ESI/MS. (TIF) | 2017-05-29T18:19:23.945Z | 2014-05-15T00:00:00.000 | {
"year": 2014,
"sha1": "a3f91f5cdc37189c10ce1c9b9b163d6d524c70c4",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0097441&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3f91f5cdc37189c10ce1c9b9b163d6d524c70c4",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
263210041 | pes2o/s2orc | v3-fos-license | Safety attitudes culture remain stable in a transplant center: evidence from the coronavirus pandemic
Background We sought to understand how safety culture may evolve during disruption, by using the COVID-19 pandemic as an example, to identify vulnerabilities in the system that could impact patient outcomes. Methods A cross-sectional analysis of transplant personnel at a high-volume transplant center was conducted using the Safety Attitudes Questionnaire (SAQ). Survey responses were scaled and evaluated pre- and post-COVID-19 (2019 and 2021). Results Two hundred and thirty-eight responses were collected (134 pre-pandemic and 104 post-pandemic). Represented organ groups included: kidney (N = 89; 38%), heart (N = 18; 8%), liver (N = 54; 23%), multiple (N = 66; 28%), and other (N = 10; 4%). Responders primarily included nurses (N = 75; 34%), administration (N = 50; 23%), and physicians (N = 24; 11%). Workers had high safety, job satisfaction, stress recognition, and working conditions satisfaction (score >75) both before and after the pandemic, with overlapping responses across both timepoints. Stress recognition, safety, and working conditions improved post-COVID-19, but teamwork, job satisfaction, and perceptions of management were somewhat negatively impacted (all p > 0.05). Conclusions Despite the serious health care disruptions induced by the pandemic, high domain ratings were notable and largely maintained in a high-volume transplant center. The SAQ is a valuable tool for healthcare units and can be used in longitudinal assessments of transplant culture of safety as a component of quality assurance and performance improvement initiatives.
Introduction
A landmark report in 2000 estimated that nearly 100,000 deaths in the United States annually were related to medical errors, calling for actions to reduce medical errors and to develop a culture of safety (1). In the wake of its publication, hospital systems worldwide sought solutions (2)(3)(4). Many drew parallels between healthcare and the aviation industry and compared the operating room to the cockpit (5,6). Adoption of checklists into medicine helped standardize workflows, and there was less acceptance of deviation from the norm. The field of nursing has long embraced these changes and championed a culture of "speaking up" in the interest of patient safety (7). Over time, the fading of authority hierarchies has improved healthcare safety, with evidence that high levels of hospital safety were associated with reductions in readmission rates, mortality rates, and length of stay (8)(9)(10). Consequently, improvements in patient outcomes lag the establishment of a robust safety climate.
The culture of safety requires the combined effort of all stakeholders, including nurses, physicians, patients, policy makers, and more. In the field of organ transplantation, multidisciplinary care is the norm. Collaboration between medical subspecialists, transplant surgeons, perioperative care providers, nurses, social workers, dieticians, administrative teams, and many others is required. Despite the complexity of these interactions, there is a paucity of data on transplant health workers' perceptions of the culture of safety. Additionally, safety culture is subject to erosion by disruptive forces. External events, such as pandemics, or internal events, such as major organizational structural changes, may impact how teams function on the ground day-to-day. Their attitudes toward safe care delivery may change as a result.
We evaluated safety culture in a high-volume transplant center using the Safety Attitudes Questionnaire (SAQ), a validated tool used in several health care contexts, before and during the coronavirus disease-19 (COVID) pandemic (10)(11)(12). We aim to describe the changes in worker perception of institutional safety attitudes at a single high-volume transplant center in the context of the COVID pandemic, which was highly disruptive to healthcare overall. We hypothesized that the SAQ can be an effective longitudinal tool in the field of organ transplantation, used to target further quality improvement opportunities and to identify vulnerabilities within large teams.
Data collection
A prospective survey study was conducted at a single high-volume transplant center to assess institutional culture of safety during a time of major healthcare disruption. The first survey was administered before the declaration of the public health emergency in March 2020. The second survey was administered 18 months later, in 2021. All transplant staff affiliated with our high-volume multi-organ transplant center (heart, liver, kidney, pancreas transplants), including medical and surgical attending-level staff, medical and surgical fellows, resident physicians, outpatient nurse coordinators from all phases of transplant, advanced transplant providers (physician assistants and nurse practitioners), pharmacists, dieticians, social workers, administrative staff, and other affiliated personnel were eligible and invited to anonymously and voluntarily complete the survey. Surveys were distributed by email as a REDCap link. Participants were provided with two reminders for survey completion, spaced one week apart. Employees were considered non-responders if no contact was established within 30 days of initial contact. All survey responses were digital. Employment records show that the organization supported 180 employees at the time of the first survey and 253 at the time of the second survey.
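Combining these employment figures with the response counts reported in the Results (134 and 104 responses, respectively) yields the approximate response rates underlying the selection-bias caveat discussed later. The short calculation below only restates numbers already given in the text.

```python
# Response rates derived from figures reported in the text:
# 134 responses from 180 eligible staff pre-pandemic,
# 104 responses from 253 eligible staff intra-pandemic.
for label, responses, staff in [("pre-pandemic", 134, 180),
                                ("intra-pandemic", 104, 253)]:
    print(f"{label}: {responses}/{staff} = {responses / staff:.0%}")
# pre-pandemic: 134/180 = 74%
# intra-pandemic: 104/253 = 41%
```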
Survey tool and data analysis
The SAQ is a 60-item questionnaire that takes approximately 10-15 min to complete on average (10). It was developed in 2006 and has been validated in many languages. It aims to assess six core factors: teamwork climate, job satisfaction, perceptions of management, safety climate, working conditions, and stress recognition. Our version included transplant-specific questions, which were analyzed with the teamwork climate questions. Each of the questions is answered on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree), with some items negatively worded. Negatively worded questions were reverse scored. All results were linearly transformed to a score from 0 (worst) to 100 (best). Domain scores were compared before and during the COVID pandemic. Additionally, responders who scored ≥75 were compared against those who scored <75 both pre- and intra-pandemic. This cutoff was chosen based on previous literature demonstrating that scores ≥75 were associated with excellent safety (10,13,14). Descriptive analysis was performed using ANOVA and Chi-square for demographic data. P < 0.05 was considered statistically significant. The study was approved by the institutional review board. Supplementary S1 contains the survey tool used for our study.
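The scoring just described (reverse-scoring negatively worded items, then linearly rescaling the 1-5 Likert responses to 0-100 and averaging within a domain) reduces to simple arithmetic. The sketch below is a minimal illustration of that transform; the three-item "domain" is an invented placeholder, not actual SAQ content.

```python
def rescale(likert):
    """Map a 1-5 Likert response linearly onto 0 (worst) - 100 (best)."""
    return (likert - 1) / 4 * 100

def score_item(likert, negatively_worded=False):
    if negatively_worded:
        likert = 6 - likert   # reverse-score: 1 <-> 5, 2 <-> 4
    return rescale(likert)

# Hypothetical three-item domain: two positive items and one negative item.
responses = [(4, False), (4, False), (2, True)]
domain = sum(score_item(v, neg) for v, neg in responses) / len(responses)
print(domain, domain >= 75)   # 75.0 True -> counted with the >=75 group
```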
Scores ≥75 were compared against those <75 both before and during the pandemic. There were more responses with a score of ≥75 within each time point and across all domains except for satisfaction with working conditions. For working conditions, 53 responses (40%) were ≥75 pre-COVID, which increased to 44 (43%) intra-COVID (p = 0.66). Similar increases were seen in safety climate, job satisfaction, and stress recognition (Figure 2). Pre-COVID, 112 of responders (84%) scored in the top quartile for safety climate, which increased to 87% (n = 90; p = 0.53). A higher percentage of responders were also more satisfied with their jobs intra-pandemic, with 84% (n = 87) scoring ≥75 compared to before the pandemic (n = 109, 81%; p = 0.64). Stress recognition also improved; 59% (n = 61) scored ≥75 for stress recognition compared to 51% (n = 68; p = 0.19). Teamwork climate satisfaction remained the same through the pandemic, with 76% scoring ≥75 both pre- (n = 102) and intra-pandemic (n = 79). For perception of management, the top scorers decreased from 72% (n = 96) to 68% (n = 71; p = 0.57). These changes were not statistically significant. Figure 3 demonstrates the degree of overlap between before- and during-COVID SAQ scores for individual domains.
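Each of these pre- vs. intra-pandemic comparisons is a 2x2 proportion test of the kind named in the Methods. As one worked example, the sketch below feeds the safety-climate counts quoted above (112 of 134 responders ≥75 pre-pandemic, 90 of 104 intra-pandemic) to SciPy's chi-square routine; with the continuity correction disabled, the result lands near the reported p = 0.53.

```python
from scipy.stats import chi2_contingency

# Safety climate: responders scoring >=75 vs. <75, pre- vs. intra-pandemic.
table = [[112, 134 - 112],   # pre-pandemic:  112 of 134 scored >= 75
         [90, 104 - 90]]     # intra-pandemic: 90 of 104 scored >= 75
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")   # chi2 ≈ 0.40, p ≈ 0.53
```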
Discussion
Over the past few years, COVID has proved to be highly disruptive to organ transplantation, which is a gross understatement of the day-to-day reality in transplant centers. Clinical processes for deceased donors, living donors, transplant candidates, and transplant recipients had to shift monumentally overnight. Concerns about clinical outcomes and patient safety created collective anxiety experienced at all levels of transplant teams. This disruption is unprecedented, but serves as a poignant example of how change can tear the fabric of transplant patient safety (15). This cross-sectional study, which measured safety attitudes in a high-volume academic transplant center, demonstrated that safety culture can remain stable within a transplant team during periods of disruption in health care delivery.
The SAQ has been used in multiple medical contexts, including after medical team training, in the intensive care unit, trauma, pharmacy, primary care, the operating room, and in the context of the pandemic (16)(17)(18)(19)(20). In the era of COVID, Denning et al. showed that nurses had lower SAQ scores after the pandemic, especially in the working conditions and job satisfaction categories (12). Interestingly, in contrast to our results, which showed the stability of staff satisfaction, the high pre-pandemic nursing satisfaction scores were not protective of their intra-pandemic evaluation results, which the authors conclude to be largely related to increased rates of burnout and decreased supportive initiatives available. In Taiwan, a group showed substantial improvement in all metrics of the SAQ when compared to perceptions at the beginning of the pandemic. While their results could be reflective of recovery from early pandemic-induced pressures, they could also potentially be explained by government reduction in workload for healthcare professionals as Taiwan (and the world) transitioned out of a state of emergency (21). Clearly, global policies can dictate institutional culture of safety, and different guidelines can elicit opposite effects. Despite major changes to transplantation workflow, including the limitation of transplant surgeries to life-threatening situations only, suspension of living donations, restriction of procurements to local hospitals to prevent transmission of the virus via long-distance air travel, and numerous updates as we learned more about the virus, our results show that the culture of safety at our high-volume transplant center was resistant to the unprecedented pressures induced by the pandemic (22).
[Figure caption: Distribution of domain-specific survey responses pre- and intra-COVID-19. Comparing the pre- and intra-pandemic SAQ domain scores of top-quartile scorers against the rest, there were high levels of satisfaction across five of the six domains pre-pandemic, which was stable when measured intra-pandemic.]
[Figure caption: There was a steady increase in the number of organs transplanted at a single transplant center from 2018 to 2022, with a 1.4-fold increase in total transplant volume in 2022 (n = 807) compared to 2018 (n = 563).]
Our demonstrated stability across all six main domains assessed by the SAQ before and during the disruption caused by the COVID pandemic is surprising but likely attributable to several factors. First, our transplant center has evolved over time into a vast clinical enterprise with stability of personnel in leadership positions in all disciplines, including medical and surgical directors, nursing, social work, and administration. Additionally, the transplant team collectively is deeply familiar with the challenges of rapid growth and high clinical volume. In fact, there was growth in transplant volume during the pandemic: in total, 664 organs were transplanted in 2019, 704 in 2021, and 807 in 2022. Additionally, in keeping with social distancing recommendations, the normalization of teleconferencing relieved time constraints and promoted multidisciplinary communication, allowing for consistent presence of multiple department representatives at daily morning rounds and clinical conferences such as transplant selection conferences, donor selection conferences, quality meetings, organ reviews, and departmental case reviews of adverse events. The daily gathering also facilitated the early creation of a COVID toolkit for our center as well as the ability to deliver frequent updates.
The COVID pandemic is a dramatic example of a disruption that can impact transplant care delivery. This study is highly relevant, as clinical transplant teams in the United States are constantly subject to multiple types of internal and external disruptions. Additionally, the fields of organ donation and transplantation are subject to more regulation than any other field in medicine. New metrics and regulations can significantly disrupt the norms of transplant care within a transplant program as it seeks to adapt to change (23). Poor performance in waitlist mortality, organ acceptance, and intra-transplant outcomes may challenge perceptions of safety within transplant programs (24, 25). New technologies and innovation may bring about several new challenges that impact safety culture (26). Also, both internal and external leadership transitions can be highly disruptive. Many other disruptions can impact transplant care delivery, but for patient safety it is necessary to maintain and improve safety culture in the face of adversity. This study demonstrates that the SAQ can be robustly applied by transplant leaders within their programs as a longitudinal model of safety culture assessment. Culture is one of the hardest areas to change within transplant programs, and the SAQ can provide data to inform leadership and frontline staff on areas of vulnerability.
Our study is limited by its inherent survey-based nature, related to reliance on self-reporting and the limited response rate, which introduces selection bias. The decrease in response rate seen intra-pandemic could be explained by the expansion of transplant employees, who perhaps did not know that they were also eligible to complete the survey, and could also be related to survey fatigue within the institution, work demands, and stress that precluded participation. Though it measures six distinct domains, the survey potentially suffers from a cluster effect. For example, while a high score in response to "I like my job." indicates high job satisfaction, it may lead to positive attitudes toward multiple statements from other domains, such as teamwork ("I have the support I need from other personnel to care for patients."), safety climate ("I am encouraged by my colleagues to report any patient safety concerns I may have."), perceptions of management ("Management supports my daily efforts."), and working conditions ("The levels of staffing in this clinical area are sufficient to handle the number of patients."). Despite these shortcomings, the SAQ has been validated and adapted to many other medical fields, global cultures, and different languages, though our study is the first to use it to study transplant program culture. Our findings could also have been skewed by a ceiling effect, as a large percentage of responses scored ≥75 both before and during the pandemic, making differences in satisfaction between these time points difficult to detect. Despite a higher percentage of employees reporting higher satisfaction in most of the domains during the pandemic, these changes were not statistically significant. Given the potential overlap of different domains, the sample size necessary for an adequately powered study could be considered astronomical. Healthcare faced major disruptions and collapse of essential health services due to high COVID burden and global lockdowns. While our study takes advantage of the dramatic and unexpected changes induced by the pandemic, our results can be extrapolated to other program transitions, which may be less dramatic. Additionally, though our study showed no significant changes in staff perception of the culture of safety at our high-volume transplant center despite these intrusions, owing to high pre-pandemic satisfaction, there is likely variation across transplant centers in the United States. Beyond the present study, serial measurements at our own institution and at other transplant centers have the potential to reveal areas that can be targeted for quality improvement based on perceptions of frontline staff. The implementation of a program of serial safety culture assessments using transplant-program-specific SAQs could help transplant and hospital leaders develop actionable intelligence to improve care, address workforce concerns, hone processes, and ensure programs can meet the standard of efficient, highly reliable, and safe transplant care. Since transplant programs are required by regulation to be robustly engaged in quality assurance and performance improvement activities, this approach could serve as a valuable foundation for multiple initiatives. Additionally, future multi-institutional work should assess safety culture and its association with postoperative outcomes and promote generalizability of these results.
[Figure caption: There was considerable overlap between pre- and intra-pandemic SAQ responses; the starkest difference was in stress recognition, which did not reach statistical significance.]
While COVID has placed unprecedented pressure on the healthcare system worldwide, the results of the SAQ revealed stability of the culture of safety at our high-volume transplant center despite external pressure. Serial examinations of transplant centers using this methodology can detect areas of vulnerability that can be translated into actionable changes for the betterment of transplant care delivery.
| 2023-09-28T15:20:48.102Z | 2023-09-26T00:00:00.000 | {
"year": 2023,
"sha1": "cb3020741b7f4fa7b1d39f8ee87a5aa9c3d7a2df",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/frtra.2023.1208916/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0eeed01e54750ddc9798457b951f413be72832b9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
218802315 | pes2o/s2orc | v3-fos-license | Glycan-Dependent and -Independent Dual Recognition between DC-SIGN and Type II Serine Protease MSPL / TMPRSS13 in Colorectal Cancer Cells
: A class of glycoproteins such as carcinoembryonic antigen (CEA)/CEA-related cell adhesion molecule 1 (CEACAM1), CD26 (DPPIV), and mac-2 binding protein (Mac-2BP) harbor tumor-associated glycans in colorectal cancer. In this study, we identified the type II transmembrane mosaic serine protease large-form (MSPL) and its splice variant transmembrane protease serine 13 (TMPRSS13) as ligands of dendritic cell-specific intercellular adhesion molecule-3-grabbing nonintegrin (DC-SIGN) on colorectal cancer cells. DC-SIGN, a C-type lectin expressed on dendritic cells, serves as a pattern recognition receptor for numerous pathogens such as human immunodeficiency virus (HIV) and M. tuberculosis. DC-SIGN recognizes these glycoproteins in a Ca 2+ -dependent manner. Meanwhile, we found that MSPL proteolytically cleaves DC-SIGN in addition to the above glycan-mediated recognition. DC-SIGN was degraded more efficiently by MSPL when treated with ethylenediaminetetraacetic acid (EDTA), suggesting that glycan-dependent interaction of the two molecules partially blocked DC-SIGN degradation. Our findings uncovered a dual recognition system between DC-SIGN and MSPL/TMPRSS13, providing new insight into the mechanism underlying the colorectal tumor microenvironment.
The interaction of DC-SIGN with pathogens is mediated by its carbohydrate recognition domain (CRD) in a Ca 2+ -dependent manner. DC-SIGN shows specificity to mannose- and fucose-containing glycans and high affinity to high-mannose and fucose-containing Lewis (Le) glycans [5,16]. DC-SIGN recognizes pathogens that are heavily decorated with those glycans as non-self ligands. For example, lipoarabinomannan (ManLAM) is a major lipoglycan covering the cell wall of M. tuberculosis, with a structure that is not present in the host. DC-SIGN binds to ManLAM capped with dimeric and trimeric mannose residues but does not bind single mannose residues [10]. It has been proposed that the high binding affinity of DC-SIGN is achieved through tetramerization of DC-SIGN. Recognition of ManLAM by DC-SIGN inhibits DC maturation and induces strong upregulation of immune-inhibitory IL-10 production [15].
In addition to the recognition of foreign antigens, DC-SIGN is involved in the recognition of cancer cells through newly synthesized tumor-associated "non-self" glycans, as we and another group have demonstrated [16][17][18]. We found that DC-SIGN recognizes colon cancer cell lines (e.g., SW1116 and COLO205) and human colorectal cancer tissues based on Le (Le a /Le b ) glycans [17,18]. Conditioned medium from MoDC-COLO205 co-cultured cells blocked MoDC maturation and attenuated TLR4-mediated immune activation. Considering that the ligand recognition step carried out by DC-SIGN regulates the subsequent induction of immunosuppressive responses, elucidating the mechanism of the DC-SIGN glycan recognition system is critical for the development of anti-cancer therapeutics. To date, carcinoembryonic antigen (CEA), CEA-related cell adhesion molecule 1 (CEACAM-1), and mac-2 binding protein (Mac-2BP) have been identified as endogenous DC-SIGN ligands carrying Le glycans [16][17][18].
In this study, we identified mosaic serine protease large-form (MSPL) and its alternative splicing variant transmembrane protease serine 13 (TMPRSS13) [19], which are type II transmembrane serine proteases [20][21][22], as novel DC-SIGN ligands. The binding of DC-SIGN to MSPL/TMPRSS13 was mediated by N-glycans of MSPL/TMPRSS13. Meanwhile, the soluble recombinant extracellular domain of DC-SIGN (DC-SIGN-ECD) was cleaved via MSPL/TMPRSS13 protease activity, indicating a different mode of recognition of DC-SIGN by MSPL/TMPRSS13. In the absence of Ca 2+ , MSPL digested DC-SIGN more efficiently, suggesting that the molecular association mediated by N-glycans impeded DC-SIGN digestion. Clarification of the dual recognition processes between DC-SIGN and MSPL/TMPRSS13 may lead to the development of a treatment that efficiently suppresses colorectal cancer.
Cell Culture and Preparation of Recombinant Proteins
Human embryonic kidney HEK293 cells, human colon cancer COLO205 cells, and human monocyte U937 cells were obtained from the American Type Culture Collection. Human hepatoma HLF cells were obtained from the Japanese Collection of Research Bioresources cell bank. HEK293 and HLF cells were cultured in Dulbecco's modified Eagle's medium (Wako, Osaka, Japan), and COLO205 and U937 cells were cultured in RPMI-1640 (Wako) containing 10% fetal bovine serum at 37 °C with 5% CO 2 . DC-SIGN-expressing U937 cells (U937-DC-SIGN) were generated through transfection with the pcDNA3-DC-SIGN plasmid [17] using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA).
Stably transfected cells were selected with 1 mg/mL G418 (Invitrogen). Full-length MSPL and TMPRSS13 were each subcloned into the p3XFLAG-CMV plasmid previously [19]. The plasmids were transfected into HEK293 cells and their expression was confirmed through western blotting using anti-FLAG M2 antibody (Sigma-Aldrich, St. Louis, MO, USA). We subcloned cDNA for the extracellular domains of DC-SIGN (DC-SIGN-ECD) and MSPL (MSPL-ECD) into the p3XFLAG-CMV plasmid. The plasmids were transfected into HLF cells and selection was conducted using G418 at a concentration of 1 mg/mL. Stably transfected DC-SIGN-ECD and MSPL-ECD cells were grown in ASF104 serum-free medium (Ajinomoto, Tokyo, Japan). The soluble recombinant proteins were purified with an affinity column of anti-FLAG M2 antibody.
DC-SIGN-Fc Affinity Chromatography
Purification of the membrane fraction of COLO205 cells, DC-SIGN affinity chromatography, and mass spectrometry were performed as described previously [18]. Briefly, the affinity column for soluble recombinant DC-SIGN-Fc (R&D Systems, Minneapolis, MN, USA) was prepared using Protein G sepharose and disuccinimidyl suberate cross-linker. COLO205 cells were suspended in hypotonic buffer [10 mM Tris-HCl (pH 7.6) and 0.5 mM MgCl 2 containing protease inhibitors], and homogenized using a Dounce homogenizer. The solution was restored to isotonic conditions through addition of NaCl solution. After centrifugation at 150,000 × g for 45 min at 4 °C, the pellet was solubilized with lysis buffer [150 mM NaCl, 20 mM Tris-HCl (pH 7.5), 1 mM EDTA, and 1% Triton X-100 containing protease inhibitors], and centrifuged at 10,000 × g for 60 min at 4 °C. The supernatant was retained as the membrane protein fraction, which was applied to the DC-SIGN-Fc column in the presence of Ca 2+ . DC-SIGN ligand proteins were eluted with Tris-buffered saline (TBS) containing 10 mM ethylenediaminetetraacetic acid (EDTA). The eluate was re-applied to the column and a second elution step was conducted with TBS containing 50 mM mannose. The proteins eluted with mannose were subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) on a 10% gel under reducing conditions. The gel was then stained with a silver staining kit (Wako). The DC-SIGN ligand bands were excised and digested with trypsin, and the fragments were analyzed via liquid chromatography-mass spectrometry (LC-MS/MS) with a hybrid quadrupole/time-of-flight spectrometer (Qstar pulsar I, Applied Biosystems, Foster City, CA, USA) interfaced to a Paradigm MS4 HPLC (Michrom BioResources, Auburn, CA, USA).
Co-Precipitation Assay, MSPL Protease Digestion Assay, and DC-SIGN Lectin Blot
HEK293 cells stably expressing full-length MSPL or TMPRSS13 were lysed with lysis buffer [150 mM NaCl, 50 mM HEPES (pH 7.4), and 5 mM CaCl 2 containing 1% NP-40]. The whole-cell lysate was incubated with DC-SIGN-Fc or IgG-Fc recombinant proteins, which were precipitated using Protein G sepharose beads (Thermo Fisher Scientific, Waltham, MA, USA) in the presence of Ca 2+ . After washing with wash buffer (TBS containing 5 mM CaCl 2 and 0.05% Tween 20), the beads were suspended in elution buffer (TBS containing 10 mM EDTA) to chelate Ca 2+ . The eluted solution was subjected to SDS-PAGE and western blotting using anti-FLAG M2 antibody. For the MSPL protease digestion assay, purified DC-SIGN-ECD (3 µg) and MSPL-ECD (0-1.0 µg) were mixed and co-incubated in the presence of Ca 2+ or EDTA for 0-6 h at 37 °C. To confirm the effect of N-glycosylation, MSPL-ECD was pre-treated with recombinant N-glycosidase F (PNGase F) (Roche, Basel, Switzerland). For this treatment, MSPL-ECD (1 µg) was dissolved in reaction buffer (TBS containing 0.5% SDS, 40 mM EDTA, and 1% 2-mercaptoethanol) and heated to 105 °C for 5 min. Then, PNGase F (3.2 units) was added to the MSPL-ECD solution, which was incubated for 24 h at 37 °C. The solution was treated with 5× SDS sample buffer (2% SDS, 10% glycerol, 0.001% bromophenol blue, and 65 mM Tris-HCl, pH 6.8) and heated to 98 °C for 3 min under reducing conditions. The samples were separated via SDS-PAGE using a 5%-20% gradient gel, followed by Coomassie Brilliant Blue (CBB) staining. DC-SIGN lectin blotting was performed using DC-SIGN-ECD as the primary reaction solution. Reacted DC-SIGN-ECD on a nitrocellulose membrane was incubated with anti-DC-SIGN monoclonal antibody (R&D Systems), followed by detection with HRP-conjugated anti-mouse IgG antibody.
Immunohistochemical Staining of COLO205 Cells and Human Colorectal Cancer Tissues
COLO205 cells were cultured on chambered cell culture slides (Corning, Inc., Corning, NY, USA) and fixed with phosphate-buffered paraformaldehyde (4%). For DC-SIGN staining, the cells were blocked with blocking buffer (TBS containing 10 mM CaCl 2 and 1% bovine serum albumin) and then incubated with DC-SIGN-ECD (0.7 µg/µL) in TBS containing 10 mM CaCl 2 , 10 mM EDTA, or 50 mM mannose. The samples were incubated with anti-DC-SIGN monoclonal antibody (R&D Systems) followed by Alexa Fluor 546 secondary antibody. For anti-MSPL/TMPRSS13 antibody staining, rabbit polyclonal antibody against the TMPRSS13 catalytic domain (Abcam, Cambridge, UK), which reacts with both TMPRSS13 and MSPL, was used as the primary antibody solution, followed by visualization with Alexa Fluor 488 secondary antibody. A colorectal carcinoma-tissue array slide was obtained from SuperBioChips Laboratories (Seoul, Korea). After deparaffinization with xylene, the slide was hydrated with ethanol and immersed in 0.01 M citrate buffer. Then, the slide was boiled for 5 min in a microwave oven for antigen retrieval. DC-SIGN staining and anti-MSPL/TMPRSS13 antibody staining were conducted as described above. All stained samples were observed using a Fluoview FV1000 confocal laser-scanning microscope (Olympus, Tokyo, Japan).
Flow Cytometry
U937 and U937-DC-SIGN cells were suspended in FACS buffer (PBS containing 2% FCS). To analyze DC-SIGN expression, the cells were incubated with primary anti-DC-SIGN monoclonal antibody (R&D Systems) diluted with FACS buffer. After washing with PBS three times, the cells were incubated with Alexa Fluor 488 anti-mouse IgG antibody. The cells were then analyzed with a BD FACSCalibur (BD Biosciences, CA, USA).
Identification of MSPL/TMPRSS13 as DC-SIGN Ligands
We previously identified Mac-2BP as a DC-SIGN ligand expressed in COLO205 colon cancer cells using recombinant DC-SIGN-Fc, a fusion protein of the DC-SIGN extracellular domain and human IgG-Fc [18]. In the results of DC-SIGN-Fc affinity chromatography of the COLO205 membrane protein fraction, we found a sharp band at 100 kDa located above the Mac-2BP band (90 kDa) (Figure 1a). To identify this new DC-SIGN ligand protein, we analyzed the 100-kDa and 90-kDa bands using LC-MS/MS. The results showed that the band at 100 kDa and the broad band at 90 kDa contained the peptide sequence NKPGVYTK, corresponding to TMPRSS13 isoform 1 (MSPL), while Mac-2BP was detected at 90 kDa (Figure 1b) (see Tables S1 and S2). MSPL and TMPRSS13 are splicing variants of a single type II membrane serine protease, which was cloned from a human lung cDNA library [19]. When FLAG-tagged full-length MSPL and TMPRSS13 were expressed in HEK293 cells, each showed two bands (Figure 1d), presumably due to different post-translational modifications. Next, to validate the molecular interaction between DC-SIGN and MSPL/TMPRSS13, we conducted a DC-SIGN-Fc pull-down assay using cell lysates of MSPL and TMPRSS13 transfectants (Figure 1c).
Localization of MSPL/TMPRSS13 in Colorectal Carcinoma Tissue
The expression of MSPL/TMPRSS13 has been reported in normal tissues such as lung, skin, and prostate, but not in the colon [19,23]. To assess its expression in human colorectal cancerous and adjacent noncancerous tissues, we performed fluorescent immunohistochemistry ( Figure 2). We found that MSPL/TMPRSS13 is highly expressed in various colorectal cancerous tissues but has lower expression in noncancerous tissues. Notably, we observed intense staining of MSPL/TMPRSS13 at the apical epithelial surface of various colorectal cancer tissues, whereas weak, broad staining was observed throughout the noncancerous mucosa. These results suggest that, during oncogenesis, localization of MSPL/TMPRSS13 shifts to the luminal side of the colon epithelium, where MSPL/TMPRSS13 obtains its DC-SIGN-bindable glycan structure.
The expression of MSPL/TMPRSS13 (green) and DC-SIGN ligands (red) was visualized through laser confocal microscopy. Yellow fluorescence indicates merged green and red signals, as shown in the right panels. Nomarski images are shown on the right side of each panel. Scale bars, 100 µm.
Glycan-Dependent Recognition of MSPL by DC-SIGN
It has been reported that MSPL/TMPRSS13 retains its protease activity when it is phosphorylated, shed, and released from the cell surface [24]. Next, to explore the functional aspects of the molecular interaction between DC-SIGN and MSPL/TMPRSS13, we constructed a plasmid containing the MSPL extracellular domain (MSPL-ECD). A stable transfectant of MSPL-ECD showed multiple bands after CBB staining (Figure 3a), as previously reported [24], due to self-cleavage and glycosylation variants. When we conducted the DC-SIGN-ECD lectin blot assay, we found that DC-SIGN-ECD binds to MSPL-ECD in a Ca 2+ -dependent manner (Figure 3b) and that all self-digested peptide fragments contained DC-SIGN-bindable glycans. Next, to test for direct involvement of N-glycans on MSPL-ECD in this molecular association, MSPL-ECD was treated with PNGase F. In CBB staining, most bands were shifted to lower molecular weights (Figure 3c, left), indicating the presence of several N-glycans in MSPL-ECD. Moreover, the DC-SIGN lectin blot showed that DC-SIGN binding was markedly attenuated with PNGase F treatment (Figure 3c, right). Together, these results demonstrate that MSPL recognition by DC-SIGN occurs in an MSPL N-glycan-dependent manner.
Glycan-Independent Digestion of DC-SIGN by MSPL
Next, to evaluate the effect of the MSPL-DC-SIGN interaction on MSPL protease activity, we performed a co-incubation study. When MSPL-ECD and DC-SIGN-ECD were co-incubated in the presence of Ca 2+ , the band at 45 kDa in the DC-SIGN-ECD sample disappeared and three new bands were detected (Figure 4a, lane 3). This result indicates that DC-SIGN-ECD was degraded upon incubation with MSPL-ECD. To determine the optimal conditions for DC-SIGN digestion, we tested various MSPL-ECD concentrations (Figure 4b) and incubation times (Figure 4c). The results showed that DC-SIGN digestion was dependent on MSPL-ECD dose and time, and incubation with 1 µg MSPL-ECD for 6 h was optimal. We next performed a co-incubation study in the presence of EDTA (Figure 4d,e). The digested fragments of DC-SIGN-ECD showed a different pattern from that observed in the presence of Ca 2+ , and one DC-SIGN-ECD band at 33 kDa overlapped with the MSPL band (Figure 4d, green and purple arrowheads). Notably, the full-length DC-SIGN-ECD band at 45 kDa disappeared with even a low level (0.5 µg) of MSPL-ECD, indicating that DC-SIGN-ECD degraded more efficiently when treated with EDTA. To confirm the DC-SIGN-ECD cleavage sites, we performed N-terminal amino acid sequencing analysis (Figure 4f). We identified the AAVGE sequence, which frequently appears in the DC-SIGN-Fc repeat domain, the SNRFTW sequence of the CRD, and the DYKDD sequence of the N-terminal FLAG-tag. These results clearly showed different cutting patterns of DC-SIGN-ECD in the presence or absence of Ca 2+ . Given that DC-SIGN binds to MSPL in a Ca 2+ -dependent manner, the results of these co-incubation studies suggest that glycan-mediated MSPL recognition by DC-SIGN interferes with DC-SIGN digestion by MSPL (see Figure S1).
Cellular DC-SIGN as a Target of MSPL
The observation of glycan-dependent and -independent dual recognition systems between DC-SIGN and MSPL prompted us to test whether cellular DC-SIGN can also act as a substrate for MSPL protease activity. Therefore, full-length DC-SIGN was stably transfected into human U937 monocytic cells. Flow cytometry demonstrated that DC-SIGN was successfully transfected into the U937 cells (Figure 5a). We then extracted the membrane protein fraction of U937-DC-SIGN cells (2.5 × 10 6 cells) and incubated it with MSPL-ECD protein (20 µg) at 37 °C for 6 h. Western blotting revealed that the DC-SIGN band shifted into two lower bands, indicating that full-length DC-SIGN was digested by MSPL-ECD (Figure 5b). These results demonstrated that full-length DC-SIGN is a substrate for MSPL.
[Figure legend fragment: DC-SIGN-ECD cleavage sites were identified through N-terminal amino acid sequencing (BIOSUMS, Shiga, Japan); the amino acid sequences obtained are shown using one-letter codes. CRD: carbohydrate recognition domain.]
Discussion
Type II transmembrane serine proteases (TTSPs) are a large family, containing 17 proteases categorized into four subfamilies in humans. All subfamilies have a serine protease domain at the C-terminus where histidine, aspartate, and serine residues form a catalytic triad [20,25,26]. The substrates of TTSPs are cytokines, growth factors, and extracellular matrix components. Soluble forms of TTSPs are often detected in culture media, suggesting that their extracellular domains are shed from the cell surface [27][28][29][30]. Since this shedding is dependent on their protease catalytic activity [30], it is assumed to occur as a result of self-cleavage. Among TTSPs, the hepsin/transmembrane protease serine (TMPRSS) subfamily characteristically contains a group A scavenger receptor domain in the stem region. Mosaic transmembrane serine protease (MSPL) and TMPRSS13, which belong to the TMPRSS subfamily, are splicing variants of a single gene cloned from a human lung cDNA library [19,25]. MSPL/TMPRSS13 is highly expressed in the skin, lung, and bladder, but is not detected in colon tissue [19].
The physiological functions of MSPL and TMPRSS13 have been defined in epidermal barrier development [23] and virus infections [31][32][33]. By contrast, a number of reports describe other TTSP proteins as extensively associated with tumor growth and metastasis [26,34]. For example, TMPRSS1 is upregulated in several types of cancer, including prostate [35] and ovarian cancers [36], and is involved in cancer cell migration and invasion [37]. TMPRSS1 expression is linked to poor prognosis in prostate cancer patients, suggesting that TMPRSS1 serves as a biomarker for prostate cancer [38].
Here, we provide the first report that MSPL/TMPRSS13, which is in the same protein subfamily as TMPRSS1, is overexpressed in colorectal cancer cells. We demonstrated that MSPL/TMPRSS13 is recognized by DC-SIGN based on N-glycans and that this molecular interaction partially blocks MSPL/TMPRSS13 protease activity. Moreover, a previous report indicated that substrates containing Arg at the P1 position and Arg or Lys at position P2 are preferably cleaved by MSPL [25]. In contrast, the N-terminal amino acid sequence analysis in this study showed Leu-Lys and Ser-Arg at positions P1-P2 of the AAVGE and SNRFTW sequences, respectively. These results indicate a novel substrate specificity of MSPL. Moreover, the physiological significance of the molecular interaction between DC-SIGN and MSPL/TMPRSS13 should be considered in light of its bidirectionality. In previous studies, we revealed that recognition of CEACAM-1 and Mac-2BP by DC-SIGN attenuates DC maturation, resulting in promotion of immune escape by cancer cells [17,18]. In this context, further investigation could reveal whether MSPL/TMPRSS13 recognition by DC-SIGN affects DC maturation. Meanwhile, MSPL/TMPRSS13 may be involved in cancer cell invasion and metastasis, as observed in other members of its subfamily, by degrading the basement membrane through protease activity. The physiological relevance of the direct involvement of MSPL/TMPRSS13 in tumor metastasis, including the influence of its interaction with DC-SIGN, requires further study.
Lewis (Le) glycans are the determinants of blood group antigens in glycolipids and glycoproteins of normal tissues such as erythrocytes and epithelial cells [39]. There are two types of Le glycans: type I Le glycans include Le a [Galβ1-3(Fucα1-4)GlcNAc] and Le b [Fucα1-2Galβ1-3(Fucα1-4)GlcNAc] glycans, while type II Le glycans comprise Le x [Galβ1-4(Fucα1-3)GlcNAc] and Le y [Fucα1-2Galβ1-4(Fucα1-3)GlcNAc] glycans. Aside from their basal expression in normal colon epithelium cells, Le glycans reportedly emerge during the course of oncogenesis, and are then referred to as tumor-associated Le glycans. We previously identified a unique Le glycan complex expressed in the colon cancer cell line SW1116 through affinity chromatography of mannan-binding protein (MBP), which is a C-type lectin with specificity to type I, but not type II, Le glycans [40,41]. The glycan structure was found to be tetraantennary, containing N-glycans with β1-6 branching that harbor an unusual Le a tandem repeat and end with Le b at the nonreducing terminus. Our findings strongly suggest the presence of tumor-associated Le glycans in human colon tumors. Indeed, we observed strong expression of MBP high-affinity ligands in 38.5% of human colorectal carcinoma tissues, but not in adjacent nonmalignant tissues where blood-type Le glycans were expressed [42]. Thus, it is possible that MSPL expressed in colon cancer cells harbors complicated tumor-associated Le glycans. Because antibodies generally recognize a few oligosaccharides at most, lectins such as MBP and DC-SIGN that form large multimers can be developed into new diagnostic systems for detecting complex glycan structures in colorectal cancer.
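Since the four determinants just listed differ only in their fucose and galactose linkages, it can help to see them side by side in machine-readable form. The sketch below simply encodes the structures quoted above (with β/α rendered as b/a in ASCII) and classifies them by the Gal-GlcNAc linkage; it is an illustrative representation, not part of the original study.

```python
# Lewis glycan determinants in condensed form, as quoted in the text.
LEWIS_GLYCANS = {
    # Type I (Galb1-3GlcNAc core); MBP is specific to type I per the text.
    "Le(a)": "Galb1-3(Fuca1-4)GlcNAc",
    "Le(b)": "Fuca1-2Galb1-3(Fuca1-4)GlcNAc",
    # Type II (Galb1-4GlcNAc core).
    "Le(x)": "Galb1-4(Fuca1-3)GlcNAc",
    "Le(y)": "Fuca1-2Galb1-4(Fuca1-3)GlcNAc",
}

def glycan_type(structure):
    """Type I vs. type II follows from the Gal-GlcNAc linkage."""
    return "type I" if "Galb1-3(" in structure else "type II"

for name, structure in LEWIS_GLYCANS.items():
    print(f"{name}: {glycan_type(structure)}  {structure}")
```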
Conclusions
Our study revealed the dual recognition system between DC-SIGN and MSPL/TMPRSS13 in colorectal cancer, providing novel insights into the mechanisms active in the tumor microenvironment. A comprehensive understanding of this system would help to achieve more effective diagnostics and treatment of colorectal cancer. | 2020-04-16T09:04:17.512Z | 2020-04-13T00:00:00.000 | {
"year": 2020,
"sha1": "26724e26fb79ffc1be78adb68ba3333f16d6ed14",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/8/2687/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a0414c2529ff5c07f3f5b84dae6a72eb5110db36",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
2726848 | pes2o/s2orc | v3-fos-license | II A historical portrait?
1. ADRIAEN SLABBERAEN Adriaen Slabberaen is the surgeon on the left of the picture (Pl. 1), who is seated directly opposite the praelector on the other side of the table. He resembles the surgeon seated in the comparable position in de Keyser's picture (Pl. 5); they are not the same man, but like that surgeon, Slabberaen is one of the sitters who are usually thought to be turning unhistorically towards the viewer, or in other words not paying attention to the lecturer.7 A different interpretation is proposed here: his eyes are directed not at the viewer, but at the large open book which leans against a pile of smaller, closed books in the lower right corner of the canvas. This interpretation is no more subject to proof than the traditional one, but a possible objection to it can be refuted. Almost all writers who have referred to the mountain of books locate it (to use the recurring phrase) "at the feet of the corpse", meaning, presumably, on or beyond the end of the dissection-table.8 If the books were so placed, Slabberaen would certainly not have to turn so far to his right in order to look at them. However, the gap between the under edge of the dissection-table and the bottom edge of the recto page implies that, in three dimensions, the books are supposed to be standing not at the feet of the corpse but on a free-standing structure between the table and the picture-plane.9 Immediately above the open volume, on the right edge of the canvas, one sees a brown vertical object, now dim in natural light, which looks like the left vertical back-strut of a chair on the seat of which the books are piled. 10 These facts, together with the large page-height of the open volume approx. 46 cms. on the canvas suggest that the book is to be understood as almost leaning out through the picture-plane. Hence Slabberaen, in looking at the book, looks at the picture-plane, and therefore in the direction of the viewer. However, the object of his gaze is the book, not the viewer, and the book is not a symbol but a historical part of normal anatomical equipment. The engraving re-
several points".16 It seems unlikely that such a purely formal gesture could have had such a mesmeric effect on Mathys Calkoen. Hence the attraction of the rival interpretation which appears to have held the field long before the one just described: namely, the idea that Tulp's gesture was an illustration in the living limb of the function of the muscles and tendons being demonstrated in the dead one.17 This idea deserves closer examination than it has received.
Dr. Tulp's gesture illustrates two anatomical points. His fingers are sharply flexed at each proximal interphalangeal joint, while the whole hand, to judge from the shading of the cuff, is slightly dorsiflexed (or "extended") at the wrist. Since the sharp palmar flexion of the fingers tends to induce the dorsiflexion of the wrist automatically as a synergic action,18 the latter can probably be discounted as being merely incidental on the finger-flexion, which is therefore the object of the demonstration. There is an anomaly in the portrayal of the fingers: when they are so flexed at the proximal interphalangeal joint, they are normally also flexed at the terminal, but here only the proximal joint is flexed. Such things do occur abnormally in nature,19 but considering how many opportunities for distortion the painter has at his command,20 we should prima facie attribute any variations to Rembrandt rather than to his model. Rembrandt, therefore, by declining to shade the tips of Tulp's fingers, has divided the chiaroscuro cleanly between the shaded proximal phalanges, and the bright middle and unguinal phalanges. The effect of this simplification is to emphasize the rigidity of the praelector's fingers.21 Hence, if Dr. Tulp's gesture illustrates his dissection, he should be dissecting those muscles and tendons in the forearm which flex the fingers: m. flexor digitorum superficialis (or sublimis) and m. flexor digitorum profundus, and the tendons that issue from them to the fingers.
Unfortunately, the interpretation of the dissection has long been a subject of dispute, and the most recent contributors to the debate have not even considered this identification of the muscles.22 Nevertheless, there are many independent arguments in its favour. Since the tendons which are visible in the fingers of the corpse have always been interpreted as the tendons which should emanate from precisely these two flexor
7
The paradox ofRembrandt's 'Anatomy of Dr. Tuip' muscles, the simplest interpretation of the two muscles being demonstrated is to identify them as the muscles which issue those tendons. As argued in Appendix I below, this interpretation is sound anatomically, provided one accepts a certain view (also the simplest) of the orientation of the limb. According to this interpretation, the muscle Dr. Tulp holds in his forceps is m. flexor digitorum superficialis, while m. flexor digitorum profundus is the long straight muscle running underneath it (P1. 9). By lifting the superficialis away from the profundus, he reveals the way in which the two muscles combine their strength to flex the fingers. Hence the action of Tulp's left hand does illustrate the function of the muscles which he has chosen to display in the corpse. Moreover, this interpretation explains Mathys Calkoen's eagerness, for the mere topographical anatomy of this process is a thrilling drama composed of the three classical constituents, complication, reversal, and resolution. The two muscles originate from the same place on the inside of the elbow joint, but they soon wander apart. Just before they reach the end of their course, their tendons re-converge, and the one runs clean through the other (Pls. 9, 13; Figs. 2, 3) so that the upper (superficialis) becomes the lower, and the lower (profundus) the upper: a double peripeteia. In the denouement, the two tendons find separate resting-places on the phalanges (P1. 13; Fig. 3). But topographical description, however remarkable, is only a prelude to functional demonstration; or, to speak in terms familiar to Tulp, situs, numerus, and figura lead into actio and usus. In order to demonstrate the function of these muscles and tendons, the lecturer, we imagine, solemnly raises his free hand, and of a sudden flexes the fingers rigid, so instantly catching the eye of Mathys Calkoen. The fascination on Calkoen's face is designed precisely to show that Dr. Tulp's gesture is something more than an "allocutio-gesture"." We may therefore say that Calkoen also has a historical role in the picture.
NICOLAES TULP
We have already reconstructed part of Nicolaes Tulp's role in the painting from the actions of his right and left hands, but more important still are the thoughts that give his face its meditative expression, and the words that fall from his open lips. These remain to be recovered. Fortunately they are still not quite beyond recall, but they can only be brought back to us through a study of the influences which shaped Nicolaes Tulp as an anatomist and as an Amsterdamer.
Two anatomists have already been proposed as Tulp's immediate models: Casserius and Vesalius.
(i) Julius Casserius of Piacenza (1552?-1616) was professor of anatomy at Padua. At his death in 1616, he left a set of unpublished anatomical illustrations without any text. His successor at Padua was Adrianus Spigelius of Brussels, who, on his death in 1625, left an unpublished anatomical text without any illustrations. The two works, though not intended to complement each other, were published together in Venice in 1627 as one doubly posthumous edition.24 In that edition, the second figure of Casserius's plate XXII (our Fig. 2, p. 10) shows the flexor muscles of the hand, and the belly of m. flexor digitorum superficialis is artificially pulled away from m. flexor digitorum profundus, as in Rembrandt's painting. It has therefore been suggested that Nicolaes Tulp modelled his dissection on Casserius's.25 It has been further proposed that the Casserian plate was followed not only by Tulp in his dissection, but also by Rembrandt in his painting of the dissection, on the ground that both pictures are said to show the same anatomical anomalies.26 We shall examine these proposals in detail later (pp. 13-16 below).
(ii) Andreas Vesalius (1514-1564) is believed to have determined Tulp's choice of pose through the woodcut portrait of himself, dated 1542, which Vesalius prefixed to his Fabrica and other books (Pl. 10). In the woodcut, Vesalius is shown demonstrating the flexor-muscles and -tendons of the fingers, as Tulp is in Rembrandt's painting: the muscle-belly which Vesalius offers in his right hand to the viewer is the same, m. flexor digitorum superficialis, as that which Nicolaes Tulp, with his right hand, holds up for the Amsterdam surgeons to see. Both are demonstrating the divergence and eventual convergence of the finger-flexors. The resemblance between the two pictures has been interpreted as a comparison on Tulp's part between Vesalius and himself, showing Tulp to be "the 'Vesalius redivivus' of the seventeenth century".27 Again, this suggestion will be further examined below (pp. 16-20).
However, these two anatomists were not the only sources of Dr. Tulp's anatomical knowledge, and before we test their influence on him, some of the others whom he knew should also be mentioned.
(iii) One anatomist who was especially esteemed by Tulp and his contemporaries was Andreas Laurentius or Dulaurens (1558-1609). Laurentius was appointed professor of anatomy at Montpellier in 1586. In 1598, he moved to Paris, and eventually became physician to Marie de Medicis and Henri IV. He wrote several books on anatomy and medical subjects, which were republished many times up to 1778.28 The following sources, among others, indicate his reputation in the first half of the seventeenth century.
First, in 1637, Dr. Johannes Antonides van der Linden, then an examiner for the Amsterdam college of physicians, and later to be the subject of one of Rembrandt's last etchings (Pl. 12), published a guide to medical literature which was addressed to Pieter Tulp, Nicolaes Tulp's son and a recently qualified doctor of medicine at Leiden.29 This work, which contains eloquent tributes to the author's colleague Nicolaes Tulp, was re-edited in 1639 by Dr. Vopiscus Fortunatus Plemp, another Amsterdam physician who, since 1633-4, had occupied the chair of medicine at Louvain.30 Plemp had attended Nicolaes Tulp's public anatomy of 1632, as Rembrandt may have done, and also that of 1638.31 The choice of anatomy-books which van der Linden recommended to Pieter Tulp was unchanged in Plemp's edition. Considering the closeness and like-mindedness of these three physicians, one would provisionally expect the same choice of anatomy-books to agree with Nicolaes Tulp's own preferences.
The anatomy-books which van der Linden recommended, with Plemp's endorsement, were: the Historia anatomica of Laurentius, which was first published in 1589; the Theatrum anatomicum of the Basle anatomist Caspar Bauhin (Frankfurt a. M. 1605 and 1621); and the already mentioned De humani corporis fabrica of Casserius and Spigelius (Venice 1627). Pride of place, in the judgment of van der Linden and Plemp, should be given to Laurentius, who was said to surpass the others by his methodical organization, clarity, completeness, and care in discussing "controversial, doubtful, and obscure subjects". "Therefore", Pieter Tulp was advised, you should start with him [Laurentius], and he should be read through with full attention at least three times: the first time for the bare account of the parts; the second time the same, but comparing it with his plates or, better still, with the very accurate plates of Casserius; and the third time, so that it might stick more firmly in your mind, repeat the second reading [of Laurentius] but link his appendices on problems with their chapters in the text." Bauhin and Casserius-Spigelius were to be studied outside the anatomy-theatre in order to improve the student's understanding of parts already seen in the cadaver. Bauhin was also to be consulted during the dissection of parts not dealt with in detail by either of the others.
Second, there is a remark published by the anatomist Jean Riolan the younger in 1649. Riolan also coupled the names of Laurentius and Bauhin, not inaptly since their books share a certain likeness due to the fact that each had revised his successive editions in the light of the revisions of the other's. Riolan said, Laurentius and Bauhin are judged by all to be the most outstanding and skilful in the art of anatomy, and their works are lauded as being the most perfect and most accomplished, and are preferred to the others. For in this century the purest and truest anatomical science is sought from these two, because they wrote last, instructed by their own observations and thoughts, and also helped by the teachings of their predecessors. So ... in anatomical controversies they are cited and adduced as if they were, in anatomy, the supreme justices and referees from whom no appeal to others is allowed." Third, William Harvey, Tulp's counterpart as praelector anatomiae to the London surgeons, frequently cited Laurentius in the notes for his praelection of 1616.34 Chapter I of Harvey's De motu cordis (1628) opens with a paragraph derived from Laurentius's 'Quaestio de motu cordis', one of the "appendices on problems" which were recommended to Pieter Tulp by van der Linden.

… (1641), I, c. 27, p. 57: near the sacrum of a dissected cadaver he failed to find "pilosa illa filamenta quae depingit Andraeas Laurentius, scriptor alioqui minime infidus". This plate by Laurentius would seem to be that on p. 179 of his Frankfurt 1599 edition, which shows the "horse's tail" effect produced only when a detached spinal cord is soaked in water.
medicarum libri, he showed that he had consulted not only the works of Laurentius and the Casserius-Spigelius book, but also the anatomical works of Caspar Bauhin, Volcher Coiter, Realdus Columbus, Fabricius ab Aquapendente, and others.40 (v) Jean Riolan the younger (1580-1657) is also a possible influence on Tulp, since he too is cited in Tulp's book.41 Furthermore, Tulp's most quoted phrase, "Anatome verus medicinae oculus", appears to be taken without acknowledgement from Riolan's Anthropographia of 1626.42 (vi) Last, one cannot rule out the possible influence of Pieter Paaw (1564-1617). Paaw initiated the study of anatomy at Leiden, was professor of anatomy while Tulp was a student there, and presided at the delivery of Tulp's doctoral thesis in 1614.43 Our list of possible influences on Tulp now contains: Casserius and Spigelius, Vesalius, Laurentius, Bauhin, Coiter, Columbus, Fabricius ab Aquapendente, the younger Riolan, and Paaw. That this list is unexceptional is shown by the fact that substantially the same authorities were used by the London praelector of anatomy, William Harvey.44 But how, if at all, did these anatomists transfuse their influence through Nicolaes Tulp into Rembrandt's painting? We examine them one by one, returning to the beginning of the list with the book by Casserius and Spigelius. (i) The possible link between Tulp and Casserius has already been briefly stated.45 It seems to have several defects. It is incompatible with the Vesalian explanation, while unlike the latter it does not explain why, if Tulp did copy one of Casserius's seventy-seven plates, he chose the plate showing the antebrachial musculature (Casserius's plate XXII). But it is not obvious that Tulp did imitate the Casserian dissection. The distinction between the deep and the superficial finger-flexors had been discussed by virtually all writers on general anatomy, and Casserius's dissection is entirely traditional. His plate XXII, fig. ii (our Fig. 2) shows an early stage in a dissection which Vesalius (Pls. 10, 11) and Vidius46 had already chosen to illustrate at the next, more revealing stage, in which the origin of the flexor superficialis is cut and the belly retracted towards the viewer. These illustrations had been republished in the works of … fig. i) shows precisely this next stage, of which the stage illustrated by Tulp is the logical precursor. The observed resemblance between the demonstrations of Casserius and of Tulp may owe less to cause and effect than to common practice which both record.
The further idea that Rembrandt copied the Casserian engraving is also open to doubt. The first of the anomalies which are claimed to prove the relationship is that the flexor-muscles in each picture originate not in their normal place, the medial epicondyle, but at a point far lateral to it.50 This certainly appears to be true of Casserius's plate (Fig. 2) in which the medial epicondyle, an important bony landmark near the letter AE, is made conspicuous by being stripped of the fascia which normally obscures it. To accommodate this lesson, the origin of the flexor-muscles is inaccurately removed to one side. Rembrandt, however, could not have made this dubious concession, for in his painting neither the medial epicondyle nor the origin of the muscles has even been uncovered. What in the painting was formerly identified with the medial epicondyle has now been shown to be merely a strip of tendon from the upper arm.51 The second anomaly which is said to be shared by Casserius and Rembrandt is their common failure to show the parasagittal (or, on the canvas, vertical) stratification of the flexor-tendons as they leave the belly of m. flexor digitorum superficialis: the tendons to the index and little fingers should dive out from underneath the tendons to the middle two fingers, but in both the painting (Pl. 9) and the engraving (Fig. 2, marked aaaa) they seem to be on a level.52 However, this detail is significant only in morphology, and in 1632, when anatomists were more interested in teleology, it was still too trivial to find a place in the anatomical literature. Moreover, Galen had unwittingly diverted all anatomists' attention from it by remarking, correctly, that the coronal (or, on the canvas, horizontal) angle between each tendon and the next was equal; on their parasagittal relationship he said nothing, and anatomists influenced by him, such as Vesalius, Bauhin, and Spigelius, were also silent.53 The parasagittal stratification seems not to have been published at all until 1685, when it was recorded by, of all people, the painter Gerard de Lairesse in one of his incomparable caricatures for Bidloo's anatomy-book. There is other evidence which confirms Rembrandt's independence of Casserius. Among other differences, Rembrandt includes items which Casserius omits, such as a terminal branch of the ulnar nerve running along the little finger, the skin clinging to the fingertips, and, of course, the realistic colouring. The two depictions of the perforation of the superficial flexor-tendons are also completely different: Casserius (Fig. 2, marked cccc) illustrates it as a loose loop through which the deep tendon meanders freely, while Rembrandt, like Leonardo da Vinci (Pl. 13), shows it more accurately as a taut sling which holds the deep tendon firmly on course towards the finger-tip (Pl. 9). There was no published woodcut or engraving from which Rembrandt's illustration could have been copied: he must have used a real limb, whether it was attached to a corpse or separated. But there is then no ground for introducing Casserius as his model. Indeed, if Tulp and Rembrandt had compared their finished picture with Casserius's equivalent engraving, they could only have agreed that their own work was far more accurate, whatever Doctors van der Linden and Plemp would tell the younger Tulp about the "very accurate plates of Casserius".55 (ii) Vesalius. It can hardly be a coincidence that both Vesalius and Tulp chose to be portrayed demonstrating the flexor-muscles of the fingers (Pls. 10, 2).
But what did Tulp mean by modelling his portrait on Vesalius's? Heckscher interpreted the likeness as implying that Tulp was to be thought a "Vesalius redivivus",56 but for several reasons this seems improbable. There is no evidence that either that sobriquet or a similar one was claimed for Nicolaes Tulp by himself, by his contemporaries, or in fact by anyone before Heckscher (1958). Moreover, it is inappropriate for Tulp, since unlike Vesalius he was not an anatomist. Although, like many qualified physicians of that time, he had a working knowledge of anatomy, he was, as Heckscher remarks, "finally and principally a general practitioner".57 His part-time appointment as lecturer in anatomy to the company of surgeons could not have led even his most extravagant admirers to rank Nicolaes Tulp with Vesalius. We must look for a different interpretation of Tulp's use of the Vesalian motif: such as, that the demonstration of the flexor-muscles of the fingers was supposed by Nicolaes Tulp, rightly or wrongly, to bear the same meaning for both Vesalius and himself.

… Fig. 3]; and how the lower ones extend to the first joints, and the upper ones to the second and third, in each finger perforating the lower tendons. This was certainly a most beautiful sight. And he showed how a kind of special membrane covered those tendons, which he then separated and followed right up to the joints of the fingers. "On these", he said, "read Galen: On the use of parts books I and II, On the procedures of dissection book I, and On muscles".59

58 Quoted on p. 11 above.
But the aesthetic aspect merely reflects a system of ideas about these tendons, which Vesalius's students could not have failed to encounter if they followed up their lecturer's reading-list. For Vesalius's recommendations remind us that our list of influential anatomists (p. 13 above) omitted the two most influential figures in early seventeenth-century anatomy: Aristotle, and his follower in many matters, Galen. Bauhin's Theatrum anatomicum (1621) cites Aristotle and Galen more than any other authorities, and many of Laurentius's appendices on "controversial, doubtful, and obscure subjects", which were admired by van der Linden and Plemp,60 were intended to vindicate Galen against his "neoteric calumniators", Vesalius and Realdus Columbus.61 Could not the link between Vesalius and Tulp be their common acceptance of the Galenic view, derived from Aristotle, of the hand, the fingers, and the flexor-tendons?
According to a view which was discussed by Anaxagoras, recorded by Aristotle, and elaborated by Galen, the human hand was not a specialist instrument like the claws of the predator or the hooves of the herbivore, but an instrument at a higher level, an instrument for using other instruments, each for a different purpose. In this respect the human hand was the physical counterpart of the human psyche, which, by performing rational thought over an unrestricted range of subjects, was also an instrument for using further instruments. It was this instrumental application of both reason and the hand that had created human civilization and so raised man above the beasts: among other achievements, man alone tamed animals of superior bodily strength and speed, built places of worship, played musical instruments, and recorded thoughts in writing. The faculty by which the hand controlled its subservient instruments was prehension; the hand was therefore "the prehensile organ" (ὄργανον ἀντιληπτικόν) and its primary part was its prehensile element, the flexor-muscles and -tendons of the fingers. These muscles and tendons therefore had this first importance: that they, together with reason, the divine part of man, acted as the organ of civilization.62 But they were also important for a second and intrinsic property: their design was found to be uncannily sophisticated. The intersection of the flexor-tendons was particularly admired for its mechanical artistry. In the argument that all the parts of the body declared the wisdom and goodness of God in the creation of man, the construction of the human hand was one of the classic examples which could not be gainsaid.63

59 Ibid., p. 96, "ostendit . . . quomodo duplici situ musculi super se situati sint quattuor semper super quattuor, et quomodo inferiores tenderent ad primos articulos: superiores autem ad secundos et tertios perforantes semper primos. Certe, hoc erat pulcherrimum uidere. Et quomodo isti tendines simul tecti erant quadam speciali pellicula, quos deinde separabat, et usque ad articulos digitorum perducebat. 'De his' inquit 'legatis Galeni 1. et 2. lib. de usu partium et 1. de administr. anath. Et de musculis membrorum'." In Heseler's nomenclature the "upper" tendon is that of the flexor profundus, the "lower" that of the flexor superficialis, both being named here from their relative positions after they have changed places in the perforation. This could be regarded as the obvious way of naming them from the point of view of a student standing near the feet of the cadaver.

Vesalius's reading list for his students, quoted above, would have exposed them to Galen's interminable variations on this subject. Having launched the theme in books I and II of On the use of parts, Galen brought it home in the last book (XVII) with the conclusion: "To a genuine investigator of Nature's works, the sight of the undissected arm alone is enough [to arouse admiration] . . . but even an enemy of Nature's, especially if he gazes on the art displayed in its inward parts as I explained it in books I and II, will lie awake at night if he seeks to find something to disparage among the things he has seen."64 And of the tendons that flex the fingers: "their insertions in the bones and their relations with each other are amazing and indescribable. No words can anyway explain accurately things perceived through the senses alone.
Yet one must try to describe them, for until their construction has been explained it is not possible to admire Nature's artistry [as it deserves]".65 The flourishing state of Galenic studies in the early sixteenth century made these ideas more familiar then than ever before. In 1536, the anatomist Niccolo Massa wrote: "the composition of the hand and of the instruments [muscles and tendons] which move it is a most beautiful sight which arouses the greatest praise of the good Lord."66 On the combination of the superficial and deep flexor tendons of the fingers, Vesalius himself wrote with Galenic fervour that it was "a peculiar and rare occurrence . . . due to the marvellous labour of the supreme Creator of the world."67 It is this miracle of anatomy that Vesalius demonstrates in the woodcut frontispiece to the Fabrica (Pl. 10).
It would be too easy to conclude that Vesalius's portrait was intended to show him revealing God's "marvellous labour" in the creation of the human hand. This interpretation could be supported on the ground that portrait-attributes were often selected to illustrate the sitter's piety, but it cannot be said to reflect an anatomical argument congenial to Vesalius. At this time (1542) Vesalius was fiercely obsessed with two ideas about anatomy. He supported the gathering of new facts as against the interpretation of established ones, and the dissection of human as against simian cadavers. These views were stated forcefully and frequently in Vesalius's preface and throughout his text. By comparison, the Galenic lessons of the philosophical and religious value of anatomy received little attention from Vesalius. Therefore, although the words on the hand which we have cited from Galen, Massa, and Vesalius suggest that the finger-flexor motif was able to serve as an illustration of the providence of the Creator, one may doubt whether Vesalius originally intended it to bear that meaning in his portrait, especially since it can be interpreted in other ways. Vesalius's dissection of the human hand and fingers does illustrate his two cardinal ideas about anatomy, and either its elegance or its difficulty alone could also have justified his choice of this dissection as his attribute.
When we look at the portrait through the eyes of Vesalius's contemporaries and followers, however, we see it in a different light, for few if any of them shared his lukewarm attitude to the use of anatomy in the Argument from Design. Their position is exemplified by a portrait of Platter, dated 1578 (Pl. 14). Platter holds a tome inscribed "VESAL.", while the legend beneath declares "COMPAGO MIRA CORPORIS NOSTRI DEI MIRACVLVM EST SOLERTIAE".68 For Platter and other admirers of Vesalius, to demonstrate the providence of the Creator was one of the main purposes of anatomy.
Since Galen had proclaimed, with a certain prolixity,69 that the human hand provided irrefutable evidence for precisely this argument, it was the hand, and especially its primary part the finger-flexors, which became in the sixteenth century one of the preferred organs to demonstrate God's manifestation in the human body.70 In the words of the English praelector anatomiae John Banester, the hand was "so notably of the omnipotent Creator created, as that ... no member more declareth the unspeakable power of almighty God in the creatyng of man."71 Surely Banester and Platter would have interpreted the hand motif in Vesalius's portrait in this sense.
It is surely in this sense also that we should understand the allegorical design which the surgeon-anatomist Fabricius ab Aquapendente (1533-1619) used on the title-pages of his anatomical works published around 1600.72 A figure personifying surgery (Fig. 4, right) is identified as such from the three surgical instruments in her care, and the figure personifying anatomy (Fig. 4, left) displays as her attribute the flexor-muscles and -tendons of the fingers. In her right hand she holds m. flexor digitorum profundus, while m. flexor digitorum superficialis floats out towards the viewer. This is the same dissection as in the portraits of Vesalius (Pl. 10) and Tulp (Pls. 2, 9), and the fact that here Anatomia herself displays it refutes the idea that the demonstration of these tendons need imply homage to, or rivalry of, Vesalius.73 Instead, it implies that, if anatomy in general was, in the Galenic metaphor, "a hymn of praise to the gods", the anatomy of the finger-flexors served as its first, most eloquent, and representative part.

Figure 4. Giacomo Valesio, "Anatomia" and "Chirurgia", detail of engraving for Hieronymus Fabricius ab Aquapendente, De visione voce auditu, Venice 1600, title-page. The distinguishing attribute of Anatomia is her differentiation between m. flexor digitorum superficialis and m. flexor digitorum profundus.

68 "The marvellous construction of the human body is a miracle of the ingenuity of God".
69 As Jessenius, Universalis humani corporis contemplatio, Wittenberg, 1598, c. XXVIII, fol. C4r, complained, "Quinque horum digitorum, sive processuum singulorum utilitatem 1 de usu part. Gal. prolixe exaggerat, ad quem lectorem remittimus."
70 The anatomy of the eye was the favourite proof of this point, but it was on too small a scale to be demonstrated in a portrait, unlike the anatomy of the hand.
71 J. Banester, cited in Appendix II no. 9, p. 61 below.

(iii) Laurentius. Of all the later sixteenth-century anatomists it was Laurentius who produced the amplest encomium of the hand, in his chapter de praestantia manus.74 Not only Aristotle and Galen but also Cicero and Quintilian were here ransacked to show that the hand was "the most noble and perfect organ of the body", and therefore one of the outstanding "doctors and teachers of divine wisdom". Laurentius did not fail to note the "marvellous artistry" with which Nature perforated the tendons of the flexor superficialis in order to provide the tendons of the flexor profundus with a passage to the distal phalanges, precisely the point demonstrated in the Vesalius and Tulp portraits. Anyone who saw Vesalius's portrait through the eyes of a follower of Laurentius must have interpreted it as a demonstration, through anatomy, of the power, wisdom, and goodness of God. (iv) As a reader and admirer of Laurentius alone, Tulp would seem likely to have interpreted the Vesalian dissection in that sense. But the other anatomists whose works he also read interpreted the dissection of the hand in the same way, using phrases derived from Aristotle and Galen. The views of Columbus, Coiter, and Bauhin are given in Appendix II below.75 The fourth anatomist, Fabricius ab Aquapendente, did not complete his magnum opus in which he would have discussed the hand, but what he has left us, a pictorial allegory of Anatomy (Fig. 4), is probably to be interpreted in the same sense, as we have just suggested.
(v-vi) It is therefore no surprise to learn that the last two anatomists named above as having influenced Nicolaes Tulp, Riolan and Pieter Paaw, also eulogized the hand in the words of Aristotle, Galen, or both.76 Paaw's writing on the hand is typical of his whole approach to anatomy. He had felt an inner drive to study it which he had not felt for his other responsibility, botany: this, he thought, was

either because I was touched by a kind of numinous quality in that divine temple [the human body]; or because man, for whose sake all other things were made, seemed to require more labour for his consideration; or because I judged that God himself intended greater, and more certain, evidence of His wisdom, power, and goodness, to appear in the formation of the human body than elsewhere.77

From the point of view of the anatomists who shaped Tulp's anatomical style, the most important of whom were probably Laurentius and Paaw, Tulp's action in Rembrandt's painting must therefore have been interpreted as a deliberate demonstration that anatomy was a path to the knowledge of God. Is this opinion of Nicolaes Tulp's mentors a reliable guide to his own intention in choosing the dissection shown in Rembrandt's picture?

72 De visione voce auditu, Venice, 1600; also used with same date for De formato foetu, which, however . . .
We have little direct evidence of Tulp's views on anatomy. He published no book on the subject, for he was, as we have stated, a general practitioner, whose chief interest was what has become known as pathology. We should not be misled by the fact that Tulp's name for pathology was anatome in his Latin writings and ontleding in Dutch: when Tulp wrote of anatome that it was the "very eye of medicine"78 and that it "brought forth the truth as it were out of the shadow into the light",79 he was thinking not of anatomical science, nor of public anatomies on the undiseased cadavers of executed criminals,80 but of a physician's dissections of his deceased patients, which (he hoped) would enable him to see, and not merely to guess, the causes of each symptom.81 But from Tulp's book on pathology we do have occasional glimpses of him at work in the anatomy-theatre. In one chapter he describes, as a prelude to a pathological case, the anatomical properties of the organ known as the ileo-caecal valve. We know that he lectured on this structure at his anatomy of 1632, and at several other anatomies in the 1630s.82 The style of this anatomical passage is markedly different from the cool, "Hippocratic" tone of Tulp's writings on pathology. The parochial simile, the political analogy, the theological conclusion all suggest that here, for once, Tulp was writing not as Amsterdam's Hippocrates but as its Galen or Laurentius, that is, in the style, perhaps even the very words, which he used in his capacity as the city's praelector anatomiae. We must imagine Tulp holding up the ileo-caecal valve to the people of Amsterdam with the following explanation:83 | 2016-05-12T22:15:10.714Z | 1982-01-01T00:00:00.000 | {
"year": 1982,
"sha1": "b51851031479366af0f62e86bb30ec37156f27c5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b51851031479366af0f62e86bb30ec37156f27c5",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": []
} |
233550955 | pes2o/s2orc | v3-fos-license | Gifted education: Perspectives and practices of school principals in Bahrain
Received Nov 3, 2020. Revised Mar 9, 2021. Accepted Apr 12, 2021.

Research on giftedness and gifted education has a rich history. Researchers have consistently pointed to educational leadership perspectives on giftedness and to inequitable identification policies and practices in gifted education. Research suggests there is a widening gap in the level of comprehensive knowledge in gifted education that is critical for school improvement. This paper examined school principals' (n=29) perceptions regarding giftedness among Bahraini students. The study focuses on exploring the characteristics school principals attribute to giftedness in their schools, the methods employed by schools to identify gifted students from the school principals' perspectives, and the educational provisions school principals used to support gifted students in their schools. The study also examines whether there are significant differences among school principals in their views on these three dimensions. The study employed quantitative methodology, and the analysis of the research questionnaire included descriptive and inferential analysis (ANOVA and t-test). The findings indicate that the school principals looked at giftedness mainly from an academic and school perspective. The results indicate some dissonance between the principals' perceptions of giftedness and the educational support that they provided to the gifted students in their schools.
INTRODUCTION
This study aimed to investigate the perceptions of a sample of Bahraini school principals on the characteristics of gifted students, ways of identifying those students, and provisions to meet their educational needs. The reason for focusing on this particular research area is that there is currently a growing interest in facilitating gifted students' learning; at the very least, in any world-class educational system, gifted education can never be disregarded [1]. There is a need for in-depth study of provisions that fulfil these needs effectively, through understanding the characteristics of gifted students, the ways giftedness is identified, and the educational practices in place [2]. This study examines those dimensions. Gifted students need special support and social, psychological, and emotional guidance [3], [4]. Such high aspirations are possible only when we understand their special needs [5].
Smedsrud [6] argued that it is difficult to settle on one conclusive definition of the concept of giftedness because of differing perspectives. The psychometric approach was mainly used, in which students achieving high scores on intelligence tests were identified as gifted. Then the cognitive approach came and used two tools in the …

Endepohls-Ulpe and Ruf [24] investigated the standards for identifying gifted students used by primary teachers in Germany. The study sample included 384 male and female teachers. Interviews and questionnaires were used in the data collection. Findings of the study indicated that most of the teachers considered high cognitive abilities to be the main criterion of giftedness, while some of the teachers indicated that motivation also plays a key role in this process. Most of the teachers showed positive attitudes toward dealing with gifted students, and they indicated that the characteristics of giftedness vary between the primary and secondary school levels.
Moon and Brighton [25] gave an overview of the results of a research project conducted by the National Research Centre for the Gifted and Talented in the United States. The study sample included 434 teachers, and the study used a questionnaire and case studies to investigate primary school teachers' perceptions and practices with regard to gifted students. Findings of the study indicated that most of the teachers held traditional perceptions of giftedness, such as excellence in academic skills like reading and writing, language proficiency, and general knowledge. Less attention was given to identifying gifted students who come from social and ethnic minorities, from low-income families, or from non-native English-speaking backgrounds. These social and cultural factors acted as a barrier to investing in the students' giftedness.
Laine, et al. [26] explored primary and secondary school teachers' perceptions of giftedness in Finland in two research projects. The first project included 212 primary teachers and the second included 279 secondary school teachers. The studies used questionnaires and content analysis methods. Findings indicated that the teachers had a multidimensional perspective on giftedness that includes cognitive and creative dimensions. In addition, the teachers indicated that motivation and a growth mindset play a key role in nurturing giftedness. The teachers tended to compare students with one another and perceived giftedness as not static but as something that can change during an individual's life. The studies concluded that there is a need for more training and preparation for teachers in the area of identifying and dealing with gifted students.
Finley's [4] research focused on the identification process for gifted students. Finley described the promotion of a separate, elitist attitude among the gifted population and the lack of connectedness to, or extension of, the general curriculum in the seminar curriculum. Sternberg and Davidson [27, p. 36] propose a seminar curriculum for the gifted: "an advanced curriculum matches gifted abilities and incorporates the opportunity to explore topics in depth while surrounded by academic peers." Every gifted child deserves to be engaged in meaningful and powerful learning at the points most appropriate to his or her readiness, interest, and learning profile, stretched and challenged to achieve his or her fullest potential. The seminar curriculum was developed in order to provide an enriched curriculum for identified gifted students and to address their need for the development of higher-level thinking, problem solving, and research skills. Finley [4] observed that this differentiated instruction model in some schools in the US provides an opportunity for gifted students (two hours per week) to interact with their intellectual peers in collaborative groups as they participate in enriching and challenging projects. Finley also offered implications for practice, namely to undergo a critical review of the school's program, update and align the curriculum with research, address concerns about an identification process that mandates very superior IQ scores of 130, attend to the diversity within giftedness, and more.
Quek, et al. [28], in their study exploring effective practices in the teaching and learning environment, emphasised the need for teacher-student interaction that could help the gifted learn better. The research also implies the importance of customizing instruction and infusing creative and critical thinking skills. The researchers recommended that the curriculum be revised appropriately, taking into account an intellectually stimulating environment and the dynamics of the communication process. The study showed that the interpersonal behaviour of teachers has an impact on students. In order to encourage positive results, teachers could incorporate more real-life investigative work. This signals that, in order for any gifted education to thrive, positive relationships between students, teachers, and school leaders need to be established.
DiPaola and Walter-Thomas's [29] study indicated that the role of the principal shifted to that of instructional leader in the 1980s and, more recently, to that of learning leader. Principals now act as agents of change in the teaching, learning, and implementation process [30]. Hess and Kelly's [31], [32] studies of principal preparation programs in the US implied that, because the "preparation of principals has not kept pace with changes in the larger world of schooling, graduates of leadership-preparation programs have been left ill-equipped for the challenges and opportunities posed by an era of accountability" (p. 40). The lack of gifted education content may lead principals to begin their careers without the ability to oversee effectively the concerns related to gifted students.
Hence, the rationale for the current study emerged from a desire to uncover what we currently know, what we do not know, and what we still need to know about leadership perceptions of giftedness in Bahrain schools. We hope to enable systemic reform and to create policies that foster the optimal growth of appropriate gifted education programs.
RESEARCH METHOD
This study employed a quantitative descriptive methodology to investigate school principals' perceptions of giftedness among students in Bahraini schools. Data were collected through a questionnaire designed in Likert-scale format that was originally developed by Nagara [33], translated into Arabic, and used with a Bahraini teachers' sample (n=80) with a split-half reliability coefficient of 0.74 [34]. We used the same questionnaire again in this study to investigate Bahraini school principals' perceptions in three sections: 1) Characteristics school principals attribute to giftedness in students; 2) Methods employed by schools to identify students presumed to be gifted in their schools; and 3) Educational provisions school principals adopted to include gifted students in their school. The questionnaire had five Likert-scale options that were coded for the SPSS analysis (Strongly Agree=5, Agree=4, Neutral=3, Disagree=2, and Strongly Disagree=1). The questionnaire was administered by the first author to a group of consenting school principals. Participation in the study was voluntary. Various descriptive statistical techniques were used, such as measures of central tendency, together with inferential statistics such as the t-test and one-way analysis of variance (ANOVA), to determine whether there are significant differences between the means of two groups or to compare the means of two or more samples. The value of the split-half reliability coefficient of the questionnaire in the current study is 0.79.
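To make the analysis pipeline concrete, the following minimal sketch reproduces the study's main statistical steps (split-half reliability with the Spearman-Brown correction, an independent-samples t-test, and a one-way ANOVA) in Python with pandas and SciPy rather than SPSS. It is an illustration only: the item and grouping column names and the randomly generated responses are hypothetical, not taken from the study's data set.

    # Minimal sketch of the study's analysis steps (hypothetical data, not the study's).
    import numpy as np
    import pandas as pd
    from scipy import stats

    # Likert responses coded 5..1, as in the study (Strongly Agree=5 ... Strongly Disagree=1).
    rng = np.random.default_rng(0)
    items = [f"item_{i}" for i in range(1, 11)]  # hypothetical item names
    df = pd.DataFrame(rng.integers(1, 6, size=(29, 10)), columns=items)
    df["gender"] = rng.choice(["male", "female"], size=29)
    df["experience"] = rng.choice(["6-10", "11-15", "16-20", "21+"], size=29)

    # Split-half reliability: correlate odd-item and even-item half-scores,
    # then apply the Spearman-Brown correction for the full test length.
    odd, even = df[items[0::2]].sum(axis=1), df[items[1::2]].sum(axis=1)
    r_half, _ = stats.pearsonr(odd, even)
    print(f"split-half reliability: {2 * r_half / (1 + r_half):.2f}")

    # Independent-samples t-test on the dimension mean, by gender.
    df["dim_mean"] = df[items].mean(axis=1)
    t, p = stats.ttest_ind(df.loc[df["gender"] == "male", "dim_mean"],
                           df.loc[df["gender"] == "female", "dim_mean"])
    print(f"t-test by gender: t={t:.2f}, p={p:.3f}")

    # One-way ANOVA on the dimension mean across work-experience groups.
    f, p = stats.f_oneway(*[g["dim_mean"].values for _, g in df.groupby("experience")])
    print(f"ANOVA by experience: F={f:.2f}, p={p:.3f}")

The Spearman-Brown step matters because a raw correlation between two half-tests understates the reliability of the full-length questionnaire.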
Sample
The sample of this study consisted of 29 school principals (15 females, 14 males). Although the sample size is relatively small, it represents approximately 14% of the research population (the total number of public schools in Bahrain is 209). Availability (convenience) sampling was used in this study. The researchers reached more than 100 school principals who were accessible at that time and received completed surveys from only 29 of them. The participants' work experience ranged from 6 to more than 21 years. Thirteen participants had 11 to 15 years of work experience, while 11 of them had 21 years of experience or more. In addition, a high number of the participants (18) worked in intermediate schools, while smaller numbers worked in primary or secondary schools. All of the participants hold a Bachelor's degree, and three of them also hold higher degrees such as a PhD or Master's, as shown in Table 1 to Table 4.
RESULTS
To answer the first research question (What are the characteristics school principals attribute to giftedness in students in their schools?), the researchers calculated frequencies, percentages, averages, and standard deviations for the participants' responses in the questionnaire. The results shown in Table 5 indicate that the general average of the characteristics school principals attribute to giftedness in students in their schools is (3.84) with a percentage of (76.90%). The school principals indicated that the most important characteristic of gifted students is (They show intense interest in some subjects), followed by (Are born with the inherent/innate gifts, do not need to put effort), for which the mean was (4.07) with a percentage of (81.38%). The school principals indicated that the lowest-rated characteristic of giftedness is (Excel in both academic and non-academic areas), with an average of (3.59) and a percentage of (71.72%). The second lowest characteristic was (Have natural exceptional abilities), with an average of (3.62) and a percentage of (72.41%).

Table 5. Frequencies, percentages, averages, and standard deviations of school principals' perceptions on the characteristics of giftedness

To answer the second research question (Are there any significant differences among school principals in their views on the characteristics of gifted students that can be attributed to the following variables: gender, work experience, subject, school level, and qualification?), the researchers calculated t-test and ANOVA values to find any significant differences, as shown in Table 6 to Table 9.
Findings of the t-test indicated that there were no statistically significant differences (0.744>.05) between school principals in terms of their views on the characteristics of gifted students that can be attributed to the gender variable. Findings from the ANOVA analysis indicated that there were no statistically significant differences between the school principals, with a significance level of (0.319>.05), attributed to the work experience variable for this dimension in general. However, applying the ANOVA analysis to the individual items indicated that there is a significant difference between the school principals on the item (show good memory of what they learn), with a significance level of (.046<.05), as shown in Table 6. The least significant difference (LSD) test was used to determine which levels of school principals' work experience the differences favor. The findings indicate that the difference is in favor of the following work experience bands: (6-10) and (11-15) years, with significance levels of (.040), (.027) and (.047)<.05, as shown in Table 7.
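For reference, Fisher's least significant difference procedure amounts to pairwise comparisons carried out only after a significant omnibus ANOVA; one common approximation runs unadjusted per-pair t-tests (the textbook version pools the within-group error term from the ANOVA). A minimal sketch, with hypothetical group scores standing in for the experience bands:

    # Fisher's LSD post hoc as unadjusted pairwise t-tests after a significant ANOVA.
    # Group labels and scores are hypothetical, not the study's data.
    from itertools import combinations
    from scipy import stats

    groups = {
        "6-10 yrs":  [4.2, 4.4, 3.9, 4.1],
        "11-15 yrs": [4.0, 4.3, 4.1, 4.2],
        "16-20 yrs": [3.4, 3.6, 3.5, 3.3],
    }
    f, p = stats.f_oneway(*groups.values())
    print(f"omnibus ANOVA: F={f:.2f}, p={p:.3f}")
    if p < 0.05:  # proceed to pairwise comparisons only if the omnibus test is significant
        for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
            t, p_pair = stats.ttest_ind(a, b)
            print(f"{name_a} vs {name_b}: t={t:.2f}, p={p_pair:.3f}")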
Findings from the ANOVA analysis indicated that there were no statistically significant differences between the school principals, with a significance level of (0.095>.05), attributed to the school level variable for this dimension in general. However, applying the ANOVA analysis to the individual items indicated that there is a significant difference between the school principals on the item (show good memory of what they learn), with a significance level of (.042<.05), and on the item (Excel in both academic and non-academic areas), with a significance level of (.007<.05), as shown in Table 8. The least significant difference (LSD) test was again used to determine which school level the differences favor. The findings indicated that this difference is in favor of the intermediate school level, with significance levels of (.002) and (.019)<.05, as shown in Table 9. Finally, findings from the ANOVA analysis indicated that there were no statistically significant differences between the school principals, with a significance level of (0.096>.05), attributed to the qualification variable for this dimension in general.
To answer the third research question (What methods are employed by schools to identify presumably gifted students, from the school principals' perspectives?), the researchers calculated frequencies, percentages, averages, and standard deviations for the participants' responses in the questionnaire, as shown in Table 10, which indicated that the most used identification method is (Checklists of characteristics of giftedness), with an average of (4.45) and a percentage of (88.97%). The second most used method was (A combination of methods/multiple dimensional methods), with an average of (4.31) and a percentage of (86.21%). The least used identification method was (Peer nomination-informed by other students), with an average of (3.93) and a percentage of (78.62%), followed by (Parent nomination-informed by parents), with an average of (3.97) and a percentage of (79.31%).
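As far as can be inferred from the paired values in these tables, the reported percentages are simply the Likert means rescaled against the maximum score of 5. For example, assuming straightforward rounding, the checklist item's reported (88.97%) corresponds to an underlying mean of about 4.4485, which rounds to the reported (4.45):

    \text{percentage} = \frac{\bar{x}}{5} \times 100, \qquad \frac{4.4485}{5} \times 100 \approx 88.97\%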
To answer the fourth research question (Are there significant differences among school principals in their views on the identification methods of gifted students that can be attributed to these variables: gender, work experience, subject, school level, and qualification?), the researchers calculated t-test and ANOVA values to find any significant differences. The results indicated no significant differences between school principals in terms of their views on the identification methods of gifted students that can be attributed to the gender variable (significance level 0.084>.05), the work experience variable (significance level 0.637>.05), the school level variable (significance level 0.670>.05), or the qualification variable (significance level 0.10>.05). To answer the fifth research question (What are the educational provisions school principals used to support gifted students in their school?), the researchers calculated frequencies, percentages, averages, and standard deviations for the participants' responses in the questionnaire, as shown in Table 11, which indicated that the general average of the participants' perceptions on the educational provisions used to support gifted students was (4.32) with a percentage of (86.33%). The most used educational provision was (Through ability grouping), with an average of (4.72) and a percentage of (94.48%). The second most used provision was (Through research projects in their areas of strength), with an average of (4.59) and a percentage of (91.72%). The least used provisions were (Through acceleration through grade skipping), with an average of (3.38) and a percentage of (67.59%), and (By acceleration programs), with an average of (3.86) and a percentage of (77.24%).
To answer the sixth research question (Are there significant differences among school principals in their views on the provisions offered to support gifted students that can be attributed to the following variables: gender, work experience, school level, and qualification?), the researchers calculated t-test and ANOVA values to find any significant differences. Findings of the t-test indicated that there were no statistically significant differences (0.736>.05) between school principals in terms of their views on the provisions offered to support gifted students that can be attributed to the gender variable. Similarly, no statistically significant differences (0.652>.05) were found that can be attributed to the work experience variable on the general dimension, and no statistically significant differences (0.484>.05) were found that can be attributed to the school level variable on the general dimension. The only significant difference between school principals' perceptions on the provisions offered to gifted students (0.040<.05) was attributed to the qualification variable. It was not possible to determine which qualification level these differences favored because there were only three principals in the sample with postgraduate degrees (Table 12).
It is interesting to note the general lack of statistical significance found for the gender, subject, and qualification variables.
DISCUSSION
First, it is important to highlight that the number of participants in this study is quite small; therefore, it is difficult to say that these perceptions reflect the wider population of school principals in Bahrain. There are in total 209 public schools in Bahrain. The current sample of 29 participants represents approximately 14% of the study population, including male and female school principals with a wide variety of experience in teaching and managing different levels of schools. They are also qualified in their subject specializations and in educational and leadership knowledge and skills.
We noticed in the results of the first question that the school principals gave higher rankings to the following characteristics of giftedness: show intense interest in some subjects; are born with inherent/innate gifts and do not need to put in effort; are quick to grasp concepts and finish class assignments; and excel in non-academic areas such as sports, drama, art, and music. The school principals' perspectives in this study were congruent with the study [21] in Bahrain, where the concept of giftedness is differentiated from talent. This indicates that they look at giftedness from an academic and school perspective, as highlighted by Smedsrud [6]. It is interesting to note that the Bahraini principals' perceptions of giftedness correlate with some of the characteristics of giftedness observed by Sternberg and Davidson [27], and also with more recent research [14], in having similar pathways of identification that likewise place more emphasis on academic aspects.
The results of the second question generally indicate the sample's homogeneity, as there were almost no significant differences among the participants surveyed. Their similar training and professional development explain the homogeneity among the school principals' perspectives about what giftedness could possibly entail. Given Bahrain's centralized education system, the standardized practices of recruiting and promoting school leaders are fairly constant across all levels. While the results indicate homogeneity in general, it is interesting to note that there were two significant differences in the school principals' perceptions of giftedness: one concerning excelling in both academic and non-academic areas, and the other concerning showing good memory of what they learn. These significant results could arise because some school principals may not consider these two dimensions to be indicators by which to identify giftedness in students. Quek, et al. [28] and Heuser, et al. [35] argued that there are different constructs of talent, intelligence, and ability when comparing perceptions, policies, and practices across the different norms of established systems. Stephens [36] highlighted the importance of policies in establishing a sound transition from policy to implementation in order for gifted education programs to be impactful across many levels; policy can either hinder or support the transition from policy to practice.
The results of the third question indicate that the school principals gave higher rankings to the following methods of identifying gifted students: checklists of gifted attributes/characteristics, a combination of methods/multiple dimensional methods, and personal and teacher observation. Still, it seems that they perceive giftedness as abilities related to school and academic work. They did not give high attention to parental or peer opinions. There is a dissonance in this area, as Khalifa [37] has emphasized that parental involvement is pivotal to ensuring progressive development and success in any gifted education program.
The results of the fourth question generally indicate the homogeneity of the sample, as there were almost no significant differences among the participants in their perspectives on methods of identifying gifted students. The literature indicates a strong influence from expanded notions of ability, such as the multiple intelligences (MI) championed by Gardner [38]. Despite the MI theory's popularity, empirical support has been mixed. Grantham [39] argues that assessment has been difficult, limiting its impact on gifted education [5], [40]. The work of Sternberg [7] and Renzulli [41] clearly broadened educators' conceptions of what giftedness and talent can be and where or how they can be found. Talent is the demonstrated mastery of the gift as evidenced by skills in academics, arts, business, leisure, sports, or technology that place the individual in the top 10% of age peers [2]. This study emphasizes the role of the socio-cultural context [5] in defining, identifying, and fostering giftedness, which correlates with recent studies such as Kaluda [14] and Gubbin [42] that raised the importance of motivation and engagement in producing higher achievement and success in gifted programs.
The results of the fifth question indicate that the school principals gave higher rankings to the following types of provisions offered by schools to gifted students: grouping students according to their gifted abilities, doing projects in students' areas of strength and interest, and enrichment programs that broadly develop students' horizons and interests. This result reiterates Heuser, et al.'s [35] finding that one of the constructs of intelligence and ability is the individualistic versus collective dimension. The participating school principals did not give high attention to other giftedness provisions, such as engaging students in problem solving or acceleration programs, probably because these are not available in their schools.
Quek, et al. [28] in their study imply the importance of customizing instruction and infusing creative and critical thinking skills by providing gifted students with open-ended questions and challenging, intellectually stimulating programs.
The results of the sixth question generally indicate the homogeneity of the sample, as there were almost no significant differences among the participants in their perspectives on the provisions offered to support gifted students. The results have shown that the school principals' perceptions of giftedness are very closely knitted to cognitive abilities. This narrows the concept of giftedness to merely intellectual ability, whereas talent is the demonstrated mastery of the gift as evidenced by skills in arts, business, leisure, sports, or technology that place the individual in the top 10% of age peers [2]. Little provision was made for identifying and appreciating giftedness in terms of natural talents, creativity, and cultural aspects, a concern reflected in the existing literature [20], [29], which indicates that effective school principals need to have a general understanding of the foundations of giftedness and gifted education, along with student characteristics, instructional approaches, and financing.
Findings from this study reflect existing research [20], [29] revealing a dissonance between the principals' perceptions of giftedness and the educational support that they provide to the gifted students in their schools. This stands in contrast to the research by David [13], which highlighted the need for administrative institutions to be actively involved not only in the identification of giftedness but also in providing comprehensive support to gifted programs. Smedsrud [6] contended that the traditional approach to identifying giftedness affects how educational provisions are made. Limitations notwithstanding, findings from this study provide useful and timely information about Bahrain school leaders' preparation in terms of provisions and support.
First, the matter of equity is deemphasized in these discourses [39], [43]. To overcome the tension between equity and excellence, the field of gifted education needs to agree on what giftedness means and what the processes of identification and service should be for gifted programs. More inclusive discussions are also required on the nature-versus-nurture question in giftedness. Notably, the study conducted in Bahrain [21] differentiated gifted from talented students, and the provisions for its programs will differ accordingly. It is worth pointing out that students identified as gifted are not properly served in an equitable and culturally responsive way. A plausible explanation is a limited understanding of the concept of giftedness. Furthermore, the researchers argue that it could be due to a lack of understanding of the global dimensions of giftedness and the need for more resources and expertise in schools. Jones [43, p. 8] argued, "Many identified gifted students are not receiving the needed academic support through a relevant and rigorous curriculum." There is a need to review the curriculum, and it is pivotal that stakeholders, including school leaders, administrators, the education ministry, academics, and educational researchers, work together to identify gifted students from a more open and broad perspective rather than limiting identification to mere cognitive abilities. This would enable gifted students to receive support in ways that optimally stimulate and nurture their intellectual capacities.
In the present era, where discussions of education become mired in measurement, processes, and outcomes, Biesta [44] urged a refocus on the purpose and direction of good educational programming. Ideally, culturally relevant leadership in gifted education should aim to meet the individualized needs of students through a challenging and accelerated curriculum [45]. This should occur in a climate that fosters optimal growth, provides ample opportunities for students to hone and cultivate their domain-specific talents, and ultimately inculcates the joy of discovery and learning [2], [46]. School principals should also "identify policies that align with the stated intent and goals of the program in a transparent process that strives for inclusion, not exclusion" [47, p. 75]. A successful gifted program promotes inclusion and requires educational leaders to translate a vision of excellence and equity into reality [48] from a much broader perspective.
Challenges with funding gifted programs, due in part to a need for more explicit educational policies, have also hindered the nurturing of gifted students [49]. Schools become more inclusive when leaders make decisions that disrupt inequity [50]. Creating inclusive and equitable learning environments involves a conscious shift from deficit thinking to strengths-based paradigms at the individual and systemic levels, along with an emphasis on high expectations not only for academic performance but also for harnessing other forms of talent and creativity. Educators need to be critically aware of the need to address diversity issues in schools.
Curriculum is significant in identifying and serving gifted students [2]. Differentiating curriculum and instruction is crucial not only to support gifted and talented students by being "responsive to students' points of readiness, interest and learning profile" [4, p. 45], but also to value and embrace multiple cultures in the curriculum content and to create robust teaching and learning processes.
Bahrain Teachers College could start by answering the NAGC [51] call to action. As a teacher training college, its academic and educational leadership programs could include courses that promote giftedness and inclusion. School principals need motivation to create a school climate that supports excellence, and professional development courses should be aligned to address the needs of gifted education. Teacher training on the needs of gifted students from diverse populations, on teacher collaboration, and on other recommendations focused on curriculum and progress monitoring is also important. Genuine progress in advancing giftedness and gifted education requires academics and practitioners to engage actively in progressive conversation with educators to develop and use socially responsive curricula that connect to students' real-life experiences and communities and that include multiple voices and perspectives.
The researchers propose that the educational system in Bahrain may also need an excellence-oriented policy, supporting Stephens's [36] call to attend to students' different psychological make-ups and diverse talents. Such a policy would better meet the educational demands of pursuing excellence and would support the purposeful design and implementation of giftedness programs by administrative institutional leaders [13] to implement high standards in Bahrain's public schools.
Future research could dig deeper into how school leaders can adopt differentiated curriculum practices that provide students with a deeper understanding [35] of the subject matter, going beyond ordinary tasks. Education is a unified effort of all stakeholders, and parental involvement is crucial to ensuring progress and success. Khalifa [37] concurs that partnerships between educators and parents, facilitated by school leaders, can help develop the cultural competence, empathy, and communication that support student growth. Gifted education and educational leadership scholars alike should heed Mansfield's [52] argument that policies and effective standard practices promoting gifted education are critical and must never undermine social justice, as this would reduce public trust in school leaders.
CONCLUSION
The school principals viewed giftedness mainly from an academic and school perspective. The results indicate some dissonance between the principals' perceptions of giftedness and the educational support that they provide to the gifted students in their schools. Research in the field of giftedness needs to be free from bias: a significant portion of research on identification, processes, and models has been conducted by the developers of the very models and instruments under consideration, with few third-party studies and replications. Although keeping advocacy and research separate is admittedly easier said than done, it is not impossible. Additional research is needed to clarify some of the points alluded to by the data. Future researchers might investigate how school leaders can clearly identify the characteristics of giftedness. Follow-up individual or group interviews with the survey respondents could explore how to create conducive conditions and secure firm support from parents and the community, both of which are integral to ensuring continuous success. For gifted programs, a consistently reviewed curriculum is pivotal to the success of any gifted education.
There is certainly a need for a paradigm shift toward a broad concept of giftedness. Educators and school leaders especially need to put away the longstanding notion that students' giftedness can be measured with standardized tests alone. Examining new paradigms for definition, talent development, and identification in conjunction with proposed curricular and service interventions would provide policy makers with clear pathways for decision making, which is not only necessary but also crucial for Bahrain's continuing efforts toward a world-class education. | 2021-05-04T22:04:22.293Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "87cfe590585035c0f15c5e861261bbf6e849bf27",
"oa_license": "CCBYNC",
"oa_url": "http://ijere.iaescore.com/index.php/IJERE/article/download/21176/13149",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c533809627701c3029059ef5d4ac61de30996b26",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
119345330 | pes2o/s2orc | v3-fos-license | New Limit on the D Coefficient in Polarized Neutron Decay
We describe an experiment that has set new limits on the time-reversal-invariance-violating D coefficient in neutron beta-decay. The emiT experiment measured the angular correlation J · (p_e × p_p) using an octagonal symmetry that optimizes electron-proton coincidence rates. The result is D = [−0.6 ± 1.2(stat) ± 0.5(syst)] × 10⁻³. This improves constraints on the phase of g_A/g_V and limits contributions to T violation due to leptoquarks. This paper presents details of the experiment, the data analysis, and the investigation of systematic effects.
I. INTRODUCTION
CP violation has been observed so far only in the decays of neutral kaons [1]. Recently, evidence for the implied T violation in the neutral kaon system has been reported [2]. These effects could be due to the Kobayashi-Maskawa phase in the Standard Model [3]. However, these observations could also be due to new physics, and it is well established that new sources of CP violation are required by the observed baryon asymmetry of the universe [4]. Many extensions of the Standard Model contain new sources of CP violation and can be probed through observables for which the contribution of the Kobayashi-Maskawa phase in the Standard Model is small. The present experiment searches for CP violation in one such observable, a T-odd correlation in the decay of free neutrons.
The differential decay rate for a free neutron can be written [5]

$dW \propto S(E_e)\, dE_e\, d\Omega_e\, d\Omega_\nu \left[ 1 + a\, \frac{\mathbf{p}_e \cdot \mathbf{p}_\nu}{E_e E_\nu} + \frac{\mathbf{J}}{J} \cdot \left( A\, \frac{\mathbf{p}_e}{E_e} + B\, \frac{\mathbf{p}_\nu}{E_\nu} + D\, \frac{\mathbf{p}_e \times \mathbf{p}_\nu}{E_e E_\nu} \right) \right], \qquad (1.1)$

where p_e, E_e and p_ν, E_ν are the momentum and energy of the outgoing electron and neutrino, respectively, S(E_e) is a phase space factor, and J is the neutron spin. The triple correlation D J·(p_e × p_ν) is odd under motion reversal and can be used to measure time reversal invariance violation when final state interactions are taken into account. Note that in the rest frame of the neutron, p_e × p_ν = −(p_e × p_p), where p_p is the momentum of the recoil proton. The D coefficient is sensitive only to T-odd interactions with vector and axial vector currents. In a theory with such currents, the coefficients of the correlations depend on the magnitude and phase of λ = |λ|e^{−iφ}, where |λ| = |g_A/g_V| is the magnitude of the ratio of the axial vector to vector form factors of the nucleon. In this notation, the coefficients are given by

$a = \frac{1 - |\lambda|^2}{1 + 3|\lambda|^2}, \quad A = -2\, \frac{|\lambda|\cos\phi + |\lambda|^2}{1 + 3|\lambda|^2}, \quad B = -2\, \frac{|\lambda|\cos\phi - |\lambda|^2}{1 + 3|\lambda|^2}, \quad D = \frac{2\, |\lambda|\sin\phi}{1 + 3|\lambda|^2}. \qquad (1.2)$

The most accurate determinations of |λ| (current world average |λ| = 1.2670 ± 0.0035) come from measurements of A [6]. The coefficients a, A, and B are measured to be −0.102 ± 0.005, −0.1162 ± 0.0013, and 0.983 ± 0.004, respectively [6]. Several previous experiments found the value of D, and thus sin φ, to be consistent with zero at a level of precision well below 1%. The three most recent such measurements found D = (−1.1 ± 1.7) × 10⁻³ [7], D = (2.2 ± 3.0) × 10⁻³ [8], and D = (−2.7 ± 5.0) × 10⁻³ [9], constraining φ to 180.07° ± 0.18° [6]. Final state interactions give rise to phase shifts of the outgoing electron and proton Coulomb waves that are time reversal invariant but motion reversal non-invariant. Thus D has terms that arise from phase shifts due to pure Coulomb and weak magnetism scattering. The Coulomb term vanishes in lowest order in V−A theory [5], but scalar and tensor interactions could contribute; measurements of the Fierz interference coefficient [10,11] can be used to limit this possible contribution to D. Interference between the Coulomb scattering amplitudes and the weak magnetism amplitudes produces a final state effect of order E_e²/(p_e m_n). This weak magnetism effect is predicted to be [12]

$D_{WM} = 1.1 \times 10^{-5}. \qquad (1.4)$

The D coefficient has also been measured for 19Ne decay, with the most precise experiment finding D_Ne = (4 ± 8) × 10⁻⁴ [13]. The predicted final state effects for 19Ne are approximately an order of magnitude larger than those for the neutron and may be measured in the next generation of 19Ne experiments. For 8Li, a triple correlation of nuclear spin, electron spin, and electron momentum has been measured, with the most precise measurement giving R = (0.9 ± 2.2) × 10⁻³ [14]. Unlike D, a nonzero R requires the presence of scalar or tensor couplings and is thus a tool to search for such couplings. The electric dipole moments (EDMs) of the electron [15], the neutron [16], and the 199Hg atom [17] are arguably the most precisely measured T-violating parameters and bear on many of the same theories as D. Table I summarizes the current constraints on D from analyses of data on other T-odd observables for the Standard Model and its extensions [18]. For lines 2-5 these limits are derived from the measured neutron or 199Hg EDM. In the nearly two orders of magnitude between the present limit on D and the final state effects lies the opportunity to directly observe or limit new physics.
Moreover, accurate calculations of the magnitude and energy dependence of the final state effects can be made to extend the range of exploration still further.
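As a quick numerical cross-check of Equation (1.2), the Python sketch below (our illustration, not part of the original analysis) evaluates a, A, B, and D from the world-average |λ| and the quoted phase; the outputs agree with the measured coefficients cited above.

    import math

    def correlation_coefficients(lam_mag, phi_deg):
        """Neutron beta-decay correlation coefficients of Eq. (1.2),
        with lambda = |lambda| * exp(-i*phi)."""
        phi = math.radians(phi_deg)
        denom = 1.0 + 3.0 * lam_mag ** 2
        a = (1.0 - lam_mag ** 2) / denom
        A = -2.0 * (lam_mag * math.cos(phi) + lam_mag ** 2) / denom
        B = -2.0 * (lam_mag * math.cos(phi) - lam_mag ** 2) / denom
        D = 2.0 * lam_mag * math.sin(phi) / denom
        return a, A, B, D

    # World-average |lambda| and the phase quoted above.
    a, A, B, D = correlation_coefficients(1.2670, 180.07)
    print(f"a = {a:+.4f}")   # ~ -0.104  (measured -0.102  +/- 0.005)
    print(f"A = {A:+.4f}")   # ~ -0.116  (measured -0.1162 +/- 0.0013)
    print(f"B = {B:+.4f}")   # ~ +0.988  (measured +0.983  +/- 0.004)
    print(f"D = {D:+.2e}")   # ~ -5e-4, consistent with zero at present precision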
II. OVERVIEW OF THE EMIT DETECTOR
In the emiT apparatus, a beam of cold neutrons is polarized and collimated before it passes through a detection chamber with electron and proton detectors (four of each). A schematic of the experiment is shown in Figure 1. The most significant improvements over previous experiments are the achievement of near-unity polarization (> 93%, compared to 70% in [7]) and the construction of a detector with greater acceptance and greater sensitivity to the D coefficient. The octagonal arrangement of the eight detector segments gives them nearly full coverage of the 2π of azimuthal angle around the beam, nearly twice the angular acceptance of previous experiments, and the detector segments are longer than in previous experiments. The placement of the two types of detectors at relative angles of 135° is also an improvement over previous experiments, in which the coincidences were detected at 90°. While the cross product is greatest at 90°, the preference for larger electron-proton angles in the decay makes placement of the detectors at 135° the best choice to achieve greater symmetry, greater acceptance, and greater sensitivity to D (see Figure 2). [Figure 2 caption: Although the cross product (dashed line) is maximized at electron-proton detection angles of 90°, the overall sensitivity to D (solid line) is enhanced at larger angles due to the phase space for the decay. Placing the detectors at 135° allows for an octagonal geometry that combines greater symmetry, acceptance, and sensitivity compared to placement at 90°. The solid curve is the sensitivity for a zero-radius beam, which would exhibit a factor of 7 enhancement at 135° compared to 90°; for our nearly 3 cm-radius beam, the enhancement factor is close to 3.] Combined with the higher neutron polarization from the supermirror polarizer, our geometry provides an overall sensitivity to D that is a factor of ≈ 7 greater than in previous measurements, assuming the same cold neutron beam flux.
The first run of the experiment was conducted at the NIST Center for Neutron Research (NCNR) in Gaithersburg, MD. The experimental apparatus is outlined below, while more detailed descriptions can be found in [19,20].
A. Polarized Neutron Beam
The NCNR operates a 20-MW, heavy-water-moderated research reactor. Neutrons from the reactor pass through a liquid hydrogen moderator to produce cold neutrons with an approximately Maxwellian velocity distribution at a temperature of about 40 K. The average neutron velocity is about 800 m/s. The neutrons are transported 68 meters to the apparatus via a 58Ni-lined neutron guide. Neutrons are totally internally reflected if they enter with an angle of incidence less than 2 mrad for each Å of de Broglie wavelength. The capture flux of the neutrons was measured using a gold foil activation technique to be ρ_n v₀ = 1.4 × 10⁹ n/cm²/s (where v₀ = 2200 m/s) at the end of the neutron guide. (The capture flux quantifies the neutron density in the detector for the polychromatic beam.) The beam passes through a cryogenic beam filter of 10-15 cm of single-crystal bismuth, which filters out residual fast neutrons and gamma-rays.
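For orientation, the guide transport condition quoted above (total reflection below 2 mrad per Å of wavelength) can be made concrete with a small calculation; this is an illustrative sketch using standard physical constants, not a number taken from the experiment.

    # Standard constants (SI units).
    H_PLANCK = 6.62607015e-34     # J*s
    M_NEUTRON = 1.67492750e-27    # kg

    def de_broglie_angstrom(speed_mps):
        """de Broglie wavelength of a neutron moving at `speed_mps`, in angstroms."""
        return H_PLANCK / (M_NEUTRON * speed_mps) * 1e10

    wavelength = de_broglie_angstrom(800.0)       # average speed quoted above
    critical_angle_mrad = 2.0 * wavelength        # 2 mrad per angstrom for the 58Ni guide
    print(f"lambda  = {wavelength:.2f} A")        # ~4.9 A
    print(f"theta_c = {critical_angle_mrad:.1f} mrad")  # ~10 mrad for the average neutron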
The neutrons are polarized with a double-sided bender-type supermirror polarizer obtained from the Institut Laue-Langevin in Grenoble, France [21]. The supermirror consists of 40 Pyrex [22] plates coated on both sides with cobalt, titanium, and gadolinium layers that maximize the reflection of neutrons with the desired spin state while absorbing nearly all neutrons of the opposite spin state. The supermirror was measured to polarize a 4.5 cm by 5.5 cm beam with 24% transmission relative to the incident unpolarized flux. The neutron polarization was determined to be > 93% (95% CL).
The neutrons travel the one meter from the polarizer to the spin-flipper inside a Be-coated glass flight tube in which a small helium overpressure is maintained to minimize beam attenuation via air scattering. The neutrons, which have spins that are transverse to their motion, then pass through two layers of aluminum wires which comprise the current-sheet spin flipper. When the current in the second layer is antiparallel to that in the first there is no net magnetic field and the neutron polarization is unaffected. When the currents are parallel, the neutron spin does not adiabatically follow the rapid change in field orientation and thus the sense of J · B is reversed. Downstream of the spin flipper, weak magnetic fields adiabatically rotate the spin to longitudinal, i.e. parallel or antiparallel to the neutron momentum. The longitudinal guide fields are 2.5 mT upstream and 0.5 mT within the detector. Figure 3 shows the spin transport system. The polarization direction is reversed every 5 seconds. In the detection region, the longitudinal field is produced by eight 50 amp-turn current loops of 1 m diameter. The loops are aligned to within 10 mrad of the detector axis using a sensitive field probe and an AC lock technique. Additional coils canceled the transverse components of the Earth's field and local gradients of 7.5 µT/m.
The vacuum chamber begins at the spin flipper with two meters of Be-coated flight tubes, through which the neutrons travel toward the collimator region. Two collimators with 6 cm and 5 cm diameter openings, separated by 2 m, define the beam. These and 5 additional "scrapers" between them consist of rings of 6LiF, which absorb neutrons. Behind each ring is a thick ring of high-purity lead, which absorbs the gamma-rays from the reactor and those produced by neutron captures upstream. Between scrapers, the walls of the beam tube are lined with 6Li-loaded glass to absorb stray neutrons. [Figure 3 caption: Two sheets of current-carrying wires create a magnetic field of opposite orientation on each side. The field orientation changes so rapidly that the spin of a neutron passing through the current sheets cannot follow the field reversal, and the neutron polarization is reversed with respect to the magnetic guide field. Downstream, the magnetic field and polarization are rotated adiabatically from transverse to longitudinal orientation.]
A fission chamber mounted behind a sheet of 6Li-glass with a 1 mm pinhole aperture was scanned across the beam to obtain a cross-sectional profile of the intensity, as shown in Figure 4. The neutron intensity was measured before and after the experiment. To determine the polarization at the entrance to the detector, the beam passed through a second, single-sided, analyzing supermirror directly in front of the scanning detector, and the ratio of intensities with the spin flipped and unflipped was measured. The resulting flipping ratio measures a combination of the neutron-spin-dependent transmission efficiencies of the two supermirrors and the neutron spin flipping efficiency. From this, and assumptions about the spin flipping efficiency, we can determine the product of polarization efficiencies for the two supermirrors (polarizer and analyzer). When the upper limit of 100% spin flipping efficiency is used, a lower limit on the neutron beam polarization of 93% (95% CL) is found. This lower limit also includes the assumption that the flipping ratio for a pair of supermirrors identical to our analyzer would be less than that of a pair of supermirrors identical to our polarizer by a factor of 2 ± 0.5% [21].
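The logic of extracting a polarization bound from a flipping ratio can be sketched as follows. The relation used here is the standard crossed polarizer-analyzer formula with an assumed spin-flipper efficiency f; the experiment's exact expressions and corrections are not reproduced in the text, and the flipping ratio value below is hypothetical.

    import math

    def polarizer_analyzer_product(R, f=1.0):
        """Product of polarizer and analyzer polarizing efficiencies Pp*Pa
        inferred from a measured flipping ratio R, assuming a spin-flipper
        efficiency f (f = 1 is the conservative upper limit used above):
            R = (1 + Pp*Pa) / (1 - (2f - 1) * Pp*Pa)."""
        return (R - 1.0) / ((2.0 * f - 1.0) * R + 1.0)

    R = 30.0  # hypothetical flipping ratio, for illustration only
    pp_pa = polarizer_analyzer_product(R)
    print(f"Pp*Pa = {pp_pa:.3f}")
    # If polarizer and analyzer were identical, each would polarize to:
    print(f"P ~ sqrt(Pp*Pa) = {math.sqrt(pp_pa):.3f}")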
Downstream of the detection region the vacuum chamber diameter increases to 40.6 cm, terminating with a 6 Li-glass beam stop 2.8 m from the end of the detector. A 1 mm diameter pinhole at the center of the beamstop allows about 1% of the beam to pass through a silicon window into a fission chamber detector that continuously monitors the neutron flux.
B. Detector System
Eight detectors surround the beam, each 10 cm from the beam axis as shown in Figure 5. The octagonal geometry places electron and proton detectors at relative angles of 45 and 135 degrees. Coincidences are counted between detectors at relative angles of 135 degrees.
Electron Detectors
The electron detectors are slabs (8.4 cm x 50 cm x 0.64 cm) of BC408 plastic scintillator connected on each end to curved lucite light-guides that channel the light to Burle 8850 photomultiplier tubes. Each photomultiplier tube is surrounded by a mu-metal magnetic shield and a pair of nested solenoids acting as an active magnetic shield. This combination of active and passive magnetic shielding had a factor of 10 less impact (0.5 µT) on the guide field at the beam center than the mu-metal alone.
The scintillator thickness of 0.64 cm is just greater than that necessary to stop the most energetic (782 keV) of the electrons from neutron decay. The scintillators are wrapped with aluminized mylar and aluminum foil to prevent charging and to shield the detectors from x-rays and field-emission electrons in the vacuum chamber. For each segment, the energy response was calibrated with cosmic-ray muons and conversion electrons from 207Bi and 113Sn (see Figure 6).
Proton Detectors
Each proton detector has an array of 12 PIN diodes of 500 µm thickness arranged in two rows of 6. The diodes are held within a stainless steel high-voltage electrode. Over each diode, an open cylinder protrudes from the face of the electrode, shaping the field to focus and accelerate the protons as shown in Figure 7. Thus each diode collects protons focused from a region of 4 cm × 4 cm even though it has an active area of only 1.8 cm × 1.8 cm. The diodes and their electronics are held at −30 to −40 kV. Between the electrode and the beam is a frame strung with 80 gold-plated tungsten wires of 0.08 mm diameter that define a plane of electrical ground. Protons drift in a field-free region until they pass this plane, and are then accelerated by the high voltage and focused onto the nearest PIN diode. Near both ends of the detector array are two cryopanels held at liquid nitrogen temperature. Water vapor, released predominantly by the scintillators and other plastic components, is pumped onto the cryopanels to prevent condensation on the cooled PIN diodes.
The charge in the PIN diode produced by each proton is amplified by 10 V/pC with a preamplifier mounted directly behind the PIN diode. These circuits and the PIN diodes are cooled with liquid nitrogen to about 0 °C to decrease electronic noise. Preamplifier signals are processed in a custom VME-format shaper/ADC board with programmable gain and operating-mode parameters. The PIN diodes were calibrated with x-rays from an 241Am source, as shown in Figure 8.
Background
The background in the detectors was primarily related to the beam or to the high-voltage bias. Closing the beam shutter upstream of the neutron filter stops virtually all neutrons and about 1/3 of the gamma-rays coming from the reactor along the beamline. With the shutter closed, the rates in each detector were less than 100 Hz, primarily from dark current, reactor gamma-rays, and cosmic rays. With the shutter open, the detectors see an increased gamma-ray flux, primarily from neutron captures in the apparatus, triggering the detectors at less than 1 kHz per electron detector and less than 1 kHz for all PIN diodes combined. This results in a deadtime of less than 3% from the beam-related background. At its worst, the high-voltage-related background, consisting of x-rays, light, electrons, and ions, led to rates of hundreds of kHz in the detectors. It was reduced at times by conditioning and cleaning of the electrodes, but it varied by orders of magnitude during the run.
C. Data Acquisition
A block diagram of the data acquisition system is shown in Figure 9. The identification of neutron decay events is simplified by the fact that the proton signal is observed 0.5 µs to 2 µs after the electron signal. The recoil protons, with maximum energies of only 750 eV, require this time to drift from the point of decay to the face of the proton detector. Events are accepted by the coincidence trigger when the electron signal arrives within a coincidence time window ±τ_coinc/2 of a proton signal. The duration τ_coinc of this window was originally 14 µs and was shortened to 7 µs midway through the experiment to reduce the system deadtime. Each stored event contains the location (PIN diode) and energy of the proton event; the location (electron detector) and energy of the electron event; the relative time between the individual signals from the two phototubes in the electron detector; the relative time of arrival of the proton and electron signals; and the orientation of the neutron polarization. Every 30 seconds during data collection, information is recorded from the system monitors, which include system livetime, magnet currents, neutron flux at the beam stop, vacuum pressure, proton detector high voltage, and high-voltage leakage current. Periodically, the data acquisition system collects singles spectra from all of the individual detectors.
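The 0.5-2 µs proton delay can be checked with a back-of-the-envelope drift-time estimate. This sketch assumes a field-free drift of roughly the 10 cm detector-to-beam-axis distance quoted earlier; actual drift paths depend on where the decay occurs.

    import math

    E_CHARGE = 1.602176634e-19   # C
    M_PROTON = 1.67262192e-27    # kg

    def drift_time_us(energy_ev, distance_m=0.10):
        """Field-free drift time of a recoil proton over `distance_m`.
        The 0.10 m default is the detector-to-beam-axis distance quoted
        earlier; true drift paths depend on the decay position."""
        speed = math.sqrt(2.0 * energy_ev * E_CHARGE / M_PROTON)
        return distance_m / speed * 1e6

    for energy in (750.0, 100.0, 10.0):   # eV; 750 eV is the kinematic maximum
        print(f"{energy:6.0f} eV -> {drift_time_us(energy):.2f} us")
    # ~0.26 us at the endpoint and ~2.3 us at 10 eV: low-energy recoils
    # dominate the 0.5-2 us delay window described above.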
A. Data Collection
The experiment was installed at the NCNR during December 1996 and January 1997. From February through August 1997, 50 GB of data were collected and stored. The data are divided into 626 files representing continuous runs, typically four hours in duration. These are grouped into 125 series, within which running conditions varied little. For one week in August a systematic test was run in which the beam was distorted and the polarization guide field direction changed. The purpose and results of this test will be described in Section IV.
Instabilities in the proton detector high voltage made it impossible to operate all channels of the detectors at all times. Sometimes the electrodes simply would not hold the necessary voltage, and at other times a large spark or series of sparks would damage the electronics held at high voltage. Less than half of the data were collected with all four proton detectors functioning. Another limitation to the detector uniformity was variation in the measured proton energy deposited in the PIN diodes. In preliminary tests, the surface deadlayers of the PINs were measured to be 20 ± 2 µg/cm², as specified by the manufacturer, Hamamatsu. In a deadlayer of this thickness, a 35 keV proton loses 10 keV of energy. The proton energies measured during the experiment, however, were 12-18 keV, an average of 20 keV below the energy imparted to them by acceleration through 34-38 kV (see Figure 10). [Figure 10 caption: Energy spectrum in PIN diode III14, near which is mounted a weak 119Sn source producing a 24 keV x-ray. The protons, accelerated to 36 keV but measured at less than 20 keV, are visible between the background and the x-ray peak. The peak on the far right, from a low-rate pulser input directly into the preamplifier, is used to monitor gain and resolution.] With widths (FWHM) of approximately 10 keV, these peaks are not well separated from the background. High background rates necessitated setting thresholds at levels such that some neutron decay events were also rejected. This and the data acquisition deadtime were the primary limitations on the statistics of the experiment. A deadtime per event of 2 ms was necessary for stability of the system. Even with the reduction in length of the coincidence window, the high background rate kept the system at 40-60% deadtime for most of the data collection period.

B. Event Selection

Figure 11 shows an example of the relative time spectrum for the coincidence data. The large center spike, originating mainly from multiple gamma-rays produced by neutron captures in the apparatus, defines zero time difference. The neutron decay events are accepted within a window 0.35 µs to 0.9 µs after the prompt peak. This window contains the majority of the neutron decay protons, while excluding the tail of the prompt peak and the low-signal-to-background tail of the proton peak. The background to be subtracted from these events is estimated using the rates in regions to either side of the decay and zero-time peaks. Events are also selected on the basis of measured proton energy to reduce the amount of background to be subtracted. The energy range accepted is chosen solely by minimizing the fractional statistical uncertainty in the number of neutron decay events for each PIN diode-electron detector pair. Specifically, if N_Δ is the number of coincidences counted by subtracting the background from the coincidences in the 0.35 to 0.9 µs window, the energy range is chosen to minimize $\sqrt{N_\Delta (1 + 2/f)}/N_\Delta$, where f is the signal-to-background ratio in this energy range. This selection increases the overall signal-to-background ratio of the 15 million good events from 0.8 to 2.5.
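A minimal sketch of this window optimization is given below. The spectra are invented stand-ins for the per-pair proton-energy histograms; the minimized quantity is the fractional uncertainty written above.

    import numpy as np

    def best_energy_window(signal_hist, background_hist):
        """Scan all contiguous [lo, hi) bin windows and return the one that
        minimizes the fractional statistical uncertainty
        sqrt(N_sig + 2*N_bkg) / N_sig  (= sqrt(N_sig*(1 + 2/f)) / N_sig)."""
        nbins = len(signal_hist)
        best_window, best_err = None, np.inf
        for lo in range(nbins):
            for hi in range(lo + 1, nbins + 1):
                n_sig = signal_hist[lo:hi].sum()
                n_bkg = background_hist[lo:hi].sum()
                if n_sig <= 0:
                    continue
                frac_err = np.sqrt(n_sig + 2.0 * n_bkg) / n_sig
                if frac_err < best_err:
                    best_window, best_err = (lo, hi), frac_err
        return best_window, best_err

    # Hypothetical 1 keV-binned spectra: a proton peak near 15 keV on a flat background.
    centers = np.arange(30)
    signal = 400.0 * np.exp(-0.5 * ((centers - 15) / 4.0) ** 2)
    background = np.full(30, 150.0)
    window, err = best_energy_window(signal, background)
    print(f"best window: bins {window}, fractional uncertainty {err:.3f}")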
A. Determination of D from Coincidence Events
For each PIN diode-electron detector pair in a given data series, the count rate can be expressed as

$N^{\pm}_{\alpha i} = N_0\, \epsilon_\alpha\, \epsilon_i \left[ K^{\alpha i}_1 + a\, K^{\alpha i}_a \pm P\, \hat\sigma \cdot \left( A\, \mathbf{K}^{\alpha i}_A + B\, \mathbf{K}^{\alpha i}_B + D\, \mathbf{K}^{\alpha i}_D \right) \right], \qquad (4.1)$

where N₀ is a constant proportional to the beam flux, and ε_α and ε_i are the detector efficiencies for a PIN diode and an electron detector, respectively. The average of the neutron polarization vector over the detector volume, given by P σ̂, is assumed to be uniform and constant over time, lying along the direction of the 0.5 mT guide field. The ± signs correspond to the two signs of the polarization. The factors K^{αi}_1 and K^{αi}_a are geometric factors derived from Equation 1.1 by integrating 1 and p_e·p_ν/(E_e E_ν), respectively, over the beta-decay phase space, the neutron beam volume, and the acceptance of each electron-detector-PIN-diode pair. Similarly, the factors K^{αi}_A, K^{αi}_B, and K^{αi}_D are obtained by integrating the vectors p_e/E_e, p_ν/E_ν, and (p_e × p_ν)/(E_e E_ν).
We produce the following efficiency-independent asymmetries for each PIN-diode-electron-detector pairing:

$v_{\alpha i} = \frac{N^{+}_{\alpha i} - N^{-}_{\alpha i}}{N^{+}_{\alpha i} + N^{-}_{\alpha i}}.$

From Equation 4.1 we get

$v_{\alpha i} = P\, \hat\sigma \cdot \left( A\, \tilde{\mathbf{K}}^{\alpha i}_A + B\, \tilde{\mathbf{K}}^{\alpha i}_B + D\, \tilde{\mathbf{K}}^{\alpha i}_D \right),$

where we use the definitions

$\tilde{\mathbf{K}}^{\alpha i}_X = \frac{\mathbf{K}^{\alpha i}_X}{K^{\alpha i}_1 + a\, K^{\alpha i}_a}, \qquad X = A, B, D.$

[FIG. 12 caption: The data from two PINs at the same z-position in a proton segment can be used to cancel the effects due to the electron and neutrino asymmetries. The coincidences shown by solid lines (E₁PIN_a and E₂PIN_b) have approximately the same angle, a little less than 135°; these are referred to as "small-angle" coincidences. The "large-angle" coincidences for this pair of PINs (E₁PIN_b and E₂PIN_a) are shown by the dashed lines.]

Consider the two detector pairings PIN_a-E₁ and PIN_b-E₂ indicated in Figure 12. The corresponding values of K̃^{αi}_D have opposite sign, while K̃^{αi}_A and K̃^{αi}_B have the same sign. We therefore combine the asymmetries from two proton-electron detector pairings to produce the combination

$v^{b2:a1} \equiv \frac{1}{2}\left(v_{b2} - v_{a1}\right) = \frac{P}{2}\, \hat\sigma \cdot \left[ A\left(\tilde{\mathbf{K}}^{b2}_A - \tilde{\mathbf{K}}^{a1}_A\right) + B\left(\tilde{\mathbf{K}}^{b2}_B - \tilde{\mathbf{K}}^{a1}_B\right) + D\left(\tilde{\mathbf{K}}^{b2}_D - \tilde{\mathbf{K}}^{a1}_D\right) \right]. \qquad (4.6)$

For uniform detection efficiency, the difference (K̃^{b2}_D − K̃^{a1}_D) lies along the detector axis ẑ, while the differences (K̃^{b2}_A − K̃^{a1}_A) and (K̃^{b2}_B − K̃^{a1}_B) lie perpendicular to the detector axis. For a polarized neutron beam with perfect cylindrical symmetry aligned with the detector axis,

$v^{b2:a1} = \frac{P D}{2}\left(\tilde{\mathbf{K}}^{b2}_D - \tilde{\mathbf{K}}^{a1}_D\right) \cdot \hat\sigma. \qquad (4.7)$

Departures from perfect symmetry and perfect alignment of the neutron polarization require that the A and B correlation terms be retained in Equation 4.6; the resulting systematic effects are discussed in Section IV C. Additionally, as shown in Figure 12, there are two classes of electron-PIN pairs: those that make an angle smaller than 135° (b2:a1) and those that make an angle larger than 135° (a2:b1). We thus separate our data into a small-angle group and a large-angle group, giving two statistically independent results for each PIN-diode-electron-detector pairing.
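The cancellation built into the combination can be illustrated numerically. In the toy below, the normalized geometric factors are invented values chosen only to respect the symmetry stated above (the D factor reverses sign between the two pairings while the A and B factors do not); the combination then isolates the D term.

    import numpy as np

    # Toy check that the combination of Eq. (4.6) isolates D. The K-tilde
    # vectors below are invented; only their symmetry matters: Kt_D flips
    # sign between the two pairings, while Kt_A and Kt_B do not.
    A, B, D, P = -0.1162, 0.983, 5e-4, 0.95
    sigma_hat = np.array([0.0, 0.0, 1.0])    # polarization along z (detector axis)
    Kt_A = np.array([0.30, 0.0, 0.02])       # predominantly transverse
    Kt_B = np.array([-0.25, 0.0, 0.01])
    Kt_D = np.array([0.0, 0.0, 0.42])        # predominantly along z

    def v(sign_D):
        """Efficiency-independent asymmetry for one electron-PIN pairing."""
        return P * sigma_hat @ (A * Kt_A + B * Kt_B + sign_D * D * Kt_D)

    combo = 0.5 * (v(+1) - v(-1))            # A and B contributions cancel
    print(f"combination      = {combo:.3e}")
    print(f"P*D*(Kt_D.sigma) = {P * D * (Kt_D @ sigma_hat):.3e}")  # identical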
B. Monte Carlo Methods
We use two Monte Carlo calculations to determine the values of K_1, K_a, K_A, K_B, and K_D. The results from these two completely independent calculations are in excellent agreement. In both calculations, neutron decay events are generated randomly within a trapezoid-cylindrical geometry (i.e., a tube with divergence) that can be offset with respect to the detector axis. A realistic beam profile, representative of Figure 4, can be modeled by combining results from several different trapezoids. In one of the Monte Carlo calculations the tracking of protons and electrons is done with the CERN Library GEANT3 Monte Carlo package [23], while in the other the tracking is implemented within the code itself. In both, the emiT detector geometry is specified with uniform efficiency over the active area of each scintillator and over the square focusing region of each PIN diode.
The constants defined in Equation 4.1 are given by integrals of the decay distribution over the acceptance, of the form

$K^{\alpha i}_x = \int S(E_e)\, X\; dE_e\, d\Omega_e\, d\Omega_\nu,$

taken over the beta-decay phase space, the neutron beam volume, and the acceptance of each detector pair, where X = 1, p_e·p_ν/(E_e E_ν), p_e/E_e, p_ν/E_ν, and (p_e × p_ν)/(E_e E_ν) for x = 1, a, A, B, and D, respectively. We have studied systematic uncertainties associated with potential non-uniformities in the beta efficiencies and included them in the final uncertainty for the constants K^{αi}_x. These constants (a total of 11, taking into account the three directions for each vector) are accumulated in a file that is read to calculate the factors v (Equation 4.6) for different orientations of the polarization.
Values of |K^{αi}_D · ẑ| are used directly in the interpretation of the result for D. Variations among the PIN diode pairs in the individual values of K^{αi}_D within a given proton segment are negligible, and average values (|K_D · ẑ|) can be used. They are found to be 0.424 ± 0.010 and 0.335 ± 0.020 for the small- and large-angle coincidences, respectively. The uncertainties are primarily from uncertainties in the geometry of the beam. Values for the other K^{αi}_x are used in the estimation of the systematic uncertainties described in the following section.
C. Discussion of Systematic Uncertainties
The largest of the systematic effects can be shown to be the contributions to the v (Equation 4.9) that arise due to misalignment of the neutron polarization with respect to the detector axis. A transverse component of the polarization produces a significant contribution to v^{b2:a1} because the vector differences K̃^{b2}_A − K̃^{a1}_A and K̃^{b2}_B − K̃^{a1}_B are predominantly perpendicular to the detector axis. (For example, K̃^{b2}_A − K̃^{a1}_A is proportional to the integral of p_e(E_1) − p_e(E_2) and is directed horizontally to the left in Figure 12. The difference K̃^{b2}_B − K̃^{a1}_B is antiparallel to K̃^{b2}_A − K̃^{a1}_A.) For an azimuthally symmetric neutron beam, it can be shown that for each proton detector segment (labeled with subscripts η = I, II, III, IV) the weighted average of the v^{αi:βj} for all large- or small-angle detector pairs can be expressed as

$v^{l/s}_\eta = P D\,(K^{l/s}_D \cdot \hat\sigma) + \alpha^{l/s}_\eta \sin\theta_\sigma \sin(\phi_\eta - \phi_\sigma), \qquad (4.9)$

where θ_σ and φ_σ are the polar and azimuthal angles of σ̂, and φ_η = 0°, 90°, 180°, and 270°, respectively, for detectors I, II, III, and IV. This dependence can be derived analytically for zero beam radius and is confirmed by Monte Carlo simulations for symmetric beams of finite radius. The coefficients α_η measure the combined effects of the A and B correlations for each proton detector segment.
If the symmetry of the four sets of proton detectors were perfect, i.e. α I = α II = α III = α IV , the contributions due to the A and B coefficients would average to zero, and Equation 4.7 would be valid, even with a polarization misalignment. In the absence of perfect symmetry, these contributions do not cancel when the four proton detectors are combined, and a false D contribution would result from the application of Equation 4.7. This false D is proportional to the product of two effects that are both small: the misalignment of the neutron polarization with respect to the detector axis (θ σ ) and the departure from perfect symmetry of the proton detectors (∆α = 1/2(α I − α III ) + 1/2(α II − α IV )). Such an effect is called the "tilting asymmetric transverse polarization" effect, or "Tilt ATP" [9,24].
The ATP effect was intentionally amplified for a systematic test, run with transverse polarization (θ_σ = 90°, φ_σ = φ_IV = 270°) and a distorted neutron beam. The neutron beam was distorted by blocking half of the beam with a neutron absorber placed upstream, near the spin flipper. The results of this test are shown in Figure 13. This demonstration that the experiment can measure an asymmetry consistent with the Monte Carlo calculation serves as a strong check on both the operation of the detector and the validity of the analysis method.
A false D also arises if the polarization has transverse components not described by a simple tilt. The form of Equation 4.9 shows that a net azimuthal component of σ̂ also results in a contribution to v_η that does not average to zero when data from proton segments I-IV are combined. This effect, referred to as a "twisting asymmetric transverse polarization" ("Twist ATP"), is shown by Monte Carlo simulations to be less than 10⁻⁴ for azimuthal polarizations of less than 1 mrad. For this reason, all sources of guide field distortion were kept to less than 1 mrad, and materials of low magnetic permeability (less than 0.005 µ₀) were used in the detection region. There are exceptions to this requirement; however, the net effect of all additional permeability was measured to produce less than 1 mrad of distortion of the guide field anywhere in the detector region.
Variations in the neutron flux (Φ) and polarization (P) that depend on neutron helicity yield a false D. For this experiment the effects due to misalignment of the neutron spin are small [25], so that, to first order in ΔΦ/Φ and ΔP, these systematic effects are proportional to P σ̂·(A K̃_A + B K̃_B). Our data provide an upper limit of 0.002 for P σ̂·(A K̃_A + B K̃_B). We combine this with neutron flux monitor data giving ΔΦ/Φ < 0.004, concluding that D_false(ΔΦ) < 8 × 10⁻⁶ D. The flipping ratio measurement has been used to derive a lower limit on the spin flipper efficiency of 82%, so that ΔP < 0.2 and D_false(ΔP) < 4 × 10⁻⁴ D. We conclude that both effects are negligible in this measurement. The proton segment data (v_η) are then combined in an arithmetic average so that the sinusoidal variation given in Equation 4.9 cancels to first order in misalignments. (This sinusoidal dependence is seen in Equation 4.9 and also in the test data of Figure 13, where the amplitude is 100 times larger.)
D. Results
The two independent measurements for small angle and large angle PIN-electron detector pairs can be combined in a weighted average.
The full uncertainty includes the uncertainty from the average neutron beam polarization.
The data are also analyzed by breaking each series up into individual runs and combining PIN-electron detector pairings in the same way. The results of these analyses are consistent. The final result is (−0.6 ± 1.2) × 10⁻³, where we have assumed the neutron polarization to be P = (96 ± 2)%. This is derived from our measurement of the flipping ratio described in Section II A, with the assumption that the allowed range (93% ≤ P ≤ 100%) spans 2σ_P. Finally, we use the scaled results from the systematic test data (Figure 13), combined with Monte Carlo simulation studies, to estimate the uncertainty from the Tilt-ATP systematic effect. For the test data, proton detector IV (φ_IV = 270°) was not operational. In calculating D for the test data, only values from detectors I and III can therefore be used in Equation 4.12, with the result ½(D_I + D_III) = (−6.5 ± 1.4) × 10⁻². Monte Carlo simulations show that for a beam of radius 3 cm, the sin(φ_η − φ_σ) behavior of Equation 4.9 is modified so that D_test = ½(D_I + D_III)/1.6 = (−4.1 ± 0.9) × 10⁻². This can be scaled by sin θ_σ, the ratio of polarization misalignments for the data and test runs. The individual values of D^{l/s}_η shown in Figure 14 are used to determine θ_σ = (9 ± 3) × 10⁻³ radians for the data run. This provides an upper limit for the uncertainty from the Tilt-ATP systematic effect of D(Tilt ATP) < D_test sin θ_σ ≤ 5.2 × 10⁻⁴. Though we use the test results to estimate this false D effect, we expect the cancellation due to beam symmetry to be more complete for the data run, because the test beam was intentionally distorted. We therefore consider this upper limit to be a conservative estimate of the largest possible false D effect [26]. The contributions to the statistical and systematic uncertainties are given in Table II.
V. SUMMARY AND CONCLUSIONS
The apparatus used to perform a measurement of the D coefficient in the beta-decay of polarized neutrons has been described. The data from the emiT detector have been analyzed using a technique that is insensitive to the nonuniform detection efficiency of the proton detectors. The initial run produced a statistically limited result of D = [−0.6 ± 1.2(stat) ± 0.5(syst)] × 10⁻³. This result can be combined with earlier measurements to produce a new world average for the neutron D coefficient of (−5.5 ± 9.5) × 10⁻⁴, which constrains the phase of g_A/g_V to 180.073° ± 0.12°. This represents a 33% improvement (95% C.L.) over the limits set by the previous world average, and correspondingly further constrains Standard Model extensions with leptoquarks [18]. The result is also interesting in light of upper limits provided by the neutron and 199Hg electric dipole moments on T-odd, P-even interactions such as left-right symmetric models and exotic fermion models.
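The quoted world average follows from a standard inverse-variance weighted mean of the four measurements; the short check below (our illustration) reproduces it.

    import numpy as np

    # Inverse-variance weighted average of the neutron D measurements quoted
    # in the text, in units of 1e-3: refs [7], [8], [9], and this work
    # (stat and syst combined in quadrature: sqrt(1.2^2 + 0.5^2) = 1.3).
    values = np.array([-1.1, 2.2, -2.7, -0.6])
    sigmas = np.array([1.7, 3.0, 5.0, 1.3])

    weights = 1.0 / sigmas ** 2
    mean = np.sum(weights * values) / np.sum(weights)
    err = 1.0 / np.sqrt(np.sum(weights))
    print(f"D_avg = ({mean:.2f} +/- {err:.2f}) x 10^-3")
    # -> approximately (-0.55 +/- 0.96) x 10^-3, matching the quoted world average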
A second run is being planned with strategies to address the statistical limitations related to the background experienced in the first run. Our study of systematic effects presented here shows that the largest is the Tilt-ATP effect. The uncertainty from this effect can be reduced significantly with more data taken in the transverse polarization mode described in Section IV C. With the planned improvements in place, it will be feasible to improve the sensitivity to D to 3 × 10⁻⁴ or less.
"year": 2000,
"sha1": "8918551541a79cba8d08a321590843168405e62a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/nucl-ex/0006001",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "df8a10970da48f12f331a8da47c0a6f7cb8a2283",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
2579531 | pes2o/s2orc | v3-fos-license | Genome-wide identification of conserved and novel microRNAs in one bud and two tender leaves of tea plant (Camellia sinensis) by small RNA sequencing, microarray-based hybridization and genome survey scaffold sequences
Background MicroRNAs (miRNAs) are important for plant growth and responses to environmental stresses via post-transcriptional regulation of gene expression. Tea, which is primarily produced from one bud and two tender leaves of the tea plant (Camellia sinensis), is one of the most popular non-alcoholic beverages worldwide owing to its abundance of secondary metabolites. A large number of miRNAs have been identified in various plants, including non-model species. However, due to the lack of reference genome sequences and/or information of tea plant genome survey scaffold sequences, discovery of miRNAs has been limited in C. sinensis. Results Using small RNA sequencing, combined with our recently obtained genome survey data, we have identified and analyzed 175 conserved and 83 novel miRNAs mainly in one bud and two tender leaves of the tea plant. Among these, 93 conserved and 18 novel miRNAs were validated using miRNA microarray hybridization. In addition, the expression pattern of 11 conserved and 8 novel miRNAs were validated by stem-loop-qRT-PCR. A total of 716 potential target genes of identified miRNAs were predicted. Further, Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis revealed that most of the target genes were primarily involved in stress response and enzymes related to phenylpropanoid biosynthesis. The predicted targets of 4 conserved miRNAs were further validated by 5’RLM-RACE. A negative correlation between expression profiles of 3 out of 4 conserved miRNAs (csn-miR160a-5p, csn-miR164a, csn-miR828 and csn-miR858a) and their targets (ARF17, NAC100, WER and MYB12 transcription factor) were observed. Conclusion In summary, the present study is one of few such studies on miRNA detection and identification in the tea plant. The predicted target genes of majority of miRNAs encoded enzymes, transcription factors, and functional proteins. The miRNA–target transcription factor gene interactions may provide important clues about the regulatory mechanism of these miRNAs in the tea plant. The data reported in this study will make a huge contribution to knowledge on the potential miRNA regulators of the secondary metabolism pathway and other important biological processes in C. sinensis. Electronic supplementary material The online version of this article (10.1186/s12870-017-1169-1) contains supplementary material, which is available to authorized users.
Background
In plants, miRNAs negatively regulate gene expression at the post-transcriptional level by translational repression or by degradation and silencing of target gene transcripts [1,2]. Since the first miRNA (lin-4) was discovered in Caenorhabditis elegans [3], a large number of miRNAs have been identified across various species. To date, a total of 28,645 hairpin precursor miRNAs and 35,828 mature miRNAs have been deposited in public databases. Of these, 6992 precursor miRNAs and 8496 mature miRNAs are from various plant species (miRBase, Release 21), but none are from plants of the Theaceae family (www.mirbase.org) [4]. Recent evidence has demonstrated that plant miRNAs are extensively involved in a number of biological functions, including growth, development, and defense responses against stresses [1,5]. In light of this evidence, it is important to identify the miRNAs present in this family of plants.
The tea plant [Camellia sinensis (L.) O. Kuntze], which belongs to the family Theaceae, originated in China. Usually, tea is processed from one bud and the two uppermost leaves on the tender shoots of tea plants. It is one of the most popular non-alcoholic beverages in the world because of its attractive aroma, taste, and health-promoting effects, which are attributable to the abundance of secondary metabolites present in tea plant leaves, including polyphenols, theanine, and volatile compounds [6]. Recently, a few studies have reported miRNAs in C. sinensis. In these studies, several miRNAs in tea plants were detected by a comparative genomics approach [7-9] or a direct cloning approach [10,11]. However, they provided relatively little information for validating these miRNAs and their functions in tea plants.
Recent developments in next-generation high-throughput sequencing (HTS) technology have allowed researchers to identify novel and low-abundance miRNAs in non-model plant species [12]. In particular, genome survey sequences have better potential for predicting pre-miRNA secondary structures than the expressed sequence tags (ESTs) of plant species whose genome sequences are not yet available in public databases [13-15]. Despite these advances in technology, very few studies have applied them to the investigation of miRNAs present in the tea plant. Recently, 106 conserved and 98 candidate novel miRNAs from tea were identified by HTS [16]. Similarly, Zheng et al. [17] reported 295 conserved and 72 potential novel miRNAs in tea plants using HTS. However, our knowledge of the miRNAs present in C. sinensis is still limited due to the lack of whole-genome sequence information.
In this study, a small RNA library created from one bud and two tender leaves of tea shoots was constructed, and HTS was performed to obtain the miRNA profiles of C. sinensis. The potential secondary structures of the identified miRNAs were elucidated using the draft scaffold sequence assemblies of the C. sinensis genome obtained by a genome survey using whole genome shotgun (WGS) sequencing. The expression profiles of the identified miRNAs were validated by miRNA microarrays as well as by stem-loop qRT-PCR. Further, the functions of the predicted potential miRNA targets were annotated using Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The target transcription factor genes of 4 conserved csn-miRNAs were validated through 5'RLM-RACE, and the correlation between the expression patterns of the miRNAs and their target transcription factor genes was determined by qRT-PCR. The results of this study not only enrich the miRNA database of Theaceae, but also provide insights into the mechanism underlying post-transcriptional gene regulation mediated by miRNAs and lay the groundwork for further exploration of their biological roles in C. sinensis and other closely related species.
Small RNA library construction and sequence analysis
To identify the miRNAs in the tea plant, a small RNA library from the sample (one bud and two tender leaves) was constructed and subjected to HTS. A total of 6,211,111 raw reads were generated. After low-quality reads, adapters, and short RNAs less than 15 nucleotides in length were removed, 3,455,797 unique small RNA reads (55.64% of the total raw reads) were obtained. These small RNA sequences were compared with the sequences deposited in the Repbase and Rfam databases, and an additional 35,464 reads representing rRNA, tRNA, snoRNA, snRNA and other non-coding RNAs were excluded. The length distribution of the total and unique reads, ranging from 15 to 30 nucleotides, is shown in Fig. 1. The majority of small RNAs were 21-24 nucleotides in length, with 24-nucleotide small RNAs predominant: they comprised 43.0% and 58.29% of the total and unique sequences, respectively, while 21-nucleotide small RNAs comprised 11.16% and 5.76% of the total and unique sequences, respectively (Fig. 1). Sequences longer than 25 nucleotides or shorter than 17 nucleotides were discarded. The remaining 287,525 reads were retained for miRNA validation and prediction.
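The length-filtering step described above can be sketched as follows; this is a simplified stand-in for the actual pipeline, operating on toy sequences.

    from collections import Counter

    def filter_and_profile(reads, min_len=17, max_len=25):
        """Length-filter cleaned small-RNA reads (adapters assumed trimmed)
        and tally the length distribution of the unique sequences."""
        unique = set(reads)
        length_dist = Counter(len(r) for r in unique)
        kept = [r for r in unique if min_len <= len(r) <= max_len]
        return kept, length_dist

    # Toy input standing in for millions of sequenced reads.
    reads = ["TCGGACCAGGCTTCATTCCCC",   # 21 nt, kept
             "TTCCACAGCTTTCTTGAACTG",   # 21 nt, kept
             "ACGT" * 6,                # 24 nt, kept
             "ACGTACGTACGTACG"]         # 15 nt, discarded
    kept, dist = filter_and_profile(reads)
    print(f"kept {len(kept)} of {len(set(reads))} unique reads")
    print(sorted(dist.items()))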
Genome survey of the tea plant
We obtained 115.7 gigabase pairs (Gbp) of high-quality sequencing data from three DNA libraries (180 bp, 500 bp and 800 bp). Based on a 17-mer analysis following Li et al. [18], the peak depth of the 17-mer frequency distribution in the read set was about 21, and the total k-mer count was 67,780,201,950. The tea genome size was estimated to be 3.22 Gb using the formula: genome size = k-mer count / peak of the k-mer distribution. Thus, the genome survey data represented 27.2× coverage of the C. sinensis genome. A total of 2,603,467 contigs were assembled, with a total sequence length of 953.4 Mb. The N50 length of our assembly was 668 bp, and the longest contig and scaffold were 27,766 and 55,407 bp long, respectively.
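The genome-size arithmetic can be verified directly from the numbers above; the following is an illustrative sketch.

    def genome_size_from_kmers(total_kmer_count, peak_depth):
        """Genome size estimate from a k-mer frequency spectrum:
        size = total k-mer count / depth at the distribution peak."""
        return total_kmer_count / peak_depth

    size_bp = genome_size_from_kmers(67_780_201_950, 21)
    print(f"estimated genome size: {size_bp / 1e9:.2f} Gb")   # ~3.2 Gb, as quoted above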
Identification of conserved miRNAs
To identify the conserved miRNAs in C. sinensis, we compared the small RNA reads to the mature and precursor miRNAs from other plant species deposited in miRBase 21 (http://www.mirbase.org/), based on the presence of homologous "seed" regions with 0-3 mismatched bases in the mature region of known miRNAs [19]. Among the unique small RNA reads analyzed, 124,826 reads mapped to known plant-specific miRNAs, representing a total of 175 miRNAs belonging to 39 conserved miRNA families across a variety of plant species (Additional file 1). We counted the number of members in the 39 conserved miRNA families and found that this number varied widely, from 1 to 16 per family. Among these families, the csn-miR166 family contained the highest number of individual miRNA members, with 16 members distinguished on the basis of nucleotide differences; it was followed by csn-miR5368 (15 members), csn-miR167 (14 members), csn-miR396 and csn-miR2911 (11 members each), csn-miR156 and csn-miR395 (9 members each), csn-miR171 (7 members), and csn-miR169, csn-miR172 and csn-miR390 (6 members each). The remaining miRNA families comprised 1 to 5 members each (Fig. 2).
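A simplified version of this homology matching, using an equal-length comparison with at most 3 mismatches, might look like the following; real pipelines also handle length offsets and precursor mapping, and the dictionary entries here are shown for illustration only.

    def mismatches(a, b):
        """Hamming distance between two equal-length sequences."""
        return sum(x != y for x, y in zip(a, b))

    def match_known_mirnas(read, known, max_mm=3):
        """Names of known mature miRNAs matching the read with at most
        `max_mm` mismatches (equal-length comparison only)."""
        return [name for name, seq in known.items()
                if len(seq) == len(read) and mismatches(read, seq) <= max_mm]

    # Illustrative miRBase-style entries.
    known = {"ath-miR166a": "TCGGACCAGGCTTCATTCCCC",
             "ath-miR396a": "TTCCACAGCTTTCTTGAACTG"}
    read = "TCGGACCAGGCTTCATCCCCC"          # 1 mismatch vs ath-miR166a
    print(match_known_mirnas(read, known))  # -> ['ath-miR166a']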
The read number of the conserved miRNAs varied greatly; thus, these miRNAs differed widely in their expression levels (Additional file 1). Among the identified 39 conserved miRNA families, csn-miR396 showed high expression with a total number of 59,922 reads (48% of the total conserved miRNA reads); it was followed by csn-miR166 (25,026 reads, 20.05%) and csn-miR159 (11,904 reads, 9.54%) (Additional file 1). In addition, the percentage distribution of reads for individual members within each family showed wide variations (Additional file 1).
To investigate the evolutionarily conserved nature of these conserved miRNAs, we compared each miRNA family member of C. sinensis against the miRNA sequences available in miRBase for Populus trichocarpa, Medicago truncatula, Vitis vinifera, Oryza sativa and Arabidopsis thaliana (Fig. 2). The results indicated that the miRNA families identified were present in related plant species; thus, their functions may be evolutionarily conserved in the selected plant species.
Identification of novel miRNAs
The non-conserved small RNA reads were mapped to ESTs and scaffold sequences that were assembled at the base of the genome survey dataset (Additional file 2). The stem-loop structure of miRNA precursors was used to predict novel miRNAs using the mfold program [20], and 83 novel miRNAs were identified in the tea plant (Additional file 3). All miRNA precursors had a standard stem-loop hairpin secondary structure (SS). These miRNA precursors had folding free energies ranging from −4.7 to −138.3 kcal/mol (average, −57.08 kcal/mol). The predicted precursors of these novel miRNAs were 38-258 nucleotides in length. The sequence is most likely to represent an miRNA when the minimal folding free energy index (MFEI) is more than 0.85 [21]. It was found that the MFEI values of these miRNAs ranged from 0.4 to 2.8, with most MFEIs being >0.85.
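For reference, the MFEI can be computed as in the sketch below, using the standard definition (cf. [21]); the example precursor values are hypothetical.

    def mfei(mfe_kcal_mol, length_nt, gc_fraction):
        """Minimal folding free energy index of a candidate pre-miRNA:
        MFEI = AMFE / GC%, with AMFE = -MFE / length * 100.
        Values above ~0.85 favor a genuine miRNA precursor."""
        amfe = -mfe_kcal_mol / length_nt * 100.0
        return amfe / (gc_fraction * 100.0)

    # Hypothetical precursor: MFE = -57.1 kcal/mol, 150 nt, 42% GC.
    print(f"MFEI = {mfei(-57.1, 150, 0.42):.2f}")   # ~0.91, above the 0.85 cutoff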
To further validate the authenticity of the novel miRNAs and gain insight into their potential functions, the expression profiles of four predicted miRNAs (csn-miRn23, csn-miRn27, csn-miRn49 and csn-miRn56) were investigated in leaves at different positions on the tender tea shoots, and the catechin contents of the corresponding leaves were measured. Catechin content was significantly higher in the 1st leaf, followed by the 2nd, 3rd, 4th and 5th leaves, indicating that the catechin level gradually decreased from the 1st to the 5th leaf (Fig. 4). The expression of csn-miRn23 was highest in the 4th leaf, followed by the 3rd and 2nd leaves, with less expression detected in the 5th leaf in comparison to the 1st leaf; this pattern suggests that csn-miRn23 expression is negatively correlated with catechin content. The expression of csn-miRn49 fluctuated dramatically among the leaves at different positions on the tender tea shoots: the lowest expression was observed in the 5th leaf, followed by the 2nd, 4th and 3rd leaves. Csn-miRn27 and csn-miRn56 showed no obvious correlation between expression and catechin content: the highest expression of csn-miRn27 and csn-miRn56 was observed in the 3rd leaf and 5th leaf, while the lowest expression was observed in the 5th leaf and 4th leaf, respectively (Fig. 5). Further investigation is required to understand the relationship between catechin content and these newly identified miRNAs in the leaves of tea plant.
(Displaced figure caption: Relationship between the relative expression levels and HTS read counts of the identified csn-miRNAs. The expression level of U6 snRNA was used as an internal control; relative expression was calculated using the 2^ΔCt method with stem-loop qRT-PCR; data represent the mean ± SD values of three biological replicates.)
Microarray analysis of miRNAs
Microarray-based hybridization was employed to confirm the existence of the conserved and novel miRNAs predicted in this study. The microarray for the mixed RNA pool consisted of 258 probes representing all the miRNAs predicted from HTS. The small-molecular-weight RNAs isolated from one bud and two tender leaves were hybridized to the microarray chip. A total of 111 miRNAs were detected by microarray analysis, of which 93 were conserved miRNAs and 18 were novel miRNAs (Additional file 4). Members of the conserved miRNA families csn-miR5368, csn-miR6173, csn-miR2911, and csn-miR6300 displayed high levels of expression, whereas those belonging to the csn-miR477c, csn-miR482-5p, csn-miR858b, csn-miR156, csn-miR395 and csn-miR403 families showed low levels of expression. With regard to the novel csn-miRNAs, csn-miRn5 and csn-miRn11-3p showed higher expression signals than the other putative novel miRNAs (Additional file 4).
Prediction of miRNA target genes
To help elucidate the biological functions of the identified miRNAs, we searched for complementary mRNA sequences in the corresponding transcriptome sequence data of C. sinensis to predict potential targets of the miRNAs using the TargetFinder program. A total of 716 potential target genes were identified for 187 miRNAs, including 116 conserved and 71 novel miRNAs, based on their perfect or near-perfect complementarity to the target mRNA sequences. For some miRNAs, more than one potential target gene was predicted. Detailed annotations of the results are presented in Additional file 5. Most of the conserved miRNA families were predicted to target transcription factor genes, suggesting that they may play a role in post-transcriptional regulation and transcriptional networks. Other miRNAs were predicted to target genes involved in diverse physiological and metabolic processes, including the regulation of plant metabolism, transport, cell growth and maintenance, and stress responses (Additional file 5).
GO and KEGG analysis
GO analysis of the predicted target transcripts of the miRNAs was performed to understand their potential regulation in the tea plant [22]. Based on their functional annotations, the target genes were classified into three GO categories: molecular function, biological process and cellular component (Fig. 6 and Additional file 6). Molecular function was represented by 9 terms (Fig. 6a), with the most frequent term being enzyme activity (33.12%), followed by nucleic acid binding (13.75%) and other binding (13.02%). Biological process was represented by 13 terms, with the three most frequent terms being response to stress (19.06%), cellular process (18.37%) and biological process (12%) (Fig. 6b). Most of the proteins encoded by the target transcripts were localized in the membrane (26.14%), followed by other cellular components (18.79%) and the chloroplast (14.60%) (Fig. 6c).
(Fig. 5 caption: Relative expression levels of four selected novel csn-miRNAs in leaf tissues from different positions on the tender tea shoot. U6 snRNA was used as an internal control; the expression level of the miRNAs in the first leaves was set as 1.0; relative expression was calculated using the 2^−ΔΔCt method with stem-loop qRT-PCR; data represent the mean ± SD values of three biological replicates; different letters above the bars indicate significant differences at p < 0.05 according to DMRT analysis.)
KEGG pathway enrichment analysis showed that the target genes of the miRNAs were mainly involved in 16 pathways (P ≤ 0.05); phenylpropanoid biosynthesis and nitrogen metabolism were the two most common pathways (Fig. 7 and Additional file 7). In particular, 16% of the target genes were involved in phenylpropanoid biosynthesis pathways (Additional file 7). Based on the predicted target gene functions in the phenylpropanoid biosynthetic pathway, we propose a pathway panel for polyphenol regulation in tea (Fig. 8). Furthermore, four novel miRNAs (csn-miRn23, csn-miRn27, csn-miRn49, and csn-miRn56) were selected and their expression patterns (Fig. 5) were correlated with the pathway panel (Fig. 8). These miRNAs may play an important role in regulating the biosynthesis of phenolic compounds in tea plants.
Experimental verification of miRNA-guided cleavage of target mRNAs in tea plant
To examine the predicted targets of four conserved miRNAs, we used 5'RLM-RACE to determine the miRNA cleavage sites on the target genes. All 5'RLM-RACE PCR products were analyzed on agarose gels, purified, cloned and sequenced. The sequencing results revealed that the cleavage sites of ARF17 (CL4731.Contig1), the WER transcription factor (CL10500.Contig1) and MYB12 (Unigene41782) lie between the 11th and 12th bases from the 5′ end of the pairing with csn-miR160a-5p, csn-miR828 and csn-miR858a, respectively. NAC100 (Unigene18223) was verified as a target of csn-miR164a; it is regulated by cleavage in the binding region between the 10th and 11th bases from the 5′ end of the pairing with csn-miR164a (Fig. 9).
Expression analysis of miRNAs and their target genes in different tissues of tea plant
The target transcription factor genes of the four conserved miRNAs validated through 5'RLM-RACE were selected for expression analysis by qRT-PCR. To understand the physiological importance and the regulatory mechanisms of the selected miRNAs in tea plant, the correlation between the expression patterns of the miRNAs and their target genes was determined in different tissues. The expression of csn-miR160a-5p was higher in the 3rd leaf, followed by the 2nd leaf, stem, 1st leaf, root and flower, in comparison to the bud; conversely, the opposite trend was observed for the corresponding auxin response factor (ARF17). The expression of csn-miR164a and NAC100 also showed a negative correlation in the 3rd leaf. A negative correlation between csn-miR858a and MYB12 was observed in the stem, followed by the flower and root, whereas in the case of csn-miR828 and the WER transcription factor, the expression of the target gene was partially positively correlated with that of the miRNA in different tissues (Fig. 10).
Discussion
In earlier studies, a limited number of miRNAs were identified in tea plants through computational and direct cloning approaches [7][8][9][10]. With the application of high-throughput sequencing (HTS) technology to the identification of novel and low-abundance miRNAs in plants, many more tea plant miRNAs have been found, particularly in investigations of abiotic stress-responsive miRNAs. For example, in a study of cold-responsive miRNAs in the tea cultivars YS and BY, 106 conserved miRNAs and 98 potentially novel miRNAs were identified [16]. In a similar study, 295 conserved and 72 potentially novel miRNAs were found [17]. In a study of drought tolerance in tea plant, 268 conserved miRNAs and 62 novel miRNAs were detected [23]. However, in these studies, almost all the mature miRNAs and their pre-miRNA secondary structures were predicted from EST data sets, as little genomic sequence information for tea plant was available in public databases. Most pre-miRNAs cannot be identified through ESTs because of the low abundance of primary miRNAs (pri-miRNAs) during miRNA processing and the limited sequencing information in the EST database. To comprehensively identify miRNAs from plants, scaffold sequences obtained by genome surveys have been used as an important supplementary resource, in addition to ESTs, for predicting pre-miRNA secondary structures [13,14].
In this study, we focused on identifying miRNAs specifically from one bud and two tender leaves of tea plant and further analyzed their structures using the tea plant genome survey scaffolds that we recently obtained. With the genome survey, we obtained 27.2× coverage of the C. sinensis genome, followed by assembly into scaffolds. These scaffold data served as an important sequence resource for predicting the secondary structures of novel miRNAs by bioinformatics analyses. In total, we identified 175 conserved and 83 novel miRNAs in tea plant, among which the pre-miRNA secondary structures of 140 conserved and 69 novel miRNAs were supported by the corresponding scaffold sequences (Additional files 1 and 3). To our knowledge, no previous miRNA identification study in tea plant has confirmed pre-miRNA secondary structures on the basis of genome sequences. Thus, our approach to tea plant miRNA identification is more accurate and reliable than those of previous studies [16,17].
Length distribution analysis, an effective assessment of the composition of small RNAs, showed that 24-nucleotide small RNA sequences were the most dominant in both the total and unique reads, followed by 21- to 23-nucleotide sequences (Fig. 1). This result is highly consistent with previous reports on other tea plant cultivars [16,17]. Thus, small RNAs that are 24 nucleotides long might play a vital role in C. sinensis. Similar patterns in the length distribution of small RNAs were also found in other plant species, such as Citrus sinensis [24], Lycium chinensis [25], and Punica granatum [26], and some monocotyledons [27][28][29][30]. However, some studies have shown that 21-nucleotide small RNAs are the most abundant in species such as Oryza sativa [31], Populus euphratica [32] and Citrus reticulata [33]. These variations in the length distribution of small RNAs indicate that small RNA transcriptomes may be complex and may differ significantly between plant species, depending on their regulatory mechanisms.
(Fig. 10 caption: U6 snRNA and GAPDH were used as internal controls for miRNAs and targets, respectively. The expression levels of the miRNAs and targets in the bud tissue were set as 1.0; relative expression was calculated using the 2^−ΔΔCt method; data represent the mean ± SD values of three biological replicates; different letters above the bars indicate significant differences at p < 0.05 according to DMRT analysis.)
To explore the evolutionary roles of conserved and nonconserved miRNAs, both types of miRNAs from tea plant were compared against the known miRNAs in five different plant species (Populus trichocarpa, Medicago truncatula, Vitis vinifera, Oryza sativa and Arabidopsis thaliana). Seven miRNA families (miR1310, miR1427, miR2911, miR5301, miR6149, miR6173 and miR6300) were detected only in C. sinensis (Fig. 2), in addition to many miRNAs that are conserved to different extents in plants. We speculated that these seven non-conserved miRNA families might have specific biological functions in tea plant development, which requires further investigations.
Genome information of non-model plant species is important for identifying miRNAs and their pre-miRNA stem-loop structures in species such as tea [16,17]. Before whole-genome sequence information becomes available, genome survey datasets are valuable for miRNA identification. We successfully established tea plant genome survey datasets by WGS, and used them to identify and analyze the stem-loop secondary structures of pre-miRNAs with the mfold software in this study. Our results showed that the lengths of these pre-miRNA structures varied from 38 to 258 nucleotides in tea plant (Additional files 1 and 3); these structures may be involved in the regulation of miRNA biogenesis through the interaction of their unique conformations with miRNA pathway enzymes [21].
Mature miRNAs are produced from genome-encoded stem-loop precursors in plants. The majority of 20- to 25-nucleotide miRNAs are processed from a pre-miRNA (a hairpin precursor of roughly 70 nucleotides) that contains the mature miRNA in either of its arms [34]. Therefore, these miRNAs (20-25 nucleotides in length) were searched against miRNA precursors in the reference dataset, which in this study included ESTs and the genome survey sequences. Our results showed that all the identified miRNAs fell within their corresponding standard stem-loop hairpin secondary structures (Additional files 1 and 3). Minimum folding free energy (MFE) is a significant characteristic that determines the secondary structure and stability of nucleic acids (DNA and RNA). The predicted miRNA precursors had an average MFE of −57.08 kcal/mol, which is substantially lower (more negative) than that of tRNA (−27.5 kcal/mol) and rRNA (−33 kcal/mol) [35]. Thus, the characteristic, thermodynamically stable secondary structures of the pre-miRNAs predicted in this study are consistent with previous reports on tea [16,17].
Microarray detection is widely used to confirm the presence of miRNAs in plants [36,37]. Here, we designed probes for the conserved and novel tea plant miRNAs, developed a microarray platform to profile mature tea miRNAs, and detected a total of 93 conserved and 18 novel miRNAs in tea (Additional file 4). The conserved csn-miR5368 and csn-miR6173 families displayed high expression signals, while low expression signals were observed for the conserved miRNA families csn-miR477, csn-miR482-5p, csn-miR858b, csn-miR156, csn-miR395 and csn-miR403. With regard to the novel miRNAs, csn-miRn5 and csn-miRn11-3p showed higher expression signals than the other miRNAs (Additional file 4). A previous study reported that miR477 and miR482 were differentially expressed during drought stress in the root and leaf of bread wheat; moreover, miR156 was reported to be downregulated in response to drought stress in rice [38]. In addition, miR858 was reported to be differentially expressed in different tissues of the tomato and to negatively regulate anthocyanin biosynthesis under normal growth conditions [39]. In Spartina alterniflora, miR395 was reported to be downregulated by salt-induced stress and to play important regulatory roles in plant growth and development [40]. Ebrahimi et al. [41] reported that miR403 was differentially expressed in response to abiotic stresses such as drought, heat, salt and cadmium in the sunflower. These observations indicate that miR156, miR395, miR403, miR477 and miR482 are likely also implicated in abiotic stress responses in the tea plant. However, further studies on the more specific functions of these miRNAs are required.
It is well known that transcription factors play significant roles in plant development, stress responses and secondary metabolism. For example, SPLs, a family of plant-specific transcription factors targeted by miR156, are involved in a number of stress response processes, including responses to heat, salt and drought stress [42,43]. In this study, some miRNAs, such as those of the miR172 family (csn-miR172a-3p, csn-miR172c and csn-miR172d) and csn-miRn61, were predicted to target ethylene-responsive transcription factors, which may control the biosynthesis of ethylene and regulate the activation of the ethylene pathway [44]. Ethylene is the simplest but one of the most important phytohormones; it participates in major developmental processes, including seed germination, cell elongation, flowering, fruit ripening, organ senescence, abscission, and responses to stress [45]. In addition, csn-miR395a-3p was predicted to target WRKY transcription factors, which exert a key function in abiotic stress responses in plants [46,47]. Other miRNAs, such as members of the csn-miR828 and csn-miR858 families, were predicted to target the transcription factors WER and MYB. WER encodes a putative transcription factor of the MYB family, the largest family of transcription factors; MYB proteins promote the differentiation of non-hair cells and participate in various regulatory networks controlling development, metabolism, and responses to biotic and abiotic stresses [48]. In this study, we confirmed through 5'RLM-RACE that csn-miR828 and csn-miR858a target the WER transcription factor and MYB12, respectively, and their expression showed a negative correlation with that of their targets in different tissues of tea plant.
The plant-specific NAC family is mainly associated with the regulation of various processes, including flower development, secondary wall formation, cell division and shoot apical meristem formation [49,50]. Recently, it was shown that the miR164 family targets six NAC family members in several plant species [51,52]. In our study, we validated a NAC transcription factor gene as a target of csn-miR164a through 5'RLM-RACE. We also observed a negative correlation between the expression patterns of csn-miR164a and NAC100 in the 3rd, 2nd and 1st leaves. Csn-miR160a-5p was predicted to target the auxin response factor ARF17, which has a crucial function in responses to various abiotic stresses in different plant species by fine-tuning plant growth and development [53]. Additionally, our study confirmed through 5'RLM-RACE that csn-miR160a-5p targets ARF17, and an inverse correlation between csn-miR160a-5p and ARF17 was demonstrated by qRT-PCR. Based on these results, the predicted miRNAs are likely to target various mRNAs encoding transcription factors. These miRNA-TF interactions not only serve as a basis for elucidating the regulation and function of miRNAs but may also play a key role in the polyphenol regulatory network by controlling various aspects of growth and development in tea plant.
Polyphenols are the most abundant secondary metabolites in the leaves of the tea plant; they account for 18% to 36% of the dry weight of fresh leaves in most tea cultivars [54], and even exceed 36% in some cultivars, for example C. sinensis var. assamica cv. Jianghua. In Arabidopsis spp., it has been shown that repression of miR156 activity results in the production of high levels of flavonols through miR156-targeted SPL genes [55]. In addition, miR828 in Arabidopsis spp. was reported to silence the MYB113 gene, which regulates the biosynthesis of anthocyanins [56]. It has recently been suggested that Cs-miR156 might reduce the expression level of its target gene SPL to regulate the dihydroflavonol 4-reductase (DFR) gene, a key gene in catechin biosynthesis [57]. In this study, we found that members of the miR156 family, including csn-miR156a, b, c, d, e, and h, might target the SPL4 gene, and that the MYB308 gene may be a possible target of csn-miR828 (Additional file 6). Thus, miRNAs of the tea plant may make an important contribution to the biosynthesis of phenolic compounds, which are an important constituent of tea. In particular, the present findings indicate the need for further studies on the csn-miR156 family-regulated SPL4 gene and the csn-miR828-regulated MYB308 gene, as they may play a role in the accumulation of polyphenols in the tea plant.
These results indicate that these miRNAs may perform an important function in the biosynthesis of polyphenol compounds in the tea plant. Further studies on the regulation of F3′5′H genes by csn-miRn23, 27, 49 and 56 might enhance our understanding of polyphenol accumulation and regulation in commercially important tea tissues.
Usually, one bud and two leaves of tender tea shoots are used to process commercial teas. In consideration of the important health and economic value of tea leaves, this study focused only on exploring miRNAs from the leaves of tea plant. To comprehensively characterize the miRNAs of tea plant as a species, we will further investigate miRNAs in other tissues (mature and old leaves, stems, flowers, seeds and roots). Although the genome survey supported the prediction of miRNAs in the present study, this type of data contains only part of the whole genome sequence and may limit the identification of additional novel miRNAs. We expect to find more miRNAs once the whole genome sequence of tea plant becomes available.
Conclusions
In the present study, we identified 175 conserved miRNAs and 83 novel miRNAs from a small RNA library obtained from one bud and two leaves of the tender tea shoot. The assembled scaffolds from the genome survey proved valuable for elucidating the potential secondary structures of novel miRNAs in the absence of a whole-genome sequence of C. sinensis as a reference. In total, 716 target genes of the miRNAs were predicted, mainly involved in enzyme activity, response to stress, and cellular processes. The highest-ranking miRNA target genes might play a significant role in the accumulation of polyphenols, which are abundant in the tea plant. Furthermore, we verified the target transcription factor genes (ARF17, NAC100, WER and MYB12) of selected conserved miRNAs by 5'RLM-RACE, and negative correlations between the expression levels of these conserved miRNAs and their targets were validated through qRT-PCR. Therefore, these miRNAs might be involved in various regulatory networks in the tea plant by regulating the expression of ARF17, NAC100, WER and MYB12. Our study lays a foundation for further investigation into the molecular mechanisms of metabolic regulation in tea plants and other closely related species.
Plant material and growth conditions
The tea cultivar Shuchazao (Camellia sinensis L.) was used in the present study; it was certified as a national variety by the National Crop Variety Approval Committee of China in 2002 (accession number: 2002008). It combines excellent quality with a high yield of tea leaves and has become a very popular variety, currently grown in six provinces with a total planting area of approximately 20,000 ha. Three-year-old clonal cuttings were cultured in pots (30 cm diameter, 35 cm height) and grown under natural daylight conditions at the tea plantation of Anhui Agricultural University, Hefei, China. Various experiments were carried out on "one bud and two leaves" samples (the apical bud and the associated leaves up to the second node), on different tissues (bud, stem, flower, root), and on the leaves from the first to fifth positions (relative to the apical bud) on the shoots of tea plants. The samples, tissues and leaves were harvested from these cuttings, immediately frozen in liquid nitrogen, and stored at −80°C until further use.
Extraction and quantification of total RNA
Total RNA was extracted using the Total RNA Purification kit (NorgenBiotek Corporation, Canada) according to the manufacturer's protocol. Total RNA quantity and purity were examined using an Agilent 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA).
Small RNA library construction and sequencing
Small RNA fragments (16–30 nucleotides) were isolated from the 200 μl total RNA pool using a 15% denaturing polyacrylamide gel. After purification, the small RNAs were ligated sequentially to a 5′ RNA adaptor and a 3′ RNA adaptor by T4 RNA ligase, reverse transcribed to cDNA, and amplified by PCR. Finally, the purified and validated small RNA-derived cDNA library was sequenced by Solexa sequencing technology on an Illumina GAIIx system provided by LC Sciences (Houston, Texas, USA). The generated small RNA library sequences have been deposited in the Gene Expression Omnibus (GEO) database.
Genome survey using whole-genome shotgun sequencing and de novo assembly
Genome survey-based miRNA identification was performed using the WGS approach [16,61]. The first leaves were harvested for DNA extraction with a plant genomic DNA extraction kit (Tiangen, Beijing, China) following the manufacturer's instructions. DNA was randomly sheared by nebulization and end-repaired with T4 DNA polymerase, and fragments were size-selected by gel electrophoresis on 1% low-melting-point agarose. Three sequencing libraries with insert sizes of 180 bp, 500 bp and 800 bp were constructed according to the manufacturer's instructions (Illumina Inc., San Diego, CA, USA). Paired-end sequencing of the constructed libraries was performed on the Illumina HiSeq 2000 platform (Illumina, San Diego, CA, USA). The sequences were then de novo assembled into a draft scaffold dataset using the SOAPdenovo program with a k-mer size of 17 [61]. To identify the miRNAs present in tea, the small RNA library sequences were mapped to the draft scaffold assembly of the WGS dataset. Technical support for genome sequencing and initial data analysis was provided by the Beijing Genomics Institute (BGI), Shenzhen, China.
Identification of conserved and novel miRNAs
The procedure for the identification of conserved and novel miRNAs is summarized in Fig. 11. Briefly, to obtain clean reads, the raw reads were filtered using the Illumina pipeline filter (Solexa 0.3), and the reads were processed with an in-house program, ACGT101-v4.2-miR (LC Sciences, Houston, Texas, USA), to remove adapter dimers, junk sequences, low-complexity sequences, common RNA families (rRNA, tRNA, snRNA, and snoRNA) and repeats [62,63]. Subsequently, unique sequences ranging from 17 to 25 nucleotides in size were collected and mapped to mature and pre-miRNAs from other plant species available in the miRBase database (miRBase, Release 21; June 2014) [4]. After this analysis, conserved miRNAs that mapped to the database sequences were identified and categorized into three groups (1, 2a, and 2b), whereas non-mapped sequences were considered putative novel miRNAs and placed in group 3.
The stem-loop secondary structures of the pre-miRNAs were predicted using the mfold software (version 3.6) (http://unafold.rna.albany.edu/?q=mfold/download-mfold), which was used as a support program for ACGT101-v4.2-miR. The criteria used for predicting the pre-miRNA secondary structure included: (i) ≤ 12 nucleotides in one bulge of a stem.

miRNA microarray analysis
miRNA microarray analysis and chip hybridization were performed by LC Sciences (Houston, Texas, USA). Briefly, the small-molecular-weight RNA extracted from one bud and two leaves of tender tea shoots was used for microarray hybridization. RNAs were size-fractionated using a YM-100 Microcon centrifugal filter (Millipore, Bedford, MA, USA). The fractionated small RNAs (<30 nucleotides in length) were extended with a poly(A) tail at the 3′ end using poly(A) polymerase and were ligated to an oligonucleotide tag for subsequent fluorescent dye staining. A total of 258 probes were designed to represent the miRNAs identified by HTS; the probes are completely complementary to the target miRNAs and contain a chemically modified nucleotide base. The probes were spotted in three replicates onto each chip. Hybridization was performed using 100 μl of hybridization buffer containing 6× SSPE (0.90 M NaCl, 60 mM Na2HPO4 and 6 mM EDTA at pH 6.8) and 25% formamide at 34°C in a microcirculation pump (Atactic Technologies, Houston, TX, USA).
Hybridized arrays were analyzed with a laser scanner (GenePix 4000B; Molecular Device, Sunnyvale, CA, USA), and the images were digitized with the Array-Pro image analysis software (Media Cybernetics, Silver Spring, MD, USA).
During the data analysis, the background signal was subtracted and the data were normalized using the LOWESS program. Spot signals that were less than three-fold the background standard deviation (BSD) and had a spot coefficient of variation (CV) greater than 0.5 were removed. To minimize noise and improve accuracy, probes with low abundance (signal value <100) were not included in the variance analysis. Signals below the background average (signal value <30) were considered as non-expressing small RNAs.
Fig. 11 Schematic representation of the miRNA screening procedure used to identify homologues of conserved and novel miRNAs from the tea genome. MIRs and miRs represent pre-miRNAs and mature miRNAs, respectively
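The spot-filtering rules above can be expressed compactly; a minimal sketch under one reading of those thresholds (signals assumed background-subtracted and LOWESS-normalized; probe names and values are hypothetical):

```python
import statistics

def classify_probes(spots: dict[str, list[float]], bsd: float) -> dict[str, str]:
    """Classify probes from background-subtracted replicate spot signals.

    One reading of the rules above: a spot is kept only if its mean signal
    exceeds 3x the background standard deviation (BSD) and its CV is at
    most 0.5; kept probes with signal < 30 are treated as non-expressing,
    and those with signal < 100 are excluded from variance analysis.
    """
    status = {}
    for probe, values in spots.items():
        mean = statistics.mean(values)
        cv = statistics.stdev(values) / mean if mean > 0 else float("inf")
        if mean < 3 * bsd or cv > 0.5:
            status[probe] = "failed QC"
        elif mean < 30:
            status[probe] = "non-expressing"
        elif mean < 100:
            status[probe] = "excluded from variance analysis"
        else:
            status[probe] = "expressed"
    return status

# Hypothetical triplicate signals for three probes.
spots = {"csn-miR5368": [1500.0, 1420.0, 1480.0],
         "csn-miR403": [66.0, 75.0, 60.0],
         "csn-miRn99": [12.0, 9.0, 14.0]}
print(classify_probes(spots, bsd=4.0))
```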
Validation of miRNAs using qRT-PCR
The stem-loop qRT-PCR method was used to validate the predicted miRNAs from the small RNA sequencing analysis and to determine the transcript levels of the miRNAs [64]. The stem-loop RT primers consisted of 44 conserved and 6 variable nucleotides (5′-GTC GTA TCC AGT GCA GGG TCC GAG GTA TTC GCA CTG GAT ACG CAN NNN NN-3′), with the six variable nucleotides at the 3′ end of the stem-loop RT primer being complementary to the 3′ end of the miRNA. Forward primers were designed for each individual miRNA according to Varkonyi-Gasic et al. [65], and 5′-GTG CAG GGT CCG AGG TAT TC-3′ was used as the reverse primer. U6 was used as an internal control. Detailed information regarding the primers used in this study is provided in Additional file 8. cDNAs were synthesized in a 20 μl reaction containing 500 ng of total RNA, 4 μl of 5× PrimeScript buffer, 0.5 μl of M-MLV reverse transcriptase (Takara, Dalian, China), and 1 μl of stem-loop RT primer (1 μM). After pre-denaturation at 65°C for 5 min, the mixture was incubated on ice for 2 min, and the RT reaction was performed for 30 min at 16°C. This was followed by 60 cycles of 30°C for 30 s, 42°C for 30 s and 50°C for 1 s, and a final hold at 85°C for 5 min.
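The primer architecture described above (44 conserved plus 6 variable nucleotides) can be reproduced programmatically; a minimal sketch, assuming the variable tail is the DNA reverse complement of the last six nucleotides of the mature miRNA (the example miRNA sequence is a placeholder):

```python
BACKBONE = "GTCGTATCCAGTGCAGGGTCCGAGGTATTCGCACTGGATACGCA"  # 44-nt conserved part as given above
DNA_COMP = str.maketrans("ACGU", "TGCA")  # RNA base -> complementary DNA base

def stemloop_rt_primer(mirna: str) -> str:
    """Backbone plus the DNA reverse complement of the miRNA's last 6 nt."""
    tail = mirna.upper().replace("T", "U")[-6:]
    return BACKBONE + tail.translate(DNA_COMP)[::-1]

mirna = "UCGGACCAGGCUUCAUUCCCC"  # hypothetical mature miRNA
print(stemloop_rt_primer(mirna))  # backbone + "GGGGAA"
```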
The expression levels of the randomly selected miRNAs used to validate the predictions were calculated using the 2^ΔCt method [66]. Ct values were determined automatically by the built-in software, with ΔCt = Ct(U6) − Ct(miRNA). The relative expression of the selected miRNAs in the leaves at different node positions was quantified using the 2^−ΔΔCt method and expressed as the fold change relative to the expression in the first leaves (set as 1) [67]. The accumulation of selected miRNAs in different tissues of tea plant was likewise calculated as relative expression values in comparison to bud tissue using the 2^−ΔΔCt method [67]. The amplification efficiency of all the miRNA assays tested in this study ranged from 95% to 110%, as calculated automatically from the slope of the standard curve by the CFX Manager software. All qRT-PCR analyses were performed with three biological replicates, each consisting of three technical replicates.
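For clarity, the relative-quantification scheme reduces to a few lines; a minimal sketch of the 2^−ΔΔCt calculation (all Ct values are hypothetical):

```python
def relative_expression(ct_target_sample: float, ct_ref_sample: float,
                        ct_target_calib: float, ct_ref_calib: float) -> float:
    """Fold change by the 2^-ddCt method, where the calibrator (e.g. the
    first leaf or the bud tissue, set to 1.0) is the reference sample."""
    d_ct_sample = ct_target_sample - ct_ref_sample  # dCt in the sample
    d_ct_calib = ct_target_calib - ct_ref_calib     # dCt in the calibrator
    return 2 ** -(d_ct_sample - d_ct_calib)

# Hypothetical Ct values: a miRNA and U6 in the 3rd leaf vs. the 1st leaf.
print(relative_expression(24.1, 18.0, 26.3, 18.2))  # 4.0: four-fold the calibrator
```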
Target gene prediction
To predict potential target genes, all miRNAs obtained by HTS were analyzed with TargetFinder (https://github.com/carringtonlab/TargetFinder) against the transcriptome sequence data of C. sinensis, which have been deposited in the NCBI Sequence Read Archive under accession number SRR1979118. The predicted target genes were evaluated based on complementarity scoring and maximum expectation, according to the method described by Allen et al. [68].
GO and KEGG analysis
To determine the functions of the target genes and the corresponding metabolic networks regulated by the miRNAs, functional annotation of the target genes was performed using GO (http://www.geneontology.org/) mapping for molecular functions, biological processes and cellular components. The metabolic pathways were annotated using maps from the KEGG database (http://www.genome.jp/kegg/) [69]. We applied a hypergeometric distribution statistic to match the target genes with their corresponding biological metabolic pathways; enrichment was considered significant at p-values less than 0.05 according to Fisher's exact test.
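A minimal sketch of the hypergeometric enrichment test described above (all counts besides the 716 predicted targets are hypothetical):

```python
from scipy.stats import hypergeom

def kegg_enrichment_p(n_annotated_total: int, n_in_pathway: int,
                      n_targets: int, n_targets_in_pathway: int) -> float:
    """P-value that at least n_targets_in_pathway of the n_targets predicted
    target genes fall in a pathway containing n_in_pathway genes, out of
    n_annotated_total annotated genes (hypergeometric upper tail)."""
    return hypergeom.sf(n_targets_in_pathway - 1, n_annotated_total,
                        n_in_pathway, n_targets)

# Hypothetical counts: 20,000 annotated genes, 150 of them in phenylpropanoid
# biosynthesis, 716 predicted targets, 30 of which fall in the pathway.
p = kegg_enrichment_p(20000, 150, 716, 30)
print(f"p = {p:.3g}")  # enrichment is significant if p < 0.05
```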
Verification of miRNA target genes by 5'RLM-RACE
The cleavage sites of the predicted miRNA targets were validated through 5'RLM-RACE using the FirstChoice RLM-RACE Kit (Invitrogen, Thermo Fisher Scientific) according to the manufacturer's protocol. Briefly, 10 μg of total RNA was ligated to the 5' RNA adapter using T4 RNA ligase and reverse transcribed to cDNA. The cleaved products of the miRNA target genes were then amplified using target gene-specific reverse primers and RNA adapter-specific forward primers (Additional file 8). The final RLM-RACE products were analyzed on agarose gels, purified using a DNA gel extraction kit (Corning Life Sciences, Suzhou, China) according to the manufacturer's instructions, cloned directly into the pEASY-T1 vector (TransGen Biotech, Beijing, China), transformed into Escherichia coli Trans1-T1 competent cells (TransGen Biotech) and sequenced. The sequencing results were analyzed to map the cleavage sites. The primers used to amplify the cleavage products of tea miRNA target genes through 5'RLM-RACE are listed in Additional file 8.
Real-time PCR analysis for expression of selected miRNA target genes
qRT-PCR was carried out to examine the expression patterns of the miRNA target genes. Total RNA was isolated from various tissues of tea plant (bud, 1st leaf, 2nd leaf, 3rd leaf, stem, flower and root). A total of 500 ng of total RNA from each sample was reverse transcribed to cDNA using PrimeScript™ RT Master Mix (Takara, Dalian, China) according to the manufacturer's instructions. The first-strand cDNA was used as a template for qRT-PCR with target gene-specific primers and SYBR Premix Ex Taq™ II Master Mix (Takara, Dalian, China) in a CFX96 real-time detection system (Bio-Rad, Hercules, USA). The expression levels of the miRNA target genes were determined by calculating the fold change in selected tissues relative to bud tissue using the 2^−ΔΔCt method. GAPDH was used as an internal reference control. All qRT-PCR analyses were performed with three biological replicates, each consisting of three technical replicates. The primers used for qRT-PCR are provided in Additional file 8.
High-performance liquid chromatography (HPLC) analysis of catechin content
To examine the catechins present in leaf tissues, 50 g of fresh tea leaf tissue was extracted with ethanol. The ethanol solution was evaporated, and the residue was dissolved in hot water and extracted three times with ethyl acetate. The organic phase was concentrated, dried and redissolved in 1 mL of methanol. The catechins present in the methanolic solution were analyzed by HPLC. All samples were filtered through a 0.22 μm filter membrane and separated on a Phenomenex Synergi 4u Fusion-RP80 column (250 × 4.6 mm) with detection at 280 nm using an HPLC-UV detector (Waters 2478, Waters Instruments), according to Liu et al. [70].
Integrated Analysis of Widely Targeted Metabolomics and Transcriptomics Reveals the Effects of Transcription Factor NOR-like1 on Alkaloids, Phenolic Acids, and Flavonoids in Tomato at Different Ripening Stages
Tomato is abundant in alkaloids, phenolic acids, and flavonoids; however, the effect of the transcription factor NOR-like1 on these metabolites in tomato is unclear. We used a combination of widely targeted metabolomics and transcriptomics to analyze wild-type tomatoes and CR-NOR-like1 tomatoes. A total of 83 alkaloids, 85 phenolic acids, and 96 flavonoids were detected with significant changes. Combined with a KEGG enrichment analysis, we revealed 16 differentially expressed genes (DEGs) in alkaloid-related arginine and proline metabolism, 60 DEGs in phenolic acid-related phenylpropanoid biosynthesis, and 30 DEGs in the flavonoid biosynthesis pathway. In addition, differentially expressed genes highly correlated with the differential metabolites were identified by correlation analysis. The present research provides a preliminary view of the effects of the NOR-like1 transcription factor on alkaloid, phenolic acid, and flavonoid accumulation in tomatoes at different ripening stages based on widely targeted metabolomics and transcriptomics, laying a foundation for extending fruit longevity and shelf life as well as for cultivating stress-resistant plants.
Tomato (Solanum lycopersicum) is the world's most valuable fruit and vegetable crop, and it is rich in phenolic compounds (phenolic acids and flavonoids) and glycoalkaloids (tomatine) [1][2][3][4]. Alkaloids are defensive secondary metabolites present in plant tissues. Steroidal glycoalkaloids (SGAs) are nitrogenous secondary metabolites primarily identified in Solanaceae species. SGAs protect plants from insects, bacteria, and viruses and serve essential roles in defense against biotic and abiotic stresses [5][6][7]. Alkaloids in wild-type tomatoes exist at high levels during early developmental stages and decrease gradually at maturity [8]. Phenolic acids in tomatoes are dominated by hydroxycinnamic acid and its conjugates; of these, chlorogenic acid and caffeic acid are the most extensively investigated [9,10]. Phenolic acids are effective components of the plant defense system against UV, insects, viruses, and bacteria [11,12], and they have a remarkable effect on color retention, retarding microbial development, and extending shelf life [13]. There are over 500 different flavonoids in tomatoes, mainly categorized into flavones, flavonols, flavanones, flavanols, proanthocyanidins, and isoflavones, depending on their glycosidic structures [14,15]. Naringenin chalcone is among the major flavonoids, as are various glycoconjugates of quercetin and kaempferol [16]. Flavonoids have excellent antioxidant and anti-inflammatory characteristics, and fruit ripening in tomatoes is related to flavonoid accumulation [17].
Tomatoes undergo sharp changes in metabolism during fruit development, and the metabolite content determines their nutritional value [15,18,19], making tomato an outstanding model for the study of maturation and secondary metabolism pathways in fleshy fruits [20]. The ripening of fruit is a sophisticated biogenetic process controlled by elements such as hormones, environmental signals, and transcription factors (TFs) and involves drastic variations in chemical composition, color, texture, and flavor, as well as other sensory characteristics that directly influence the shelf life and quality of fruits [21][22][23][24]. Molecular genetic investigations indicate that the ripening of tomato fruit is controlled by a series of ripening-related TFs and an ethylene-coordinated transcriptional regulatory network [25,26]. A total of 2026 genes have been identified as TFs in tomatoes, of which 516 have been linked to fruit ripening [27]. Various natural maturation-inhibiting mutants in tomatoes have already been used to examine fruit shelf-life extension, for instance ripening inhibitor (rin), non-ripening (nor), colorless non-ripening (cnr), and never ripening (nr) [4,23]. There are 101 NAC TFs in the tomato genome, and the majority of NAC proteins include a well-conserved, fully functional N-terminal DNA-binding domain as well as a variable C-terminal domain [23,24]. NAC1 [28,29], NAC4 [30], NAC9 [31], and NOR-like1 [24] have previously been shown to be involved in tomato-ripening regulation. It has been demonstrated that knock-out of NOR-like1 delays the start of fruit ripening by 14 days, reduces ethylene production, slows softening and chlorophyll loss, and decreases the accumulation of lycopene [24]. However, the effect of NOR-like1 on tomato metabolites is still unclear. In this study, to reveal the effect of NOR-like1 on the metabolites of tomato at green ripening (GR), 3 days after the color break (BR+3), and 9 days after the color break (BR+9), we combined widely targeted metabolomics with transcriptomics to screen the three metabolite classes with the most distinct differences (alkaloids, phenolic acids, and flavonoids) and the associated DEGs, with KEGG-pathway enrichment analyses. In addition, the relevant metabolic pathways were further analyzed, and correlation network analyses were performed for the differential metabolites and differentially expressed genes to provide deeper insights into the effects of NOR-like1 on alkaloids, phenolic acids, and flavonoids during tomato maturation. The present study may contribute to further investigation of the effect of NOR-like1 on metabolites in tomatoes at various stages of maturation and could help to enhance tomato quality as well as extend the preservation period.
Plant Materials and Sample Preparation
The cultivated wild-type tomato Ailsa Craig (AC) and a NOR-like1 transgenic line generated with CRISPR/Cas9 gene-editing techniques were both grown in a greenhouse at China Agricultural University. A total of 18 samples of wild-type and CR-NOR-like1 fruits were collected at the green-ripening (GR) stage, 3 days after the color break (BR+3), and 9 days after the color break (BR+9), with three biological replicates per period; each sample was derived from six fruits, which were immediately frozen in liquid nitrogen after sampling and preserved at −80 °C until use. The biological samples were freeze-dried with a vacuum freeze dryer (Scientz-100F). Using a mixer mill with zirconia beads (MM 400, Retsch, Hamburg, Germany), the freeze-dried samples were pulverized at 30 Hz for 1.5 min. A total of 100 mg of the lyophilized powder was dissolved in 1.2 mL of 70% methanol solution, vortexed for 30 s six times at 30 min intervals, and left overnight in the fridge at 4 °C. Before UPLC-MS/MS analysis, the extracts were centrifuged for 10 min at 12,000 rpm and filtered (SCAA-104, pore size 0.22 µm; ANPEL, Shanghai, China).
Widely Targeted Metabolic Analysis
Metabolite analysis was performed using UPLC (SHIMADZU Nexera X2) and tandem mass spectrometry (MS/MS, Applied Biosystems 4500 QTRAP). Chromatographic separation was performed on a column (Agilent SB-C18, 1.8 µm, 2.1 mm × 100 mm) with mobile phase A being deionized water (containing 0.1% (v/v) formic acid) and mobile phase B being acetonitrile (containing 0.1% (v/v) formic acid). The elution gradient was as follows: phase B was increased from 5% at 0 min to 95% at 9.00 min and held at 95% until 10 min; the phase B percentage was then reduced to 5% over 10.00-11.15 min and the column was equilibrated at 5% until 14 min. The flow rate was 0.35 mL/min, the column temperature 40 °C, and the injection volume 4 µL. The effluent was alternately connected to the ESI-triple quadrupole linear ion trap (QTRAP)-MS.
LIT and triple quadrupole (QQQ) scans were acquired on a triple quadrupole linear ion trap mass spectrometer (QTRAP) with a UPLC/MS/MS system operating in positive-ion mode (ESI+) and negative-ion mode (ESI−). The ESI source parameters were as follows: ion source, turbo spray; source temperature, 550 °C; ion spray voltage (IS), 5.5 kV (ESI+)/−4.5 kV (ESI−); ion source gas I (GSI), 50 psi; gas II (GSII), 60 psi; curtain gas (CUR), 25.0 psi; and collision-activated dissociation (CAD) set to high. Instrument tuning and mass calibration were carried out in QQQ and LIT modes with 10 and 100 µmol/L polypropylene glycol solutions, respectively. QQQ scans were carried out in MRM mode with the collision gas (nitrogen) set to medium. DP and CE were further optimized for individual MRM ion pairs. A specific set of MRM ion pairs was monitored during each period according to the metabolites eluted within that period.
Differential-Metabolite (DEM) Analysis
The data were pre-processed and analyzed by multivariate statistical analysis, including principal component analysis (PCA) and orthogonal partial-least-squares discriminant analysis (OPLS-DA). A powerful tool for identifying global patterns in multivariate experimental data, PCA provides a preliminary insight into the overall metabolic variability among samples and the magnitude of variation within sample groups. OPLS-DA can filter out signals in metabolites that are not correlated with the categorical variables, thereby accurately resolving inter-group differences in metabolites and further improving the resolving power and effectiveness of the model. Metabolite identification was conducted by matching the mass spectra to the reference MetWare database (MWDB), which was constructed based on standard compounds or public databases such as METLIN. Variable importance in projection (VIP) values and p-values from t-tests were used to filter significantly different metabolites between wild-type and CR-NOR-like1 tomatoes at different growth periods; differential compounds were required to satisfy FC > 2 or FC < 0.5, together with p < 0.05 and VIP > 1. KEGG annotation of the significant DEMs and KEGG-related pathway analyses were conducted, identifying the critical pathways most strongly associated with the DEMs.
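The DEM selection rule reduces to a simple filter; a minimal pandas sketch (column names and the example table are hypothetical):

```python
import pandas as pd

def select_dems(df: pd.DataFrame) -> pd.DataFrame:
    """Filter differential metabolites with the criteria above:
    fold change > 2 or < 0.5, t-test p < 0.05, and OPLS-DA VIP > 1.
    Expects columns: 'fold_change', 'p_value', 'vip'."""
    fc_ok = (df["fold_change"] > 2) | (df["fold_change"] < 0.5)
    return df[fc_ok & (df["p_value"] < 0.05) & (df["vip"] > 1)]

# Hypothetical table of three metabolites.
df = pd.DataFrame({
    "metabolite": ["ferulic acid", "agmatine", "kaempferol"],
    "fold_change": [3.2, 0.4, 1.5],
    "p_value": [0.01, 0.03, 0.20],
    "vip": [1.8, 1.2, 0.7],
}).set_index("metabolite")
print(select_dems(df))  # ferulic acid and agmatine pass the filter
```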
Transcriptomic Analysis
Total RNA was isolated from the fruits using the RNeasy Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's procedure and digested with DNase I (Qiagen, Germany) to remove genomic DNA. RNA purity was checked with a NanoPhotometer® spectrophotometer (IMPLEN, Westlake Village, CA, USA). RNA concentration was measured with a Qubit® RNA Assay Kit on a Qubit® 2.0 Fluorometer (Life Technologies, Carlsbad, CA, USA). RNA degradation and contamination were checked on a 1% agarose gel. RNA integrity was assessed with the RNA Nano 6000 assay kit on a Bioanalyzer 2100 system (Agilent Technologies, Santa Clara, CA, USA). Each RNA library contained ≥1 µg of total RNA. After the cDNA libraries were constructed, they were tested for quality; the qualified libraries were then pooled based on effective concentration and the amount of data required, and Illumina sequencing generated paired-end reads of 150 bp. The raw sequence data from the sequencer were subjected to preliminary quality analysis, and clean data were obtained by removing all low-quality sequences; subsequent analyses were based on the clean reads. The reference genome and its annotation files were downloaded from the indicated websites and indexed, and HISAT v2.1.0 was used to align the clean reads to the reference genome.
Quantification of Gene-Expression Levels
Read counts per gene were calculated using featureCounts v1.6.2, and the FPKM of each gene was then computed based on its length and read count. FPKM is currently the most common method used to estimate gene-expression levels.
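A minimal sketch of the FPKM calculation from featureCounts-style output (gene names, counts, and lengths are hypothetical):

```python
def fpkm(counts: dict[str, int], lengths_bp: dict[str, int]) -> dict[str, float]:
    """FPKM = fragments * 1e9 / (total mapped fragments * gene length in bp)."""
    total = sum(counts.values())
    return {g: counts[g] * 1e9 / (total * lengths_bp[g]) for g in counts}

# Hypothetical featureCounts output for three genes.
counts = {"NOR-like1": 1200, "ARF17": 300, "NAC100": 4500}
lengths = {"NOR-like1": 1800, "ARF17": 2500, "NAC100": 1200}
print(fpkm(counts, lengths))
```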
Differential Analysis and Differential Gene-Enrichment Analysis
Differential expression between the two groups was analyzed with DESeq2 v1.22.1, and p-values were corrected to obtain the false-discovery rate (FDR) using the Benjamini and Hochberg method. |log2 fold change| and FDR were employed as thresholds for significant differential expression. KEGG enrichment was assessed with a hypergeometric test at the pathway level.
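A minimal sketch of the Benjamini-Hochberg correction and the DEG filter, using the thresholds stated later in the Results (|log2 fold change| ≥ 1, FDR < 0.05); all values are hypothetical:

```python
import numpy as np

def bh_fdr(pvals: np.ndarray) -> np.ndarray:
    """Benjamini-Hochberg adjusted p-values (FDR)."""
    n = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest rank downward.
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(ranked, 0, 1)
    return out

log2fc = np.array([2.3, -1.4, 0.6, -3.1])
fdr = bh_fdr(np.array([0.001, 0.020, 0.300, 0.0005]))
is_deg = (np.abs(log2fc) >= 1) & (fdr < 0.05)
print(is_deg)  # [ True  True False  True]
```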
Widely Targeted Metabolomic Differential Analysis
PCA was performed to investigate the trend of separation between the groups and the existence of differences between the samples within groups. The PCA results (Figure 1A) show that the quality-control samples clustered tightly, demonstrating the good stability of the experimental method. Moreover, the sample points in each group were relatively well concentrated, suggesting good sample reproducibility at each developmental-period point for both tomato lines, and the distances between groups were relatively dispersed, indicating that the NOR-like1 gene editing produced significant differential changes in the metabolites. The scatter plots of the OPLS-DA model scores (Figure 1B-D) reveal significant differences between each pair of sample groups, and the samples all fell within the confidence interval, indicating that there are significantly different metabolites between wild-type and CR-NOR-like1 tomatoes at the same developmental stage, which can be used for subsequent differential-component analyses.
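A minimal sketch of the PCA step on a samples-by-metabolites matrix (the random matrix, log transform, and unit-variance scaling are illustrative assumptions, not the exact preprocessing used here):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical metabolite intensity matrix: 18 samples x 620 metabolites
# (6 groups x 3 biological replicates, as in this study).
rng = np.random.default_rng(0)
X = rng.lognormal(mean=5, sigma=1, size=(18, 620))

# Log-transform and unit-variance scale before PCA, a common choice
# for metabolomics intensity data.
X = np.log2(X + 1)
X = (X - X.mean(axis=0)) / X.std(axis=0)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)  # coordinates for a 2D score plot
print("explained variance:", pca.explained_variance_ratio_)
```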
Differential-Metabolite (DEM) Identification
To investigate the differential effects of the NOR-like1 gene on metabolites during tomato ripening, the three groups of samples were analyzed for significantly different metabolites at the three developmental periods before and after NOR-like1 gene editing, using FC > 2 or FC < 0.5, p < 0.05, and VIP > 1 as selection criteria. First, cluster heat-map analyses (Figure 2A) and k-means cluster analyses (Figure 2B) were applied to the DEMs, and 620 DEMs were grouped into eight clusters. A total of 216 DEMs were detected for WT-GR vs. CR-NOR-like1-GR. The changes in alkaloids, phenolic acids, and flavonoids were significant and predominantly upregulated at all three stages; thus, NOR-like1 appears to have a notable influence on these three metabolite classes during tomato ripening. The metabolome was analyzed for KEGG-pathway enrichment, and the top 20 most significantly enriched pathways were visualized in differential-enrichment bubble plots (Figure 4). The number of metabolites annotated by KEGG at GR was 326, mainly distributed in 49 metabolic pathways and significantly enriched in flavonoid biosynthesis, isoflavone biosynthesis, phenylpropanoid biosynthesis, flavone and flavonol biosynthesis, tyrosine metabolism, etc. The number of metabolites annotated by KEGG at BR+3 was 340, mainly distributed in 53 metabolic pathways and significantly enriched in sulfur metabolism, tyrosine metabolism, purine metabolism, propionate metabolism, carbapenem metabolism, etc. The number of KEGG-annotated metabolites at BR+9 was 340, distributed mainly in 36 metabolic pathways, with significant enrichment in isoflavone biosynthesis, flavone and flavonol biosynthesis, flavonoid biosynthesis, phenylpropanoid biosynthesis, purine metabolism, etc. The flavonoid and phenylpropanoid pathways were significantly enriched particularly at GR and BR+9.
(Figure 4 caption: The horizontal coordinate indicates the rich factor of each pathway and the vertical coordinate the name of the pathway; the color of each dot reflects the p-value, with redder colors indicating more significant enrichment; the size of each dot represents the number of enriched differential metabolites; pathways labeled yellow are significantly enriched.)
An Overview of RNA-Seq Data
High-quality libraries reflecting transcripts expressed at three developmental stages of wild-type and CR-NOR-like1 tomato (six groups, each with three biological replicates, 18 samples in total) were analyzed by RNA-seq on the Illumina HiSeq platform. Clean reads for follow-up analysis were derived by filtering the raw data and checking the sequencing error rate as well as the GC-content distribution (Table S1). A, B, and C represent GR, BR+3, and BR+9 of wild-type tomatoes, respectively; D, E, and F represent GR, BR+3, and BR+9 of CR-NOR-like1 tomatoes, respectively.
Differentially Expressed Gene (DEG) Identification
To identify DEGs at different tomato-ripening stages (the reference genome was from the NCBI database), we first investigated gene-expression patterns under the different treatment conditions, extracted the centered and normalized FPKM values of the differential genes, and analyzed them by hierarchical clustering (Figure 5A), which showed differential expression of a multitude of genes among samples. Furthermore, to find DEGs between samples and analyze their functions, |log2 fold change| ≥ 1 and FDR < 0.05 were taken as screening conditions for DEGs (Figure 5B). In A vs. D, 736 genes were upregulated and 346 downregulated; in B vs. E, 1984 genes were upregulated and 511 downregulated; and in C vs. F, 577 genes were upregulated and 158 downregulated. To further identify the metabolic pathways in which the DEGs participate, we performed a KEGG-pathway enrichment analysis (Figure 6), identifying metabolic and signal transduction pathways in which the DEGs were significantly enriched. The DEGs in WT-GR vs. CR-NOR-like1-GR mapped to 111 KEGG pathways, enriched primarily in metabolic pathways (201, 54.92%) and biosynthesis of secondary metabolites (129, 35.25%). In addition to these two pathways, significant enrichment was also observed in plant-pathogen interaction, the MAPK signaling pathway-plant, phenylpropanoid biosynthesis, fatty-acid metabolism, and flavonoid biosynthesis. The DEGs in WT-BR+3 vs. CR-NOR-like1-BR+3 mapped to 132 KEGG pathways. The representative pathways were also metabolic pathways (445, 53.36%) and biosynthesis of secondary metabolites (247, 29.62%); the other significantly enriched pathways were photosynthesis, photosynthesis-antenna proteins, valine, leucine and isoleucine degradation, glycerolipid metabolism, and glyoxylate and dicarboxylate metabolism. The DEGs in WT-BR+9 vs. CR-NOR-like1-BR+9 mapped to 117 KEGG pathways and were similarly enriched mainly in biosynthesis of secondary metabolites (159, 61.87%) and metabolic pathways (100, 38.91%). Other significantly enriched pathways were carbon metabolism, photosynthesis, glycolysis/gluconeogenesis, flavonoid biosynthesis, and nitrogen metabolism. The results illustrate that NOR-like1 significantly influences expression levels at different developmental stages involving metabolism, organismal systems, and environmental-information processing, with a particularly pronounced effect on metabolism. Flavonoid biosynthesis and phenylpropanoid biosynthesis were more significantly enriched at GR and BR+9 than at BR+3.
Effect of NOR-like1 on Alkaloids
A total of 83 distinct alkaloids were identified in the three developmental stages of wild-type and CR-NOR-like1 tomatoes (Table S2), and the alkaloid content followed the order BR+9 > BR+3 > GR. In CR-NOR-like1 tomatoes, there were 40 increases and 1 decrease during the GR period, 47 increases and 3 decreases during the BR+3 period, and 53 increases and 2 decreases during the BR+9 period. A total of 19 alkaloids were significantly different at all three ripening stages. Two of these alkaloids (N-acetylputrescine, agmatine) and three phenolamines (p-coumaroylputrescine, N-feruloylputrescine, N-feruloylagmatine) were annotated to the arginine and proline metabolism (ko00330) pathway.
There were 16 DEGs identified in the arginine and proline metabolism pathway (Table 1). Compared to wild-type tomatoes, CR-NOR-like1 tomatoes had five upregulated and three downregulated DEGs during GR, six upregulated and two downregulated DEGs during BR+3, and three upregulated and three downregulated DEGs during BR+9. Among them, Ami, ODC, adc1, and P5CS changed significantly only in GR; PDH changed significantly in BR+3 only; and AST changed significantly in BR+9 alone, whereas one ALDH and one CPA were significantly different at all three ripening stages.
Effect of NOR-like1 on Phenolic Acids
In total, 85 distinct phenolic acids were identified in wild-type and CR-NOR-like1 tomatoes at the three developmental stages (Table S3), and the relative phenolic-acid content followed the order GR > BR+3 > BR+9. The CR-NOR-like1 tomatoes had 40 increases and 8 decreases in the GR period, 24 increases and 33 decreases in the BR+3 period, and 19 increases and 16 decreases in the BR+9 period. A total of 17 of these phenolic acids changed significantly at all three ripening stages, and 10 of these were increased in CR-NOR-like1 tomatoes, including 4-aminosalicylic acid, isoferulic acid, ferulic acid, methyl caffeate, p-hydroxycinnamic acid p-hydroxyphenethylamine, gallic acid-4-O-glucoside, 1-O-feruloylquinic acid, 5-O-feruloylquinic acid, benzyl-(2″-O-xylosyl) glucoside, and osmanthuside H [2-(4-hydroxyphenyl) ethyl-β-D-apiosyl-(1 → 6)-β-D-glucoside]. Of these phenolic acids, 18 showed significant variation only in GR, 25 only in BR+3, 4 only in BR+9, and 18 in all three periods. The major differential metabolic pathway involving phenolic acids was phenylpropanoid biosynthesis (ko00940), with 12 phenolic acids annotated to this pathway. All seven of the DEMs involved in GR were significantly upregulated; BR+3 had five DEMs, with two upregulated and three downregulated; and BR+9 contained eight DEMs, with five upregulated and three downregulated.
There were 60 related DEGs characterized in the phenylpropanoid biosynthesis pathway (Table 2). Compared to wild-type tomatoes, CR-NOR-like1 tomatoes had 19 upregulated and 11 downregulated DEGs during GR, 28 upregulated and 5 downregulated during BR+3, and 9 upregulated and 3 downregulated during BR+9. One CCR and one HCT were significantly differentially expressed at all three ripening stages, REF1 and COMT were DEGs only in the GR period, and CAGT was differentially expressed in the BR+3 period only.
Thirty DEGs encoding enzymes associated with flavonoid biosynthesis were identified across the three ripening stages (Table 3). Compared to wild-type tomatoes, CR-NOR-like1 tomatoes had nine DEGs significantly upregulated and two downregulated during GR, nine significantly upregulated and four downregulated during BR+3, and 11 upregulated and one downregulated during BR+9. Among them, the HIDH and VR genes were significantly different only in the BR+3 period, whereas the F3H and CHS genes were significantly different only in BR+9.
Correlation Network Analysis
To investigate the effect of NOR-like1 on the regulatory network of alkaloid, phenolic acid, and flavonoid biosynthesis in tomatoes, these three classes of differential metabolites were tested for correlation with differentially expressed genes in the three developmental periods, screening for DEG-DEM pairs with high correlation coefficients (r > 0.8).
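A minimal sketch of such a screen, assuming Pearson correlation across shared sample columns and illustrative variable names (the paper does not specify its implementation):

```python
import pandas as pd

def correlate(degs: pd.DataFrame, dems: pd.DataFrame, r_min: float = 0.8):
    """Return (gene, metabolite, r) pairs with |r| > r_min across samples.

    Both inputs are matrices with one row per gene/metabolite and one
    column per sample; the absolute value admits the negative correlations
    reported in the text (e.g. Ami vs. p-coumaroylputrescine, r = -0.832).
    """
    hits = []
    for gene, gvals in degs.iterrows():
        for met, mvals in dems.iterrows():
            r = gvals.corr(mvals)  # Pearson correlation over samples
            if abs(r) > r_min:
                hits.append((gene, met, round(r, 3)))
    return pd.DataFrame(hits, columns=['gene', 'metabolite', 'r'])
```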
Ami (LOC101257218), in the metabolic pathway related to alkaloid synthesis, showed a high negative correlation with p-coumaroylputrescine (r = −0.832) in the GR period.
According to the results for DEGs and DEMs of phenolic-acid-related metabolic pathways, a total of 18 genes showed a highly significant correlation with six phenolic acids and one lignan (Table 4). In the GR period, six genes were highly correlated with two phenolic acids: sinapinaldehyde and 5-O-p-coumaroylquinic acid were reduced while the expression of all six DEGs was upregulated, with BGL (LOC101262919) positively correlated with sinapinaldehyde, and C4H, HCT, and CAD acting negatively, to differing degrees, on these two differential metabolites. The BR+3 period contained 15 DEGs highly associated with four phenolic acids, with reduced levels of p-coumaraldehyde, coniferin, and sinapyl alcohol and increased levels of ferulic acid. p-Coumaraldehyde and coniferin were influenced by CCR, HCT, and TOGT1. Among them, ferulic acid and coniferin were positively regulated by two TOGT1s (LOC101258702 and LOC101260915, respectively), and sinapyl alcohol was positively regulated by both HCT (LOC101244961) and BGL (LOC101251735); the remaining DEGs all negatively affected these DEMs to varying degrees. Three genes of BR+9 were highly associated with two phenolic acids and one lignan; the contents of 5-O-p-coumaroylquinic acid and p-coumaraldehyde were both reduced and negatively correlated with POD, HCT, and 4CL.

Based on the findings for DEGs and DEMs of flavonoid-related biosynthesis pathways, a total of eight genes showed a high correlation with 15 flavonoids (Table 5). Six structural genes showed a higher correlation with 10 flavonoids and two phenolic acids in the GR period, and all 12 DEMs were upregulated (5 flavanones, 3 chalcones, 2 flavanonols, 1 flavone, 1 phenolic acid): hesperetin, homoeriodictyol, phloretin, butin, aromadendrin, naringenin chalcone, hesperetin-7-O-glucoside, chrysin, trans-5-O-(p-coumaroyl) shikimate, eriodictyol, luteolin, and phlorizin were highly correlated with the six DEGs. Among them, C4H (LOC101262919) was positively associated with luteolin, hesperetin, homoeriodictyol, phloretin, butin, aromadendrin, naringenin chalcone, and pinobanksin; the remaining genes, CCOAOMT, CHI, and HCT, all interacted negatively with the DEMs in the network to varying degrees. The BR+3 period had one gene showing a higher correlation with three flavonoids; all three DEMs (flavanones) were reduced, and hesperetin, homoeriodictyol, and isosakuranetin were all positively associated with HCT (LOC101244961). Three genes of BR+9 showed a high correlation with three flavonoids (one flavone, one chalcone, one flavanone) and one phenolic acid. HCT (LOC101252161) was negatively correlated with 5-O-p-coumaroylquinic acid, whereas luteolin, isoliquiritigenin, and eriodictyol were positively correlated with CHS (CHS1) and all upregulated. Luteolin was also positively correlated with C4H (LOC101262919).
Discussion
In the present study, we integrated widely targeted metabolomic and transcriptomic analyses, revealed significant effects of NOR-like1 gene editing on alkaloids, phenolic acids, and flavonoids in tomato, and identified the relevant genes engaged with these differential metabolites.
To understand the effects of NOR-like1 gene editing, the pathways associated with these substances were screened for DEMs and DEGs based on the KEGG enrichment analysis, and correlation analysis was performed for DEMs and DEGs within those pathways, retaining pairs with correlation coefficients above 0.8 as highly correlated.
Steroid alkaloids were the major alkaloid type in the tomatoes, with 27 in total, all of them upregulated. The most significantly upregulated alkaloids were γ-solanine (log2FC = 13.07) in BR+3 and β2-tomatine (log2FC = 12.35) in BR+9; these two alkaloids increased much more than the rest, and both belong to the steroid alkaloids. Research on Solanaceae has mainly focused on α-tomatine and α-kynurenine, with few reports on γ-solanine and β2-tomatine, both of which help plants defend themselves against pathogens and herbivores through their bitterness and toxicity and are significantly enriched in leaves, roots, and immature green tomatoes [16]. NOR-like1 gene editing thus produced a very significant positive effect on tomatine in the late ripening stages (BR+3, BR+9). A total of five alkaloids were annotated to arginine and proline metabolism. Agmatine is a metabolite associated with arginine and proline metabolism, whereas N-acetylputrescine, p-coumaroylputrescine, N-feruloylputrescine, and N-feruloylagmatine are related derivatives [32]. Correlation analysis revealed that only Ami (LOC101257218) was highly negatively associated with p-coumaroylputrescine and that p-coumaroylputrescine was increased in the tomatoes (log2FC = 3.20). Ami is an important enzyme in arginine and proline metabolism, and the downregulation of Ami facilitated the accumulation of arginine and proline [33]. Consequently, the alkaloids associated with the arginine and proline metabolic pathway also accumulated. Based on these results, we conclude that NOR-like1 positively affects alkaloid synthesis in tomatoes, especially during late ripening. We therefore hypothesize that the changes in the arginine and proline metabolic pathway are also part of the plant defense mechanism.
Changes in the gene-expression levels of the phenylpropanoid biosynthesis pathway correlated with variation in lignin, phenolic acids, and flavonoids, and the changes in phenylpropanoid biosynthesis in CR-NOR-like1 tomatoes mainly involved alterations in phenolic acids. These substances derive from phenylalanine, which is first deaminated to cinnamic acid by PAL, then hydroxylated to p-coumaric acid by C4H, and finally converted to p-coumaroyl-CoA through the addition of CoA catalyzed by 4CL, entering the phenolic-acid pathway to produce p-coumaroylquinic acid, caffeic acid, ferulic acid, etc., which exist in the plant in a free state or combined as esters or glycosides [14,[34][35][36]. POD, CCR, and BGL have been shown to be important enzymes in the plant-defense response [37], and all three were predominantly upregulated in CR-NOR-like1 tomatoes. Sinapinaldehyde was the most upregulated phenolic acid in the GR period (log2FC = 9.84), and it is the precursor of sinapyl-alcohol synthesis in all three periods; however, sinapyl alcohol was reduced in BR+3 under the negative regulation of BGL (LOC101251735) (log2FC = −2.35). Because CAD is the enzyme that reduces sinapinaldehyde to the corresponding sinapyl alcohol [38], we presume that the decrease in sinapyl-alcohol content could have been due to the significant negative regulation of sinapinaldehyde by CAD (LOC101253340). Ferulic acid is a key metabolite annotated to phenylpropanoid biosynthesis and was positively regulated at BR+3 by TOGT1 (LOC101260915). A natural product commonly found in tomatoes, ferulic acid is widespread in the cell wall and has free-radical-scavenging and antiviral functions [39]. NOR-like1 thus significantly affects key enzymes in the phenylpropanoid biosynthesis pathway, positively influencing phenolic-acid components.
Among all the significantly changed flavonoids, flavonols (28), flavones (18), and flavanones (16) accounted for more than half of all flavonoid differential metabolites. The most obviously increased flavonoids were chrysin, naringenin chalcone, luteolin, etc. in the GR period; chrysin in BR+3; and acacetin, chrysin, wogonin, etc. in BR+9. Of these, chrysin was very significantly upregulated in all three periods (log2FC = 12.17 in GR, 13.81 in BR+3, and 8.07 in BR+9). Chrysin has been shown to have antioxidant capacity, can scavenge free radicals, is anti-inflammatory, and has demonstrated other activities [40], but it has been little studied in tomatoes. The genes associated with flavonoid biosynthesis are mainly classified into structural and regulatory genes [41]; for instance, CHS, FLS, F3H, F3′H, C4H, etc. C4H plays an important role in flavonoid biosynthesis [42][43][44]. Correlation analysis showed that C4H at all three ripening stages positively regulated flavonoids including luteolin, hesperetin, homoeriodictyol, phloretin, butin, aromadendrin, naringenin chalcone, and pinobanksin. CHS, FLS, and CHI are key enzymes in flavonoid biosynthesis. The first key rate-limiting enzyme in flavonoid biosynthesis is CHS [45], and CHS and FLS can synergistically upregulate the biosynthesis of flavonols in tomatoes [46]. Both genes were mainly upregulated in BR+9, where the content of flavonols was significantly increased (20 increased and 1 decreased). CHI is the second key rate-limiting enzyme in flavonoid biosynthesis [47]. Naringenin chalcone can be isomerized in the presence of CHI to produce naringenin, a reaction that may alternatively occur spontaneously in the absence of active CHI [16]. In the present study, CHI was downregulated in GR, whereas naringenin chalcone and naringenin were very significantly upregulated in GR. In addition, F3H and F3′H are also critical enzymes in flavonoid biosynthesis. F3H acts on naringenin and eriodictyol, substituting the C3 position with a hydroxyl group and forming the corresponding dihydroflavonols, i.e., dihydrokaempferol (DHK) and dihydroquercetin (DHQ); DHQ can also be obtained by catalysis of DHK by F3′H [14]. Naringenin, eriodictyol, and DHK in GR, as well as eriodictyol and DHQ in BR+9, were significantly upregulated, as shown in Table S4. NOR-like1 gene editing thus greatly affected the critical enzymes of flavonoid biosynthesis, and the variation in flavonoid metabolites was the most obvious in the data.
The present results indicate that NOR-like1 dramatically affected gene-expression levels involved in metabolism, organismal systems, and environmental-information processing at the different developmental stages, especially those related to alkaloids, phenolic acids, and flavonoids, with flavonoids showing the most dramatic change. Highly relevant key metabolites and key regulatory genes were further screened by correlation analysis. Ami in the arginine and proline metabolic pathway; PAL, C4H, 4CL, and CAD in phenylpropanoid biosynthesis; and CHS, FLS, F3H, F3′H, and C4H in the flavonoid pathway all had significant regulatory effects on the accumulation of alkaloid, phenolic acid, and flavonoid metabolites. It has been demonstrated that, under the same genetic background, fruits with higher overall antioxidant capacity can be stored longer than those with lower antioxidant capacity, that tomato fruits with higher antioxidant ability show slower overripening [48], and that phenolics and alkaloids also have a significant effect on biotic-stress resistance. Accordingly, we hypothesized that NOR-like1 gene editing would enhance antioxidant capacity and delay ripening by upregulating alkaloid, phenolic acid, and flavonoid accumulation during tomato ripening. The present study lays a foundation for extending fruit longevity and shelf life as well as for cultivating stress-resistant plants, and it provides directions for further studies on the mechanisms by which the NOR-like1 transcription factor affects metabolites in tomatoes. Furthermore, it enriches the study of NAC gene function and regulation in tomatoes and initially reveals the effect of NOR-like1 gene editing on the accumulation of alkaloid, phenolic acid, and flavonoid metabolites in tomato. The effect of NOR-like1 on the metabolism of alkaloids, phenolic acids, and flavonoids during tomato ripening needs to be investigated further, for instance with antioxidant assays or combined proteomic approaches, to enrich these studies and explore more deeply the regulatory mechanism of the NOR-like1 transcription factor. | 2022-12-21T16:20:48.145Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "1de73b27a0decc46e0e28de709e937d6bbb7e1f1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-1989/12/12/1296/pdf?version=1671588023",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fa1f528c6074b3fdbe102511cc9cc17bcaba45db",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
119298072 | pes2o/s2orc | v3-fos-license | The class of Eisenbud--Khimshiashvili--Levine is the local A1-Brouwer degree
Given a polynomial function with an isolated zero at the origin, we prove that the local A1-Brouwer degree equals the Eisenbud-Khimshiashvili-Levine class. This answers a question posed by David Eisenbud in 1978. We give an application to counting nodes together with associated arithmetic information by enriching Milnor's equality between the local degree of the gradient and the number of nodes into which a hypersurface singularity degenerates to an equality in the Grothendieck-Witt group.
When f is a $C^\infty$ function, Eisenbud-Levine and independently Khimshiashvili constructed a real nondegenerate symmetric bilinear form (more precisely, an isomorphism class of such forms) $w_0(f)$ on the local algebra $Q_0(f) := C^\infty_0(\mathbb{R}^n)/(f)$ and proved

(1) $\deg_0(f) = \text{the signature of } w_0(f)$

([EL77, Theorem 1.2], [Khi77]; see also [AGZV12, Chapter 5] and [Khi01]). If we further assume that f is real analytic, then we can form the complexification $f_{\mathbb{C}} : \mathbb{C}^n \to \mathbb{C}^n$, and Palamodov [Pal67, Corollary 4] proved an analogous result for $f_{\mathbb{C}}$:

(2) $\deg_0(f_{\mathbb{C}}) = \text{the rank of } w_0(f)$.
Eisenbud observed that the definition of $w_0(f)$ remains valid when f is a polynomial with coefficients in an arbitrary field k and asked whether this form can be identified with a degree in algebraic topology [Eis78, Some remaining questions (3)]. Here we answer Eisenbud's question by proving that $w_0(f)$ is the local Brouwer degree in $\mathbb{A}^1$-homotopy theory. More specifically, we prove

Main Theorem. If $f : \mathbb{A}^n_k \to \mathbb{A}^n_k$ has an isolated zero at the origin, then

(3) $\deg^{\mathbb{A}^1}_0(f) = \text{the stable isomorphism class of } w_0(f)$.
Morel described the degree map in $\mathbb{A}^1$-homotopy theory in his 2006 presentation at the International Congress of Mathematicians [Mor06]. In $\mathbb{A}^1$-homotopy theory, one of several objects that plays the role of the sphere is $\mathbb{P}^n_k/\mathbb{P}^{n-1}_k$, the quotient of n-dimensional projective space by the (n−1)-dimensional projective space at infinity. Morel constructed a group homomorphism $\deg^{\mathbb{A}^1} : [\mathbb{P}^n_k/\mathbb{P}^{n-1}_k, \mathbb{P}^n_k/\mathbb{P}^{n-1}_k] \to \mathrm{GW}(k)$ from the $\mathbb{A}^1$-homotopy classes of endomorphisms of $\mathbb{P}^n_k/\mathbb{P}^{n-1}_k$ to the Grothendieck-Witt group, which is the groupification of the monoid of (isomorphism classes of) nondegenerate symmetric bilinear forms over k. The local degree is defined in terms of the global degree in the natural manner, as we explain in Section 2.
The proof of the Main Theorem runs as follows. When f has a simple zero at the origin, we prove the result by directly computing that both sides of (3) are represented by the class of the Jacobian $\langle \det(\frac{\partial f_i}{\partial x_j}(0)) \rangle$. When f has a simple zero at a nonrational point, we show an analogous equality using work of Hoyois [Hoy14]. Using the result for a simple zero, we then prove the result when f has an arbitrary zero. We begin by reducing to the case where f is the restriction of a morphism $F : \mathbb{P}^n_k \to \mathbb{P}^n_k$ satisfying certain technical conditions (those in Assumption 19) that include the condition that all zeros of F other than the origin are simple. For every closed point $x \in \mathbb{A}^n_k$, Scheja-Storch have constructed a bilinear form whose class $w_x(F)$ equals the Eisenbud-Khimshiashvili-Levine class when x = 0. From the result on simple zeros, we deduce that

(4) $\sum_{x \in F^{-1}(y)} \deg^{\mathbb{A}^1}_x(F) = \sum_{x \in F^{-1}(y)} w_x(F)$

holds for $y \in \mathbb{A}^n_k(k)$ a regular value. For y arbitrary (and possibly not a regular value), we show that both sums in (4) are independent of y, allowing us to conclude that (4) holds for all y. In particular, equality holds for y = 0. For y = 0, we have $\deg_x(F) = w_x(F)$ for $x \in f^{-1}(0)$ not equal to the origin by the result for simple zeros, and taking differences, we deduce the equality $\deg_0(F) = w_0(F)$, which is the Main Theorem.
We propose counting singularities arithmetically, and in Section 6, we do so using the Main Theorem in the manner that we now describe. Suppose char k ≠ 2 and n is even, and let $f \in k[x_1, \ldots, x_n]$ be the equation of an isolated hypersurface singularity $0 \in X := \{f = 0\} \subset \mathbb{A}^n_k$ at the origin. We define the arithmetic (or $\mathbb{A}^1$-) Milnor number by $\mu^{\mathbb{A}^1}(f) := \deg^{\mathbb{A}^1}_0(\mathrm{grad}(f))$ and show that this invariant is an arithmetic count of the nodes (or $A_1$-singularities) to which X bifurcates. Suppose grad(f) is finite and separable. Then for general $(a_1, \ldots, a_n) \in \mathbb{A}^n_k(k)$, the family

(5) $f(x_1, \ldots, x_n) + a_1 x_1 + \cdots + a_n x_n = t$

over the affine t-line contains only nodal fibers as singular fibers. For simplicity, assume that the origin is the only zero of grad f and that the nodes appearing in (5) all have residue field k (rather than a nontrivial extension). We then have

(6) $\mu^{\mathbb{A}^1}(f) = \sum \#(\text{nodes with henselization } \{u_1 x_1^2 + \cdots + u_n x_n^2 = 0\}) \cdot \langle u_1 \cdots u_n \rangle$

in GW(k).
Here the sum runs over isomorphism classes of henselizations of rings $k[x_1, \ldots, x_n]/(u_1 x_1^2 + \cdots + u_n x_n^2)$, and the naive count is of nodal fibers of (5).
Taking the rank of both sides of Equation (6), we deduce that the number of nodal fibers equals the rank of µ A 1 (f). When k = C, this fact was observed by Milnor [Mil68,page 113,Remark], and (6) should be viewed as an enrichment of Milnor's result from an equality of integers to an equality of classes in GW(k). When k = R, the real realization of Equation (6) was essentially proven by Wall [Wal83, page 347, esp. second displayed equation].
Through Equation (6), the arithmetic Milnor number provides a computable constraint on the nodes to which a hypersurface singularity can bifurcate. As an illustration, consider the cusp (or $A_2$-singularity) $\{x_1^2 + x_2^3 = 0\}$ over the field $\mathbb{Q}_p$ of p-adic numbers. A computation shows that $\mu^{\mathbb{A}^1}(f)$ has rank 2 and discriminant $-1 \in \mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$. When p = 5, we have $-1 \neq 1 \cdot 2$ in $\mathbb{Q}_p^*/(\mathbb{Q}_p^*)^2$, so we conclude that the cusp cannot bifurcate to the split node $\{x_1^2 + x_2^2 = 0\}$ and the nonsplit node $\{x_1^2 + 2 x_2^2 = 0\}$. When p = 11, $\mu^{\mathbb{A}^1}(f)$ does not provide such an obstruction, and in fact those two nodes are the singular fibers of $x_1^2 + x_2^3 + 10 x_2 = t$. We discuss this example in more detail towards the end of Section 6.
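The square-class comparison can be checked mechanically: for odd p, a p-adic unit is a square in $\mathbb{Q}_p$ iff its reduction mod p is a quadratic residue. A small SymPy check (our illustration, not from the paper):

```python
from sympy.ntheory import legendre_symbol

for p in (5, 11):
    # -1 and 1*2 lie in the same square class of Q_p iff their
    # Legendre symbols mod p agree (p odd; both are p-adic units).
    print(p, legendre_symbol(-1 % p, p) == legendre_symbol(2 % p, p))
# p = 5:  False -> discriminants differ, obstructing the bifurcation
# p = 11: True  -> no obstruction, matching x1^2 + x2^3 + 10*x2 = t
```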
Arithmetic Milnor numbers, and other local $\mathbb{A}^1$-degrees, also appear in enumerative results of M. Levine. (Note: this person is different from the second author of [EL77].) Indeed, in [Lev17] Levine establishes a formula giving an enumerative count of the singular fibers of a suitable fibration of a smooth projective variety over a curve, in which an isolated singularity of a fiber is weighted by $\mu^{\mathbb{A}^1}(f)$. Levine's formula should be viewed as a global analogue of (6). Levine also computes $\mu^{\mathbb{A}^1}(f)$ when f satisfies a certain "diagonalizability" hypothesis. A different application of the $\mathbb{A}^1$-degree to enumerative geometry is given by the present authors in [KW17]. There the authors study a weighted count of the lines on a cubic surface, with the weights defined as local $\mathbb{A}^1$-degrees. We discuss the application to cubic surfaces in Section 7.
The results of this paper are related to results in the literature. We have already discussed the work of Eisenbud-Khimshiashvili-Levine and Palamodov describing w 0 (f) when k = R, C. When k is an ordered field, Böttger-Storch studied the properties of w 0 (f) in [BS11]. They defined the mapping degree of f : A n k → A n k to be the signature of w 0 (f) [BS11, 4.2 Definition, 4.3 Remark] and then proved that the mapping degree is a signed count of the points in the preimage of a regular value [BS11, Theorem 4.5].
Grigor'ev-Ivanov studied $w_0(f)$ when k is an arbitrary field in [GI80]. They prove that a sum of these classes in a certain quotient of the Grothendieck-Witt group is a well-defined invariant of a rank n vector bundle on a suitable n-dimensional smooth projective variety [GI80, Theorem 2]. (This invariant should be viewed as an analogue of the Euler number. Recall that, on an oriented n-dimensional manifold, the Euler number of an oriented rank n vector bundle can be expressed as a sum of the local Brouwer degrees associated to a general global section.) The Main Theorem is also related to Cazanave's work on the global $\mathbb{A}^1$-degree of a rational function. In [Caz08, Caz12], Cazanave proved that the global $\mathbb{A}^1$-degree of a rational function $F : \mathbb{P}^1_k \to \mathbb{P}^1_k$ is the class represented by the Bézout matrix, an explicit symmetric matrix. The class $w_0(f)$ is a local contribution to the class of the Bézout matrix because the global degree is a sum of local degrees, so it is natural to expect the Bézout matrix to be directly related to a bilinear form on $Q_0(f)$. As we explain in the companion paper [KW16a], such a direct relation holds: the Bézout matrix is the Gram matrix of the residue form, a symmetric bilinear form with an orthogonal summand representing $w_0(f)$.
CONVENTIONS
k denotes a fixed field.
We write $P$ or $P_x$ for the polynomial ring $k[x_1, \ldots, x_n]$ and $\mathfrak{m}_0$ for the ideal $(x_1, \ldots, x_n) \subset P$. We write $P_y$ for $k[y_1, \ldots, y_n]$. We write $\widetilde{P}$ for the graded ring $k[X_0, \ldots, X_n]$ with grading $\deg(X_i) = 1$. We then have $\mathbb{P}^n_k = \mathrm{Proj}\, \widetilde{P}$.
If $f : \mathbb{A}^n_k \to \mathbb{A}^n_k$ is a polynomial function, then we write $f_1, \ldots, f_n \in P_x$ for the components of f. We say a polynomial function $f : \mathbb{A}^n_k \to \mathbb{A}^n_k$ has an isolated zero at a closed point $x \in \mathbb{A}^n_k$ if the local algebra $Q_x(f) := P_{\mathfrak{m}_x}/(f_1, \ldots, f_n)$ has finite length. We say that a closed point x is isolated in its fiber if $\{x\}$ is a connected component of $f^{-1}(f(x))$. Note that if f has an isolated zero at the origin, then $Q_0(f)$ has dimension 0, which implies that the connected component of $f^{-1}(0) \cong \mathrm{Spec}\, P/(f_1, \ldots, f_n)$ containing 0 contains no other points, whence 0 is isolated in its fiber.
Using homogeneous coordinates $[X_0, X_1, \ldots, X_n]$ for $\mathbb{P}^n_k$, we use $\mathbb{A}^n_k$ to denote the open subscheme of $\mathbb{P}^n_k$ where $X_0 \neq 0$, and $\mathbb{P}^{n-1}_k$ to denote its closed complement, which is isomorphic to projective (n−1)-space.
For a vector bundle E on a smooth scheme X, let Th(E) denote the Thom space of E as in Section 3, Definition 2.16 of [MV99], i.e., Th(E) is the pointed sheaf $E/(E - z(X))$, where $z : X \to E$ denotes the zero section.
It will be convenient to work in the stable $\mathbb{A}^1$-homotopy category $\mathrm{Spt}(B)$ of $\mathbb{P}^1$-spectra over B, where B is a finite type scheme over k. Most frequently, B = L, where L is a field extension of k. The notation $[-, -]_{\mathrm{Spt}(B)}$ will be used for the morphisms. $\mathrm{Spt}(B)$ is a symmetric monoidal category under the smash product $\wedge$, with unit $\mathbf{1}_B$ denoting the sphere spectrum. Any pointed simplicial presheaf $\mathcal{X}$ determines a corresponding $\mathbb{P}^1$-suspension spectrum $\Sigma^\infty \mathcal{X}$. For example, $\Sigma^\infty \mathrm{Spec}\, L_+ \cong \mathbf{1}_L$, and $\Sigma^\infty (\mathbb{P}^1_L)^{\wedge n}$ is a suspension of $\mathbf{1}_L$. When working in $\mathrm{Spt}(L)$, we will identify pointed spaces $\mathcal{X}$ with their suspension spectra $\Sigma^\infty \mathcal{X}$, omitting the $\Sigma^\infty$. We will use the six operations $(p^*, p_*, p_!, p^!, \wedge, \mathrm{Hom})$ given by Ayoub [Ayo07] and developed by Ayoub and Cisinski-Déglise [CD12]. There is a nice summary in [Hoy14, §2]. We use the following associated notation and constructions. When $p : X \to Y$ is smooth, $p^*$ admits a left adjoint, denoted $p_\sharp$, induced by the forgetful functor $\mathrm{Sm}_X \to \mathrm{Sm}_Y$ from smooth schemes over X to smooth schemes over Y. For $p : X \to \mathrm{Spec}\, L$ a smooth scheme over L, the suspension spectrum of X is canonically identified with $p_! p^! \mathbf{1}_L$ as an object of $\mathrm{Spt}(L)$. For a vector bundle $p : E \to X$, the Thom spectrum $\Sigma^\infty \mathrm{Th}(E)$ (or just Th(E)) is canonically identified with $s^* p^! \mathbf{1}_X$, and we let $\Sigma^E = s^* p^! : \mathrm{Spt}(X) \to \mathrm{Spt}(X)$. Let $e : E \to X$ and $d : D \to Y$ be two vector bundles over smooth L-schemes $p : X \to \mathrm{Spec}\, L$ and $q : Y \to \mathrm{Spec}\, L$. Given a map $f : Y \to X$ and a monomorphism $\varphi : D \hookrightarrow f^* E$, there is an associated natural transformation of endofunctors on $\mathrm{Spt}(L)$ inducing the map on Thom spectra. The natural transformation $\mathrm{Th}_f \varphi$ is defined as a composition involving $\mathrm{Th}_{1_Y} \varphi$, which in turn is built from the zero sections $t : Y \to D$ and $s : X \to E$ together with the exchange transformation.
THE GROTHENDIECK-WITT CLASS OF EISENBUD-KHIMSHIASHVILI-LEVINE
In this section we recall the definition of the Grothendieck-Witt class $w_0(f)$ studied by Eisenbud-Khimshiashvili-Levine. We compute the class when f has a nondegenerate zero and when f is the gradient of the equation of an ADE singularity. Here $f : \mathbb{A}^n_k \to \mathbb{A}^n_k$ is a polynomial function with an isolated zero at the origin (i.e., 0 is a connected component of $f^{-1}(0)$). We write $f_1, \ldots, f_n \in P$ for the components of f.
Definition 1.
Suppose that $x \in \mathbb{A}^n_k$ is a closed point such that y = f(x) has residue field k. Writing the maximal ideals of x and y respectively as $\mathfrak{m}_x$ and $\mathfrak{m}_y = (y_1 - b_1, \ldots, y_n - b_n)$, we define the local algebra $Q_x(f)$ of f at the closed point x to be $P_{\mathfrak{m}_x}/(f_1 - b_1, \ldots, f_n - b_n)$. When x = 0, we also write Q for $Q_0(f)$, the local algebra at the origin.
The distinguished socle element at the origin $E = E_0(f) \in Q_0(f)$ is $E := \det(a_{ij})$, where $a_{ij} \in P$ are chosen so that $f_i = \sum_j a_{ij} x_j$ (possible because $f_i(0) = 0$). The Jacobian element at the origin $J = J_0(f) \in Q_0(f)$ is the class of $\det(\partial f_i/\partial x_j)$.

Remark 2. Recall the socle of a ring is the sum of the minimal nonzero ideals. For an artin local ring such as $Q_0(f)$, the socle is equal to the annihilator of the maximal ideal $\mathfrak{m}$. We only use the definition of the socle in Lemma 4, which is used to prove Lemma 6.
In this paper we focus on the class w 0 (f), but in work recalled in Section 4, Scheja-Storch constructed a distinguished symmetric bilinear form β 0 that represents w 0 (f). This symmetric bilinear form encodes more information than w 0 (f) when f is a polynomial in 1 variable, and we discuss this topic in greater detail in [KW16a, Section 4].
To conclude this section, we explicitly describe some ELK classes. The descriptions are in terms of the following classes.
For $a \in k^\times$, let $\langle a \rangle \in \mathrm{GW}(k)$ denote the class of the rank-1 bilinear form $(x, y) \mapsto axy$, and let H denote the hyperbolic form, the rank-2 form with Gram matrix $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. The class of H equals $\langle 1, -1 \rangle$ in GW(k).
The following lemma describes w 0 (f) when f has a simple zero.
Lemma 9. If f has a simple zero at the origin, then $w_0(f) = \langle \det(\frac{\partial f_i}{\partial x_j}(0)) \rangle$.
Proof. We have $Q_0(f) = k$ and $E = \det(\frac{\partial f_i}{\partial x_j}(0))$. The element $E \in Q_0(f)$ is then a k-basis, and $w_0(f)$ is represented by the form $\beta_\varphi$ satisfying $\beta_\varphi(E, E) = \det(\frac{\partial f_i}{\partial x_j}(0))$, whose class is $\langle \det(\frac{\partial f_i}{\partial x_j}(0)) \rangle$.

When f has an arbitrary isolated zero, the following procedure computes $w_0(f)$.

(1) Compute a Gröbner (or standard) basis for the ideal $(f_1, \ldots, f_n)$ and a k-basis for the vector space $Q_0(f)$.

(2) Express E in terms of the k-basis by performing a division with the Gröbner basis.

(3) Define an explicit k-linear function $\varphi : Q_0(f) \to k$ satisfying $\varphi(E) = 1$ using the k-basis.

(4) For every pair $b_i, b_j$ of basis elements, express $b_i \cdot b_j$ in terms of the k-basis by performing a division and then use that expression to evaluate $\varphi(b_i \cdot b_j)$.

(5) Output: the matrix with entries $\varphi(b_i \cdot b_j)$ is the Gram matrix of a symmetric bilinear form that represents $w_0(f)$.
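To make the procedure concrete, here is a minimal SymPy sketch over $\mathbb{Q}$, under the simplifying assumption that the origin is the only zero of f (so that $Q_0(f)$ agrees with the global quotient $\mathbb{Q}[x_1, \ldots, x_n]/(f_1, \ldots, f_n)$). The helper name ekl_gram is ours, not a standard API; since any k-linear $\varphi$ with $\varphi(E) = 1$ represents the same class, the sketch takes $\varphi$ dual to one standard monomial appearing in E.

```python
import sympy as sp

def ekl_gram(f, gens):
    """Gram matrix of a symmetric bilinear form representing w_0(f)."""
    G = sp.groebner(f, *gens, order='grevlex')
    nf = lambda q: (lambda e: e if e.is_number else G.reduce(e)[1])(sp.expand(q))
    # (1) k-basis of Q_0(f): the standard monomials, i.e. those fixed by
    # taking normal forms; finite-dimensionality guarantees termination.
    basis, deg = [sp.Integer(1)], 1
    while True:
        layer = [m for m in sp.itermonomials(gens, deg, deg) if nf(m) == m]
        if not layer:
            break
        basis, deg = basis + layer, deg + 1
    # (2) distinguished socle element E = det(a_ij) with f_i = sum_j a_ij x_j
    n = len(gens)
    A = sp.zeros(n, n)
    for i, fi in enumerate(f):
        rest = fi
        for j in reversed(range(n)):
            tail = rest.subs(gens[j], 0)
            A[i, j] = sp.cancel((rest - tail) / gens[j])
            rest = tail
    E = sp.Poly(nf(A.det()), *gens)
    # (3) a k-linear phi with phi(E) = 1, dual to one basis monomial in E
    m0 = next(m for m in basis if E.coeff_monomial(m) != 0)
    phi = lambda q: sp.Poly(nf(q), *gens).coeff_monomial(m0) / E.coeff_monomial(m0)
    # (4)-(5) Gram matrix with entries phi(b_i * b_j)
    return sp.Matrix(len(basis), len(basis), lambda i, j: phi(basis[i] * basis[j]))

x1, x2 = sp.symbols('x1 x2')
print(ekl_gram([2*x1, 3*x2**2], [x1, x2]))  # grad(x1**2 + x2**3), the A_2 cusp
# -> Matrix([[0, 1/6], [1/6, 0]]): a hyperbolic form H, matching Table 2 below
```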
(For a detailed exposition on how to compute in a finite dimensional k-algebra such as $Q_0(f)$, see Section 2, Chapter 2 and Chapter 4 of [CLO05].) Table 2 describes some classes that were computed using this procedure. The table should be read as follows. The second column displays a polynomial g, namely the polynomial equation of the ADE singularity named in the first column. The associated gradient $\mathrm{grad}(g) := (\frac{\partial g}{\partial x}, \frac{\partial g}{\partial y})$ is a polynomial function $\mathbb{A}^2_{\mathbb{Q}} \to \mathbb{A}^2_{\mathbb{Q}}$ with an isolated zero at the origin, and the third column is its ELK class $w_0(\mathrm{grad}(g)) \in \mathrm{GW}(\mathbb{Q})$. (We consider g as a polynomial with rational coefficients.) The description of $w_0(\mathrm{grad}(g))$ in Table 2 remains valid when $\mathbb{Q}$ is replaced by a field of characteristic 0 or p > 0 for p sufficiently large relative to n, but possibly not for small p (e.g., the description of the $A_2$ singularity is invalid in characteristic 3 because grad(g) has a nonisolated zero at the origin).

Table 2:

Singularity | g | $w_0(\mathrm{grad}(g)) \in \mathrm{GW}(\mathbb{Q})$
$A_n$, n odd | $x_1^2 + x_2^{n+1}$ | $\frac{n-1}{2} \cdot H + \langle 2(n+1) \rangle$
$A_n$, n even | $x_1^2 + x_2^{n+1}$ | $\frac{n}{2} \cdot H$
$D_n$, n even | $x_2(x_1^2 + x_2^{n-2})$ | $\frac{n-2}{2} \cdot H + \langle -2, 2(n-1) \rangle$
$D_n$, n odd | $x_2(x_1^2 + x_2^{n-2})$ | $\frac{n-1}{2} \cdot H + \langle -2 \rangle$

THE LOCAL A1-BROUWER DEGREE

Morel's $\mathbb{A}^1$-degree homomorphism gives rise to a notion of local degree, which we describe in this section. We then show that the degree is the sum of local degrees under appropriate hypotheses (Proposition 14), and that when f is étale at x, the local degree is computed by the class $\mathrm{Tr}_{k(x)/k} \langle J(x) \rangle$ of the Jacobian evaluated at x (Proposition 15). For endomorphisms of $\mathbb{P}^1_k$, these notions and properties are stated in [Mor04], [Mor06], and build on ideas of Lannes. To identify the local degree at an étale point, we use results of Hoyois [Hoy14].
To motivate the definition, recall that to define the local topological Brouwer degree of $f : \mathbb{R}^n \to \mathbb{R}^n$ at a point x, one can choose a sufficiently small $\epsilon > 0$ and take the $\mathbb{Z}$-valued topological degree of the map

(9) $\frac{f - f(x)}{\|f - f(x)\|} : \partial B(x, \epsilon) \to S^{n-1}$.

By translation and scaling, the map $\frac{f - f(x)}{\|f - f(x)\|}$ can be replaced by the map induced by f from the boundary $\partial B(x, \epsilon)$ of a small ball $B(x, \epsilon)$ centered at x to the boundary $\partial B(f(x), \epsilon')$ of a small ball centered at f(x). The suspension of this map can be identified with the map induced by f from the homotopy cofiber of the inclusion $\partial B(x, \epsilon) \to B(x, \epsilon)$ to the analogous homotopy cofiber. As $B(x, \epsilon)/\partial B(x, \epsilon)$ is also the homotopy cofiber of $B(x, \epsilon) - \{x\} \to B(x, \epsilon)$, we are free to use the latter construction for the (co)domain in (9):

(10) $f : B(x, \epsilon)/(B(x, \epsilon) - \{x\}) \to B(f(x), \epsilon')/(B(f(x), \epsilon') - \{f(x)\})$.

In $\mathbb{A}^1$-algebraic topology, the absence of small balls around points whose boundaries are spheres makes the definition of the local degree using the map $\frac{f - f(x)}{\|f - f(x)\|}$ problematic. However, the map (10) generalizes to a map between spheres by Morel and Voevodsky's Purity Theorem. This allows us to define a local degree when x and f(x) are both rational points, as in the definition of $f'_x$ given below. When x is not rational, we precompose with the collapse map $\mathbb{P}^n_k/\mathbb{P}^{n-1}_k \to \mathbb{P}^n_k/(\mathbb{P}^n_k - \{x\})$ to obtain Definition 11. This is shown to be compatible with the former definition (Proposition 12).
We now give Definition 11, first introducing the necessary notation.
By [MV99, Proposition 2.17 numbers 1 and 3, page 112], there is a canonical A 1 -weak equivalence (P 1 k ) ∧n ∼ = P n k /P n−1 k as both can be identified with the Thom space Th(O n k ) of the trivial rank n bundle on Spec k. Thus we may take the degree of a map P n k /P n−1 k → P n k /P n−1 k in the homotopy category.
Let x be a closed point of $\mathbb{A}^n_k$, and let $f : \mathbb{A}^n_k \to \mathbb{A}^n_k$ be a morphism such that x is isolated in its fiber. There is a trivialization of $T_x \mathbb{P}^n_k$ coming from the isomorphism $T_x \mathbb{P}^n_k \cong T_x \mathbb{A}^n_k$ and the canonical trivialization of $T_x \mathbb{A}^n_k$. Purity thus induces an $\mathbb{A}^1$-weak equivalence $\mathbb{P}^n_k/(\mathbb{P}^n_k - \{x\}) \cong \mathrm{Th}(\mathcal{O}^n_{k(x)})$. As above, [MV99, Proposition 2.17 number 3, page 112] gives a canonical identification of this Thom space with $\mathbb{P}^n_{k(x)}/\mathbb{P}^{n-1}_{k(x)}$. For n = 1, the following lemma is [Hoy14, Lemma 5.4], and the proof generalizes to the case of larger n, the essential content being [Voe03, Lemma 2.2].
Lemma 10. For any k-point x of A n k , the composition of the collapse map with r is A 1 -homotopy equivalent to the identity.
The diagram comparing these Purity equivalences commutes by naturality of Purity [Voe03, Lemma 2.1] and the compatibility of the trivializations of $T_x \mathbb{P}^n_k$ and $T_0 \mathbb{P}^n_k$. The diagram comparing collapse maps via the maps induced by f commutes by definition. In particular, for a k-rational point x, the collapse map $\mathbb{P}^n_k/\mathbb{P}^{n-1}_k \to \mathbb{P}^n_k/(\mathbb{P}^n_k - \{x\})$ is an $\mathbb{A}^1$-homotopy equivalence.
Definition 11. Let $f : \mathbb{A}^n_k \to \mathbb{A}^n_k$ be a morphism, and let x be a closed point such that x is isolated in its fiber $f^{-1}(f(x))$ and f(x) is k-rational. The local degree (or local $\mathbb{A}^1$-Brouwer degree) $\deg^{\mathbb{A}^1}_x f$ of f at x is Morel's $\mathbb{A}^1$-degree homomorphism applied to a map $f_x : \mathbb{P}^n_k/\mathbb{P}^{n-1}_k \to \mathbb{P}^n_k/\mathbb{P}^{n-1}_k$ in the homotopy category, where $f_x$ is defined to be the composition of the collapse map, the map induced by f, and the equivalences above. When x is a k-point, it is perhaps more natural to define the local degree in the following equivalent manner: the trivialization of the tangent space of $\mathbb{A}^n_k$ gives a canonical $\mathbb{A}^1$-weak equivalence $\mathbb{P}^n_k/(\mathbb{P}^n_k - \{x\}) \cong \mathrm{Th}(\mathcal{O}^n_k)$ by Purity [MV99, Theorem 2.23, page 115]. As above, we have a canonical $\mathbb{A}^1$-weak equivalence $\mathrm{Th}(\mathcal{O}^n_k) \cong \mathbb{P}^n_k/\mathbb{P}^{n-1}_k$. The local degree of f at x is then the degree of the resulting map in the homotopy category. For n = 1, the following lemma is [Hoy14, Lemma 5.5], and Hoyois's proof generalizes to higher n as follows.
Lemma 13. There is a canonical equivalence $\mathbb{P}^n_k/(\mathbb{P}^n_k - \{x\}) \cong (\mathbb{P}^n_k/\mathbb{P}^{n-1}_k) \wedge \mathrm{Spec}\, k(x)_+$ in $\mathrm{Spt}(k)$, where $p : \mathrm{Spec}\, k(x) \to \mathrm{Spec}\, k$ is the structure map, and the last equivalence is from [MV99, 3. Proposition 2.17, page 112].
Proof. As above, consider the trivialization of $T_x \mathbb{P}^n_k$ coming from the isomorphism $T_x \mathbb{P}^n_k \cong T_x \mathbb{A}^n_k$ and the canonical trivialization of $T_x \mathbb{A}^n_k$. The closed immersion $x : \mathrm{Spec}\, k(x) \to \mathbb{P}^n_k$ and this trivialization determine a Euclidean embedding in the sense of Hoyois [Hoy14, Definition 3.8]. This Euclidean embedding determines an isomorphism $\mathbb{P}^n_k/(\mathbb{P}^n_k - \{x\}) \cong (\mathbb{P}^1_k)^{\wedge n} \wedge \mathrm{Spec}\, k(x)_+$, and these identifications agree with the isomorphism $\mathbb{P}^n_k/(\mathbb{P}^n_k - \{x\}) \cong \mathbb{P}^n_k/(\mathbb{P}^{n-1}_k) \wedge \mathrm{Spec}\, k(x)_+$ in the statement of the lemma. By [Hoy14, Proposition 3.14], it thus suffices to show that a certain composition, denoted (12) below, is the identity in the homotopy category. To define it, introduce the following notation. Let $\bar{x} : \mathrm{Spec}\, k(x) \to \mathbb{A}^n_{k(x)}$ be the composition of the diagonal with $x_{k(x)}$. Let r be as in Lemma 10 with k replaced by k(x). Using the identifications $\mathbb{P}^n_k/(\mathbb{P}^n_k - \{x\}) \wedge \mathrm{Spec}\, k(x)_+ \cong \mathbb{P}^n_{k(x)}/(\mathbb{P}^n_{k(x)} - x_{k(x)})$ and $\mathbb{P}^n_{k(x)}/\mathbb{P}^{n-1}_{k(x)} \cong \mathbb{P}^n_k/(\mathbb{P}^{n-1}_k) \wedge \mathrm{Spec}\, k(x)_+$, we can view (12) as a map $h : \mathbb{P}^n_{k(x)}/(\mathbb{P}^n_{k(x)} - x_{k(x)}) \to \mathbb{P}^n_{k(x)}/\mathbb{P}^{n-1}_{k(x)}$. The composition (12) is now identified with $p_\sharp$ applied to the composition in Lemma 10 of the collapse map with r for the rational point $\bar{x} : \mathrm{Spec}\, k(x) \to \mathbb{A}^n_{k(x)}$, completing the proof by Lemma 10.
The degree of an endomorphism of P n k /(P n−1 k ) is the sum of local degrees under the hypotheses of the following Proposition.
Proposition 14. Let $f : \mathbb{P}^n_k \to \mathbb{P}^n_k$ be a finite map such that $f^{-1}(\mathbb{A}^n_k) = \mathbb{A}^n_k$, and let $\bar{f}$ denote the induced map $\mathbb{P}^n_k/\mathbb{P}^{n-1}_k \to \mathbb{P}^n_k/\mathbb{P}^{n-1}_k$. Then for any k-point y of $\mathbb{A}^n_k$,

$\deg^{\mathbb{A}^1}(\bar{f}) = \sum_{x \in f^{-1}(y)} \deg^{\mathbb{A}^1}_x f$.

Proof. By Purity [MV99, Theorem 2.23, page 115], $\mathbb{P}^n_k/(\mathbb{P}^n_k - f^{-1}(y))$ is identified with the Thom space of the normal bundle of $f^{-1}(y)$. The Thom space of a vector bundle on a disjoint union is the wedge sum of the Thom spaces of the vector bundle's restrictions to the connected components. It follows that the quotient maps assemble into a commutative diagram. Apply $[\mathbb{P}^n_k/\mathbb{P}^{n-1}_k, -]_{\mathrm{Spt}(k)}$ to this diagram, and let $f^*$ be the induced map $f^* : [\mathbb{P}^n_k/\mathbb{P}^{n-1}_k, \mathbb{P}^n_k/\mathbb{P}^{n-1}_k]_{\mathrm{Spt}(k)} \to [\mathbb{P}^n_k/\mathbb{P}^{n-1}_k, \mathbb{P}^n_k/\mathbb{P}^{n-1}_k]_{\mathrm{Spt}(k)}$. The wedge and the product are stably isomorphic, and on the right hand side $k_x$ induces the inclusion of the summand indexed by x. The image of the identity map under $f^*$ can be identified with $\deg \bar{f}$. Using the outer composition in the commutative diagram, we see that the image of the identity map under $f^*$ can also be identified with $\sum_{x \in f^{-1}(y)} \deg_x f$.
We now give a computation of the local degree at points where f is étale.
Proposition 15. Let $f : \mathbb{A}^n_k \to \mathbb{A}^n_k$ be a morphism of schemes and x a closed point of $\mathbb{A}^n_k$ such that f(x) = y is k-rational and x is isolated in $f^{-1}(y)$. If f is étale at x, then the local degree is computed by

$\deg^{\mathbb{A}^1}_x f = \mathrm{Tr}_{k(x)/k} \langle J(x) \rangle$,

where J(x) denotes the Jacobian determinant $J = \det(\frac{\partial f_i}{\partial x_j})$ evaluated at x, and k(x) denotes the residue field of x.
Proof. We work in Spt(k). Let p : Spec k(x) → Spec k denote the structure map.
Since f is étale at x, the induced map of tangent spaces $df(x) : T_x \mathbb{A}^n_k \to T_y \mathbb{A}^n_k$ is an isomorphism and induces a map on Thom spectra, which factors through a commutative diagram (see Conventions (7)). The naturality of the Purity isomorphism [Voe03, Lemma 2.1] gives a second commutative diagram, and identifications with $\mathrm{Th}(T_{f(x)} \mathbb{P}^n_k)$ allow us to stack Diagram (14) on top of Diagram (13). We then expand the resulting diagram to express the map $f_x$ from Definition 11, applying Lemma 13 to identify the diagonal maps. We furthermore have an identification (see Conventions (8)) of $\mathrm{Th}(p_! p^* \mathcal{O}^n_{\mathrm{Spec}\, k})$ with the relevant composition. We may therefore identify $f_x$ with a composition whose degree is $\mathrm{Tr}_{k(x)/k} \langle J(x) \rangle$.
SOME FINITE DETERMINACY RESULTS
Here we prove a finite determinacy result and then use it to prove Proposition 23, which allows us to reduce the proof of the Main Theorem to a case where f is étale at every zero other than the origin. In this section we fix a polynomial function $f : \mathbb{A}^n_k \to \mathbb{A}^n_k$ that has an isolated zero at the origin and write $f_1, \ldots, f_n \in P$ for the component functions.
The finite determinacy result is as follows.
Definition 16. Let $f, g : \mathbb{A}^n_k \to \mathbb{A}^n_k$ be polynomial functions. Then we say that f and g are equivalent at the origin if both functions have isolated zeros at the origin and we have $\deg^{\mathbb{A}^1}_0(f) = \deg^{\mathbb{A}^1}_0(g)$ and $w_0(f) = w_0(g)$. We say that a polynomial function $f : \mathbb{A}^n_k \to \mathbb{A}^n_k$ with an isolated zero at the origin is b-determined if every polynomial function g with the property that $f_i \equiv g_i$ modulo $\mathfrak{m}_0^{b+1}$ for all i is equivalent to f at the origin, and that f is finitely determined if it is b-determined for some b.
Lemma 17.
A polynomial function f : A n k → A n k with an isolated zero at the origin is finitely determined.
We prove (2) by exhibiting an explicit naive $\mathbb{A}^1$-homotopy between the maps $f'_0$ and $g'_0$ on Thom spaces. Write $g_i = \sum_j n_{ij} f_j$ with $n_{ij} \in P_{\mathfrak{m}_0}$. By definition, $f_i \equiv g_i$ modulo $\mathfrak{m}_0^{b+1}$, hence modulo $\mathfrak{m}_0 \cdot (f_1, \ldots, f_n)$. Moreover, $f_1, \ldots, f_n$ is a basis for the k-vector space $(f_1, \ldots, f_n)/\mathfrak{m}_0 \cdot (f_1, \ldots, f_n)$, so the matrix $(n_{ij})$ must reduce to the identity matrix modulo $\mathfrak{m}_0$, allowing us to write $(n_{ij}) = \mathrm{id}_n + (m_{ij})$ with $m_{ij} \in \mathfrak{m}_0$. Let $V \subset \mathbb{A}^n_k$ be a Zariski neighborhood of the origin such that the entries of the matrix $\mathrm{id}_n + (m_{ij})$ are restrictions of elements of $H^0(V, \mathcal{O})$ that we denote by the same symbols. Now consider the matrix $M(x, t) := \mathrm{id}_n + (t \cdot m_{ij}(x))$ and the map $H : V \times_k \mathbb{A}^1_k \to \mathbb{A}^n_k$ given by $H(x, t) = M(x, t) \cdot f(x)$. The preimage $H^{-1}(0)$ contains $\{0\} \times_k \mathbb{A}^1_k$ as a connected component. Indeed, to see this is a connected component, it is enough to show that this subset is open in $H^{-1}(0)$. To show this, observe that its complement in $H^{-1}(0)$ is closed: $\det M(x, t)$ is a regular function, and $f^{-1}(0) - \{0\} \subset \mathbb{A}^n_k$ is closed as $\{0\} \subset f^{-1}(0)$ is a connected component by hypothesis.

The map H induces a map on quotient spaces. Consider now the composition (17) of the inclusion with H on these quotient spaces. The spaces appearing in (17) are identified with the Thom spaces of normal bundles by the purity theorem, and these Thom spaces, in turn, are isomorphic to smash products with $\mathrm{Th}(\mathcal{O}^{\oplus n}_{\mathrm{Spec}\, k})$ because the relevant normal bundles are trivial. These identifications identify (17) with a naive $\mathbb{A}^1$-homotopy between $f'_0$ and $g'_0$.
The map H induces a map on quotient spaces The quotient . Consider now the composition of the inclusion with H: The spaces appearing in this last equation are identified with the Thom spaces of normal bundles by the purity theorem, and these Thom spaces, in turn, are isomorphic to smash products with Th(O ⊕n Spec k ) because the relevant normal bundles are trivial: ). These identifications identify (17) with a naive A 1 -homotopy Remark 18. When f is a polynomial function in 1 variable, we can exhibit an explicit b. Indeed, f is b-determined provided f contains a nonzero monomial of degree b. To see this, take b to be the least such integer and write f = u · x b for u ∈ k[x] a unit in (P x ) m 0 . The ideal (x b ) lies in (f), so the proof of Lemma 17 shows that f is b-determined. We make use of this fact in the companion paper [KW16a].
In the proof of the Main Theorem, we use Proposition 23 to reduce the proof to the special case where the following assumption holds:

Assumption 19. The polynomial function f is the restriction of a morphism $F : \mathbb{P}^n_k \to \mathbb{P}^n_k$ such that (1) F is finite, flat, and with induced field extension $\mathrm{Frac}\, F_* \mathcal{O}_{\mathbb{P}^n_k} \supset \mathrm{Frac}\, \mathcal{O}_{\mathbb{P}^n_k}$ of degree coprime to char(k) = p; (2) F is étale at every point of $F^{-1}(0) - \{0\}$.

To reduce to the special case, we need to prove that, after possibly passing from k to an odd degree field extension, every f is equivalent to a polynomial function satisfying Assumption 19, and we conclude this section with a proof of this fact. The proof we give below is a modification of [BCRS96, Theorem 4.1], a theorem about real polynomial functions due to Becker-Cardinal-Roy-Szafraniec.
We will show that if f is a given polynomial function with an isolated zero at the origin, then for a general n-tuple h of homogeneous polynomials of degree sufficiently large and coprime to p, the sum f + h satisfies Assumption 19. This result is Proposition 23. We prove it as a result about the affine space $H^d_k$ parameterizing polynomial maps $h : \mathbb{A}^n_k \to \mathbb{A}^n_k$ given by n-tuples of homogeneous degree d polynomials. We show that the locus of h's such that f + h fails to satisfy Assumption 19 is not equal to $H^d_k$ by using the following three lemmas. The following lemma is used in Lemma 22 to bound a dimension.
Lemma 21. Let $f : \mathbb{A}^n_k \to \mathbb{A}^n_k$ be a nonzero polynomial function satisfying f(0) = 0 and $a = (a_1, \ldots, a_n) \in \mathbb{A}^n_k(k)$ a k-point that is not the origin 0, and suppose $\sum_i \frac{\partial f_1}{\partial x_i}(a) \cdot a_i \neq d \cdot f_1(a)$ for a nonzero component $f_1$ of f. Then the set of $h \in H^d_k$ such that $f(a) + h(a) = 0$ and the Jacobian determinant of f + h vanishes at a is a Zariski closed subset of codimension n + 1.
Proof. It is enough, by the Krull principal ideal theorem, to show that the elements (18) $f_1(a) + h_1(a), \ldots, f_n(a) + h_n(a)$ and (19) the Jacobian determinant of f + h at a, considered as regular functions on $H^d_k$, form a regular sequence. Writing each $h_i$ in terms of its coefficients, the coefficients $\{c_i(1), \ldots, c_i(n)\}$ are coordinates on the affine space $H^d_k$. As polynomials in these coefficients, the elements $f_1(a) + h_1(a), \ldots, f_n(a) + h_n(a)$ from (18) are affine linear equations, and distinct linear equations involve disjoint sets of variables, so we conclude that the first set of elements forms a regular sequence with quotient equal to a polynomial ring. In particular, the quotient is a domain, so to prove the lemma, it is enough to show that (19) has nonzero image in the quotient ring.
To show this, first make a linear change of variables so that a = (1, 0, . . . , 0).
This determinant is essentially the determinant of the general n-by-n matrix det(x α,β ), as we now explain.
Identify $\mathcal{O}_{H^d_k}$ with $k[x_{\alpha,\beta}]$ by setting, for $\alpha, \beta = 1, \ldots, n$, the variable $x_{\alpha,\beta}$ equal to the $(\beta, \alpha)$-th entry in the above matrix (and, say, arbitrarily matching the remaining variables $c_{i_\beta}(\alpha)$, $i_\beta = i_1, \ldots, i_n$, in $\mathcal{O}_{H^d_k}$ with the remaining variables in $k[x_{\alpha,\beta}]$). This identification identifies the determinant under consideration with the determinant $\det(x_{\alpha,\beta})$ of the general n-by-n matrix and identifies the elements (18) with linear polynomials, say $A_1 x_{1,1} + B_1, A_2 x_{2,1} + B_2, \ldots, A_n x_{n,1} + B_n$. Now consider $\det(x_{\alpha,\beta})$ as a function $\det(v_1, \ldots, v_n)$ of the column vectors. Under the identification of (19) with $\det(x_{\alpha,\beta})$, the image of (19) in the quotient ring is identified with $\det(v_1, v_2, \ldots, v_n)$ for $v_1 = (-B_1/A_1, -B_2/A_2, \ldots, -B_n/A_n)$. By the hypothesis $\sum_i \frac{\partial f_1}{\partial x_i}(a) \cdot a_i \neq d \cdot f_1(a)$, we have $-B_1/A_1 \neq 0$, so $v_1$ is not the zero vector. We conclude that $\det(v_1, v_2, \ldots, v_n)$ is nonzero because, e.g., we can extend $v_1$ to a basis $v_1, \ldots, v_n$ and then $\det(v_1, v_2, \ldots, v_n) \neq 0$ by the fundamental property of the determinant.

Lemma 22. The set S of $h \in H^d_k$ such that f + h is étale at every point of $(f + h)^{-1}(0) - \{0\}$ contains a nonempty Zariski open subset.

Proof. We prove the lemma by proving that the Zariski closure of the complement of S in $H^d_k$ has dimension strictly smaller than $\dim H^d_k$, hence the complement of S cannot be Zariski dense.
Consider the incidence subscheme $\Delta \subset H^d_k \times \mathbb{A}^n_k$ of pairs (h, a) with $a \neq 0$ such that $f(a) + h(a) = 0$ and the Jacobian determinant of f + h vanishes at a. The complement of S is the image $\pi_1(\Delta)$ of $\Delta$ under the first projection $\pi_1 : H^d_k \times \mathbb{A}^n_k \to H^d_k$. We bound dimensions by analyzing the second projection $\pi_2 : H^d_k \times \mathbb{A}^n_k \to \mathbb{A}^n_k$.
To bound the dimension, we argue as follows. Some $f_i$ is nonzero since f is nonzero, and without loss of generality, we can assume $f_1 \neq 0$. Because d is coprime to p and strictly larger than the degree of $f_1$, the polynomial $\sum_i \frac{\partial f_1}{\partial x_i}(x) \cdot x_i - d \cdot f_1(x)$ is nonzero (by Euler's identity). We conclude that $B := \{a \in \mathbb{A}^n_k : \sum_i \frac{\partial f_1}{\partial x_i}(a) \cdot a_i = d \cdot f_1(a)\}$ has codimension 1 in $\mathbb{A}^n_k$. We separately bound $\Delta \cap \pi_2^{-1}(B)$ and $\Delta \cap \pi_2^{-1}(\mathbb{A}^n_k - B)$.
The fibers of $\pi_2 : \Delta - \pi_2^{-1}(B) \to \mathbb{A}^n_k - B$ have codimension n + 1 by Lemma 20, so by [Gro65, Proposition 5.5.2], we have $\dim(\Delta - \pi_2^{-1}(B)) \leq \dim H^d_k - 1$. By similar reasoning, $\dim(\Delta \cap \pi_2^{-1}(B)) \leq \dim H^d_k - 1$. We conclude that $\pi_1 : \Delta \to H^d_k$ cannot be dominant for dimensional reasons [Gro65, Theorem 4.1.2]. The complement of the closure of $\pi_1(\Delta)$ can thus be taken as the desired Zariski open subset.
Proposition 23. Let f : A n k → A n k be a nonzero polynomial function satisfying f(0) = 0. Then there exists an odd degree extension L/k such that f ⊗ k L is equivalent to a function satisfying Assumption 19. If k is infinite, we can take L = k.
Proof. The function f is finitely determined by Lemma 17, so say it is b-determined for $b \in \mathbb{Z}$. Choose d to be an integer coprime to p and larger than both b and the degrees of the $f_i$'s. We claim that there exists an odd degree field extension L/k and a degree d homogeneous polynomial function $h \in H^d_k(L)$ such that $h^{-1}(0) = \{0\}$ and $g := (f \otimes_k L) + h$ is étale at every point of $g^{-1}(0) - \{0\}$. To verify the claim, observe that Lemmas 20 and 22 imply that the subset of all such h's contains a nonempty Zariski open subset $U \subset H^d_k$. If k is an infinite field, U(k) must be non-empty, so we take L = k. Otherwise, k is a finite field, say $k = \mathbb{F}_q$. We then have that $U(\mathbb{F}_{q^n})$ is nonempty for n a sufficiently large odd number, as $U(\bar{\mathbb{F}}_q) = \cup_n U(\mathbb{F}_{q^n})$. The function $f \otimes_k L$ is also b-determined, and we complete the proof by showing that $g := (f \otimes_k L) + h$ satisfies Assumption 19.
To complete the proof, let G denote the extension of g to a morphism of projective spaces; we need to show that G is finite, flat, and induces a field extension $\mathrm{Frac}\, \mathcal{O}_{\mathbb{P}^n_k} \subset \mathrm{Frac}\, G_* \mathcal{O}_{\mathbb{P}^n_k}$ of degree coprime to p. To see that G is finite, observe that the pullback $G^{-1}(H)$ of a hyperplane $H \subset \mathbb{P}^n_k$ has positive degree on every curve (since the associated line bundle is $G^* \mathcal{O}(1) = \mathcal{O}(d)$, an ample line bundle). We conclude that a fiber of G cannot contain a curve since $G^{-1}(H)$ can be chosen to be disjoint from a given fiber. In other words, G has finite fibers. Being a morphism of projective schemes, G is also proper and hence finite by Zariski's main theorem. This implies that G is flat since every finite morphism $\mathbb{P}^n_k \to \mathbb{P}^n_k$ is flat by [Mat89, Corollary to Theorem 23.1].
Finally, we complete the proof by noting that the degree of Frac O P n k ⊂ Frac G * O P n k equals d n , the top intersection number G * (H n ) = c 1 (O(d)) n . (To deduce the equality, observe that H n is the class of a k-point y ∈ P n k (k), so the top intersection number is the k-rank of O G −1 (y) , the stalk of G * O P n k at y. The rank of that stalk is equal to the rank of any other stalk of G * O P n k since G * O P n k , being finite and flat, is locally free. In particular, that rank equals the rank of the generic fiber, which is the degree of Frac G * O P n k ⊃ Frac O P n k .)
THE FAMILY OF SYMMETRIC BILINEAR FORMS
In this section we construct, for a given finite polynomial map f : A n k → A n k , a family of symmetric bilinear forms over A n k such that the fiber over the origin contains a summand that represents the ELK class w 0 (f). This family has the property that the stable isomorphism class of the fiber over y ∈ A n k (k) is independent of y, and we use this property in Section 5 to compute w 0 (f) in terms of a regular value. Finally, we compute the stable isomorphism class of the family over anétale fiber.
Throughout this section f denotes a finite polynomial map, except in Remark 32 where we explain what happens if the finiteness condition is weakened to quasi-finiteness.
The basic definition is the following.
Definition 24. Define the family of algebras $\widetilde{Q} = \widetilde{Q}(f)$ associated to f to be the ring $P_x$ considered as a $P_y$-algebra by the homomorphism $y_1 \mapsto f_1(x), \ldots, y_n \mapsto f_n(x)$, or equivalently the algebra $P_y[x_1, \ldots, x_n]/(f_1(x) - y_1, \ldots, f_n(x) - y_n)$. Given $(y_1, \ldots, y_n) = y \in \mathbb{A}^n_k(L)$ for some field extension L/k, the fiber $\widetilde{Q} \otimes k(y)$ is the L-algebra $L[x_1, \ldots, x_n]/(f_1(x) - y_1, \ldots, f_n(x) - y_n)$. This algebra decomposes as $\widetilde{Q} \otimes k(y) = \prod_x Q_x(f)$, where $Q_x(f)$ is as in Definition 1 and the product runs over all closed points $x \in f^{-1}(y)$.
The algebra Q has desirable properties because we have assumed that f is finite.
Lemma 25. The $P_y$-algebra $\widetilde{Q}$ is flat.

Proof. It is enough to show that, for any maximal ideal $\mathfrak{m} \subset P_y$, the images of $y_1 - f_1(x), \ldots, y_n - f_n(x)$ in $(P_y/\mathfrak{m})[x_1, \ldots, x_n]$ form a regular sequence, by [Mat89, first Corollary, page 177]. The quotient of $P_y[x_1, \ldots, x_n]$ by the sequence is the structure ring of $f^{-1}(\mathfrak{m})$, which is 0-dimensional by hypothesis. In particular, the images of $y_1 - f_1(x), \ldots, y_n - f_n(x)$ generate a height n ideal, and hence they form a regular sequence by [Mat89, Theorem 17.4(i)].
As we mentioned in Section 1, Scheja-Storch have constructed a distinguished symmetric bilinear form $\beta_0$ on $Q_0(f)$ that represents $w_0(f)$. In fact, they construct a family $\widetilde{\beta}$ of symmetric bilinear forms on $\widetilde{Q}$ such that the fiber over 0 contains a summand that represents $w_0(f)$. The family is defined as follows.
Definition 26. Let $\widetilde{\eta} : \widetilde{Q} \to P_y$ be the generalized trace function, the $P_y$-linear function defined in [SS75, page 182]. Let $\widetilde{\beta}$ be the symmetric bilinear form $\widetilde{\beta} : \widetilde{Q} \times \widetilde{Q} \to P_y$ defined by $\widetilde{\beta}(a_1, a_2) = \widetilde{\eta}(a_1 \cdot a_2)$. Given $y \in \mathbb{A}^n_k(L)$ and $x \in f^{-1}(y)$, we write $\eta_x$ and $\beta_x$ for the respective restrictions to $Q_x(f) \subset \widetilde{Q} \otimes k(y)$. We write $w_x(f) \in \mathrm{GW}(k)$ for the isomorphism class of $(Q_x(f), \beta_x)$.
Remark 27. We omit the definition of η because it is somewhat involved and we do not make direct use of it. We do make use of three properties of η. First, the homomorphism has strong base change properties. Namely, for a noetherian ring A and an A-finite quotient B of A[x 1 , . . . , x n ] by a regular sequence, Scheja-Storch construct a distinguished A-linear function η B/A : B → A in a manner that is compatible with extending scalars by an arbitrary homomorphism A → A [SS75, page 184, first paragraph]. Second, the pairing β is nondegenerate by [SS75,Satz 3.3]. Finally, the restriction η 0 : Q 0 (f) → k of η satisfies the condition from Definition 7 (by Lemma 28 below). In particular, the definition of w 0 (f) in Definition 7 agrees with the definition of w x (f) from Definition 26 when x = 0.
The reader familiar with [EL77] may recall that in that paper, where k = R, the authors do not make direct use of Scheja-Storch's work but rather work directly with the functional on Q defined by a → Tr(a/J). Here Tr is the trace function of the field extension Frac Q/ Frac P y . We do not use the function a → Tr(a/J) because it is not well-behaved in characteristic p > 0 since e.g. the trace can be identically zero.
We now describe the properties of the family $(\widetilde{Q}, \widetilde{\beta})$.
We now prove that, for $y_1, y_2 \in \mathbb{A}^n_k(k)$, the restriction of $\widetilde{\beta}$ to the fiber over $y_1$ is stably isomorphic to its restriction to the fiber over $y_2$. This result follows easily from the following form of Harder's theorem.
Lemma 30 (Harder's theorem). Suppose that $(\widetilde{Q}, \widetilde{\beta})$ is a pair consisting of a finite rank, locally free module $\widetilde{Q}$ on $\mathbb{A}^1_k$ and a nondegenerate symmetric bilinear form $\widetilde{\beta}$ on $\widetilde{Q}$. Then $(\widetilde{Q}, \widetilde{\beta}) \otimes k(y_1)$ is stably isomorphic to $(\widetilde{Q}, \widetilde{\beta}) \otimes k(y_2)$ for any $y_1, y_2 \in \mathbb{A}^1_k(k)$.
Proof. When char k ≠ 2, the stronger claim that $(\widetilde{Q}, \widetilde{\beta})$ is isomorphic to a symmetric bilinear form defined over k is [Lam05b, Theorem 3.13, Chapter VII]. When char k = 2, the claim can be deduced from loc. cit. as follows. By [Lam05b, Remark 3.14, Chapter VII], the pair $(\widetilde{Q}, \widetilde{\beta})$ is isomorphic to an orthogonal sum of a symmetric bilinear form defined over k and a sum of symmetric bilinear forms defined by Gram matrices of a shape whose specialization at any k-point is stably isomorphic to H.
Corollary 31. The sum

(21) $\sum_{x \in f^{-1}(y)} w_x(f)$

is independent of $y \in \mathbb{A}^n_k(k)$.
Proof. The sum (21) is the class of $\widetilde{\beta} \otimes k(y)$ by Lemma 29, so since any two k-points of $\mathbb{A}^n_k$ lie on a line, the result follows from Lemma 30.
Remark 32. Corollary 31 becomes false if the hypothesis that f is finite is weakened to the hypothesis that f is quasi-finite (i.e., has finite fibers). Indeed, under this weaker assumption, Scheja-Storch construct a nondegenerate symmetric bilinear form $\widetilde{\beta} \otimes k(y)$ on $\widetilde{Q} \otimes k(y)$ for every $y \in \mathbb{A}^n_k(k)$, but the class of $\sum_x w_x(f)$ can depend on y. For example, consider k = $\mathbb{R}$ (the real numbers) and $f := (x_1^3 x_2 + x_1 - x_1^3, x_2)$. A computation shows that $\sum_x w_x(f)$ equals $\langle 1 \rangle$ if $y_2 = 1$ and $\langle 1/(y_2 - 1) \rangle + H$ otherwise, so the rank of $\sum_x w_x(f)$ depends on y.

The morphism f fails to be finite, and we recover finiteness by passing to the restriction $f : f^{-1}(U) \to U$ over $U := \mathbb{A}^2_k - \{y_2 = 1\}$. Over U, the rank is constant, but the isomorphism class still depends on y because the signature of $\langle 1/(y_2 - 1) \rangle$ depends on the sign of $y_2 - 1$.
The morphism f fails to be finite, and we recover finiteness by passing to the restriction f : f −1 (U) → U over U := A 2 k − {y 2 = 1}. Over U, the rank is constant, but the isomorphism class still depends on y because signature of We now compute w x (f) when f isétale at x.
Lemma 33. Let $f : \mathbb{A}^n_k \to \mathbb{A}^n_k$ be finite and $y \in \mathbb{A}^n_k(k)$ a k-rational point. If f is étale at $x \in f^{-1}(y)$, then $w_x(f) = \mathrm{Tr}_{k(x)/k} \langle J(x) \rangle$ in GW(k).
Note that when k(x) = k, Lemma 33 is a consequence of Lemma 28 and the equality J = (rank k Q x (f)) · E = E (Remark 3). We check the lemma for nontrivial extensions k ⊂ k(x) using descent.
Proof. We show that both of these isomorphism classes of bilinear forms over k are described by the following descent data. Let L be a finite Galois extension of k such that k(x) embeds into L, and let G = Gal(L/k). Let S be the set $S = \{\bar{x} \in \mathbb{A}^n_k(L) : \bar{x}(\mathrm{Spec}\, L) = \{x\}\}$ of L-points whose image in $\mathbb{A}^n_k$ is x. Define V(S) to be the L-algebra of functions $S \to L$, with point-wise addition and multiplication. Define $\phi : S \to L$ by $\phi(\bar{x}) = 1/J(\bar{x})$, and $\beta_\phi : V(S) \times V(S) \to L$ to be the bilinear form $\beta_\phi(v_1, v_2) = \sum_{s \in S} \phi(s) v_1(s) v_2(s)$. Because J is a polynomial with coefficients in k, the map $\beta_\phi$ is G-equivariant. Thus the Galois action on $(V(S), \beta_\phi)$ determines descent data.
We now show that the k-bilinear space w_x(f) = (Q_x(f), β_x) is isomorphic to the k-bilinear space determined by this descent data. To do this, it is sufficient to find a k-linear map Q_x(f) → V(S) that respects the bilinear pairings and realizes Q_x(f) as the equalizer of

  V(S) ⇉ Π_{σ ∈ G} V(S).

There is a tautological inclusion Q_x(f) → V(S) because an element of Q_x(f) is a polynomial function on S, and we show this inclusion has the desired properties. To see that the inclusion respects the bilinear forms, it suffices to see that the functional V(φ) restricts to the residue functional η. To see the latter, extend scalars to L and then observe that, for every summand L of Q_x(f), both η and φ map the Jacobian element to 1. The equalizer of V(S) ⇉ Π_{σ ∈ G} V(S) is the subset of G-invariant functions (i.e. functions v : S → L satisfying v(σs) = v(s) for all σ ∈ G, s ∈ S). Because S is finite, every function on S is a polynomial function, and a polynomial function is G-invariant if and only if it can be represented by a polynomial with coefficients in k, i.e. lies in Q_x(f). Thus, we see that w_x(f) is determined by the descent data on (V(S), β_φ).
It remains to show that the descent data on (V(S), β_φ) also determines the k-bilinear form Tr_{k(x)/k}(⟨J(x)⟩). The equality ⟨J⟩ = ⟨1/J⟩ in the Grothendieck-Witt group shows that Tr_{k(x)/k}(⟨J(x)⟩) has representative bilinear form B : k(x) × k(x) → k defined by B(x, y) = Tr_{k(x)/k}(xy/J). The claim is equivalent to the statement that there is a G-equivariant isomorphism L ⊗_k k(x) ≅ V(S) respecting the bilinear forms.
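The equality ⟨J⟩ = ⟨1/J⟩ invoked above is the standard fact that a rank-one form depends on its entry only up to squares; explicitly:

```latex
\langle J \rangle \;=\; \bigl\langle J\cdot (1/J)^2 \bigr\rangle \;=\; \langle 1/J \rangle ,
\qquad\text{using } \langle a \rangle = \langle a b^2 \rangle \text{ for } a, b \in k(x)^{\times}.
```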
Note that S is in bijective correspondence with the set of embeddings k(x) ↪ L, and that we may therefore view s in S as a map s : k(x) → L. Let Θ : L ⊗_k k(x) → V(S) denote the L-linear isomorphism defined by Θ(l ⊗ q)(s) = l · s(q).
By definition,

  β_φ(Θ(1 ⊗ q_1), Θ(1 ⊗ q_2)) = Σ_{s ∈ S} (1/J(s)) s(q_1) s(q_2),

where J(s) denotes the Jacobian determinant evaluated at the point s, and s(q_i) denotes the image of q_i under the embedding k(x) → L corresponding to s. Since J is defined over k, we have J(s) = s(J). Thus

  Σ_{s ∈ S} (1/J(s)) s(q_1) s(q_2) = Tr_{k(x)/k}(q_1 q_2 / J) = B(q_1, q_2),

showing that Θ respects the appropriate bilinear forms.
PROOF OF THE MAIN THEOREM
We first note the case of the Main Theorem when f is étale.
Lemma 34. Let f : A^n_k → A^n_k be a polynomial function that satisfies Assumption 19 and y ∈ A^n_k(k) be a k-rational point. Suppose that f is étale at x ∈ f^{-1}(y). Then

  deg^{A^1}_x(f) = w_x(f).
Proof. Combine Lemma 33 and Proposition 15.
We now use the previous results to prove the Main Theorem.
Proof of Main Theorem. Recall that, after possibly passing from k to an odd degree field extension L/k when k is a finite field, we can assume that f satisfies Assumption 19 by Proposition 23. This allows us to reduce to the case where the assumption is satisfied because the natural homomorphism GW(k) → GW(L) is injective. Injectivity holds for a somewhat general L/k, but we only need the result in the simple case of a finite field, in which case the result can be deduced as follows. When char k = 2, the only invariant of an element of GW(k) is its rank, so injectivity follows from the observation that extending scalars preserves the rank. When char k ≠ 2, an element w ∈ GW(k) is completely determined by its rank and discriminant. Thus to show injectivity, we need to prove that if disc(w ⊗_k L) ∈ (L^*)^2, then disc(w) ∈ (k^*)^2, and this result is e.g. a consequence of [Lam05a, Corollary 2.6].
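As a concrete illustration of the rank-and-discriminant criterion over a finite field (our own example, not from the source):

```latex
k = \mathbb{F}_5:\qquad
\langle 1\rangle + \langle 1\rangle \;\cong\; \langle 2\rangle + \langle 2\rangle ,
\qquad\text{since both forms have rank } 2
\text{ and } \operatorname{disc} = 1 \equiv 4 = 2\cdot 2 \pmod{(\mathbb{F}_5^{\times})^2}.
```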
Since the formation of w_0(f) is compatible with field extensions, and similarly with deg^{A^1}_0(f), we conclude that it is enough to prove the theorem when f satisfies Assumption 19.
After possibly passing to another odd degree field extension, we can further assume that there exists y_0 ∈ A^n_k(k) such that f is étale at every point of f^{-1}(y_0). (To see this: Since any field extension of degree prime to p is separable, f is étale at the generic point [Gro67, 17.6.1c']. Therefore f is étale when restricted to a Zariski neighborhood of the generic point [Gro67, Définition 17.3.7]. Let Z denote the complement of this open neighborhood. Since finite maps are closed, f(Z) is a closed subset of A^n_k not containing the generic point. Any k-valued point of A^n_k − f(Z) has the desired property, and this subset contains k-points after possibly passing to an odd degree extension.) By these assumptions, f is the restriction of F : P^n_k → P^n_k, and F induces a map P(F) : P^n_k/P^{n−1}_k → P^n_k/P^{n−1}_k of motivic spheres that has degree

  deg^{A^1}(P(F)) = Σ_{x ∈ f^{-1}(y)} deg^{A^1}_x(F)

by the local degree formula (Proposition 14). In particular, the right-hand side is independent of y since the left-hand side is.
Analogously, we proved in Section 4 that the sum

  Σ_{x ∈ f^{-1}(y)} w_x(f)

is independent of y ∈ A^n_k(k) (Corollary 31).
The local terms w_x(f) and deg^{A^1}_x(F) are equal when F is étale at x by Lemma 34. As a consequence, the equality

(24)  Σ_{x ∈ f^{-1}(y)} deg^{A^1}_x(F) = Σ_{x ∈ f^{-1}(y)} w_x(f)

holds for y = y_0 and hence (by independence) for all y ∈ A^n_k(k). In particular, equality holds when y = 0. By Assumption 19, the morphism F is étale at every x ∈ f^{-1}(0) not equal to the origin, so subtracting off these terms from (24), we get

  deg^{A^1}_0(f) = w_0(f).
APPLICATION TO SINGULARITY THEORY
Here we use the local A^1-degree to count singularities arithmetically, as proposed in the introduction. We assume in this section char k ≠ 2, but see Remark 46 for a discussion of char k = 2.
Specifically, given the equation f ∈ P_x of an isolated hypersurface singularity X ⊂ A^n_k at the origin, we interpret the following invariant as a counting invariant:

Definition 35. If f ∈ P_x is a polynomial such that grad f has an isolated zero at the closed point x ∈ A^n_k, then we define µ^{A^1}_x(f) := deg^{A^1}_x(grad f). When x is the origin, we write µ^{A^1}(f) for this class and call it the arithmetic Milnor number or A^1-Milnor number.
Two remarks about this definition:
Remark 36. The condition that grad f has an isolated zero at x implies that the fiber of f over f(x) has an isolated singularity at x, and the converse is true in characteristic 0 but not in characteristic p > 0, as the example of f(x_1, x_2) = x_1^p + x_2^2 shows.

Remark 37. When k = C, the arithmetic Milnor number is determined by its rank, which is the classical Milnor number µ(f) = rank Q_0(f). The classical Milnor number µ(f) is not only an invariant of the equation f but in fact is an invariant of the singularity 0 ∈ Spec(P_x/f) defined by f. When k is arbitrary, the invariance properties of µ^{A^1}(f) are more subtle, especially in characteristic p > 0. In particular, the rank of µ^{A^1}(f) is not an invariant of the singularity in characteristic p > 0. For example, f(x) = x_1^2 + x_2^p + x_2^{p+1} and g(x) = x_1^2 + x_2^p + x_2^{2p+1} both define the A_{p−1} singularity (in the sense that the completed local rings P_x/f, P_x/g, and P_x/(x_1^2 + x_2^p) are isomorphic), but the ranks of w_0(f) and w_0(g) are respectively p and 2p. For conditions that imply µ^{A^1}(f) is an invariant of the singularity, see Lemma 39.
We now examine the arithmetic Milnor number of a node in more detail. We define a node following [SGA73, Exposé XV]:

Definition 38. When k = k̄ is algebraically closed, we say that a closed point x ∈ X of a finite type k-scheme is a node (or standard A_1-singularity or ordinary quadratic singularity) if the completed local ring Ô_{X,x} is isomorphic to a k-algebra of the form

(25)  P/(x_1^2 + · · · + x_n^2 + higher order terms).

Here P = k[[x_1, . . . , x_n]] is the power series ring over k.
When k is arbitrary, we say that x ∈ X is a node if every point of X ⊗_k k̄ mapping to x is a node. We say that f ∈ P is the equation of a node at a closed point x ∈ A^n_k if x ∈ Spec(P/f) is a node.
The A^1-Milnor number of the equation of a node is the weight that appears in Equation 6 from the introduction. Indeed, if f = u_1 x_1^2 + · · · + u_n x_n^2, then grad(f) = (2u_1 x_1, . . . , 2u_n x_n), so

(26)  µ^{A^1}(f) = ⟨det(∂^2 f/∂x_i ∂x_j)(0)⟩ = ⟨2^n u_1 · · · u_n⟩ = ⟨u_1 · · · u_n⟩ if n is even.
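For instance, in the smallest even case n = 2 (a check of our own, not from the source):

```latex
f = u_1 x_1^{2} + u_2 x_2^{2}:\qquad
\operatorname{Hess}(f)(0) = \begin{pmatrix} 2u_1 & 0 \\ 0 & 2u_2 \end{pmatrix},
\qquad
\mu^{\mathbb{A}^1}(f) = \langle 4\,u_1 u_2 \rangle = \langle u_1 u_2 \rangle ,
```

since 4 is a square.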
The arithmetic Milnor number is related to an invariant studied in real enumerative geometry. Over the real numbers, 1-nodal curves are typically counted with weights known as Welschinger signs or weights (see e.g. [Wel10]). Over k = R, there are three different types of nodes: the split node (defined by f = x_1^2 − x_2^2 at the origin), the nonsplit node (defined by f = x_1^2 + x_2^2 at the origin), and a complex conjugate pair of nodes. The Welschinger weights of these nodes are respectively +1, −1, and 0. The weight of a real node is exactly the negative of the signature of µ^{A^1}(f).
Over an arbitrary field, the structure of a node is described by [SGA73, Exposé XV, Théorème 1.2.6]. That theorem states that, if x ∈ X := Spec(P/f) is a node, then L := k(x) is a separable extension of k and there exists a nondegenerate quadratic form q = u_1 x_1^2 + · · · + u_n x_n^2 ∈ L[x_1, . . . , x_n] and a morphism (Spec(L ⊗_k P/q), 0) → (X, x) of pointed k-schemes that induces an isomorphism on henselizations.
(Note: in loc. cit. the result is stated with L/k the maximal separable subextension of k(x)/k, but this subextension is k(x)/k because we have assumed char k ≠ 2.) We can use this description of nodes to describe the arithmetic Milnor number of a node.
Lemma 39. Assume n is even. Suppose that L/k is a separable field extension and x ∈ X = Spec(L ⊗_k P_x/f), y ∈ Y = Spec(P_y/g) are nodes and (X, x) → (Y, y) is a morphism of pointed k-schemes that induces an isomorphism on henselizations. Then

  µ^{A^1}_y(g) = Tr_{L/k}(µ^{A^1}_x(f)).

Proof. By Proposition 15, we have µ^{A^1}_y(g) = Tr_{L/k}(⟨det(∂^2 g/∂y_i ∂y_j)(y)⟩), so it is enough to prove that the Hessian determinant of f differs from the Hessian determinant of g by a perfect square. Say that (X, x) → (Y, y) is induced by the ring map defined by y_1 → a_1, . . . , y_n → a_n. The elements a_1, . . . , a_n must satisfy

(27)  f = u · g(a_1, . . . , a_n) for some unit u ∈ O^h_{X,x}.

Computing the Hessian of f using (27), we deduce

  det(∂^2 f/∂x_i ∂x_j)(x) = u(x)^n · det(∂a_i/∂x_j)(x)^2 · det(∂^2 g/∂y_i ∂y_j)(y).

Since n is even, this last equation shows that the two Hessians differ by a perfect square.
Remark 40. For µ^{A^1}_x(f) to be an invariant of the pointed k-scheme x ∈ X, it is essential that n is even. For example, when n is odd, consider the equation f = x_1^2 + · · · + x_n^2 and note that both f and −f define the standard node at the origin, but µ^{A^1}(f) = ⟨2^n⟩ and µ^{A^1}(−f) = ⟨−2^n⟩. These two classes are equal only when −1 is a perfect square. For odd n, we get an invariant of the equation determining the pointed k-hypersurface.
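Spelling out the odd-n classes (our own simplification, using ⟨a⟩ = ⟨ab^2⟩):

```latex
n \text{ odd}:\qquad
\langle 2^{n} \rangle = \bigl\langle 2\cdot\bigl(2^{(n-1)/2}\bigr)^{2} \bigr\rangle = \langle 2 \rangle ,
\qquad
\langle -2^{n} \rangle = \langle -2 \rangle ,
```

so the two classes agree exactly when ⟨2⟩ = ⟨−2⟩, that is, when −1 is a square in k.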
Remark 41. While µ^{A^1}_x(f) is an invariant of a node when n is even, it does not, in general, determine the isomorphism class of the node.
We now identify µ^{A^1}(f) as a count of nodes. Recall that we wish to identify µ^{A^1}(f) with a count of the nodal fibers of the family f(x) − a_1 x_1 − · · · − a_n x_n = t over the t-line for a_1, . . . , a_n ∈ k sufficiently general. In showing this, an essential point is to show that, for y ∈ A^n_k(k), the sum of the local degrees Σ_{grad f(x) = y} deg^{A^1}_x(grad f) is independent of y. When grad(f) extends to a suitable morphism P^n_k → P^n_k, this is Proposition 14, but requiring the map to extend is a restrictive condition that fails to be satisfied in important basic examples such as f = x_1^2 + x_2^n.
We will instead deduce independence from Corollary 31. In order to apply that corollary, we need to interpret µ^{A^1}_x(f) in terms of the bilinear pairing β. We have done this when k(x) = k and when f is the equation of a node at x, but not in general. The following lemma is stated so that we only need to consider singularities of this type, allowing us to avoid a lengthy technical discussion of the relation between β and deg^{A^1}_x(grad f) when k(x) is a nontrivial extension of k.
Lemma 42. Let f ∈ P be such that grad(f) : A^n_k → A^n_k is a finite morphism. Assume every zero of grad(f) either has residue field k or is in the étale locus of grad f, and similarly with grad(f − a_1 x_1 − · · · − a_n x_n). Then we have

(28)  Σ_x µ^{A^1}_x(f) = Σ_x µ^{A^1}_x(f − a_1 x_1 − · · · − a_n x_n)

for any (a_1, . . . , a_n) ∈ A^n_k(k). Here both sums run over all zeros of the relevant gradient.
Proof. Observe that the zeros of grad(f − a_1 x_1 − · · · − a_n x_n) are exactly the points in the preimage of (a_1, . . . , a_n) under grad(f). Furthermore, µ^{A^1}_x(f − a_1 x_1 − · · · − a_n x_n) = deg^{A^1}_x(grad f). Thus the left-hand side of (28) is the sum of deg^{A^1}_x(grad f) for x in the preimage of 0, and the right-hand side is the analogous sum over the preimage of (a_1, . . . , a_n). By Proposition 15 and the Main Theorem, we have µ^{A^1}_x(f) = w_x, so the result is Corollary 31.
Lemma 43. Let f ∈ P be such that grad(f) : A^n_k → A^n_k is a finite, separable morphism. Then there exists a nonempty Zariski open subset U ⊂ A^n_k such that, for all (a_1, . . . , a_n) ∈ U(k), the preimage of 0 under grad(f(x) − a_1 x_1 − · · · − a_n x_n) : A^n_k → A^n_k is étale over k.
Proof. Observe that since grad f is separable, the locus of points V ⊂ A^n_k where grad f is étale contains the generic point of A^n_k and hence is a nonempty Zariski open subset. The subset grad f(A^n_k − V) ⊂ A^n_k is closed because grad f is proper, and so the complement of grad f(A^n_k − V) has the desired properties.

Lemma 44. Let f ∈ P be given. If f(x) = grad(f)(x) = 0 and grad(f) is étale at x, then x ∈ Spec(P/f) is a node.
Proof. By the definition of a node, we can reduce to the case where k = k̄ and, after possibly making a linear change of coordinates, we can assume x = 0 is the origin. Write f(x) = Σ a_i x_1^{i_1} · · · x_n^{i_n}. Since f(x) = grad f(x) = 0, all terms of degree at most 1 must vanish. Since grad(f) is étale at x, the determinant of the matrix defined by the degree 2 terms (i.e. the Hessian) must be nonzero. We conclude that, after a further linear change of variables (diagonalize the quadratic form), the given equation can be written as f(x) = x_1^2 + · · · + x_n^2 + higher order terms, showing x ∈ Spec(P/f) is a node.
Combining the previous lemmas provides us with the desired interpretation of µ^{A^1}(f) as a count of nodal fibers.
Corollary 45. Let n be even and f ∈ P be such that grad(f) is finite and separable. Then for (a_1, . . . , a_n) ∈ A^n_k(k) a general k-point, the family

(30)  A^n_k → A^1_k,  x → f(x) − a_1 x_1 − · · · − a_n x_n

has only nodal fibers, and

(31)  Σ_x µ^{A^1}_x(f) = Σ_x µ^{A^1}_x(f − a_1 x_1 − · · · − a_n x_n),

where the sums run over the zeros of the relevant gradient.
Proof. By Lemma 43, a general k-point (a_1, . . . , a_n) (i.e. a k-point of a nonempty open subscheme of A^n_k) has the property that grad(f − a_1 x_1 − · · · − a_n x_n) is étale at every zero. We will prove that such a point satisfies the desired conditions. For this choice of (a_1, . . . , a_n), the family (30) has only nodal fibers by Lemma 44. Furthermore, the terms on the right-hand side of (31) are the arithmetic Milnor numbers of the nodal fibers of (30) by Equation (26). Thus that sum is the sum of µ^{A^1}_x(f − a_1 x_1 − · · · − a_n x_n) as x runs over the zeros of the gradient, so (31) is a special case of Lemma 42.
Let us illustrate the content of Corollary 45 with the example of the cusp (or A_2) singularity discussed at the end of the introduction. The polynomial f = x_1^2 + x_2^3 satisfies the hypotheses of the corollary. Furthermore, the origin is the only zero of grad(f), and from Table 2, we see that µ^{A^1}(f) = H. Thus if (a_1, a_2) are chosen so that (30) has two k-rational nodal fibers {x_1^2 + u_1 x_2^2 = 0} and {x_1^2 + u_2 x_2^2 = 0}, then H = ⟨u_1⟩ + ⟨u_2⟩.
Suppose we further specialize to the case k = Q_5 (the 5-adic numbers). An inspection of discriminants shows that the family cannot contain, for example, the nodes {x_1^2 + x_2^2 = 0} and {x_1^2 + 2 · x_2^2 = 0}.
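Here is the discriminant inspection spelled out (our own verification):

```latex
\langle u_1\rangle + \langle u_2\rangle = \mathbb{H}
\;\Longrightarrow\;
u_1 u_2 \equiv \det \mathbb{H} = -1 \pmod{(\mathbb{Q}_5^{\times})^2},
\qquad
u_1 = 1,\; u_2 = 2:\quad 2\cdot(-1) = -2 \equiv 3 \pmod 5 .
```

Since the squares in Z/5 are {1, 4}, the unit −2 is not a square in Q_5, so ⟨1⟩ + ⟨2⟩ ≠ H and this pair of nodes cannot occur.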
There are also more complicated possibilities for the nodal fibers. For example, the only singular fiber of x → f(x) + 3·5·x_2 is the fiber over the closed point (t^2 + 4·5^3), a closed point with residue field a nontrivial extension of k. Additional examples describing the collections of nodes that a singularity can bifurcate to can be found in [KW16b].
Remark 46. We conclude with a remark about the assumption that char k ≠ 2. When char k = 2, Definition 35 should not be taken as the definition of a node because x_1^2 + · · · + x_n^2 = (x_1 + · · · + x_n)^2 does not define an isolated singularity. Instead the polynomial x_1^2 + · · · + x_n^2 should be replaced by

  f(x) = x_1^2 + x_2 x_3 + · · · + x_{n−1} x_n   (n odd);
  f(x) = x_1^2 + x_1 x_2 + · · · + x_{n−1} x_n   (n even).

Using this last equation, we can define nodes as before, although their classification becomes more complicated (see [SGA73, Exposé XV] for details).
The arithmetic Milnor number can be defined as in odd characteristic, but then it is not a very interesting invariant. For example, consider the node that is defined by f(x) = x_1^2 + x_1 x_2 + u x_2^2 for u ∈ k. The gradient function grad f(x) = (x_2, x_1) does not depend on u, so µ^{A^1}(f), and any other invariant obtained from the gradient, does not depend on u. The isomorphism class of the node does depend on u: the isomorphism class is classified by the image of u in k/{v^2 + v : v ∈ k}.
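The final claim can be made concrete by a change of variable (our own verification, computing in characteristic 2):

```latex
f(x_1 + v x_2,\, x_2)
= (x_1 + v x_2)^{2} + (x_1 + v x_2)\,x_2 + u\,x_2^{2}
= x_1^{2} + x_1 x_2 + \bigl(u + v^{2} + v\bigr)\,x_2^{2} ,
```

so the substitutions x_1 → x_1 + v x_2 change u precisely by elements of {v^2 + v : v ∈ k}, leaving the class of u in k/{v^2 + v : v ∈ k} as the invariant.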
APPLICATION TO CUBIC SURFACES
Here we explain how the A^1-degree can be used to arithmetically count the lines on a cubic surface. In [KW17], we proved that the lines on a smooth cubic surface satisfy

(33)  Σ_{d ∈ L^*/(L^*)^2} (#lines of type d) · Tr_{L/k}(⟨d⟩) = 15 · ⟨1⟩ + 12 · ⟨−1⟩,

summing over the residue fields L of the lines.
The type of a line can be interpreted in several ways, and one interpretation is that it is the local A 1 -degree (or index) of a global section σ of a vector bundle defined using the cubic surface. The global section σ has only simple zeros for smooth cubic surfaces, so the main result of this paper is not needed to prove (33). The main result can, however, be used to extend that equation to certain singular surfaces, as we now explain.
The lines on a cubic surface, smooth or not, are always the zeros of a global section σ. When the cubic surface is nonruled, the zeros are isolated but, when the surface is singular, possibly nonsimple. For a nonruled singular cubic surface, Equation (33) remains valid provided the type is defined to be the local index of σ. With this definition, the type of a line can be effectively computed using the main result of this paper.
For example, consider the cubic surface defined by x_1^2 x_4 + x_2^3 + x_3^3 over a field of characteristic 0. This equation is one of the normal forms of a cubic surface with a D_4 singularity. The surface contains the line parameterized by [S, T] → [0, −S, S, T]. One computes that the type of this line is the local A^1-degree at the origin of the polynomial function f : A^4_k → A^4_k defined by

  (a, b, c, d) → (c^3 − 3c^2 + 3c, a^2 + 3c^2 d − 6cd + 3d, 2ab + 3cd^2 − 3d^2, b^2 + d^3).

The authors computed the local A^1-degree of this function by implementing the method in Table 1 in Mathematica (Version 10.0.2). With respect to the lexicographical ordering, we have that (d^4, c, bd^2, b^2 + d^3, ad^2 + 2bd, 2ab − 3d^2, a^2 + 3d) is a Gröbner basis, {1, d, d^2, d^3, b, bd, a, ad} is a k-basis for Q_0(f), and E = −9d^3/2.
This class is ⟨2⟩ + ⟨6⟩ + 3·H. (To see this, replace ad with 3b/2 + ad in the basis, and the matrix becomes block diagonal.)

ERRATUM

Lemma 6 is false as stated. There are two issues. One is that the common element φ_1(E) = φ_2(E) should be assumed to be nonzero. The other concerns the case when the characteristic of k is 2. In that case, the conclusion that β_{φ_1} is isomorphic to β_{φ_2} should be weakened to the statement that these two forms are stably isomorphic. (When the characteristic of k is not 2, the notions of stably isomorphic and isomorphic coincide [MH73, Witt's theorem (4.4)].) Lemma 6 is essentially a restatement of [EL77, Propositions 3.4 and 3.5], except [EL77, Proposition 3.5] includes the hypotheses that φ_1(E) is not zero and the characteristic of k is not 2. A counterexample in characteristic 2 is given as follows.
Lemma 6 is replaced by the following.
Thus β_{φ_1} and β_{φ_2} determine the same element of GW(k) and the other results of this paper are true as stated. | 2018-11-15T18:54:24.000Z | 2016-08-19T00:00:00.000 | {
"year": 2016,
"sha1": "1dd35844f4a2aa31c0690450a7031310f8d0931f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1608.05669",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1dd35844f4a2aa31c0690450a7031310f8d0931f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
10277364 | pes2o/s2orc | v3-fos-license | Deceleration of Fusion–Fission Cycles Improves Mitochondrial Quality Control during Aging
Mitochondrial dynamics and mitophagy play a key role in ensuring mitochondrial quality control. Impairment thereof was proposed to be causative to neurodegenerative diseases, diabetes, and cancer. Accumulation of mitochondrial dysfunction was further linked to aging. Here we applied a probabilistic modeling approach integrating our current knowledge on mitochondrial biology allowing us to simulate mitochondrial function and quality control during aging in silico. We demonstrate that cycles of fusion and fission and mitophagy indeed are essential for ensuring a high average quality of mitochondria, even under conditions in which random molecular damage is present. Prompted by earlier observations that mitochondrial fission itself can cause a partial drop in mitochondrial membrane potential, we tested the consequences of mitochondrial dynamics being harmful on its own. Next to directly impairing mitochondrial function, pre-existing molecular damage may be propagated and enhanced across the mitochondrial population by content mixing. In this situation, such an infection-like phenomenon impairs mitochondrial quality control progressively. However, when imposing an age-dependent deceleration of cycles of fusion and fission, we observe a delay in the loss of average quality of mitochondria. This provides a rationale for why fusion and fission rates are reduced during aging and why loss of a mitochondrial fission factor can extend life span in fungi. We propose the ‘mitochondrial infectious damage adaptation’ (MIDA) model according to which a deceleration of fusion–fission cycles reflects a systemic adaptation increasing life span.
Algorithm
The algorithm for the time integration of the master equation was written in C++ and was based exclusively on standard C++ libraries. The source code is available from SourceForge (http://sourceforge.net/) under the link http://sourceforge.net/projects/mida-model/ and the visualization of analyzed system data makes use of the Graphics Layout Engine (GLE) that is freely available under http://glx.sourceforge.net/.
A flow chart of the program is presented in Fig. S1. The initial system configuration is built up from the input file that contains all information about the parameter set (red box in Fig. S1). Next, to compute the system's time evolution, the master equation is integrated by repeating the same procedure at every time step until the total simulation time is reached (green box in Fig. S1). Depending on the processes that are enabled in a particular simulation, e.g. absence or presence of molecular damage, the corresponding contributions to the master equation according to Eqs. (5)-(7) and (10) are evaluated at each time step.
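What follows is a minimal, self-contained sketch of such an integration loop, written for illustration only; it is not the MIDA source code from SourceForge. The number of quality states, the rate constants, the forward-Euler update, and the rule that fusion-fission partners average their qualities are all simplifying assumptions made for this sketch.

```cpp
#include <iostream>
#include <vector>

// Toy forward-Euler integrator for the occupation probabilities P(q) of
// discrete mitochondrial quality states q = 0 (non-active) .. Q (best).
// All rates and the content-mixing rule are illustrative placeholders.
int main() {
    const int Q = 20;             // highest quality state (assumed)
    const double dt = 0.1;        // time step in minutes (assumed)
    const int steps = 100000;     // number of integration steps
    const double r_decay = 1e-3;  // quality decay rate, 1/min (assumed)
    const double r_mito  = 1e-4;  // mitophagy/biogenesis rate, 1/min (assumed)
    const double r_ff    = 5e-2;  // fusion-fission rate, 1/min (assumed)

    std::vector<double> P(Q + 1, 1.0 / (Q + 1));  // uniform initial state

    for (int t = 0; t < steps; ++t) {
        std::vector<double> dP(Q + 1, 0.0);

        // Quality decay: active mitochondria drop one quality level.
        for (int q = 1; q <= Q; ++q) {
            dP[q]     -= r_decay * P[q];
            dP[q - 1] += r_decay * P[q];
        }

        // Mitophagy of non-active mitochondria (q = 0), balanced by
        // biogenesis near the current mean quality (conserves probability).
        double mq = 0.0;
        for (int q = 0; q <= Q; ++q) mq += q * P[q];
        const int q_new = static_cast<int>(mq + 0.5);
        dP[0]     -= r_mito * P[0];
        dP[q_new] += r_mito * P[0];

        // Fusion-fission: two active partners mix content; the daughters
        // carry the (integer-split) average quality of the pair.
        for (int q1 = 1; q1 <= Q; ++q1) {
            for (int q2 = 1; q2 <= Q; ++q2) {
                const double flux = r_ff * P[q1] * P[q2];
                const int qm = (q1 + q2) / 2;
                dP[q1] -= flux;
                dP[q2] -= flux;
                dP[qm] += flux;
                dP[qm + (q1 + q2) % 2] += flux;
            }
        }

        for (int q = 0; q <= Q; ++q) P[q] += dt * dP[q];
    }

    double mean_q = 0.0;
    for (int q = 0; q <= Q; ++q) mean_q += q * P[q];
    std::cout << "average quality after integration: " << mean_q << "\n";
    return 0;
}
```

In this toy version probability is conserved at every step, and switching the fusion-fission or mitophagy terms off reproduces, qualitatively, the single-process comparisons discussed below.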
Additional Computer Simulations
In the sequel, we present the results of computer simulations that are obtained by varying selected parameters of the reference simulation. In Table S1, we provide an overview of the varied parameters for each simulation and a brief comment on the main impact of this variation relative to the reference simulation.
At first, we consider the impact of each process on the time-evolution of the system towards equilibrium. In order to allow for a direct comparison of the dynamics with that of the reference simulation as presented in Fig. 3 of the main text, all simulation runs were started from the same random initial distribution of mitochondria in quality state-space and were performed for the same simulation time. In cases where the equilibrium distribution was not yet reached after this simulation time, we provide additional information on the equilibrium distribution from simulations over longer times (data not shown).
The first three simulations are performed for the reference simulation in the absence of one of the following processes: fusion-fission events (see Fig. S2), quality decay (see Fig. S3), and mitophagy and mitochondrial biogenesis (see Fig. S4).
In the absence of fusion-fission events, it is observed in Fig. S2 that mitophagy and mitochondrial biogenesis counteract quality decay. However, this does not give rise to a significant number of mitochondria in high-quality states. Even in equilibrium (data not shown) the slowly evolving system will be characterized by a significantly lower average quality of mitochondria as compared to the case where fusion-fission events are present. Thus, while mitophagy and mitochondrial biogenesis have an important impact on maintaining a large number of mitochondria in active states, the molecular exchange in fusion-fission events is required to obtain mitochondria in high-quality states at reasonable time scales. This can also be seen from a comparison with Fig. S3, where we present the reference simulation in the absence of quality decay. In this case, the system quickly evolves into an equilibrium state that is characterized by the absence of mitochondria in non-active states and occupation of high-quality states. The importance of mitophagy and mitochondrial biogenesis for maintaining the fraction of active mitochondria becomes apparent in the absence of these processes. As can be inferred from Fig. S4, in this case the system slowly evolves into an equilibrium state where no active states will be occupied (data not shown). Thus, fusion-fission events alone are not sufficient to maintain a high fraction of active mitochondria, but are required to quickly establish and maintain mitochondria in high-quality states. This is realized by fusion-fission events at the cost of the quantity of mitochondria in active states, since the induction of high-quality mitochondria is directly associated with the induction of low-quality mitochondria that tend to become non-active. Therefore, mitophagy and mitochondrial biogenesis, albeit being relatively slow processes, provide an important backbone to the maintenance of sufficient mitochondria in high-quality states by the much faster fusion-fission events.
Next, we perform three simulations of the reference simulation including only the process of quality decay (see Fig. S5), mitophagy and mitochondrial biogenesis (see Fig. S6), and fusion-fission events (see Fig. S7).
We observe from Fig. S5 that quality decay alone gives rise to the expected system behavior with mitochondria accumulating in the state with q = 0. Since no process is counteracting the quality decay, in equilibrium there will be no mitochondria left in states with q > 0 (data not shown). The situation is different in the case where only mitophagy and mitochondrial biogenesis are present, as is shown in Fig. S6. In this case, quality improvement is observed; however, as the driving processes are significantly slower than fusion-fission events, an equilibrium state is only reached after simulation times that are orders of magnitude longer than presented here. In addition, it should be kept in mind that processes like quality decay (see Fig. S1) and molecular damage have a strong impact on the distribution of mitochondria in quality state-space by lowering the average quality. On the other hand, fusion-fission events ensure the fast equilibration of the system into a state with a large number of mitochondria in high-quality states, as can be inferred from Fig. S7. Again, in the absence of mitophagy and mitochondrial biogenesis this distribution is established at the cost of a larger number of mitochondria accumulating in the non-active state. This suggests that the interplay between fusion-fission events on the one hand and mitophagy and mitochondrial biogenesis on the other hand is important for the establishment and maintenance of a mitochondrial distribution that is characterized by both large quantity and high quality of active mitochondria (see Fig. S3).
To demonstrate the robustness of the qualitative results that we obtained from the reference simulation, we show in Fig. S8 the simulation results for a varied set of selectivity functions, i.e. we changed the rates regarding their quality-dependence from smooth Hill functions into abruptly changing functions of q. The selectivity functions for fusion-fission events, quality decay and mitochondrial biogenesis were chosen to have the same value for all quality states of active mitochondria (q > 0). Moreover, these three processes do not occur for mitochondria in the non-active quality state (q = 0), whereas mitophagy only occurs for mitochondria in the non-active state. The altered selectivity functions are plotted in Fig. S8A and all other parameters are the same as in the reference simulation. Starting from the same random initial distribution (see Fig. S8B), we obtain the equilibrium distribution of mitochondria in quality state-space that is shown in Fig. S8C. The average quality of mitochondria (see Fig. S8E) and the fractions of mitochondria in active and non-active states (see Fig. S8F) are found to be qualitatively similar to that in the reference simulation, despite the rather drastic change in the selectivity functions.
Quantitative differences are observed, e.g. with regard to the time scale on which the flow equilibrium is reached. In the present case, the system reaches the equilibrium state faster and the dynamically adapting renewal rate (see Fig. S8D) attains the equilibrium value r_r(t → ∞) = 1.1 × 10^-4 min^-1, which is slightly higher than in the reference simulation (r_r(t → ∞) = 8.1 × 10^-5 min^-1). It can be concluded from this and other tested variations of the selectivity functions (data not shown) that the precise profile of these Hill functions does not change the qualitative conclusions drawn from the reference simulation with its particular choice of parameters for the sensitivity functions.
Similarly, we checked the robustness of the reference simulation with regard to the time-dependence of the rates. We generally observe that the system attains the same equilibrium configuration for the same set of parameters, independent of the profile of the Hill function that defines the continuous transition between two states. By way of example, this is illustrated in Fig. S9, where we repeat the simulations of Fig. 4 in the main text for the reference system in the presence of molecular damage. However, in contrast to the simulations in Fig. 4, the time-dependence of the rate for molecular damage is not simply given by a monotonically increasing Hill function but by a combination of two Hill functions that give rise to a pulse in the rate of molecular damage. This can be seen in Fig. S9A-C and Fig. S9D-E, respectively, for random molecular damage and for infectious molecular damage. The simulation is started from the equilibrium state of the reference simulation and during the first part of the simulations, i.e. until the maximum in the damage rates is reached, the dynamic change of the system is in agreement with the time-evolution of the simulations in Fig. 4, as expected. Then, when the damage rates are declining and reaching again zero values, the system is observed to evolve back into the initial state, i.e. the equilibrium state of the reference simulation. The parameters that determine the time-pulsed damage rates are generally observed to determine the system kinetics, e.g. regarding the time scale on which the system equilibrates, rather than altering the equilibrium state.
Finally, we perform a simulation of the MIDA model that was started from the equilibrated reference simulation in order to demonstrate that deceleration of fusion-fission cycles improves mitochondrial quality control during aging. In Fig. S10 we show simulation results obtained by either keeping the fusion-fission rate constant in time (Fig. S10A-C) or allowing its dynamic decrease in time (Fig. S10D-F).
In both cases, the dynamical increase of random molecular damage starts at t = 2 × 10^4 min, where the parameters of the Hill function are chosen to be r_rd(0) = 0 min^-1, r_rd(∞) = 7.5 × 10^-2 min^-1, τ_rd = 5 × 10^4 min, h_rd = 2, and the random fraction is set to f_rd = 3 % (see Eqs. (11) and (12)). The infectious molecular damage is induced according to Eq. (25) with r̃_rd = 0.05 min^-1, and at t = 2.5 × 10^4 min we allow the fusion-fission rate to adapt according to a Hill function with parameters r_ff(0) = 5 × 10^-2 min^-1, r_ff(∞) = 1 × 10^-2 min^-1, τ_ff = 1 × 10^4 min, and h_ff = 4. All other parameters are chosen as in the reference simulation and the simulation is started from the equilibrium distribution of P(q, t) as obtained from the reference simulation.
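For orientation, all of these time-dependent rates follow the generic Hill form used throughout the supplement; written out explicitly (our reconstruction from the stated parameters, with the onset time t_0 denoting the respective starting time):

```latex
r(t) \;=\; r(0) \;+\; \bigl(r(\infty) - r(0)\bigr)\,
\frac{(t - t_0)^{h}}{(t - t_0)^{h} + \tau^{h}}\,, \qquad t \ge t_0 ,
```

so each rate moves from r(0) toward r(∞) on the time scale τ, with steepness set by the Hill coefficient h.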
The following three stages may be distinguished:

Stage I: During this stage molecular damage is absent, and the system dynamics is equivalent in Fig. S10A-C and Fig. S10D-F.

Stage II: At time t = 2 × 10^4 min, random molecular damage occurs that induces infectious molecular damage. This gives rise to a fast decrease in the average quality (see Fig. S10B and Fig. S10E) and in the fraction of active mitochondria (see Fig. S10C and Fig. S10F). However, allowing for the dynamic deceleration of fusion-fission events starting at time t = 2.5 × 10^4 min gives rise to an improvement in the mitochondrial quality control. Higher values for the average quality and the fraction of active mitochondria are maintained in this case (see Fig. S10E and Fig. S10F) as compared to the case of constant fusion-fission rate (see Fig. S10B and Fig. S10C).
Stage III: The functioning of a cell requires the fraction of active mitochondria to be maintained above a survival threshold, which we arbitrarily set to the point where the fractions of active and non-active mitochondria become equal. For the simulation with constant fusion-fission rate this survival threshold is reached at time t = 4.5 × 10^4 min in Fig. S10C; however, in the MIDA model cellular survival is prolonged until t = 7 × 10^4 min in Fig. S10F.
We conclude that the duration of Stage II is increased by a factor of two in the case of the MIDA model as compared to the non-MIDA model with constant fusion-fission rate.
Table S1. Overview of the varied parameters for each simulation and the main impact of each variation relative to the reference simulation.

Varied parameters | Changes observed relative to reference simulation | Figure
absence of fusion-fission events | no mitochondria in high-quality states | Fig. S2
absence of quality decay | all mitochondria in high-quality states | Fig. S3
absence of mitophagy and mitochondrial biogenesis | no mitochondria in active states in equilibrium | Fig. S4
absence of fusion-fission events, mitophagy and mitochondrial biogenesis | no mitochondria in active states in equilibrium | Fig. S5
absence of fusion-fission events and quality decay | all mitochondria in high-quality states in equilibrium | Fig. S6
absence of quality decay, mitophagy and mitochondrial biogenesis | reduced fraction of mitochondria in active states | Fig. S7
varied selectivity functions in the rates | qualitatively similar to reference simulation | Fig. S8
"year": 2012,
"sha1": "e649d7df20a128b60c88541ad5da1c817f052d91",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1002576&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e649d7df20a128b60c88541ad5da1c817f052d91",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Biology",
"Medicine"
]
} |
91716917 | pes2o/s2orc | v3-fos-license | Different yellowing degrees and the industrial utilization of flue-cured tobacco leaves
Yellowing is a key stage in the curing of flue-cured tobacco (Nicotiana tobacum L.) as much of the chemical transformation occurs during this period. This study examined the effect of different yellowing degrees on the value of flue-cured tobacco leaves at the farm level for both processing and manufacturing. The study was conducted in the counties of Chuxiong, Dali, and Yuxi in Yunnan, China over two years. Yellowing treatments have been designed to have either a mild or a regular yellowing degree. Yield, value, appearance, suction property, smoking characteristics, and physical resistance to further processing were investigated to evaluate the effect of degree of yellowing on the industrial utilization of flue-cured tobacco leaves. The regular yellowing degree enhanced yield, value, and appearance compared to the mild yellowing degree, regardless of cultivar or location; however, physical resistance to further processing and the suction property of the mild yellowing degree treatment were better than with the regular yellowing degree regardless of cultivar or location. Furthermore, although the regular yellowing degree recorded higher smoking characteristic scores than the mild yellowing degree immediately after flue-curing, the scores of mild yellowing degree leaves could be further augmented by increasing intensity in the re-drying stage. The smoking characteristic score in the regular yellowing degree can only be increased by low intensity re-drying, and significantly decreased by mild and high intensity re-drying. Therefore, in terms of industrial utilization, mild yellowing is the better choice for flue-curing tobacco. This study also suggested that the current regular yellowing stage in Yunnan should be shortened to meet the demands of the traditional tobacco industry.
Introduction
The flue-curing process is a way of curing flue-cured tobacco with artificial heat over a period of 6-7 days (Horne, 1980; Hawks and Collins, 1993; Peele, 2005); it entails a flue-curing stage and a re-drying stage (Figure 1). Multiple factors influence industrial utilization and the style of the tobacco leaf and include the ecological environment, cultivar, maturity, agronomic management, and curing technology (Reed et al., 2012). Among these factors, the yellowing stage in the flue-curing process is a key step during which complex physical, physiological, and biochemical reactions materialize in the tobacco leaves (Bacon et al., 1952; Weston, 1968; Koiwai and Kisaki, 1979; Alejar et al., 1988). Previous studies have focused on various aspects of the curing process for flue-cured tobacco leaves, including temperature, humidity, time, and draft fan control (Zhan et al., 2011; Cui et al., 2013; Xie et al., 2013). These studies focused mainly on the commercial purchase of tobacco, which is set out as the First Sampling in Figure 1. However, few studies have concentrated on the extent of yellowing and its industrial usability in flue-cured tobacco leaves after threshing and re-drying as represented by the Second Sampling in Figure 1.
The current flue-curing mode for tobacco leaves in Yunnan is standardised to obtain dry, yellow, and fragrant leaves. However, the cigarette manufacturing industry gives poor evaluation scores to such tobacco leaves. A number of indices that are taken seriously in the industry are paid less heed during the flue-curing process. These indices include resistance to further processing, suction properties, shatter resistance, and the smoking characteristics of re-dried tobacco leaves (Walton et al., 1974; Wang et al., 1998). Resistance to further processing, and the suction properties, are closely related to shatter resistance, which refers to the resistance of tobacco leaves to crushing under various mechanical forces. Moreover, shatter resistance has a close correlation with the machining properties of tobacco leaves and exhibits positive correlation with tobacco quality. The absorption equilibrium moisture content (AEMC) and desorption equilibrium moisture content (DEMC) of tobacco leaves exhibit significant correlation with important chemical contents, such as reducing sugar and potassium contents (Wang et al., 2011). Thus, the aim of this study was to systematically investigate the effect of yellowing degree on the industrial utilization of tobacco leaves during the flue-curing process.

Figure 1 - The process stages for flue-cured tobacco.
Study site description
The experiments were performed separately in Chuxiong, Dali, and Yuxi (the three major tobacco-growing regions), Yunnan Province, China, in 2015 and 2016. The curing barns were all bulk curing barns with a horizontal layout. The experiment arrangements are in Table 1. The three locations are all characterized by mild variation in mean monthly air temperatures, from 10 °C in Jan to 25 °C in June, but have a relatively uneven distribution in mean monthly precipitation, with an annual average rainfall of 850 mm and 80 % of the precipitation occurring from May to Oct in all three locations.
Experiment design
During the flue-curing process, two treatments were designed: regular yellowing (R) and mild yellowing (M). For regular yellowing, tobacco leaves were wilted at a relatively low temperature (kept in the yellowing stage before 42 °C) until they were 80-90 % yellowed. The entire yellowing process included vein yellowing and eventual color fixation after wilting and softening. Regular yellowing is currently the conventional flue-curing mode favored by most tobacco growers. For mild yellowing, tobacco leaves were wilted at a relatively low temperature (yellowing stage) until 60 to 70 % yellowed, and entered the subsequent, higher temperature color fixation/leaf drying stage with yellow lamina but stems still green. The Hongda and K326 cultivars are expressed as H and K, while Chuxiong, Dali, and Yuxi locations are represented by C, D, and Y, respectively (Table 2). Flue-curing technology is popular in Yunnan Province given the main change points of temperature and humidity (as measured by a hygrometer), namely 35/33, 38/35, 42/36, 48/37, 54/38, 62/39, and 68/39 °C (Figure 2), respectively. In the treatment process, the turning points were adjusted according to degree of yellowing and the flue-curing time of the tobacco leaves (Table 3). Each treatment had three replications or three curing barns at each location.
Table 3 indicates the time schedule difference between regular yellowing and mild yellowing for both K326 and Hongda cultivars. In the yellowing stage, it took 78 h for the K326 cultivar to undergo regular yellowing, which was 24 h longer than with mild yellowing. Similarly, regular yellowing took 23 h longer than mild yellowing for the Hongda cultivar. In the leaf drying stage, it took 63 h for the K326 cultivar to undergo mild yellowing, which was 9 h longer than regular yellowing. Similarly, mild yellowing took 10 more hours than regular yellowing for the Hongda cultivar. There was little difference in the stem drying stage for either treatment. In total, the cumulative curing time was 161 h for regular yellowing in K326, which was 13 h more than the mild yellowing treatment in K326. The cumulative curing time was 168 h in the Hongda regular yellowing treatment, which took 12 h longer than the mild yellowing treatment. Figure 3 presents a comparison of flue-cured K326 leaves (regular v. mild yellowing, Chuxiong County, 2016). The cured leaf chlorophyll data showed that leaf chlorophyll content (leaf chlorophyll a + b) was 8.5 μg g-1 for mild yellowing, while regular yellowing resulted in a chlorophyll content of 7.2 μg g-1.
In order to make a judgement about which flue-curing treatment (mild or regular) left more space or potential for improving the smoking characteristics in re-drying or further physical processing, the re-drying intensity levels, including low-, mild-, and high-intensity re-drying processes, were examined for the re-drying trial.
Parameters
Yield and quality of tobacco leaves

All of the samples were preserved after flue-curing to describe their appearance. According to GB2635-92, which is the standard flue-cured tobacco grading system in China, the flue-cured tobacco leaves were graded, and the proportions of lower-, middle-, and higher-grade tobacco leaves, and those of orange and greenish tobacco leaves, were calculated along with their average prices.
Industrial appearance quality
C2F and C3F were the typical grades of flue-cured tobacco for conducting research and testing.After removing greenish tobacco from the flue-cured tobacco leaves, 10 kg of C2F and C3F samples were separately evaluated based on the Tobacco Industrial Classification Standard.
Physical resistance to further processing
C2F and C3F samples, each of 10 kg in mass, were also separately taken to conduct conventional chemical composition analysis after removing greenish tobacco from the flue-cured tobacco leaves.After equilibration for 72 h at constant temperature (22 °C) and humidity (60 %) levels, the samples were analysed according to the method for detecting the shatter resistance index of tobacco leaves provided by Chen et al. (2011).
Absorption and desorption properties
The absorption and desorption properties of the samples were assayed by the SPSx moisture-retention instrument for tobacco leaves.The detection conditions were as follows: the original environment temperature and relative humidity (RH) were 25 °C and 60 %, respectively.Next, they were adjusted to 25 °C and 75 %, respectively, after reaching equilibrium before studying the absorption and desorption properties of the samples.Subsequently, after reaching equilibrium, the ambient temperature and RH were adjusted to 25 °C and 60 % once again to observe the absorption and desorption that had taken place.
Smoking characteristics
C2F and C3F samples, each of 5 kg mass, were separately taken to evaluate the smoking characteristics and quality after removing greenish tobacco from the flue-cured tobacco leaves.After re-drying, the scored smoking characteristics were provided by testers employed by the re-drying enterprise.Each sample was evaluated by seven certified experts and the results were the mean of seven reports.
Field agronomic management
The Hongda and K326 cultivars were produced using high-quality and high-efficiency cultivation techniques with balanced nutrition, normal growth, and fresh leaves yellowed and matured on a layer-by-layer basis. In Aug, after cultivating for 90 to 95 days and topping for 35 to 40 days, the tobacco leaves turned pale yellow and 80 % of them were yellowed, with white and bright main veins, white branch veins, and down-rolled leaf apices and leaf margins. When the leaves were wrinkled, the 12th to the 14th mature leaves were collected and flue-cured according to experiment requirements. Other agronomic practices were conducted according to local standards for cultivating high-quality tobacco.
Statistical analyses
Data were analysed by the General Linear Model (GLM) Procedure in SAS (Statistical Analysis System, version 9.3).Replicate measurements on composite leaf samples were averaged for statistical analysis of the treatment effects.Treatment effects were declared significant when the probability (p) of a greater F statistic was ≤ 0.05.Mean separation was undertaken by Tukey's honest significant difference (HSD) test at the 95 % level of confidence.For smoking characteristics, the data shown in this study was the average of seven reports.
Effect on yield and quality of tobacco leaves
During the production of tobacco leaves, yield and quality are mainly influenced by factors such as climate, soil, cultivation, harvest maturity, and degree of yellowing in the flue-curing process. The position, shape, physical feeling, and color are the four key components of leaves which determine tobacco value in China. In this study, tobacco leaves with the same position and shape were cured and evaluated to identify the degree of yellowing that would exert a significant influence on the final yield and quality. The highest quality tobacco leaves were obtained after regular yellowing in the flue-curing process, which is conducive to increasing the flue-curing quality. High-class tobacco leaves, thus processed, exhibited maximum proportions of 78-84 % after different degrees of yellowing, respectively (Table 4). Moreover, mid- to high-class tobacco leaves, processed using regular yellowing, exhibited maximum proportions of 96-98 %, respectively, while fetching average prices of 30.50 to 34.77 yuan.
Effect on appearances of tobacco leaves
The appearance of the submitted samples was assessed against the Tobacco Industrial Classification Standard and the classifications of different samples are summarised in Table 5. CO2 indicates the highest-class level of tobacco leaves in the middle stalk position, while minor components indicate the acceptable level of tobacco leaves. The samples in the three experiment sites subjected to regular yellowing exhibited the best appearance with the highest proportion of CO2 and lowest proportion of minor components. This was most significantly reflected in tobacco leaves from Chuxiong where there was a difference in proportion (24 %) of CO2 in tobacco leaves in the two treatments.
Effect on physical resistance to further processing
The samples were analysed after pre-treatment according to the method for detecting the shatter resistance index provided by Chen et al. (2011). The proportions of tobacco leaves < 1 mm and ≥ 4 mm from the samples in the three experiment sites subjected to mild yellowing exhibited the best shatter resistance (Table 6). Of the three experiment sites, the tobacco leaves in Chuxiong showed the best physical resistance to further processing. The shatter resistance of flue-cured tobacco leaves decreased as the degree of yellowing increased.
Effect on the absorption and desorption properties
During processing, tobacco leaves with a large moisture absorption rate readily absorb moisture during storage and transportation, thus increasing the shatter resistance and decreasing the losses due to shattering. In contrast, tobacco leaves with a large moisture desorption rate readily lose moisture in the re-drying process to achieve ideal moisture conditions. Thus, the shatter resistance decreases while the losses due to shattering increase. For moisture absorption and desorption characteristics, fewer hours indicate a higher absorption rate. The moisture absorption and desorption rates of flue-cured tobacco leaves both decreased with increased degree of yellowing in the flue-curing process.
The samples of the three experiment sites subjected to mild yellowing exhibited the largest moisture absorption and desorption rates (Table 7). Moreover, the moisture absorption and desorption rates of the Hongda cultivar were both less than those of the K326 cultivar at each location. In terms of sites, the moisture absorption rate of the tobacco leaves in Chuxiong was greater than those in Dali and Yuxi, while the moisture desorption rate of the first was lower than those of the last.
Effect on smoking characteristics
The smoking characteristics of the tobacco leaves subjected to regular yellowing obtained at the three experiment sites (Figure 1, First Sampling) were superior to those using mild yellowing (Table 8), and were reflected, in the main, by indices including aroma, aroma quantity, aroma quality, and purity. In terms of test sites, the smoking characteristics of tobacco leaves from Dali and Yuxi were superior to those from Chuxiong, which was primarily reflected in three indices including irritability, offensive odor, and purity; however, the aftertaste of tobacco leaves from Chuxiong was better than those from the other two experiment sites. Various technologies such as re-drying are required in industrial processes based on industrial demands. However, according to feed-back related to industrial products, although tobacco leaves subjected to regular yellowing had a high score, they offered a thin smoke, light offensive odors, and a poor after-taste. Thus, the tobacco quality, close to that subjected to a re-drying process, cannot be as readily adjusted on the scale of industrial production. In contrast, tobacco leaves subjected to mild yellowing offered thick smoke and an intense impact, but were irritating; thus, their quality can be improved further according to industrial needs after the re-drying process.
Effect on yield and value of tobacco leaves
In flue-cured tobacco production, the flue-curing process exerts an important influence on the yield and value of tobacco leaves and directly influences the appearance of flue-cured tobacco leaves and conversion of internal chemical constituents, such as polyphenols (Roberts, 1941; Gong et al., 2009). Specifically, the degree of yellowing plays an important role in the whole curing process because the peak period of the conversion of primary chemical components in fresh tobacco leaves occurs upon yellowing, which is extremely important when forming tobacco quality (Gong et al., 1996). Different degrees of yellowing can directly influence the appearance of flue-cured tobacco leaves, their chemical composition, the coordination and contents of neutral aroma components, and production value (Qian et al., 2012; Liu et al., 2015). The appearance of the Hongda cultivar improved with increased yellowing, while in terms of chemical composition, the total nitrogen content gradually decreased, the reducing sugar content increased, and there was no significant influence on other indices (Wang et al., 2007; He et al., 2014). These results are similar to those obtained by Qian et al. (2012).
The authors of much other research believe flue-cured tobacco leaves have the greatest yield with a regular degree of yellowing, while Qian et al. (2012) suggest that different degrees of yellowing can influence the yield and value of flue-cured tobacco. Most previous research is based on the processes used in tobacco leaf production and has concentrated on yellowing standards in different ecological environments, for other cultivars, locations, and curing barns. Similarly, it is generally believed that 80 to 90 % is considered as optimal yellowing for middle tobacco leaves (Song et al., 2010); however, the re-drying process was given greater attention in the Song et al. (2010) study. The purpose of re-drying tobacco leaves is to achieve and control a uniform moisture content within a certain range so that physico-chemical properties of tobacco leaves change favorably and consistently. This can improve the quality of tobacco leaves and makes their storage easier, thus benefiting industrial production. Compared with flue-curing, re-drying provides better control and has a greater effect on various aspects of tobacco leaves including physical, physiological, biochemical, quality, safety, individuation, and specialization. In our study, regular yellowing treatment (R) resulted in a higher proportion of high quality leaves as determined by the current market grading system, and higher average prices, compared to mild yellowing treatment (M). Therefore, from the grower's perspective, regular yellowing is the proper choice. This will be true until prices of grades are adjusted to reflect the greater value of tobacco from mild yellowing at the manufacturing stage.
Effect on industrial appearance, resistance to further processing, and absorption and desorption properties
The industrial appearance of flue-cured tobacco leaves was consistent with grade quality evaluation of appearance in commercial systems, and that obtained using regular yellowing was slightly higher than that resulting from mild yellowing. However, in view of the industrial processes used in flue-cured tobacco, mild yellowing was superior to regular yellowing. The moisture absorption characteristics and resistance to further processing of flue-cured tobacco leaves directly influenced the crumbliness of the leaves during processing. Generally speaking, more than 5 % of tobacco material is lost from tobacco leaves with poor resistance to further processing during subsequent defoliation and cigarette manufacture: this causes losses and increases the cost of tobacco and directly influences the economic value of tobacco leaves (Mutasa et al., 1990). Thus, the physical resistance to further processing of tobacco leaves is the focus of enterprises involved in re-drying tobacco, and the greater the physical resistance to further processing, the less the loss of tobacco during defoliation. The data show that physical resistance to further processing, represented by shatter resistance, decreases as yellowing increases. Therefore, tobacco leaves with mild yellowing are easily processed during defoliation. With a rapid moisture absorption rate, tobacco leaves can quickly absorb moisture to enable defoliation in the re-drying process. Similarly, with a rapid moisture desorption rate, moisture in tobacco leaves can be quickly lost in the attempt to reach the required dry state. The data show that the moisture absorption and desorption rates of tobacco leaves with mild yellowing are greater than those of tobacco leaves with regular yellowing. Therefore, tobacco leaves with mild yellowing are more easily processed (i.e., defoliated and re-dried). From the perspective of appearance, it is appropriate to choose a slightly higher degree of yellowing, while mild yellowing is more conducive to physical resistance to further processing and improved suction properties.
Effect on smoking characteristics of flue-cured and re-dried tobacco leaves
Smoking characteristic scores are a core index used to evaluate the quality of tobacco leaves and the primary method of assessing their internal quality (Stedman, 1968; White et al., 1979; Weybrew et al., 1983; Tso, 1990); however, studies of the influence of the flue-curing process on tobacco quality generally focus on the smoking characteristics of leaves immediately after flue-curing (Dai et al., 2008). The curing processes include flue-curing and re-drying; thus, the quality of tobacco leaves evaluated on the basis of the smoking characteristics of re-dried leaves is closer to that of 'as-produced' cigarettes and has broader significance. This research studied the influence of different degrees of yellowing in the flue-curing process on the smoking characteristics of re-dried tobacco, in order to further optimize flue-curing processes. After being treated with different intensities in the re-drying stages, tobacco leaves with different degrees of yellowing had dissimilar smoking characteristics. In view of the processing effect, tobacco leaves with mild yellowing were more conducive to subsequent re-drying efficacy than those with the degree of yellowing currently used in flue-curing processes.
Conclusions
The following conclusions are drawn from the experiments undertaken at three sites over a two-year period. First, in view of grower income, the appearance, proportion of upper-class tobacco leaves, and proportions and average prices of middle- and upper-class tobacco leaves subjected to regular yellowing are superior to those subjected to mild yellowing, regardless of cultivar. Second, from the perspective of industrial flue-cured tobaccos, samples with mild yellowing exhibit the greatest shatter resistance among the tobacco leaves tested, with no significant difference between cultivars. Moreover, among the samples collected from the three sites, those with mild yellowing show quicker moisture absorption and desorption rates than those subjected to regular yellowing. Third, the scored smoking characteristics of tobaccos subjected to the flue-curing process with a regular degree of yellowing at the three test sites were all higher than those with mild yellowing. Despite this, the quality of flue-cured tobaccos with regular yellowing could only be adjusted slightly from an industrial point of view, and smoking scores decreased as the intensity of the re-drying stage increased. In comparison, the quality of tobacco leaves with mild yellowing could be improved through subsequent re-drying to meet industrial demand. Therefore, the results suggest that, from the re-drying perspective in the tobacco industry, the conventional degree of yellowing in the flue-curing process needs to be reduced according to the settings of the flue-curing process.
From the perspective of re-drying and cigarette processing, the authors propose an appropriate degree of yellowing for flue-curing, as driven by the reform requirement from the supply side, to improve the effectiveness of re-drying and tobacco processing. The conclusion is that it is necessary to treat both the Hongda and K326 cultivars in the tobacco-growing areas of Yunnan Province with 60 to 70 % yellowing before 42 °C in the flue-curing process. This change would be most effectively brought about if the price structure in China were adjusted to render the tobacco obtained from mild yellowing more valuable to the farmer than that from regular yellowing.
Table note: a higher CO2 proportion indicates a higher-grade classification; different letters indicate a statistically significant difference at p ≤ 0.05. For treatment, the Chuxiong, Dali, and Yuxi locations are represented by C, D, and Y, respectively. The Hongda and K326 cultivars are expressed as H and K, respectively. M and R refer to mild or regular yellowing degree, respectively.
Table note: the larger the proportion < 1 mm, the worse the shatter resistance of the samples, while the larger the proportion ≥ 4 mm, the better the shatter resistance; different letters indicate a statistically significant difference at p ≤ 0.05. For treatment, the Chuxiong, Dali, and Yuxi locations are represented by C, D, and Y, respectively. The Hongda and K326 cultivars are expressed as H and K, respectively; M and R refer to mild or regular yellowing degrees, respectively.
Table note: each leaf sample was evaluated by seven certified experts, and the data in this table are the mean of the seven reports. For treatment, the Chuxiong, Dali, and Yuxi locations are represented by C, D, and Y, respectively. The Hongda and K326 cultivars are expressed as H and K, respectively. M and R refer to mild or regular yellowing degree, respectively.
Effect of different yellowing degree on smoking characteristics following re-drying.
Table 2 - Experiment codes. For treatment code, the Chuxiong, Dali, and Yuxi locations are represented by C, D, and Y, respectively. The Hongda and K326 cultivars are expressed as H and K, respectively. M and R refer to mild or regular yellowing degree, respectively.
Table 3 - Flue-curing time required from regular yellowing to mild yellowing (unit: hours). Note: KM refers to K326 with mild yellowing treatment; KR to K326 with regular yellowing treatment; HM to Hongda with mild yellowing treatment; and HR to Hongda with regular yellowing treatment.
Table 4 - Effect of yellowing degree on tobacco leaf yield. Note: standardised by the selling price of tobacco leaves in that year; different letters indicate a statistically significant difference at p ≤ 0.05. For treatment, the Chuxiong, Dali, and Yuxi locations are represented by C, D, and Y, respectively. The Hongda and K326 cultivars are expressed as H and K, respectively. M and R refer to mild or regular yellowing degree, respectively.
Table 5 - Effect of yellowing degree on tobacco leaf appearance.
Table 6 - Effect of yellowing degree on physical resistance to further processing.
Table 7 - Effect of yellowing degree on moisture absorption and desorption properties. Note: equilibration time at 60 %-75 % RH indicates absorption, and that at 75 %-60 % RH indicates desorption; a lower number of hours indicates a higher absorption or desorption rate; different letters indicate a statistically significant difference at p ≤ 0.05. For treatment, the Chuxiong, Dali, and Yuxi locations are represented by C, D, and Y, respectively. The Hongda and K326 cultivars are expressed as H and K, respectively. M and R refer to mild or regular yellowing degree, respectively.
Table 8 - Effect of different yellowing degree on smoking characteristics.
"year": 2019,
"sha1": "eb35338452c411871b792a80441a7ad2edd71734",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/sa/v76n1/1678-992X-sa-76-01-0001.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "eb35338452c411871b792a80441a7ad2edd71734",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Posttraumatic stress disorder and depression after the 2018 Strasbourg Christmas Market terrorist attack: a comparison of exposed and non-exposed police personnel
ABSTRACT Background: Police personnel are among the first responders exposed to terrorist attacks, which have been rising in number in recent decades. Due to their profession, they are also exposed to repetitive violence, increasing their vulnerability to PTSD and depression. Objective: Our study aims to compare the prevalence of PTSD and depression, and the risk factors associated with these conditions, among police personnel directly or indirectly exposed to the Strasbourg Christmas Market terrorist attack versus non-exposed personnel. Method: Three months after the attack, participants completed a survey assessing their sociodemographic characteristics, occupational data, degree of exposure, sleep debt around the event, event centrality (CES), and three mental health conditions: PTSD (PCL-5), depression (PHQ-9), and suicide risk (yes/no questions). Results: A total of 475 police personnel responded to the questionnaire: 263 were exposed to the attack (182 of them directly) and 212 were non-exposed. Among directly exposed participants, the prevalences of partial and complete PTSD were 12.6% and 6.6%, and the prevalence of moderate-to-severe depression was 11.5%. Multivariate analysis revealed that direct exposure was associated with a higher risk of PTSD (OR = 2.98 [1.10–8.12], p = .03). Direct exposure was not associated with a higher risk of depression (OR = 0.40 [0.10–1.10], p = .08). A significant sleep debt after the event was not associated with a higher risk of later PTSD (OR = 2.18 [0.81–5.91], p = .13) but was associated with depression (OR = 7.92 [2.40–26.5], p < .001). A higher event centrality was associated with both PTSD and depression (p < .001). Conclusions: Police personnel directly exposed to the Strasbourg Christmas Market terrorist attack were at higher risk of PTSD but not depression. Efforts to prevent and treat PTSD should focus on directly exposed police personnel. However, general mental health should be monitored for every member of the personnel.
Introduction
Terrorism is the calculated use of violence to create a general climate of fear in a population (Terrorism n.d.). The Federal Bureau of Investigation (FBI) notes that the terrorist threat has evolved in recent years from large-group conspiracies to lone-offender attacks (Terrorism, 2022). The number of attacks perpetrated by isolated individuals rose in France during the late 2010s. The Strasbourg Christmas Market terrorist attack on 11 December 2018 represents one of these attacks. Armed with a gun and knives, a lone offender attacked citizens at the Strasbourg Christmas Market (Mengin et al., 2021). Because of the celebration, thousands of people were present in downtown Strasbourg during the attack. The assailant killed five people and wounded 11, but many more were directly or indirectly exposed. Numerous first responders were mobilized, including more than 1,000 police personnel. The police secured the zone, protected civilians, and tracked the assailant across the city for two days before he was neutralized. No police personnel were physically injured, and their degree of exposure ranged from direct participation in field operations to on-site officer management and direction of the police call centre.
Police officers usually face numerous potentially traumatic events during their professional lives (Carlier et al., 1997;Lee et al., 2016;Weiss et al., 2010). They belong to the specific populations described by DSM-5 (criterion A4) as being particularly prone to suffer from PTSD in their lives (Chopko et al., 2018). Previous studies have reported PTSD rates of 3.9% to 32% among police officers (Brewin et al., 2022;Maia et al., 2015;McFarlane et al., 2009;Stevelink et al., 2020;Syed et al., 2020). The prevalence of PTSD in police officers following routine work-related incidents may vary from 0% to 44% (Wagner et al., 2020). After exposure to an extreme event, incidence rates of PTSD in police personnel vary from 0.4% to 12.9% (Regehr et al., 2021). Several studies have explored the psychological consequences of police officers being exposed to terrorist attacks (Bowler et al., 2010;Brackbill et al., 2009;Cone et al., 2015;Farfel et al., 2008;Perrin et al., 2007;Pietrzak et al., 2012). The prevalence of PTSD varies from 1.3% (Gabriel et al., 2007) to 16.5% (Bowler et al., 2012). In France, very sparse data is available concerning PTSD among police officers, except for after the terrorist attacks of 13 November 2015 in Paris, which led to a prevalence of PTSD ranging from 5.5% to 9.9% (Motreff et al., 2020).
Depression is a frequent comorbidity of PTSD. As an example, among the police officers involved in the 9/11 attacks in New York City, after 5-6 years, 10% suffered from depression and 6.5% were diagnosed with both depression and PTSD, whereas only 1.4% and 0.6%, respectively, had been diagnosed with these conditions before the attacks (Bowler et al., 2012).
As police personnel are at high risk for mental health issues due to their repeated exposure to violence, it is of major importance to assess the mental health of both exposed and non-exposed police personnel after a terrorist attack. To our knowledge, no study to date has compared the mental health of exposed and non-exposed police personnel after a terrorist attack. In addition, while most studies have focused on police officers, other police personnel may also be impacted by a terrorist attack (e.g. the scientific police examining corpses and the administrative and judiciary police who are indirectly exposed). Moreover, to improve the secondary prevention of PTSD, depression, and suicide in this population, we explored the personal and occupational factors associated with a higher risk of these conditions. In police officers, some occupational factors are well known to impact long-term mental health, such as problems with colleagues or lack of support from superiors (Edgelow et al., 2022;van der Velden et al., 2010). The impacts of occupational factors on police officers' mental health after a terrorist attack remain scarcely explored.
Various peritraumatic factors may impact the emergence of PTSD after a traumatic event, such as peritraumatic dissociation and distress (Candel & Merckelbach, 2004;Vance et al., 2018). Peritraumatic sleep deprivation has also been explored as a predictive factor for posttraumatic-associated disorders (Cox et al., 2017;Swift et al., 2022). After the attack, which occurred at night, police officers might have been exposed to different sleep conditions due to their duty and due to stress (i.e. hyperadrenergic activity). We therefore explored how sleep debt was linked to later mental health problems in this population.
Finally, a traumatic event might become central to one's identity and be associated with higher levels of PTSD (Berntsen & Rubin, 2006). The principle underlying event centrality states that a highly accessible memory of a negative event might become a reference point for everyday inferences, fostering unnecessary worries, intrusions, and avoidance (i.e. PTSD symptoms). No study has explored event centrality in professional populations repeatedly exposed to violent situations.
Our primary hypothesis was that exposure to the terrorist attack would be significantly associated with PTSD and depression. Our secondary hypotheses were that specific training in terrorist attack response would be negatively associated and that a higher sleep debt after the attacks would be associated with a higher risk of PTSD. Finally, we explored how centrality interacted with both exposure and PTSD in police officers, hypothesizing that a higher centrality would be associated with higher levels of PTSD and exposure.
Procedure
This cross-sectional study was carried out between March and April 2019, three months after the Strasbourg Christmas Market terrorist attack. The police occupational physician provided information about the survey through billposting and oral communication in the police departments. Online or paper versions of the questionnaire were sent to 1,405 and 292 police personnel, respectively.
Study population
Participants were recruited from the total population of the police departments directly or indirectly involved in the handling of the attack. The inclusion criterion was employment by one of these Strasbourg police departments in December 2018. We included all police personnel (e.g. administrative, technical, and scientific staff). No exclusion criteria were set. We expected that at least half the police personnel in our sample would have been unexposed to the attack, either because they were not working on the day of the attack or because of their assignment.
Measures
All the questionnaires were written in French.
2.3.1. Sociodemographic data
Gender, age, marital status, and education level were collected. We also calculated body mass index (BMI). Four age classes were created following the quartiles, and three body mass index classes following the World Health Organization recommendations (< 25, 25-30, and ≥ 30).
Occupational information
Participants specified their police department (public security, judicial police, border police, republican security company, special intervention unit), their job (police commissioner, police inspector, police officer, police officer assistant, administration staff, technical or scientific staff), their work schedule (days only or shift work), the number of years of experience in their current position, and their previous participation in specific updated training for terrorist attacks. Because the Yellow Vest Protest (see Note 1) was intercurrent in France and was particularly engaging for police officers during this period, participants indicated whether they had participated in these events.
Degree of exposure
Exposure to the terrorist attack was assessed using 17 closed questions (see Table S1). The responses were then grouped to arrive at a secondary classification. We separated the participants into 'exposed' and 'not exposed' groups. The degree of exposure was rated as 'direct' or 'indirect' among exposed participants. Participants directly and physically involved in the operations in Strasbourg's city centre during the attack or during the neutralization of the terrorist were rated as 'directly exposed.' Participants working at the police call centre during the terrorist attack, managing teams, or conducting other tasks related to the terrorist attack, but who were not present on the scene of the attacks or during the terrorist's neutralization, were rated 'indirectly exposed.'
PTSD
We assessed the presence and severity of posttraumatic stress disorder among participants using the PTSD Checklist for DSM-5 (PCL-5) (Ashbaugh et al., 2016). All respondents were asked to refer to the Christmas Market terrorist attack as the potentially traumatic event. This scale encompasses 20 statements, each rated on a 5-point Likert scale ranging from 0 ('not at all') to 4 ('extremely'). The items evaluate four DSM-5 symptom clusters: B, re-experiencing (items 1-5); C, avoidance (items 6-7); D, negative alterations of mood and cognitions (items 8-14); and E, alterations in arousal and reactivity (items 15-20). Following the DSM-5 diagnostic rule, each item with a rating of two or higher was considered a PTSD symptom. PTSD was defined as meeting at least one B item, one C item, two D items, and two E items (Bovin et al., 2016). Partial PTSD was considered when a participant met two or three of the diagnostic criteria B, C, D, or E (McLaughlin et al., 2015). The internal consistency of the questionnaire was appropriate in our sample (Cronbach's α = 0.95).
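As an illustration, this scoring rule can be summarized in a short script (a minimal sketch; the item groupings follow the DSM-5 clusters given above, and all names are ours, not from the study's analysis code):

```python
# Hypothetical sketch of the PCL-5 classification rule described above.
CLUSTERS = {
    "B": (range(1, 6), 1),    # re-experiencing, items 1-5: >= 1 symptom required
    "C": (range(6, 8), 1),    # avoidance, items 6-7: >= 1 symptom required
    "D": (range(8, 15), 2),   # negative mood/cognitions, items 8-14: >= 2 required
    "E": (range(15, 21), 2),  # arousal/reactivity, items 15-20: >= 2 required
}

def classify_pcl5(ratings):
    """ratings: 20 integers in 0-4; an item rated >= 2 counts as a symptom."""
    met = sum(
        1 for items, needed in CLUSTERS.values()
        if sum(ratings[i - 1] >= 2 for i in items) >= needed
    )
    if met == 4:
        return "PTSD"
    return "partial PTSD" if met in (2, 3) else "no PTSD"
```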
Depressive symptoms
We used the self-administered Patient Health Questionnaire-9 (PHQ-9) to assess depression. The PHQ-9 comprises nine items rated on a 4-point Likert scale ranging from 0 ('never') to 3 ('almost daily') (Kroenke & Spitzer, 2002). The total score ranges from 0 to 27. Items relate to depressed mood, feelings of guilt, worthlessness, helplessness, slowness, loss of appetite, and sleep difficulties. The score was rated as follows: > 9, moderate-to-severe depression; 5-9, minor depression; and < 5, no depression. The internal consistency of the questionnaire was appropriate in our sample (Cronbach's α = 0.86).
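The corresponding categorization is straightforward (a sketch following the cutoffs above; the function name is ours):

```python
# Hypothetical one-step categorization of the PHQ-9 total score.
def phq9_category(total):
    """total: sum of the nine 0-3 item ratings (range 0-27)."""
    if total > 9:
        return "moderate-to-severe depression"
    return "minor depression" if total >= 5 else "no depression"
```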
2.3.6. Suicide risk
Four yes/no questions assessed suicide risk. The questions were as follows: Q1, 'During the last two weeks […] How often did you think you would be better dead, or did you consider hurting yourself in any way?'; Q2, 'Did you feel down to the point that you thought about killing yourself?'; Q3, 'Were you making plans to commit suicide?'; and Q4, 'Have you attempted suicide?' The suicide risk was considered null when no questions were answered 'yes,' low when Q1 was answered 'yes,' moderate when Q2 was answered 'yes,' and high when either Q3 or Q4 was answered 'yes.'
2.3.7. Other potential confounding variables
2.3.7.1. Sleep debt around the attacks. We retrospectively assessed sleep duration before and after the attacks (from the night before to four nights after the event). The participants rated their sleep duration for each night among four categories (more than 6 h, 4-6 h, 2-4 h, less than 2 h, or 'I do not remember'), rated 1, 2, 3, 4, and NA, respectively. The total sleep duration scores (before and after the event) were calculated on 1-to-4 and 4-to-16 rating scales, respectively. The score after the event was not calculated if the participant answered NA to all questions and was weighted if there were three or fewer missing data. Three classes of sleep debt after the event were created following the quartiles (< 7, 7 to < 9, and ≥ 9).
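For illustration, the post-event score can be written as a short function (a minimal sketch; the rescaling of partially missing nights onto the full 4-to-16 range is our reading of the weighting rule above, and the names are ours):

```python
# Hypothetical sketch of the post-event sleep-debt score.
def sleep_debt_after(nightly_ratings):
    """nightly_ratings: 4 values in {1, 2, 3, 4} or None ('I do not remember')."""
    valid = [r for r in nightly_ratings if r is not None]
    if not valid:                          # all nights missing: score not calculated
        return None
    return sum(valid) * 4 / len(valid)     # weighted onto the 4-to-16 scale

def sleep_debt_class(score):
    """Quartile-based classes used in the analysis: < 7, 7 to < 9, >= 9."""
    if score < 7:
        return "< 7"
    return "7 to < 9" if score < 9 else ">= 9"
```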
Previous exposure to traumatic events
Previous exposure to potentially traumatic events was assessed using the Life Events Checklist for DSM-5 (LEC-5). The total number of potentially traumatic events reported by the respondent was counted, whether they were direct or indirect victims. The number of events ranged from 0 to 17. Four classes were defined following the quartiles (< 6, 6 to < 9, 9 to < 12, and ≥ 12).
Antidepressant or anxiolytic and mental health care use
The participants specified their current use of anxiolytics or antidepressants and whether they had begun this use recently (after the attacks) or formerly (before the attacks). We asked whether the participants had seen a mental health professional since the attack.
Behavioural factors
Current smoking status was assessed (non-smoker, current smoker, or former smoker). Alcohol consumption was estimated as a frequency of use: never, once a month or less, 2-4 times a month (non-drinker to mild drinker), 2-3 times a week, and at least 4 times a week (regular drinker). Sports practice was assessed as a binary (yes/no) question about current sport activity.
Event centrality
The Centrality of Event Scale (CES) measures how central an event is to a person's identity and life story (Berntsen & Rubin, 2006). We provided the short 7-item version of the scale. Each item is rated on a 5-point Likert scale from 1 ('totally disagree') to 5 ('totally agree'). The total score ranges from 7 to 35. In previous studies, the weighted mean correlation between the CES and measures of PTSD was 0.51 (Gehrt et al., 2018). Three categories were defined following the quartiles (< 10, 10 to < 15, and ≥ 15). The internal consistency of the CES in our sample was appropriate (Cronbach's α = 0.93).
Ethics
The relevant local ethical review board approved the study (CE-2019-17, Comité d'Ethique pour la Recherche, Université de Strasbourg, France). Before starting the survey, participants viewed a short informative film displayed on their intranet. Each member of the police personnel was individually informed of the study's purpose by letter. Participants were volunteers; they gave their identities so that their occupational physician could contact those with mental health problems. The questionnaire was then pseudonymized for our research.
Statistical analysis
A descriptive analysis of the data was performed for the variables exposed/non-exposed, PTSD or partial PTSD/no PTSD, and depression/no depression. Differences were analyzed with the chi-squared test, or Fisher's exact test when the expected values were below 5. For the continuous variables, differences were analyzed with a t-test or Wilcoxon test according to their distribution. Multivariate logistic regression analysis was used to calculate odds ratios (ORs) and their 95% confidence intervals for PTSD and depressive symptoms. The objective of the multivariate analysis was to identify the factors linked to the occurrence of PTSD and depression and thus to identify avenues for prevention. The independent variables included in the multivariate logistic regression models were chosen according to the p-values calculated in univariate analyses. The criteria for including a variable were a p-value lower than 0.20 or its being known as a factor usually associated with PTSD. Variables were excluded when their variance inflation factor (VIF) was greater than two; we performed a multicollinearity analysis, and the VIF was always lower than two for all covariates. An ANOVA was performed to analyze the influence of PTSD and depression on the centrality of the event to the self. Post-hoc analyses were performed to explore the impact of PTSD on centrality in the subgroups of directly exposed, indirectly exposed, and non-exposed participants.
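Schematically, the variable-selection and modelling steps described above can be reproduced as follows (an illustrative sketch using statsmodels; the column names and data layout are hypothetical, and categorical covariates would need to be dummy-coded beforehand):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def fit_multivariate_model(df, outcome, candidates):
    """df: pandas DataFrame with a binary outcome column and numeric covariates."""
    # Step 1: univariate screening, keeping covariates with p < 0.20.
    kept = []
    for var in candidates:
        res = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
        if res.pvalues[var] < 0.20:
            kept.append(var)
    # Step 2: multicollinearity check, excluding covariates with VIF > 2.
    X = sm.add_constant(df[kept])
    vif = {v: variance_inflation_factor(X.values, i + 1) for i, v in enumerate(kept)}
    kept = [v for v in kept if vif[v] <= 2]
    # Step 3: multivariate logistic regression; report ORs and 95% CIs.
    model = sm.Logit(df[outcome], sm.add_constant(df[kept])).fit(disp=0)
    return np.exp(model.params), np.exp(model.conf_int())
```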
Results
Among the 1,697 police personnel who received a questionnaire, 637 completed it. The response rate was 37%. Due to missing data, especially data on PTSD and depression, 163 questionnaires could not be analyzed (see Figure 1).
Figure 1. Flow chart.
Finally, our sample included 475 police personnel (median age 41.9 years, SD = 9.1), 348 (73%) males, and 127 (27%) females ( Figure 1). Among them, 263 (55%) took part in the police operations following the terrorist attack and were considered directly (n = 182) or indirectly (n = 81) exposed. Directly, indirectly exposed, and non-exposed participants did not differ in age, educational level, BMI, work rhythm, mean sleep debt score before the event, or taking antidepressants or anxiolytics before the event. They differed in gender, jobs, years in current position, specific terrorist attack training received, mean weighted sleep debt score after the event, previous exposure to traumatic events, and Yellow Vest Protest exposure (Table 1).
Prevalence of PTSD, depression, and suicide risk
The prevalences of partial and complete PTSD in our sample were 9.1% and 3.2%, respectively. Directly exposed personnel had a significantly higher risk of PTSD than did indirectly exposed and non-exposed personnel (12.6% and 6.6% partial and complete PTSD in directly exposed; 9.9% and 1.2% in indirectly exposed; and 5.7% and 0.9% in non-exposed participants, respectively; p = .001, Cramer's V = 0.18).
The prevalence of depression in our sample was 11.8%. Direct exposure was not associated with a higher risk of moderate-to-severe depression (11.5% of moderate-to-severe depression among directly exposed police personnel, compared to 13.6% among indirectly exposed, and 11.3% among non-exposed, p = .86).
Exposure was not associated with a higher suicide risk (3.0% moderate-to-severe suicide risk among directly exposed police personnel, 4.9% among indirectly exposed, and 2.4% among non-exposed, p = .47).
Factors associated with partial or complete PTSD
In univariate analysis (Table 2), exposure degree was significantly associated with partial-to-complete PTSD (p = .001, Cramer's V = 0.18). Regarding sociodemographic characteristics, only gender (female) was associated with a higher risk of PTSD (p = .03, d = 0.20). No occupational factor was associated with PTSD. The mean number of previous traumatic events was associated with PTSD (p = .001, d = 0.45), but taking antidepressant or anxiolytic treatment before the event (p = .28) was not. Regarding peritraumatic factors, a higher sleep debt score after the events was associated with a higher risk of PTSD (p < .001), while the sleep debt score the night before the event was not (p = .62). Finally, a higher score on the event centrality scale was correlated with PTSD (p < .001, d = 1.19). Every comorbid disorder we explored was associated with PTSD: moderate or severe depression and suicide risk (p < .001). The participants with partial or complete PTSD used mental health care services significantly more than those with no PTSD (33% and 6%, respectively, p < .001). Multivariate analysis revealed that direct exposure and gender (female) were significantly associated with partial-to-complete PTSD (OR = 2.98 [1.10-8.12], p = .03 and OR = 2.24 [0.99-5.07], p = .05, respectively), while indirect exposure was not (OR = 1.52 [0.45-5.11], p = .50) (Table 3). Secondary analyses were performed to explore whether exposure, PTSD, or both were correlated with how central the event was considered to the self. The ANOVA revealed a primary significant effect of exposure (F(2,468) = 6.86, p = .001, ηp² = 0.028) such that centrality was significantly higher in directly exposed subjects than in non-exposed subjects (p < .001) and non-significantly higher than in indirectly exposed subjects (p = .089). Additionally, centrality was higher in PTSD subjects than in non-PTSD subjects (F(1,468) = 86.80, p < .001, ηp² = 0.156). This was particularly true for directly exposed subjects, as reflected by a significant interaction between PTSD and exposure (F(2,468) = 4.76, p = .009, ηp² = 0.020) (Figure 2). Post-hoc analyses revealed a significant difference in centrality in directly exposed subjects between those with and those without PTSD (p < .001). All other differences were not significant.
Table 1. Sociodemographic characteristics of exposed and non-exposed participants.
Discussion
First, our study confirmed that police personnel directly exposed to the Strasbourg Christmas Market terrorist attack had a higher risk of developing partial or complete PTSD three months later than did non-exposed police personnel. In contrast, though indirectly exposed personnel showed higher PTSD rates, they did not show a significantly higher risk of PTSD than did non-exposed personnel. PTSD was associated with every mental health condition we explored (depression, suicidal risk) and thus represents a significant mental health burden. The prevalences of partial and complete PTSD in our sample were 9.1% and 3.2%, respectively. These results are comparable to previous studies conducted shortly after terrorist attacks (1.3% PTSD in police officers exposed to the 2004 Madrid terrorist attack, 5-12 weeks after the event) or later (15.4% subsyndromal PTSD and 5.4% complete PTSD in police officers exposed to the World Trade Center attacks, starting 4 years after the event) (Gabriel et al., 2007; Pietrzak et al., 2012). In contrast, the prevalences observed in police officers after the 13 November 2015 Paris terrorist attacks were higher (23.2% and 9.5% partial and complete PTSD, respectively, one year after the event). A longitudinal study among the police officers involved in the World Trade Center attack showed an increase in probable PTSD from 7.8% at 2-3 years to 16.5% at 5-6 years after the attack, inviting us to remain vigilant long after such an event (Bowler et al., 2012).
The prevalence of complete PTSD among directly exposed police personnel (6.6%) remained lower than in civilians, which is a common finding after terrorist attacks (Paz García-Vera et al., 2016). In Strasbourg, probable PTSD among directly exposed civilians was estimated at 26.4% from 6 to 11 months after the attack. Various hypotheses can explain these differences. First, civilians in these situations are the main targets of terrorist attacks, while first responders enter the crime scene later. Secondly, first responders are engaged in specific actions as part of their missions of help and protection, which might reinforce their sense of control, limiting the feeling of powerlessness and protecting against PTSD (De Stefano et al., 2018). Thirdly, psychological selection upon entering the profession and better preparation for living through such an event may also contribute to these differences (Regehr et al., 2021). Finally, the fear of being stigmatized for psychological pathology, of weapon withdrawal, or of negative impacts on their careers may lead police personnel to underestimate their mental suffering (Perrin et al., 2007).
Regarding factors related to PTSD, young age was not significantly related to PTSD, though a trend existed (p = .08). This result could be explained by a lack of statistical power due to the low proportion of young people among the police personnel and the study participants (only 5% were 27 years old or younger). Consistent with previous studies on police personnel and the general population, gender (female) was associated with a higher risk of PTSD (Bowler et al., 2010;Shalev et al., 2019). Though significantly more women belonged to the non-exposed group, a higher risk of PTSD in women remained significant among directly exposed participants (p = .001). Interestingly, the late increase in PTSD revealed by Bowler et al. (2012) was marked in males, which may reflect a different phenotype, with an early resilient response and late-onset PTSD, in males.
Our study also explored peritraumatic sleep and showed that greater sleep debt on the nights following the attack was associated with partial or complete PTSD. Previously, the peritraumatic clinical responses investigated were levels of distress and dissociation during the event, but findings regarding their ability to predict later PTSD were mixed (Canan & North, 2019;Lensvelt-Mulders et al., 2008;van der Velden & Wittmann, 2008;Werner & Griffin, 2012). Sleep disturbances (e.g. post-traumatic nightmares) are reliably associated with PTSD and are part of the diagnosis. However, the temporal causality between sleep disturbance and PTSD remains a current research topic (Cox et al., 2017). Though univariate analyses showed that peritraumatic sleep debt was associated with a higher risk of PTSD, multivariate analyses showed no association between these variables in our population. These contrasting findings invite new research on this topic. Indeed, previous research showed that shorter sleep durations increase amygdala activation and are associated with PTSD (Geoffroy et al., 2020;Goldstein & Walker, 2014). As memory consolidation of recently encoded memories occurs during sleep, a significant sleep debt after a traumatic event might impede this process, leading to unstable and intrusive memories (Goerke et al., 2017;Rasch & Born, 2013). The retrospective design of our study produced a recall bias, especially concerning peritraumatic factors such as sleep debt. Such observations have also emerged in the literature on peritraumatic dissociation (Candel & Merckelbach, 2004). However, other researchers have agreed that it is impossible to measure peritraumatic dissociation (or sleep problems) prospectively and that the only solution is to measure it quasi-prospectively (i.e. after the event but before the emergence of PTSD). In a quasi-prospective and retrospective study, these authors found comparable effect sizes for the correlation between peritraumatic dissociation and PTSD (0.35 and 0.37, respectively) (Breh & Seidler, 2007). Future quasi-prospective studies on peritraumatic sleep debt may help clarify our results.
Also, police personnel exposed to more previous traumatic events had a significantly higher risk of PTSD. This is a well-known PTSD risk factor in the general population but remains scarcely explored in police personnel. In contrast, exposure to the recent Yellow Vest Protest was not associated with a higher risk of either PTSD or depression. The effect of cumulative traumatic events thus appears greater than that of this single recent event.
As police personnel are overexposed to traumatic events (Van Eerd et al., 2021), and PTSD is associated with depression and higher suicidal risk, measures to prevent it are necessary. Primary prevention measures exist, such as providing information about the psychosocial dangers of their job, increasing social support from colleagues and managers, and implementing organizational changes and systematic officer training (Skogstad et al., 2013). However, in our study, specific training for terrorist attack interventions was not associated with a lower risk of PTSD.
Secondary prevention measures are often centred on debriefing after traumatic incidents, though these measures have shown contrasting results for PTSD prevention (Carlier et al., 2000). Individual or targeted follow-up of exposed individuals by their occupational physicians or clinical psychologists might also be adequate to prevent PTSD and depression. Following the international recommendations, trauma-focused psychotherapy (e.g. cognitive processing therapy, prolonged exposure, EMDR) should be recommended to police personnel suffering from PTSD (Martin et al., 2021).
Though some personal and behavioural factors were associated with PTSD and/or depression in previous studies (e.g. BMI, tobacco or alcohol use, and PTSD) (Mengin et al., 2022;Pericot-Valverde et al., 2018;Suliman et al., 2016), our study revealed no association between BMI, sports activity, alcohol or tobacco consumption, and PTSD in our population. Only a low level of sports activity was associated with depression, which is already acknowledged in the literature (Pearce et al., 2022).
Finally, PTSD and depression have been consistently associated with a higher event centrality in the literature (Blix et al., 2014; Gehrt et al., 2018), though this association has scarcely been demonstrated in professionals exposed to a traumatic event. In addition, our results show that the correlation of PTSD with centrality is partially explained by the degree of exposure to the traumatic event. However, the combination of PTSD and exposure primarily accounts for the increase in event centrality; in other words, the combination of direct (highly impactful) exposure and major psychological trauma increases the likelihood that an event will become central to one's identity. Our study's cross-sectional design prevents us from drawing any conclusions about the direction of the relationship between event centrality and PTSD. Therefore, our results should be confirmed by future longitudinal research assessing the impact of both exposure and PTSD on event centrality.
Our study has several strengths. First, we included a critical number of participants. Second, we included non-exposed police personnel as a control group in our analyses. Third, we used validated scales measuring PTSD and depression severity; the latter is often omitted after a traumatic event, though it is commonly associated with PTSD. We also measured the degree of exposure, distinguishing direct and indirect exposure. Finally, our measures were collected within a short time frame (2 months), giving a precise picture of our population 3-4 months after the event.
There are also some limitations to our study. First, the low response rate (37%) may constitute a response bias, with participants with higher levels of mental health problems being more or less inclined to participate. Data on age, gender, and job were available for public security and judiciary police agents (n = 1272), allowing respondents to be compared with non-respondents. A selection bias existed, as the respondents differed from the non-respondents in mean age (42 years versus 40 years, p = .006), gender (34% female versus 24%, p < .001), and job (67% police officers versus 74%, p < .001). Males are known to be far less likely to seek mental health treatment than are women, which might explain the lower proportions of males and police officers (mostly males) among the respondents (Chatmon, 2020). Some police personnel might mistrust a questionnaire concerning their health and tend to minimize their symptoms or not participate (e.g. fearing losing their job in case of psychological distress). A healthy worker bias is unavoidable, as workers on sick leave (possibly because of PTSD or depression) could not be reached (Pearce et al., 2007). Our results might therefore underestimate police personnel's psychological suffering. On the contrary, police personnel concerned or anxious about their health might have been keen to respond to our questionnaire. Secondly, exposed and non-exposed police personnel differed in some sociodemographic characteristics.
We performed post-hoc analyses for the most frequent job (i.e. police officers, n = 330), a homogeneous subsample, to control for these factors. In this sample, our results were comparable to those obtained in our population: PTSD was more frequent among directly exposed officers (13% partial and 8% full PTSD, versus 3% and 2% in unexposed; p < .001), while depression was not (p = .48). Sleep debt and centrality were also significantly higher in directly exposed officers than in indirectly and non-exposed officers (p < .001). Thirdly, we did not consider some organizational factors in our questionnaire, such as support from colleagues or superiors, though it is a protective factor against PTSD (Skogstad et al., 2013). Social support is also a well-known protective factor against PTSD among civilians (Mengin et al., 2021). Future studies investigating PTSD in police personnel should include these factors in their analyses, distinguishing professional (e.g. colleagues, superiors) and personal (e.g. family, friends) support. Finally, our study is cross-sectional and does not allow us to conclude in terms of causality. Our study did not measure levels of PTSD before the attacks, and as directly exposed participants had faced a significantly higher number of traumatic events before the attacks than had non-exposed participants, they presented a higher vulnerability to PTSD (Shalev et al., 2019). Thus, we performed a logistic regression to assess the impact of exposure level on PTSD, including the previous number of traumatic events as a covariate. Direct exposure remained significantly associated with a higher risk of partial or complete PTSD (p = .003). In addition, consuming anxiolytics or antidepressants before the event was not associated with a higher risk of PTSD (p = .26) but was associated with depression (p < .001).
Conclusion
Though frequently exposed to potentially traumatic events in their work, police personnel directly exposed to the Strasbourg Christmas Market terrorist attack were more at risk of developing partial or complete PTSD, but not depression, than their non-exposed colleagues. Indirectly exposed police personnel did not present a significantly higher risk of PTSD or depression. Efforts to prevent and treat PTSD by occupational health services should focus on directly exposed police personnel in the short, medium, and probably long term. However, general mental health should be monitored for every member of the personnel.
Note 1. The Yellow Vest Protests are a series of populist grassroots weekly protests that began in France on 17 November 2018. Many police officers were involved in these mass demonstrations.
"year": 2023,
"sha1": "cc415bf8044270dec66997854fe6efddc8091c87",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1080/20008066.2023.2214872",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c406a1d78e29c60e37de9542f8fe2c852f31c6c8",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A potential solution to avoid overdose of mixed drugs in the event of Covid-19: Nanomedicine at the heart of the Covid-19 pandemic
Since 2020, the world has been facing the first global pandemic of the 21st century. Among the solutions proposed to treat this new strain of coronavirus, named SARS-CoV-2, vaccines seem a promising way, but their development timelines are too long for rapid implementation. In the emergency, a dual therapy has shown its effectiveness but has also provoked a set of debates around the dangerousness of one particular molecule, hydroxychloroquine. In particular, the doses delivered in some studies were well beyond the doses acceptable to support the treatment without side effects. We propose here to use the advantages of nanovectorization to address this question of concentration. Using quantum and classical simulations, we show in particular that drug transport on boron nitride oxide nanosheets increases the effectiveness of the action of these drugs. This should ultimately allow the drug quantity needed to face the disease to be decreased.
Introduction
Since the beginning of 2020, our world has been completely upended by the appearance of a novel strain of coronavirus. On March 11th, the World Health Organization (WHO) declared a pandemic due to COVID-19 (2019-nCoV). Coronaviruses (CoV) can cause several pulmonary diseases in mammals [1,2]. The first of these emerged in 2002 (called SARS-CoV) and was responsible for severe respiratory syndromes leading to a mortality rate of 10%. The second virus (MERS-CoV) appeared in 2012 with a significantly higher virulence rate (mortality of about 35%). Fortunately, these two diseases had a low contagion rate (less than 100,000 cases for SARS-CoV) or remained very localized (confined to the Arabian Peninsula for MERS-CoV), and few deaths were recorded (less than 10,000). However, these were the first warnings of the emergence of a new strain of viruses.
The new virus, originally named SARS-CoV-2 [3] due to the similarity of its genome to that of the SARS strain (about 82%) [4,5], was initially reported in Wuhan. Its remarkable mode of human-to-human transmission has led to an explosive expansion in the number of cases, which has resulted in the virus spreading around the world. As of December 17, WHO had confirmed 72,851,747 cumulative cases and 1,643,339 deaths. The mortality rate thus tends to 2.3% but is very dependent on both the health and the age of the patient. However, the significant spread of the virus in the southern hemisphere has caused a major resurgence in the following months, i.e. the famous second wave. Worse, this virus could become seasonal, raising the fear of its reappearance each year, and therefore of successive pandemic waves.
There is a huge challenge facing the whole scientific community, especially virologists, in finding a solution to this pandemic.
The structure of the SARS-CoV-2 virus consists of four structural domains. Its spherical envelope (around 100 nm in diameter) is made of a nucleoprotein surrounded by a lipid bilayer coming from the host cell. The latter contains three other proteins: the membrane protein (M), the envelope protein (E), and the spike protein (S) [6]. The spike protein is responsible for binding the virus to the host cell and for protein fusion [7,8]. When homotrimerized, it causes viral infection through two domains. The S1 domain, which contains the receptor binding domain (RBD), binds to the host cell receptor [9], also known as angiotensin converting enzyme 2 (ACE-2) [10]. The S2 domain is responsible for the fusion of protein E with the host cell [11].
In recent years, the drug development process against emerging viral infections has been accelerated and assisted by the rapid expansion of computational resources. The repurposing methodology [12] has led to the discovery of several therapeutic agents against diseases such as Ebola or the hepatitis C virus [13,14].
Recent calculations have been performed on SARS-CoV-2 [15] using clinically approved drugs such as Lopinavir–Ritonavir [16]. However, clinical trials have not been convincing and have therefore limited the use of these drugs in hospital settings. In parallel, some hospitals have used a dual therapeutic treatment combining the azithromycin (AZM) and hydroxychloroquine (HCQ) molecules. The efficiency of this association of an antibiotic and an antimalarial was found to be high when the drugs were given in the initial stage of the disease and at a normal dose. Conversely, when given at a later stage of the disease (and at higher doses), side effects (namely cardiac problems) led to an increased mortality rate.
In recent years, nanomedicine has experienced significant development since it allows drugs to be delivered to the targeted cell. Using nanovectors such as nanotubes [17–21] or nanoflakes [22,23] of various compositions, researchers can now safely transport drug molecules to their target and attack only the diseased organs. Among the great diversity of nanovectors, boron nitride oxide (BNO) nanoflakes have recently attracted particular interest due to their compatibility with the human body [24,25]. Herein, in order to address the difficult question of the medical dose of each drug in the treatment of SARS-CoV-2, we study in this paper the transport of HCQ and/or AZM by BNO towards the receptor binding domain of the virus or the closed-state structure of the viral protein.
The paper is organized as follows. After describing the numerical methods used in this paper, we study the stability of the nanovector using quantum simulations. Then, the nanovectorization of each drug is studied to demonstrate its ability to target the viral protein. If successful, this novel technology could answer the difficult question of medical doses, since the vector would improve the accessibility of the drug to its target.
Density functional theory
To simulate the adsorption of azithromycin (AZM) and hydroxychloroquine (HCQ), and to study the nature of the interactions between these molecules and boron nitride oxide (BNO) nanoflakes, a theoretical approach based on the density functional theory (DFT) method was used via the SIESTA code [26–29]. The calculations were carried out using a polarized double-ζ basis set (DZP), non-local norm-conserving pseudopotentials and, for the exchange-correlation functional in all calculations, the generalized gradient approximation (GGA) including van der Waals interactions [30–32]. A mesh cutoff of 150 Ry with a single k-point at the Γ center of the Brillouin zone was considered to calculate the total energies within a numerical precision of 1 meV. Geometry relaxation was performed by the conjugate-gradient method with a force convergence criterion of 0.02 eV/Å in the same slab volume (i.e., 4 × 4 × 4 nm³). AZM is depicted in Fig. 1. In a second step, we studied three different molecule/BNO systems shown in Fig. 2: AZM/BNO, HCQ/BNO and AZM/BNO/HCQ. In order to determine the best conformation/adsorption energy, SIESTA DFT molecular dynamics simulations were performed in vacuum. The initial configurations were generated by random placements of the molecule on BNO. Then, energy minimizations were performed on the system via a position relaxation calculation to obtain an optimized configuration of the molecule and the nanoflake. We then determined the adsorption energy (E_ads) to verify the stability of the molecule(s) adsorbed on BNO. E_ads was calculated using the following equation:

E_ads = E(Molecule(s)/BNO) − E(BNO) − E(Molecule(s))

where E(Molecule(s)/BNO) is the total energy of the molecule(s) adsorbed on BNO after energetic relaxation of its geometry in vacuum, and E(BNO) and E(Molecule(s)) are respectively the energy of the optimized nanoflake and of the molecule(s). E_ads < 0 corresponds to stable adsorption on the nanoflake. The charge distributions between the molecules and the BNO surface were analyzed using the Bader approach [38,39] and used as such in the NAMD force field.
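As a minimal numerical illustration, the adsorption energy and the Bader charge transfer reduce to simple differences of quantities obtained from separate runs (a sketch; the three total energies would come from independent SIESTA calculations, and the function names are ours, not part of any code used in this work):

```python
# Hypothetical helpers implementing the definitions given above.
def adsorption_energy(e_molecule_on_bno, e_bno, e_molecule):
    """E_ads = E(molecule/BNO) - E(BNO) - E(molecule); E_ads < 0 means stable."""
    return e_molecule_on_bno - e_bno - e_molecule

def bader_charge_transfer(q_molecule_adsorbed, q_molecule_isolated):
    """Positive values mean the molecule gains electrons upon adsorption."""
    return q_molecule_adsorbed - q_molecule_isolated
```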
Molecular dynamics simulations
The molecular dynamics (MD) simulations were performed in two different steps. First, the receptor binding domain (RBD) bound to the ACE-2 receptor of the host cell (via the 6M0J Protein Data Bank (PDB) structure, hereafter called RBD_ACE-2) was studied in order to identify the possible interaction sites between the drugs and/or the nanovector and the protein. The goal of this step was to determine whether our system could prevent this association, which is central to the propagation of the virus. The glycosylation of the protein and its steric influence on drug accessibility to the RBD_ACE-2 protein have also been studied recently in the relaxed crystal structures [40].
Then, in a second step, we studied the full SARS-CoV-2 spike glycoprotein trimer conformation obtained through the PDB structure 6VXX (hereafter called S-trimer). It has a resolution of 2.80 Å from cryo-EM (electron microscopy) measurements and corresponds to the closed state of the protein. It can be described by three intercalated chains having different domains, such as the NTD (N-terminal domain of the SARS-CoV-2 nucleoprotein) and the RBD (receptor binding domain), which belong to the S1 part of the protein. Note here that the intercalation of the three chains did not allow us to consider three independent binding processes for each simulation.
Classical MD simulations were performed by building the molecular force fields for HCQ and AZM using the SwissParam force field toolkit [41,42]. For the drugs adsorbed on the BNO system, the molecular force fields were modified by taking into account the partial charge changes for each atom belonging to the system. These new partial charges were determined by the Bader formalism from the DFT part, as mentioned above.
The proteins were studied using the molecular force field according to the CHARMM-GUI procedure, which allows proper relaxation of the protein structures [43,44]. Protein glycosylation was built from the CHARMM-GUI Glycolipid Modeler [45].
To study the interaction between the nanovector and the protein, each protein was associated with a nanovector and solvated in a water box large enough to avoid interactions between the protein and its periodic image during the simulation. We generally chose 1.5 nm of water solvent in each direction to separate the protein from the periodic box limit. This prevents interaction with the image of the system in the adjacent cell when periodic boundary conditions are applied. To complete the system, NaCl ions (at a concentration of 0.15 M) were added to the water model (transferable intermolecular potential with 3 points, TIP3P). The CHARMM36 force-field optimization parameters were used in all simulations [46]. Complete systems contained 245,630, 245,681, 245,604, 301,706, 301,877 and 301,707 atoms for the BN_AZM/RBD_ACE-2, BN_HCQ/RBD_ACE-2, AZM_BN_HCQ/RBD_ACE-2, BN_AZM/S-trimer, BN_HCQ/S-trimer and AZM_BN_HCQ/S-trimer systems, respectively.
All the simulations were performed at constant temperature and pressure: the temperature was fixed at 310 K (Langevin dynamics) and the pressure at 1 atm (Langevin piston). Long-range electrostatic forces were evaluated using the classical particle mesh Ewald (PME) method with a 1.2 Å grid spacing and fourth-order spline interpolation. The integration time step was equal to 1 fs. Each simulation used periodic boundary conditions in the three directions of space. No constraint was imposed during the production phase of the MD simulations; consequently, our results were obtained with all atoms left free in the simulation box. Note that, for each protein, we first minimized and equilibrated the structures using MD simulations (NAMD 2.12 package [47]) for 20 ns under biological conditions before studying the entire interacting system. All protein structures are depicted in Fig. 3a–b.
The possible interaction sites of the nanovector with the proteins were determined through several simulations starting from different configurations obtained by a docking procedure between the viral protein and the nanovectors. For this, we used AutoDock Vina (a fast and accurate evolution of AutoDock) as the molecular docking engine, capable of producing a large list of ligand positions within a reasonable time. The different starting configurations of the molecular dynamics simulations were chosen according to the best scoring functions obtained in the optimization algorithm. For RBD_ACE-2, at least 5 different simulations were performed for each drug. For the S-trimer system, only 2 simulations were performed due to the huge size of the system. Note that, for each system, an additional simulation in which the position of the nanovector was chosen outside the protein surface was performed in order to study the transport path to the active sites of the protein.
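This docking-to-MD selection step can be sketched as a simple filter (an illustration only; the input format, a mapping from pose identifier to Vina affinity in kcal/mol, and the -8 cutoff quoted for the BNO/RBD system in the Results are taken as assumptions):

```python
# Hypothetical selection of MD starting points from docking output.
def select_starting_poses(vina_scores, cutoff=-8.0, max_poses=5):
    """Return the best-scoring pose ids with affinity at or below `cutoff`."""
    ranked = sorted(vina_scores.items(), key=lambda kv: kv[1])  # most negative first
    return [pose for pose, score in ranked if score <= cutoff][:max_poses]

# Example: select_starting_poses({"p1": -9.9, "p2": -7.5, "p3": -8.4}) -> ["p1", "p3"]
```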
Density functional theory
All DFT simulations on AZM/BNO, HCQ/BNO and AZM/BNO/HCQ showed no desorption of hydroxyl or oxo groups from the BNO surface. All the data are summarized in Table 1. We find an adsorption energy equal to −3.88 eV between AZM and the BNO surface. The Bader charge difference for the AZM molecule between the final state (adsorbed on the nanoflake surface) and the initial state (isolated AZM) gives a total charge transfer equal to 0.08 e− to the benefit of AZM. In the case of HCQ/BNO, an adsorption energy equal to −1.89 eV is obtained. The Bader charge transfer is equal to −0.10 e−, in favor of BNO. In the case of the AZM/BNO/HCQ system, the presence of the two molecules on each side of the BNO gives −4.86 eV for the adsorption energy of AZM and HCQ on BNO. The Bader charge difference gives total charge transfers equal to 0.06 e− and 0.09 e− to the benefit of AZM and HCQ, respectively.
There is thus a reversible charge transfer between BNO and the molecules that depends on the system and is tuned by its environment.
Interaction with RBD_ACE-2
Our first investigations were dedicated to the study of the drugs and the nanovector with the RBD part of the protein bound to the receptor of the host cell (ACE-2). Due to the large size of the system, the most relevant interaction sites between the drugs, vectorized or not by the BNO, and the protein were determined by docking simulations. Seven configurations were obtained for HCQ/RBD systems with scoring functions ranging from −7.78 to −6.73. For AZM/RBD systems, nine systems exhibited stable scoring functions from −9.94 to −7.52, while five configurations were kept for the BNO/RBD system, with scoring functions lower than −8. On the basis of these different configurations, molecular dynamics simulations in full solvent were performed. For these simulations, a 1 ns equilibration phase was performed before running 10 ns of production simulation. The energy between the drug (or nanovector) and the protein was then determined to identify the most stable interaction site by molecular dynamics simulation. An additional MD simulation was also performed in which the molecules started far from the most stable interaction site; this allowed us to verify the steric accessibility of these sites. We demonstrated that the HCQ molecule could not find any durable interaction site during its diffusion to the RBD_ACE-2 complex, whereas several interaction sites had been suggested by docking. The fluctuating adsorption energies (−26 ± 14 kcal/mol) can explain this desorption. On the contrary, AZM interacts strongly with the protein and can diffuse toward it to adsorb (−39 ± 6 kcal/mol) [40]. We performed the same calculations with the BNO nanoflake alone. The results depicted in Fig. 4a, b show a very high affinity of BNO towards the RBD_ACE-2 complex, since the interaction energy rapidly decreases to −167.5 kcal/mol, which is very low compared to either drug. Note that this energy corresponds to the most stable site, since two of the other five MD simulations based on stable docking sites did not prove as favorable as predicted, leading to an average value of −135 ± 30 kcal/mol. The most strongly interacting residues are shown in the zoomed views of Fig. 4; no recurrent tendency among them could be identified from these figures.
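The averaged interaction energies reported here (e.g., −135 ± 30 kcal/mol) are simple statistics over the production trajectory; a minimal sketch, assuming the per-frame nanovector-protein interaction energies have already been extracted to a plain-text file (one value per line, in kcal/mol), is:

```python
import numpy as np

# Hypothetical file of per-frame interaction energies (kcal/mol),
# e.g., extracted from the MD engine's energy output.
energies = np.loadtxt("bno_rbd_ace2_interaction.dat")

mean, std = energies.mean(), energies.std()
print("Interaction energy: %.1f +/- %.1f kcal/mol" % (mean, std))
print("Most stable frame:  %.1f kcal/mol" % energies.min())
```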
The root mean square deviation (RMSD) of the protein did not show any significant change whatever the type of molecule, converging in each case to a value close to 3 Å (the highest value being obtained for BNO). The differences between the systems do not appear significant enough to warrant discussion. The most important difference concerns the interaction site: while AZM tends to interact with ACE-2, BNO adsorbs preferentially on the RBD surface. This could explain the behavior of the energy curves between the different entities.
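A minimal sketch of such an RMSD analysis with the MDAnalysis package (topology and trajectory file names are placeholders) would read:

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Placeholder file names for the solvated system and its trajectory.
u = mda.Universe("system.psf", "production.dcd")

# Backbone RMSD relative to the first frame, after optimal superposition.
analysis = rms.RMSD(u, select="protein and backbone")
analysis.run()

# Columns of analysis.results.rmsd: frame, time (ps), RMSD (Angstroms).
final_rmsd = analysis.results.rmsd[-1, 2]
print("RMSD of last frame: %.2f A" % final_rmsd)
```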
As the nanoflake alone can interact strongly with the protein complex, we then studied the role of the nanovectorization of the two drugs with the protein. The interaction energy valleys are shown in Fig. 4b for the vectorization of AZM (black), HCQ (red) and both AZM/HCQ (green) molecules with BNO. While HCQ alone failed to adsorb onto the complex, Fig. 4b shows that BNO can stabilize the drug on the RBD surface with a mean energy of −95 kcal/mol. The site is localized near the RBD/ACE-2 junction (Fig. 4f). Three other configurations issued from docking calculations confirm that the nanovector can bring the HCQ onto the RBD, but with a lower interaction energy (−40 ± 15 kcal/mol). The nanovectorization of AZM also increases its affinity for the protein compared to AZM alone. Indeed, the interaction energy obtained for this nanovector converges to −110 kcal/mol, far below the interaction energies of the drug alone. Five other configurations were tested, leading to energies varying from −65 to −107 kcal/mol. Note that the best interaction site is now on the ACE-2 protein. We can underline that the behavior of the two complexes is quite different: HCQ faces the protein when the nanovector is adsorbed, whereas it is the BNO surface that interacts directly with the protein during the transport of the AZM molecule. Note also that the nanovector intended to transport the two molecules at the same time did not work in this study; it led to repulsion of the drugs from the protein complex, as observed for the HCQ molecule. We extracted from all simulation runs the specific interaction energy of the vectorized drug with the protein. For HCQ, this leads to a pair interaction ranging from −20 to −70 kcal/mol, the most stable corresponding to Fig. 4f. Indeed, in this position, the BNO tends to force the molecule to adopt a specific configuration that also improves its interaction with the protein. For AZM, the results are less convincing, since the interaction energy converges to at most −6 kcal/mol. Note, however, that in one particular case (where the total energy of the vector was only −78 kcal/mol) the interaction of the vectorized AZM converged to −40 kcal/mol.
Interaction with the full protein (S-trimer)
We then studied the interaction of our different nanovectors with the closed part of the viral protein (S-trimer). Both HCQ and AZM can find, through diffusion, an interaction site with this part of the virus. The energies resulting from these adsorptions converge to the same value, close to −30 kcal/mol (Fig. 5a), as in the case of the RBD_ACE-2 complex. Other sites, simulated with MD from starting points determined by docking, did not reveal lower interaction energies. The BNO nanoflake alone can also interact with the S-trimer, since its energy, once stabilized, converges to about −100 kcal/mol. This value is still much lower than for either drug. All the entities adsorb near the RBD site of the protein; in the S-trimer, the RBD is in its closed form. The molecules can thus find a stable adsorption site in this case, as in the open state of the RBD (when attached to the ACE-2 receptor). We did not find any significant recurring residue that could explain the position of the molecules here. The RMSD of this protein complex remains the same for each molecule, tending to a value close to 3 Å (the highest value being obtained for BNO). The differences between the systems are again too small to be significant.
The interaction of the nanovector made of BNO combined with one or two drugs with the S-trimer structure was then evaluated. The interaction energy valleys depicted in Fig. 5b show that each nanovector can adsorb close to the protein with an adsorption energy that is equivalent for the three studied systems (about −65 ± 5 kcal/mol). While BNO alone adsorbed on the complex with a lower energy and the drugs alone with a higher one, Fig. 5b shows that using BNO as a nanovector allowed each drug to be better stabilized on the S-trimer surface. The pair interaction of the vectorized drug with the protein tends to −22 ± 4 kcal/mol, a value slightly higher than without BNO. Nanovectorization differs, however, in terms of adsorption velocity: when both drugs were vectorized with BNO, the adsorption site was reached faster than when only one drug was transported on the vector. The interaction sites are located in the same place for the three complexes. Of course, the latter depends on the initial positions of the molecules, which were chosen in an equivalent manner to lie close to the best docking-score site. Other interaction sites are possible on this large protein, and two more were tested based on docking simulations; we did not observe a more stable site in these two supplementary simulations. It should be noted that the search for the lowest-energy interaction site is not the main goal of these studies, since we aimed to prove that the nanovector improves the drug's affinity for the viral protein.
Discussion
Recently, MD simulations have shown that HCQ and AZM act to block the interaction of the RBD with the cell receptor, and more particularly with the gangliosides [49]. Indeed, with each drug adsorbed specifically on a different targeted site, the virus attack on the cell receptor would be efficiently blocked [50]. Moreover, this would improve the treatment of the disease, since the two drugs act synergistically on the viral protein [50,51].
Our studies have not shown any real interaction between HCQ molecules and the RBD in its open conformation. However, with the closed state of the viral protein, HCQ can find a way to adsorb on it. For AZM, the interaction energies appear equivalent whatever the simulated target. The influence of BNO, used as a nanovector, is much more interesting. Indeed, its strong interaction with either the open receptor binding domain (RBD) or the closed-state structure of the protein makes its use very promising against the SARS-CoV-2 virus. Comparing the interaction energies of the different molecules alone, whatever the protein studied, BNO exhibited an energy at least 3 to 4 times lower than that of the drugs, although it is not known to have any therapeutic effect. Moreover, owing to its hydrophobic properties, its diffusion to the target at room temperature is greatly facilitated compared to the attachment of the therapeutic agents to the protein.
When used as a nanovector, the behavior of the nanoflake is not disturbed by the presence of the drug molecules (unless both molecules are adsorbed on the two sides of the nanosheet). The drug nanovector, owing to the strong affinity of BNO for the proteins, adsorbs on the different proteins (even when the drug alone could not). For instance, HCQ, which is not attracted to the RBD_ACE-2 structure, can diffuse there when transported by the BNO nanoflake. The energies obtained in each case are comparable to (although slightly higher than) the interaction of BNO alone with the protein and those of the drugs alone. Moreover, for HCQ, the strong attraction of BNO to the protein forces the drug to penetrate deeply into the protein. This improves the affinity of HCQ for the virus, since we found no stable state when HCQ diffused alone. For AZM, most of our simulations show an improvement in drug transport but a decrease in the direct drug interaction with the protein. The nanosheet, in its most stable site, faces the residues of the protein and does not allow AZM to be in an optimized position with respect to the protein. The size of the system is probably responsible for this result, but one of our simulations nevertheless shows that a configuration can be found in which AZM is in full interaction with the protein. The use of BNO as a nanovector thus slightly decreases AZM's affinity for the virus but makes it possible to stabilize the drug on the different proteins, with a rapid transfer from the vector to the protein.
It can thus address the problem of the medical dose in dual therapy using HCQ plus AZM, since the target of each drug will be reached more easily. As a consequence, the nanovectorization of the drugs by the biocompatible BNO surface improves drug targeting, allowing dose reduction if necessary. Analysis of the different residues responsible for the adsorption of the compounds on the protein highlights that the major part of the interaction comes from electrostatic contributions.
Although this study is not exhaustive (not all adsorption states have been systematically studied, although several have been tested), we demonstrate the strong affinity of the nanovector for the viral proteins and show through these calculations that the BNO surface could govern the mode of drug vectorization. It should be noted, however, that this study would need to be extended to be fully exhaustive. Indeed, to demonstrate quantitatively that BNO can effectively and accurately transport the drug, longer flooding simulations of the protein in the presence of a higher concentration of drug (with or without BNO) would have to be performed. This is outside the scope of this first proof of concept, since such an approach would require significant computational resources.
Conclusion
In this period troubled by the pandemic, researchers are looking for an efficient treatment against the SARS-CoV-2 virus. Several treatments have been tested, but so far no definitive one has emerged. Dual therapy consisting of hydroxychloroquine plus azithromycin has been reported to be effective, but the drug doses needed to treat the disease could cause significant side effects in patients. Using combined quantum and molecular dynamics simulations, we showed that these drugs can adsorb very stably on a biocompatible boron nitride oxide nanosheet. Our calculations demonstrated that the strong affinity of the BNO surface for different parts of the viral protein, in its closed or open structures, can help the drugs adsorb onto the protein. Indeed, the nanovectorization of these drugs using BNO helps them diffuse rapidly to the viral protein and in some cases (HCQ in particular) improves the interaction of the drugs with the virus. For the future, we hope that our results will encourage the research community to pursue the use of the BNO nanovector for the delivery of therapeutic agents in the fight against coronaviruses.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"year": 2021,
"sha1": "1b483177fb11971068a6366388878d450f1972a1",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.jmgm.2021.107834",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "f3246fbd306840fae2373e2594cbf5cd46417549",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Impacts of fertilization methods on Salvia miltiorrhiza quality and characteristics of the epiphytic microbial community
Plant epiphytic microorganisms have established a unique symbiotic relationship with plants, which has a significant impact on their growth, immune defense, and environmental adaptation. However, the impact of fertilization methods on the epiphytic microbial community and its correlation with the yield and quality of medicinal plants remained unclear. In the current study, we conducted a field fertilization experiment and analyzed the composition of epiphytic bacterial and fungal communities in different organs (roots, stems, and leaves) of Salvia miltiorrhiza using high-throughput sequencing, as well as their correlation with plant growth. The results showed that fertilization significantly affected the active ingredient and hormone contents, soil physicochemical properties, and the composition of epiphytic microbial communities. After fertilization, the plant surface was enriched with a core microbial community mainly composed of bacteria from Firmicutes, Proteobacteria, and Actinobacteria, as well as fungi from Zygomycota and Ascomycota. Additionally, plant growth hormones were the principal factors leading to alterations in the epiphytic microbial community of S. miltiorrhiza. Thus, the most effective method of fertilization involved the application of base fertilizer in combination with foliar fertilizer. This study provides a new perspective for studying the correlation between microbial community function and the quality of S. miltiorrhiza, and also provides a theoretical basis for the cultivation and sustainable development of high-quality medicinal plants.
Introduction
Currently, artificial cultivation has become a necessity for the production of commercially cultivated medicinal plants, as it ensures consistent supply and quality and allows for selective breeding to optimize medicinal properties (Pang et al., 2016; Niu et al., 2021). Fertilization is an essential approach for improving quality and efficiency in modern agriculture, and effective fertilization methods are key in determining the yield and quality of medicinal plants. Over-fertilization affects crop growth, leading to excessive enrichment of soil nutrients and a decrease in organic matter content, and causes soil acidification, salinization, and imbalances in microbial diversity. These present significant challenges to sustainable cultivation and seriously hinder the healthy development of the contemporary medicinal plant industry (Ti et al., 2015). Therefore, optimizing fertilization strategies, improving fertilizer utilization efficiency, and preventing soil degradation have received great attention (Geng et al., 2019; Iqbal et al., 2020; Ren et al., 2021; Shi et al., 2021). Foliar fertilization, as a supplement to plant nutrient absorption that compensates for insufficient root nutrient uptake, offers fast nutrient absorption, strong efficacy, low consumption, high efficiency, and reduced environmental pollution, making it an effective fertilization strategy (Kentelky and Szekely-Varga, 2021). In addition, foliar fertilization also affects the abundance and diversity of plant-associated microbial species (Lin et al., 2019). Studies have demonstrated that microorganisms can synthesize organic compounds, enzymes, hormones, and other bioactive substances, playing an important role in plant growth and development. Hence, further research on the effects of microorganisms on the growth of medicinal plants under the intervention of exogenous nutrients can help improve the yield and quality of medicinal plants and promote the sustainable development of medicinal plant cultivation and production.
Studies have shown that microorganisms can enrich or colonize the surface or interior of plant tissues (Vorholt, 2012; Vandenkoornhuyse et al., 2015). Compared to endophytes, epiphytes naturally exhibit greater diversity in composition and abundance, as they are directly exposed to different ecological environments (Chen et al., 2020). The phyllosphere is typically one of the most abundant microbial habitats on Earth's surface (Vorholt, 2012). These epiphytic microorganisms maintain a symbiotic relationship with host plants and significantly affect plant growth, immune defense and adaptation to environmental conditions (Berg, 2009; Berendsen et al., 2012). Importantly, the phyllosphere is a volatile and unstable environment in which the flora is subject to multifarious stresses, engendering a discernible trend and preference for certain microbial taxa (Bringel and Coueé, 2015). Similarly, rhizospheric microorganisms are indispensable for plant ontogeny, engaging in a synergistic interaction with plant roots. These microorganisms aggregate around the roots, converting organic substrates into inorganic forms to provide vital nutrients for plants. In addition, they secrete factors that promote plant growth, including but not limited to vitamins and growth stimulants (Haney et al., 2015). Lu et al. (2018) found that rhizospheric microbes can modulate the timing of plant flowering, while recent reports have proposed the concept of a microbial-root-shoot axis, suggesting a profound interconnection between the subterranean and aerial plant components (Almario et al., 2017). Currently, research on plant microbes has primarily focused on individual niches (Beckers et al., 2017), with limited studies reporting on changes in microbial community composition across different niches, ranging from the rhizosphere to the phyllosphere (Hacquard and Schadt, 2015). Fertilizer, as the most direct nutrient input, exerts a significant impact on the epiphytic microbes in different ecological niches of plants, and the microbial community structure is also very sensitive to fertilization (Lozupone et al., 2012). Nutrients influence the community dynamics of pivotal microbial species, thereby regulating the composition of microbial communities (Schmidt et al., 2014). These key species are closely related to the cycling of carbon, nitrogen, and phosphorus within the soil, playing a vital role in enhancing crop productivity (Li et al., 2017a; Wang et al., 2022a).
Salvia miltiorrhiza Bge. (Lamiaceae), a typical Chinese medicinal herb, is widely used in the treatment of various diseases such as diabetes (Xie et al., 2021) and osteoporosis (Guo et al., 2014), with the efficacy of activating blood to eliminate stasis, relieving pain, clearing the heart, and cooling the blood (Buja and Vander Heide, 2016). Research has demonstrated that fertilization and microbial interactions play a crucial role in determining the yield and quality of S. miltiorrhiza. For instance, Wei et al. (2023) reported that fertilization can significantly alter the rhizosphere microbial community, leading to improved biomass and medicinal quality of S. miltiorrhiza. Pu et al. (2022) found that different fertilization regimes can impact mycorrhizal symbiosis, thereby positively affecting the growth and active ingredient content of S. miltiorrhiza. Despite these findings, there are still gaps in the overall understanding of how the quality of S. miltiorrhiza is associated with the structure of the epiphytic microbial community under different fertilization strategies.
To investigate the influence of fertilization methods and ecological niche variations on the composition of epiphytic bacterial and fungal communities in field-cultivated S. miltiorrhiza, and to explore the associations between epiphytes and their host plants as well as the functional capabilities of epiphytes, we conducted a field experiment with three fertilization treatments (i.e., base fertilizer, foliar fertilizer, and base fertilizer + foliar fertilizer). The abundance, diversity, and composition of epiphytic bacterial and fungal communities in different ecological niches (leaves, stems, and roots) of S. miltiorrhiza were analyzed using Illumina MiSeq high-throughput sequencing (HTS) technology. Additionally, co-occurrence networks of microorganisms were established to examine the interactions between fertilization and niche. We formulated the following hypotheses: (1) the composition of epiphytic microbial communities in S. miltiorrhiza varies depending on the fertilization method or ecological niche; (2) extensive intra-community interactions occur among epiphytes within the same niche or fertilization treatment; (3) epiphytic microorganisms affect the growth of S. miltiorrhiza and rhizosphere soil properties. These findings will lay the groundwork for revealing the ecological functions of epiphytic communities in the cultivation of medicinal plants, as well as the biodiversity and survival strategies of epiphytic communities under exogenous nutrient interventions.
Experimental design
A single-factor, four-level randomized block design was used to arrange the field experiment, with a plot area of 4 m × 4 m = 16 m². Each fertilization treatment was set up in three replicates, for a total of 4 × 3 = 12 plots, and a protection area of 350 m² was set up around the experimental plots. The plant samples used in this study were one-year-old S. miltiorrhiza seedlings. Four treatments were applied: base fertilizer (F1), foliar fertilizer (F2), base fertilizer + foliar fertilizer (F3) and a blank control (CK). The base fertilizer was a STANLEY nitrogen-phosphorus-potassium compound fertilizer from the local market, applied at a rate of 37.5 kg per acre. For foliar fertilization, STANLEY potassium dihydrogen phosphate foliar fertilizer was used, sprayed at a rate of 50-60 g per acre. Base fertilizer was applied to the soil before the seedlings were planted, and then, starting from the vigorous growth period of the S. miltiorrhiza seedlings, foliar fertilizer was sprayed three times at 15-d intervals. Three ridges, 120 cm in width and 30 cm in height, were set in each test plot, and a drainage ditch 25 cm in width was set around the furrows. Two rows of S. miltiorrhiza were planted in each ridge with a spacing of 15 cm × 30 cm.
Collection of plants and soil samples
The experiment was planted in March 2022, and plant samples were collected at the end of the growing season in November 2022. Three healthy plants were randomly selected from each plot, and approximately 10 g of sample was collected from the leaves, stems, and roots using sterile surgical blades. Root samples for rhizosphere microbial analysis were placed in a 4°C cooler and subsequently brought back to the laboratory. Rhizosphere soil samples for microbial analysis were taken from soil 10-15 cm away from the roots, and 200 g of soil was collected from each plot and placed in a sealed plastic bag. All soil samples were transported to the laboratory in insulated containers. Before the experiments, all samples were sieved (<2 mm mesh) to remove rocks, coarse roots and other litter. Soil samples used for enzyme analysis were stored in a 4°C refrigerator. Other soil subsamples were air-dried and used for the determination of soil physicochemical properties. A total of 12 leaf, stem, and root samples and 12 soil samples were collected. The samples for high-throughput sequencing were stored at −80°C for preservation.
DNA extraction, PCR and Illumina MiSeq sequencing
Five grams of plant sample (leaves, stems, or roots) were weighed into a 50 mL centrifuge tube, and 50 mL of 0.1 M potassium phosphate buffer (PPB, pH = 8.0) was added. Plant samples in tubes were washed with 1 min of sonication and 10 s of vortexing, and this was repeated. The samples were then transferred to new tubes with 50 mL of 0.1 M PPB and washed again. The suspensions from the two washes were combined and filtered through a 0.2 µm membrane. The filter membranes bearing the epiphytes were snap-frozen in liquid nitrogen and stored at −80°C for subsequent DNA extraction (Bodenhausen et al., 2013).
Genomic DNA of the epiphytic microorganisms was extracted from the filter membranes using the FastDNA® Spin Kit for Soil (MP Biomedicals, USA) according to the user's manual. DNA purity and concentration were measured with a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, USA), DNA integrity was examined using 1% agarose gel electrophoresis, and the DNA was stored at −20°C for subsequent experiments.
An ABI GeneAmp® 9700 PCR thermal cycler (ABI, USA) was used to amplify the 16S V3-V4 region (5'-GTGCCAGCMGCCGCGGTAA-3' and 806R, 5'-GGACTACHVGGGTWTCTAAT-3') of epiphytic bacteria and the ITS1 region (ITS1F/ITS2, 5'-CTTGGTCATTTAGAGGAAGTAA-3' and 5'-GCTGCGTTCTTCATCGATGC-3') of epiphytic fungi (Tamaki et al., 2011; Tepper and Gaynor, 2015). The PCR reactions were performed in a 20 µL system, including 2 µL of 10× buffer, 2 µL of 2.5 mM dNTPs, 0.8 µL of each 5 µM primer, 0.2 µL of Taq polymerase, 0.2 µL of BSA, and 10 ng of template DNA, topped up to 20 µL with ddH2O. The amplification of the bacterial 16S V3-V4 region was carried out under the following conditions: denaturation at 95°C for 3 min; 30 cycles of 95°C for 30 s, 55°C for 30 s, and 72°C for 45 s; and a final extension at 72°C for 10 min. For the fungal ITS1 region, PCR amplification was performed with the same reaction system and conditions, but with 35 cycles. Each amplification was repeated three times. The PCR products were recovered on a 2% agarose gel and further purified using the AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, USA). Subsequently, the recovered PCR products were quantified using a Quantus™ Fluorometer (Promega, USA). The purified amplicons were mixed in equal amounts, and a library was constructed using the NEXTFLEX® Rapid DNA-Seq Kit. The final sequencing was conducted on the Illumina MiSeq PE300 platform at Shanghai Majorbio Bio-pharm Technology Co., Ltd. The raw data have been deposited in the NCBI SRA database (PRJNA983712, PRJNA982701).
Bioinformatics analysis
Quality control of the raw reads was performed using fastp (version 0.19.6) and FLASH (version 1.2.11) with the following steps: (1) bases with a quality value below 20 and reads containing N bases were filtered; a 50 bp window was applied, and reads were truncated where the average quality value in the window fell below 20; reads shorter than 50 bp after quality control were discarded; (2) pairs of reads were merged according to the overlap between PE reads, with a minimum overlap length of 10 bp; (3) a maximum mismatch ratio of 0.2 was allowed in the overlap region of the merged sequences, and inconsistent sequences were removed; (4) samples were demultiplexed based on their barcodes. Quality-controlled, merged sequences were clustered into operational taxonomic units (OTUs) at 97% similarity using the UPARSE software (version 7.1). All sequences with mitochondrial or chloroplast annotations were removed. Taxonomic placement of epiphytic bacteria and fungi was annotated against the Silva 16S rRNA gene database (v138) and the UNITE database (version 8.0), respectively, using an RDP classifier (version 2.11) with a 70% confidence threshold. The community composition of each sample was analyzed at various taxonomic levels. Bacterial function prediction was based on the PICRUSt2 database (Douglas et al., 2020), and fungal function prediction was based on the FUNGuild database (Nguyen et al., 2016).
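As a toy re-implementation of the windowed quality-trimming rule in step (1) (the actual analysis used fastp), the following sketch truncates a read at the first 50 bp window whose mean quality drops below 20 and discards it if fewer than 50 bp remain:

```python
def trim_read(seq, quals, window=50, min_q=20, min_len=50):
    """Truncate at the first window whose mean quality < min_q.

    seq   -- base string; quals -- list of per-base Phred scores.
    Returns the trimmed sequence, or None if it ends up < min_len.
    """
    for start in range(0, max(1, len(seq) - window + 1)):
        win = quals[start:start + window]
        if sum(win) / len(win) < min_q:
            seq = seq[:start]
            break
    return seq if len(seq) >= min_len else None

# Toy example: a 120 bp read whose tail quality collapses.
read = "A" * 120
quals = [35] * 80 + [10] * 40
print(len(trim_read(read, quals) or ""))  # the low-quality tail is trimmed
```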
Measurement of plant biomass and morphological parameters
The weights of roots and stems were determined using an analytical balance. The lengths of the roots and shoots were measured using a tape measure. Plant water content was determined using the drying method. As described previously, the NBT photochemical reduction method was used to measure superoxide dismutase (SOD) activity (Cheng et al., 2015). Soluble protein content was determined using the Bradford assay (Kielkopf et al., 2020). Abscisic acid (ABA, ng/mL), cytokinin (CTK, ng/mL), gibberellin (GA, pmol/mL), indole-3-acetic acid (IAA, nmol/L), and nitrate reductase (NR, µg/g·h) were measured using enzyme-linked immunosorbent assay (ELISA) detection kits.
Determination of active ingredient content
The medicinal ingredients of S. miltiorrhiza were determined with reference to the 2020 edition of the Chinese Pharmacopoeia I. The chromatographic conditions for lipid-soluble components were as follows: silica gel bonded with octadecylsilane was used as the filler, acetonitrile was employed as mobile phase A, and a 0.05% phosphoric acid solution served as mobile phase B. The gradient elution program was divided into four segments: 0-6 min (61% A, 39% B), 6-20 min (A from 61% to 90%, B from 39% to 10%), 20-20.5 min (A from 90% to 61%, B from 10% to 39%) and 20.5-25 min (61% A, 39% B). The flow rate was maintained at 1 mL/min. The fat-soluble components determined were cryptotanshinone, tanshinone I, and tanshinone IIA. The chromatographic conditions for water-soluble components were as follows: using a C18 column, acetonitrile was employed as mobile phase A, and a 0.05% phosphoric acid solution served as mobile phase B. The four gradient elution segments of 0-15 min, 15-30 min, 30-40 min and 20.5-25 min ran A at 17-23%, 23-25%, 25-90%, and 90%, and B at 83-77%, 77-75%, 75-10%, and 10%, respectively. The column temperature was maintained at 30°C, with a flow rate of 1 mL/min and a detection wavelength of 286 nm. The water-soluble components determined were salvianolic acid B and rosmarinic acid.
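For clarity, the lipid-soluble gradient program can also be written as a timetable; the sketch below encodes the four segments and linearly interpolates the proportion of mobile phase A at any time point (a convenience representation only):

```python
# (start_min, end_min, %A at start, %A at end); %B = 100 - %A.
LIPID_GRADIENT = [
    (0.0,   6.0, 61, 61),
    (6.0,  20.0, 61, 90),
    (20.0, 20.5, 90, 61),
    (20.5, 25.0, 61, 61),
]

def percent_a(t, program=LIPID_GRADIENT):
    """Linearly interpolate %A (acetonitrile) at time t (minutes)."""
    for t0, t1, a0, a1 in program:
        if t0 <= t <= t1:
            return a0 + (a1 - a0) * (t - t0) / (t1 - t0) if t1 > t0 else a0
    raise ValueError("time outside gradient program")

print(percent_a(13.0))  # halfway through the ramp: 75.5 %A
```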
Determination of soil parameters
Soil pH was measured in a 1:2.5 (w/w) soil:water suspension with a pH 3000 meter (STEP Systems GmbH, Germany). Soil organic carbon (SOC) was determined using the loss-on-ignition method (Heiri et al., 2001). Soil available phosphorus (AP) was measured using the sodium bicarbonate extraction-molybdenum antimony colorimetric method (Olsen et al., 1954), with 0.5 mol/L sodium bicarbonate solution used to extract the available phosphorus, which reacts with molybdenum antimony to generate phosphomolybdenum blue. Available potassium (OP) was determined using the tetraphenylboron method (Chen et al., 2019), with 1 mol/L NaNO3 solution used to extract soil K+, which reacts with tetraphenylboron in a weakly alkaline medium to produce a barely soluble white precipitate. Ammonium nitrogen (NH4+-N), nitrate nitrogen (NO3−-N), total phosphorus (TP), and total nitrogen (TN) were measured using a Smartchem 200 analyzer (Alliance, France) (Xie et al., 2017). Sucrase (SC) activity was determined using the method of Guan (1986): sucrase in soil catalyzes the hydrolysis of sucrose into reducing sugar, which reacts with 3,5-dinitrosalicylic acid under boiling conditions to produce orange 3-amino-5-nitrosalicylic acid, and the depth of color is positively correlated with the content of reducing sugar. Nitrate reductase (NR) activity was measured using the sulfanilamide diazotization colorimetric method (Zhao et al., 2009), in which the reaction of nitrite with sulfanilamide and α-naphthylamine under acidic conditions produces a red compound. Urease (URE) activity was measured using the improved Hoffmann and Teicher colorimetric method (Kandeler and Gerber, 1988), with urea as the substrate; the indophenol generated by the reaction of the enzyme product with phenol-sodium hypochlorite is used to quantify urease activity. Alkaline phosphatase (ALP) activity was measured using the method described by Tarafdar and Marschner (1994), with p-nitrophenyl phosphate (pNPP) as the substrate. The substrate is hydrolyzed by soil phosphatase to produce yellow p-nitrophenol (pNP), and the amount of pNP produced is directly proportional to the absorbance of the yellow solution, allowing quantitative analysis. Data analysis was performed using SPSS software version 24.0 (IBM SPSS Statistics, USA).
Statistical analysis
Alpha diversity (Shannon, Chao1, goods_coverage, and pielou_e indices) and beta diversity analyses (NMDS) were computed using the QIIME2 software. Statistical analysis and plotting for Mantel tests were performed using three R packages (dplyr, linkET, and ggplot2; http://www.R-project.org). Bivariate correlations between plant growth parameters, soil physicochemical properties, and microbial communities were analyzed using SPSS 24.0 software, with Tukey's test used to compare means (P < 0.05). Variance partitioning analysis (VPA) was used to investigate the effects of growth parameters, soil properties, and plant hormones on microbial abundance in different parts of Salvia miltiorrhiza. Pheatmap and vegan were used for statistical analysis and data visualization. Network parameter analyses were conducted using six R packages (igraph, psych, Hmisc, vegan, dplyr, and reshape2). The bNTI value was calculated to assess community assembly processes using the standard null-model formulation (Equation 1):

βNTI = (βMNTD_obs − mean(βMNTD_null)) / sd(βMNTD_null) (1)

where βMNTD_obs is the observed between-community mean nearest taxon distance and βMNTD_null denotes the values obtained from the null distribution generated by randomizing taxa across the phylogeny.
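A minimal numerical sketch of Equation 1, using toy numbers rather than our phylogenetic data, is given below:

```python
import numpy as np

def beta_nti(bmntd_obs, bmntd_null):
    """betaNTI = (obs - mean(null)) / sd(null), per Equation 1.

    bmntd_obs  -- observed betaMNTD for one community pair.
    bmntd_null -- array of betaMNTD values from randomized phylogenies.
    """
    null = np.asarray(bmntd_null, dtype=float)
    return (bmntd_obs - null.mean()) / null.std()

# Toy numbers only: 999 null randomizations around a mean of 0.40.
rng = np.random.default_rng(0)
null_dist = rng.normal(0.40, 0.02, size=999)
print("betaNTI = %.2f" % beta_nti(0.33, null_dist))  # strongly negative
```

Results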
Composition of the epiphytic microbial community
The rarefaction curves of observed OTUs for S. miltiorrhiza epiphytic bacteria and fungi gradually flattened, indicating that the amount of sequencing data for epiphytic bacteria and fungi was adequate and that the OTU depth met the requirements for diversity analysis (Supplementary Figure S1). A total of 4256 OTUs of epiphytic bacteria were obtained, belonging to 361 species, 544 genera, 251 families, 146 orders, 53 classes and 27 phyla. In addition, 1702 OTUs of epiphytic fungi were identified, belonging to 299 species, 226 genera, 123 families, 56 orders, 23 classes, and 8 phyla.
At the phylum level (Supplementary Figure S2A-a), Actinobacteria and Firmicutes were the dominant groups of epiphytic bacteria on S. miltiorrhiza leaves, with relative abundances of 13.25% and 8.42%, respectively. The relative representation of Firmicutes (10.22%) and Bacteroidetes within the root system was reduced, whereas the prevalence of Proteobacteria (75.77%) was elevated. Interestingly, the abundance of Acidobacteriota decreased under both fertilization treatments (Supplementary Table S1). At the genus level (Supplementary Figure S2A-b), a decline in the prevalence of Ralstonia and Massilia was observed across both fertilization treatments. The relative abundances of Hymenobacter (8.21%, 9.26%), Rhizobium (7.53%), and Sphingomonas (8.97%) also increased. The proportion of Pseudomonas was elevated to 19.14% and 13.82% in group F2 in leaves, heightened in groups F2 and F3 in stems, rose in group F1 in roots, and was reduced in the other treatment groups. At the phylum and genus levels (Supplementary Figures S2A-c, d), the fertilization treatments had a smaller effect on the abundance of the epiphytic fungal community of S. miltiorrhiza.
After the fertilization treatments, the numbers of unique bacterial OTUs in the three parts of S. miltiorrhiza were lower than in the blank control, except for the YF1 treatment (Supplementary Figure S2B-a), and the number of bacterial OTUs was reduced after fertilization. The largest number of shared bacterial OTUs was found in the roots. The number of unique fungal OTUs in the three parts of S. miltiorrhiza increased overall, with few exceptions (Supplementary Figure S2B-d). The combination of foliar fertilizer plus base fertilizer showed the greatest increase in unique OTUs, doubling the number on average in all three sites. The leaves had the largest number of shared fungal OTUs, and the numbers of shared bacterial and fungal OTUs decreased under the combination of foliar fertilizer and base fertilizer.
Analysis of α-diversity of epiphytic communities
When evaluating the α-diversity of epiphytic bacteria in different parts of S. miltiorrhiza at the OTU level, we found that the Good's coverage index values were all close to 1, indicating that the sequencing depth was adequate. This further indicated a negligible likelihood of undetected sequences across the analyzed samples (Supplementary Table S2). The Shannon and Chao1 indices of bacteria and fungi were compared using t-tests. After fertilization, no significant change was found in the diversity of epiphytic bacteria and fungi in the three parts of S. miltiorrhiza, but there was substantial variation in their abundance (Figure 1). Different fertilization treatments had varying effects on abundance. For example, in the leaf section, the abundance of bacteria and fungi was significantly higher in groups F2 and F3 than in group CK. This pattern was also observed in the root section, while only bacteria showed a difference in the stem section. Under fertilization treatment, the diversity of epiphytic bacteria and fungi (excluding stem-associated bacteria) was altered across groups, albeit not significantly; namely, there was a reduction in bacterial diversity and an increase in fungal diversity. Excluding stem bacteria, pronounced fluctuations in microbial community proportions were evident, with the F3 treatment group exhibiting the most marked alterations, typified by diminished bacterial and heightened fungal levels.
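For reference, the two α-diversity indices compared here follow standard formulas; the sketch below applies them to a toy OTU count vector (not data from this study):

```python
import numpy as np

def shannon(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over nonzero OTUs."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def chao1(counts):
    """Chao1 = S_obs + F1^2 / (2 * F2); bias-corrected form when F2 = 0."""
    counts = np.asarray(counts)
    s_obs = (counts > 0).sum()
    f1, f2 = (counts == 1).sum(), (counts == 2).sum()
    if f2 > 0:
        return s_obs + (f1 * f1) / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0

otus = [120, 30, 8, 2, 1, 1, 0]  # toy OTU table column
print("Shannon: %.2f  Chao1: %.1f" % (shannon(otus), chao1(otus)))
```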
Comparative analysis of the similarity of epiphytic communities
The NMDS and ANOSIM tests indicated that there were no salient differences in the bacterial and fungal community structures among the different parts of S. miltiorrhiza, and the same phenomenon was observed among the treatment groups (Supplementary Table S3; Figure 2). Bacterial community similarity analyses nevertheless revealed divergent community compositions in leaves between the F2 and F3 cohorts versus groups CK and F1. Analogous differentiation patterns were noted for stem bacterial populations, whose community structures differed between groups F2 and CK. In the roots, all treatments showed divergent community structures (Figures 2A-C). For fungal communities in the leaves, the community structure of the F3 treatment group was distinct from the CK, F2 and F1 treatments, and the fungal community composition in the F2 treatment diverged from the control (CK). In the stems, the communities in the F3 treatment were distinct from those in CK, F2, and F1, and the community structure in F1 differed from CK. Overall, the community structures of epiphytic fungi and bacteria across the roots, stems and leaves differed, but these changes were not prominent.
The construction process of epiphytic communities
Based on the null-model analysis of phylogenetic diversity using bNTI, the results showed that the bacterial and fungal communities at the different positions on the surface of S. miltiorrhiza (roots, stems, and leaves) were assembled by a largely deterministic process (defined as |βNTI| > 2) (Dini-Andreote et al., 2015) (Figure 3). This selection was mainly homogeneous. In the fungal communities, all three fertilization methods shifted community assembly toward a deterministic process. Among the bacterial communities, group F1 had the greatest impact on the assembly process compared with the other two treatments, and the roots were the most sensitive. Figure 3 shows the null-model analysis of bacterial (A) and fungal (B) communities in the roots, stems, and leaves of S. miltiorrhiza based on βNTI (Y, leaf; J, stem; G, root).
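Under this framework, the sign and magnitude of βNTI map onto assembly processes; a small sketch of the conventional decision rule (thresholds as in the text) is:

```python
def assembly_process(bnti):
    """Classify community assembly from betaNTI (thresholds as in the text).

    |betaNTI| > 2 indicates deterministic selection; by convention,
    betaNTI < -2 is homogeneous selection and betaNTI > +2 is
    variable (heterogeneous) selection.
    """
    if bnti < -2:
        return "deterministic: homogeneous selection"
    if bnti > 2:
        return "deterministic: variable selection"
    return "stochastic processes dominate"

for value in (-3.5, 0.4, 2.7):
    print(value, "->", assembly_process(value))
```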
Co-occurrence network of epiphytic microorganisms
A co-occurrence network map was constructed from the measured OTUs to illustrate the general symbiosis patterns of bacteria (Figure 4) and fungi (Figure 5) under the different fertilization treatments, focusing on the keystone taxa of the epiphytic communities. In the leaf bacterial network, Bacteroidota and Hymenobacter were significantly increased in F1 compared with CK. Firmicutes increased in F2 and F3, and the major genus-level nodes in this phylum were no longer dominated by Lactobacillus and Faecalibacterium but switched to other genera (Figure 4A). Moreover, the modularity index and the number of nodes increased after fertilization, and the proportion of positive correlations decreased, which might make the community structure more stable. In the stem bacterial network, compared with the CK treatment, F1 had more nodes and edges, but the modularity index was reduced. As seen in Figure 4B, most nodes were clustered together and unevenly distributed. For groups F3 and F2, the node distribution was more uniform, meaning that even though the numbers of nodes and edges decreased, the modularity index increased, suggesting that the bacterial community on the stem may be more stable after the application of foliar fertilizer. In the root bacterial network, the community structure was more stable than in the leaves and stems, and the modularity index of all treatment groups was greater than 0.4. The positive and negative relationships among species and the node distribution were also more uniform (Figure 4C). There were no significant differences between the treatment groups, but the numbers of nodes and edges in the network decreased after fertilization.
In the leaf fungal network, compared with the CK treatment, Basidiomycota was the dominant phylum in groups F1 and F2, with Cryptococcus and Rhodotorula as the leading genera. In CK, Dioszegia, Moesziomyces and Tilletiopsis were the main genera. Ascomycota was the principal phylum in CK and F3, with Alternaria and Epicoccum as the major genera, and Mycosphaerella was added in F3 (Figure 5A). Fertilization treatment gradually reduced the modularity index of the leaf epiphytic fungi and the stability of the network. In the stem fungal network, compared with the CK treatment, the F2 treatment group was dominated by Basidiomycota, and the species at the genus level changed greatly: the F2 treatment group was mainly dominated by Vishniacozyma and Cryptococcus, whereas Tilletiopsis was the primary genus in the CK treatment. Ascomycota was the prevailing phylum in groups CK, F1, and F3. Setophaeosphaeria and Epicoccum were the predominant genera in CK; Plectosphaerella, Knufia, and Selenophoma were prevalent in F1; and Alternaria and Paraphoma were the predominant genera in F3 (Figure 5B). Importantly, fertilizer treatment also changed the composition of the fungal communities on the stems. In the root fungal network, compared with group CK, the distribution of nodes in F1 was more uniform, the numbers of nodes and edges in F2 were reduced, and the modularity indices of these two treatments were greater than 0.4. Additionally, Mucoromycota and Mortierellomycota appeared as new phyla in F3, but its modularity index was less than 0.4 (Figure 5C).
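As an illustration of how such a co-occurrence network and its modularity index can be derived from an OTU table, the sketch below uses random placeholder data and an assumed |rho| ≥ 0.6, P < 0.05 edge rule, which may differ from the exact thresholds used in this study:

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr
from networkx.algorithms import community

rng = np.random.default_rng(1)
# Toy abundance table: 12 samples x 30 OTUs sharing a latent driver,
# so that correlated pairs (and hence network edges) are guaranteed.
latent = rng.normal(size=(12, 1))
otu_table = latent * 2.0 + rng.normal(size=(12, 30))

G = nx.Graph()
n_otus = otu_table.shape[1]
for i in range(n_otus):
    for j in range(i + 1, n_otus):
        rho, p = spearmanr(otu_table[:, i], otu_table[:, j])
        if abs(rho) >= 0.6 and p < 0.05:   # edge rule (assumed thresholds)
            G.add_edge(i, j, weight=abs(rho), positive=bool(rho > 0))

parts = community.greedy_modularity_communities(G)
modularity = community.modularity(G, parts)
positive = np.mean([d["positive"] for _, _, d in G.edges(data=True)])
print("nodes=%d edges=%d modularity=%.2f positive edges=%.0f%%"
      % (G.number_of_nodes(), G.number_of_edges(), modularity, 100 * positive))
```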
Prediction of function
The PICRUSt2 database was used to predict the functions of the bacterial communities under the different fertilization treatments (Supplementary Figure S3-a). It is noteworthy that predicted protein functions were more enriched in leaves than in the other parts. For instance, the functions of glycosyltransferases involved in cell wall biosynthesis, as well as the acyl-coenzyme A dehydrogenase associated with the alkylation-response protein AidB, were significantly enhanced. These functions play a pivotal role in plant development and stress response.
The functions of epiphytic fungi under the different fertilization treatments were predicted with the FUNGuild database, and the results are plotted in Supplementary Figure S3-b. The predicted fungal functions comprised three main categories, pathogenic, symbiotic, and saprophytic, subdivided into animal pathogens, arbuscular mycorrhizal fungi, ectomycorrhizal fungi, lichenized fungi, mycoparasites, plant pathogens, undefined saprotrophs, and wood saprotrophs. Animal pathogens, plant pathogens, and undefined saprotrophs were saliently enhanced.
Plant growth parameters
The results revealed significant disparities in the growth parameters of the leaves, stems, and roots following fertilization (Figures 6A, B). In the leaves, there was a reduction in the levels of abscisic acid (ABA), superoxide dismutase (SOD) and nitrate reductase (NR), while the levels of cryptotanshinone (CTS), rosmarinic acid (RosA) and salvianolic acid B (SalB) increased. The gibberellin (GA), soluble protein (SP), and tanshinone IIA (TSN-SS) contents decreased in the stems, while the contents of tanshinone I (TI) and RosA increased. The concentrations of GA, ABA and CTS decreased in the roots, but the content of liposoluble medicinal ingredients increased. Comparison among the treatment groups revealed that the impact of group F3 was the most pronounced.
The variance partitioning analysis (Figure 6C) showed that leaf growth parameters and hormones together explained 18% of the variation in the epiphytic bacterial community but none of the fungal variation. For the stems, growth parameters and hormones accounted for 14% and 45% of the bacterial community variation, respectively, and were the main factors affecting the bacterial community; 11% of the variation in the fungal communities could be attributed to growth parameters, but the combined effects of growth parameters and hormones explained only 9% of the changes. In the roots, growth parameters and hormones were the main factors affecting the bacterial community, explaining 20% and 41% of its variation, respectively. Variation in the root fungal community was predominantly governed by growth parameters and hormonal activity, which contributed 32% and 38% of the changes, respectively.
Association analysis between epiphytic communities and growth parameters
The Mantel analysis revealed that, after fertilization, GA markedly impacted the community structure of root-surface bacteria and fungi. Medicinal components such as TSN-SS, TI, and RosA were associated with changes in the root-surface microbiota (Figure 7). Alterations in stem SOD affected the epiphytic bacteria, whereas IAA influenced the epiphytic fungi; TSN-SS and TI in the stem were closely linked to changes in the epiphytic microbiota. The growth parameters had minimal impact on the structure of the epiphytic bacterial microbiota of the leaves but notably affected the epiphytic fungi, especially GA and SP. Among the three fertilization methods, group F3 outperformed the other treatment groups: the effects of F3 on microorganisms and growth parameters were more significant than those of the other two treatments.
We conducted a correlation analysis between the growth parameters and the top ten microorganisms in relative abundance at the genus level. We found that the growth parameters markedly influenced the presence of Proteobacteria, Mucoromycota, Ascomycota and Mortierellomycota in the roots. The key genera beneficial for medicinal compounds and hormones included Sphingobium, Mortierella, and Fusarium. The growth parameters interacted with stem Bacteroidota, Actinobacteriota, Proteobacteria and Basidiomycota; among these, the core genera with positive effects included Novosphingobium and Dioszegia. The leaf microbiota included Proteobacteria, Bacteroidota, Firmicutes, Basidiomycota and Ascomycota, with core genera exerting positive influences including Blautia and Genolevuria.
Association analysis between epiphytic communities and environmental factors
Performing Mantel analysis on the soil environment and the epiphytic microbiota of S. miltiorrhiza, we discovered a substantial influence of environmental factors on the response of root-associated epiphytic bacteria following fertilization, encompassing NH4+-N, pH, OP, SC, and NO3−-N (Figure 8). The stem section exhibited no correlation, while the epiphytic bacteria on the leaf surface showed correlations with pH, OP, URE, ALP, and NH4+-N. After fertilization, environmental factors had a greater impact on epiphytic bacteria than on epiphytic fungi, and only the root and leaf sections showed a response. Similarly, among the three fertilization methods, group F3 exerted the most pronounced influence on environmental factors and microorganisms.
In examining the relationships between microbes and environmental factors, Firmicutes, Proteobacteria, Mucoromycota and Ascomycota were identified in the root section as exhibiting noteworthy correlations with environmental factors. Notably, among these, the genera Sphingomonas, Phenylobacterium, Bradyrhizobium, and Sphingobium demonstrated positive correlations with the soil factors. The core genera in the stem section did not exhibit positive correlations with environmental factors. However, in the leaf section, Proteobacteria, Bacteroidota, Ascomycota and Basidiomycota were found to be correlated, with the core genera being Buchnera, Blautia, Shigella, Bacteroides, Hymenobacter, Genolevuria, and Epicoccum.
Discussion
4.1 Effects of fertilization methods on the epiphytic communities in different niches of S. miltiorrhiza

Many studies have shown that a reasonable combined application of fertilizers can improve crop yield and quality. For example, in tea, fertilizers elevated the content of amino acids and tea polyphenols while reducing the phenol-to-ammonia ratio (Liu et al., 2023a). Similarly, cucumber yield and vitamin C content were improved with fertilizer application (Wang et al., 2023). In apples, fertilizers increased yield, soluble sugar and vitamin C content, resulting in a higher sugar-acid ratio (Wang et al., 2022b). In this study, plant growth parameters, biomass and the levels of active components all increased, indicating that fertilization can improve the quality of S. miltiorrhiza. Importantly, the effect of the F3 treatment was the most significant.
Recent investigations have shown that the structure of microbial communities is sensitive to fertilization (Lozupone et al., 2012); that is, nutrients can evidently affect the community dynamics of key microbial species and ultimately regulate the assembly of microbial communities (Schmidt et al., 2014). These keystone species are closely related to the cycling of carbon, nitrogen, and phosphorus in soil and play a considerable role in improving crop productivity (Li et al., 2017b). The results of the bNTI model analysis showed that the assembly of bacterial and fungal communities in the different parts of S. miltiorrhiza (roots, stems, leaves) was a deterministic process. As an abiotic factor, fertilization shaped the microbial community and had a significant effect on the composition of the bacterial and fungal communities of S. miltiorrhiza. Jiao and Lu (2020) analyzed soil fungal communities in farmland, forest, wetland, grassland, and desert ecosystems in the Hexi Corridor of China and found that the communities of rare taxa were mainly regulated by the deterministic process of homogeneous selection. Similarly, Xiong et al. (2021) studied fungal community structure in soil and on root and leaf surfaces under different fertilization practices in maize/wheat and maize/barley rotation field systems at different sites and found that fungal community structure was mainly shaped by deterministic processes. Although only basal fertilizer was applied in the F1 treatment, microbial community assembly in the shoot was still a deterministic process, indicating a strong connection between the belowground parts of the plant and the shoot.
Recently, the concept of a microbial-root-shoot axis has also been proposed (Almario et al., 2017). In our study, all three fertilization methods made the assembly of the fungal communities a deterministic process. Among the bacterial communities, the F1 treatment had the greatest impact on the assembly process compared with the other two treatments, and the roots were the most sensitive. The different responses of bacteria and fungi may stem from changes in community stability, as well as from the differing sensitivities of fungi and bacteria to fertilizers. The results showed that the selection process underlying the assembly of the epiphytic communities of S. miltiorrhiza was distinctly affected by fertilization. Fertilization had a filtering effect on the species of the epiphytic communities of S. miltiorrhiza, directly affecting their survival. Moreover, the selection was mainly homogeneous: similar abiotic conditions, such as fertilization, interact with the surface microorganisms of S. miltiorrhiza to produce this selection effect. Together with current studies, these results show that fertilization can evidently affect microbial community assembly, markedly impact the composition and abundance of key species (Lin et al., 2019), and drive changes in microbial community structure and ecosystem function (Fan et al., 2019; Han et al., 2022).
Effects of fertilization methods on the key species of epiphytic communities in different niches of S. miltiorrhiza
It has been demonstrated that fertilization can significantly change plant microbial community composition and diversity (Sabir et al., 2021; Guan et al., 2022; Zhang et al., 2022). An increase in nutrient availability correlates with accelerated microbial growth rates and enhanced substrate metabolism (McCann et al., 1998). Fertilizers have even been shown to affect light competition after eutrophication (Hautier et al., 2009), which leads to a reduction in plant microbial diversity. In this study, the abundance of the dominant phylum- and genus-level groups of the epiphytic microorganisms in the different ecological niches of S. miltiorrhiza decreased after fertilization, and the fungal communities changed less than the bacterial communities, which may be due to the different ways in which fungal and bacterial community structures are established. Among the treatments, the effect of F3 was significantly stronger than that of the other groups. These results indicate that fertilization greatly affected the composition and abundance of vital species in the epiphytic microbial community of S. miltiorrhiza (Lin et al., 2019). Data from the α-diversity indices, NMDS, and ANOSIM tests showed that the diversity of the bacterial and fungal flora in different parts of S. miltiorrhiza was altered, but with no meaningful difference between the groups, while the abundance of the bacterial flora was markedly different. These findings indicate that fertilization does not exert a substantial impact on the diversity of the S. miltiorrhiza epiphytic flora. The lack of this effect may be attributed to the complexity of the epiphytic environment, where multiple factors affect microbial colonization and fertilization is not the main determinant (Bringel and Coueé, 2015). Despite the absence of statistical significance, modifications in certain pivotal species were observed, warranting additional investigation into the alterations of these key species.
Microbes live in complex communities (Ley et al., 2006; Fuhrman, 2009; Faust and Raes, 2012). In addition to abiotic factors affecting microbial diversity, interactions among organisms also play an important role in the overall composition, stability, and biodiversity of microbial ecosystems (May, 1988; Wardle, 2006; Mougi and Kondoh, 2012). Microbes can compete for resources, inhibit the growth of other communities by producing antibiotics, or support each other by cross-feeding (Ives and Carpenter, 2007; Ptacnik et al., 2008; Pande et al., 2014). In general, higher biodiversity often (but not always) implies greater ecosystem stability (Hector et al., 1999). The structure and diversity of microbial communities are sensitive to external environmental factors such as fertilization and irrigation (Lozupone et al., 2012; Bhattacharyya et al., 2017; Chen et al., 2020; Dangi et al., 2020; Khan et al., 2021). That is, nutrients affect the community dynamics of key microbial species and ultimately regulate the assembly of microbial communities. These keystone species are closely related to the cycling of carbon, nitrogen, and phosphorus in soil and have a significant effect on improving crop productivity (Li et al., 2017b).
The node, edge, node type, positive-correlation, and modularity indices of the co-occurrence network all increased after the different fertilization treatments, indicating that community structure became more stable. The root and stem networks had an increased abundance of Proteobacteria, a nutrition-sensitive phylum that includes a variety of pathogenic bacteria (Dai et al., 2018). Acidobacteria species decreased at leaf sites; Acidobacteria abundance is negatively correlated with nutrient content (Fierer et al., 2007; Banerjee et al., 2016), can promote the dissolution of soil inorganic P, and is significantly correlated with soil available P (Flieder et al., 2021; Wu et al., 2021). Root Rhizobia increased; these microorganisms favor crop-microbe interactions and usually perform beneficial functions for the host by providing a variety of nutrients and metabolites (Erlacher et al., 2015; Gazdag et al., 2018; Chang et al., 2022). Ascomycota was the most abundant phylum in the fungal network, and fertilization promoted the growth of Ascomycota (Guo et al., 2020). After fertilization, Cryptococcus, Rhodotorula, Apicophorus, and Alternaria were widely distributed in the fungal network structure. The structure of group F3 was the most stable among the treatments, mainly because the combination of two kinds of fertilization provided more sufficient and comprehensive fertility across the different parts of the plant. The results showed that fertilization treatment can change the composition and structure of key communities within the bacterial and fungal community structures of S. miltiorrhiza (Feng et al., 2018). This may be due to differences in the type of fertilizer applied, which can notably influence microbial abundance and network complexity (Fan et al., 2019), thereby increasing the abundance of keystone species and radically changing the structure of the epiphytic microbial communities of S. miltiorrhiza.
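As an illustration of how such co-occurrence network indices (nodes, edges, positive correlations, modularity) can be derived, here is a hedged Python sketch using networkx on a synthetic correlation matrix; the 0.6 threshold and the data are assumptions for demonstration only, not the thresholds used in this study.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Hypothetical symmetric correlation matrix among 30 taxa.
rng = np.random.default_rng(1)
a = rng.uniform(-1, 1, size=(30, 30))
corr = (a + a.T) / 2
np.fill_diagonal(corr, 1.0)

# Co-occurrence network: connect taxa whose |correlation| exceeds a threshold.
G = nx.Graph()
G.add_nodes_from(range(30))
for i in range(30):
    for j in range(i + 1, 30):
        if abs(corr[i, j]) >= 0.6:
            G.add_edge(i, j, weight=corr[i, j], positive=bool(corr[i, j] > 0))

pos_edges = sum(d["positive"] for _, _, d in G.edges(data=True))
parts = greedy_modularity_communities(G)
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("positive edges:", int(pos_edges))
print("modularity:", round(modularity(G, parts), 3))
```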
In the Mantel correlation analysis of growth parameters and the microbial community, at every site the association between the microbial community and growth parameters was more significant in F3 than in the other treatment groups, indicating that the F3 treatment had a more noteworthy effect on the epiphytic communities of S. miltiorrhiza. The medicinal components in the roots were markedly positively correlated with Sphingobium, Mortierella, Fusarium, and Epicoccum. The medicinal components of the stem were evidently positively correlated with Novosphingobium and Dioszegia, while the medicinal components of the leaves were clearly positively correlated with Blautia and Genolevuria. The variance decomposition showed that the hormones ABA, CTK, and GA were the main factors responsible for the changes in the epiphytic communities of S. miltiorrhiza. Studies have shown that Sphingomonas can improve plant resistance to drought, salinity-alkalinity, and heavy metals, and can synthesize some plant hormones to promote plant growth (Haney et al., 2015). Fusarium can produce plant-stimulating hormones (gibberellins), which increase crop yield. These keystone species possess the capability to promote plant nutrient uptake and enhance plant tolerance to environmental stressors (Trivedi et al., 2020; Zhang et al., 2020).
The Mantel correlation analysis also showed that the influence of environmental factors on the rhizosphere bacteria of S. miltiorrhiza surpasses that on fungi, and that these factors exert a stronger impact on the roots than on other plant parts. Within the various treatment groups, the F3 treatment demonstrated a more pronounced association with environmental factors than the other treatment groups. The phyla Proteobacteria, Actinobacteria, and Bacteroidetes showed a positive correlation with NO3−-N and a negative correlation with NH4+-N (Liu et al., 2023b). In addition to fertilization, these indicators are influenced by various other factors, such as soil pH (Wang et al., 2017), root exudate metabolites (Wu et al., 2017), and microbial diversity levels (Xun et al., 2019). Spatially different biotic and abiotic environmental factors may lead to different selection pressures on bacteria and fungi (Li et al., 2017b; Saleem et al., 2018). Keystone species significantly affected the process of community construction, likely because they exhibit high connectivity in community microbial networks and are good predictors of community deviation and turnover (Herren and McMahon, 2017, 2018).
Relationship between niche epiphytic communities and quality of S. miltiorrhiza
Analysis of the epiphytic communities across niches showed that fertilization significantly affected some key species in the epiphytic communities of S. miltiorrhiza. In the bacterial co-occurrence network, Bacteroidota and Firmicutes were the dominant phyla, and the other network indices increased, indicating that the bacterial community network was more stable after fertilization. In the fungal co-occurrence network, Basidiomycota, Mucoromycota, and Mortierellomycota were the dominant phyla, while the other network indices decreased, showing that the fungal community network was less stable after fertilization. In the functional prediction, the protein-related functions of the epiphytic bacterial communities were markedly enhanced. The functions of pathogenic and saprophytic fungi were elevated, and those of commensal fungi were reduced. Mantel correlation analysis revealed higher coefficients across all plant segments under the F3 treatment than under the other treatments, denoting that F3 exerted a more pronounced impact on S. miltiorrhiza epiphytic populations. According to the variance decomposition, ABA, CTK, and GA were the main factors responsible for the changes in the epiphytic communities of S. miltiorrhiza. Studies have shown that Sphingomonas can improve the resistance of plants to drought, salinization, and heavy metals, and can synthesize some plant hormones to promote plant growth. Fusarium species can produce a plant-stimulating hormone (GA), which can improve crop yield. These key species could enhance plant nutrient absorption, boost plant disease resistance, and improve plant stress tolerance (Trivedi et al., 2020; Zhang et al., 2020). Our work found that fertilization can select dominant and stable key species in the community, which then stimulate the plant through the hormones they secrete, improving the yield and quality of S. miltiorrhiza. The effect of the F3 treatment on the epiphytic communities was significant and gave the highest quality of S. miltiorrhiza. Therefore, we determined that the best fertilization method is the combination of base fertilizer and foliar fertilizer.
Conclusion
In this study, we systematically examined the effects of fertilization on the community composition, species diversity, community assembly, and keystone species of the epiphytic bacteria and fungi of S. miltiorrhiza in different ecological niches. The results showed that fertilization significantly affected the composition and abundance of the epiphytic microbial community of S. miltiorrhiza. The influence of compound fertilization on the bacterial communities in the various niches of S. miltiorrhiza was more remarkable than on the fungi. The community construction of epiphytic bacteria and fungi in each part of the plant was dominated by deterministic processes. Within the community, Sphingobium, Mortierella, Fusarium, Epicoccum, and Novosphingobium exhibited an evident positive correlation with the medicinal components of S. miltiorrhiza. The hormones ABA, CTK, and GA were the main factors shaping the distribution differences of the epiphytic communities of S. miltiorrhiza. The combination of base fertilizer and foliar fertilizer was established as the optimal fertilization strategy.
FIGURE 4 Co-occurrence network plot analysis of the bacterial community structure in Salvia miltiorrhiza under different fertilization treatments. (A) Leaf. (B) Stem. (C) Roots. a, CK treatment group; b, F1 treatment group; c, F2 treatment group; d, F3 treatment group.
"year": 2024,
"sha1": "526b7204e75d8fbeb926319e12b77a44a8bb08fd",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/journals/plant-science/articles/10.3389/fpls.2024.1395628/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "10a27f40f2fa16810675032b8578845afde1d57c",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
Context and Barriers to the Prescription of Nonoccupational Postexposure Prophylaxis Among HIV Medical Care Providers: National Internet-Based Observational Study in China
Background: Nonoccupational postexposure prophylaxis (nPEP) is an effective HIV biomedical prevention strategy. The research and use of nPEP are mainly concentrated in the developed world, while little is known about the knowledge, attitudes, and practices of nPEP among HIV medical care providers in developing countries. Objective: We aimed to assess the nPEP knowledge and prescribing practice among HIV medical care providers in mainland China. Methods: HIV medical care providers were recruited in China during May and June 2019 through an online survey regarding nPEP-related knowledge, attitudes, and clinical prescription experiences. Multivariable logistic regression was performed to identify factors associated with prescribing nPEP among HIV medical care providers. Results: A total of 777 eligible participants participated in this study from 133 cities in 31 provinces in China. Of the participants, 60.2% (468/777) were unfamiliar with nPEP and only 53.3% (414/777) of participants had ever prescribed nPEP. HIV care providers who worked in a specialized infectious disease hospital (vs general hospital, adjusted odds ratio [aOR] 2.49; 95% CI 1.85-3.37), had practiced for 6-10 years (vs 5 or fewer years, aOR 3.28; 95% CI 2.23-4.80), had practiced for 11 years or more (vs 5 or fewer years, aOR 3.75; 95% CI 2.59-5.45), and had previously prescribed occupational PEP (oPEP, aOR 4.90; 95% CI 3.29-7.29) had a significantly positive association with prescribing nPEP. However, unfamiliarity with nPEP (aOR 0.08; 95% CI 0.05-0.11), believing nPEP may promote HIV high-risk behavior (aOR 0.53; 95% CI 0.36-0.77) or result in HIV drug resistance (aOR 0.53; 95% CI 0.36-0.77) among key populations, and self-reported having no written oPEP guideline in place (aOR 0.53; 95% CI 0.35-0.79) were negatively associated with nPEP prescription behavior. Conclusions: HIV medical care providers have insufficient nPEP knowledge and an inadequate proportion of prescribing, which may impede the scale-up of nPEP services to curb HIV acquisition. The implementation of tailored nPEP training or retraining for HIV medical care providers would improve this situation.
HIV Epidemic in Key Populations
The Joint United Nations Program on HIV/AIDS (UNAIDS) and World Health Organization (WHO) estimate that 38 million people were living with HIV in 2019, with over two-thirds concentrated in low-income developing countries [1,2]. The epidemic of HIV is concentrated in key populations [3], including men who have sex with men (MSM) [4,5]. There has been an increasing number of new HIV infections in China over the past 5 years [6], with approximately 958,000 people reported living with HIV in 2019 [7]. Data based on the HIV Sentinel Surveillance System in China showed that MSM had an HIV infection prevalence rate of 6.9% in 2018 [8].
Effectiveness of Nonoccupational Postexposure Prophylaxis
Nonoccupational postexposure prophylaxis (nPEP) is an effective and cost-effective HIV biomedical prevention strategy [9,10]. There have been no randomized controlled trials for nPEP due to ethical considerations, but a case-control study of occupational postexposure prophylaxis (oPEP) demonstrated an 81% reduction in the odds of HIV transmission [11]. nPEP guidelines have been in use by the WHO, the European AIDS Clinical Society, the United States, and Canada for years to offer guidance on nPEP uptake [12][13][14][15][16], and research on and use of nPEP in the developed world are extensive. However, nPEP services are not widely used in most developing countries with relatively severe HIV epidemics, even though some have released their own guidelines. Additional efforts are needed to promote nPEP uptake to end the AIDS epidemic by 2030.
Previous Studies and Existing Gap
HIV medical care providers play an indispensable role in nPEP uptake, especially medication prescription [17]. Previous surveys have reported on HIV care providers prescribing nPEP in developed countries [18][19][20][21][22][23], most often including factors such as practice specialty, the number of persons living with HIV in treatment, provider familiarity with nPEP, and the nPEP guideline in place [18,19,23]. As these surveys were conducted in developed countries with nPEP guidelines, it is uncertain whether the situation is similar in developing countries without nPEP guidelines. A clear understanding of obstacles encountered by providers in developing countries without nPEP guidelines will be beneficial to the scale-up of nPEP uptake and control of the HIV epidemic.
A positive attitude has emerged recently in China on the use of nPEP for HIV prevention. The Chinese Center for Disease Control and Prevention (China CDC) carried out a pilot program of nPEP among MSM in 7 provinces to promote the uptake of PEP and preexposure prophylaxis (PrEP) between 2018 and 2019 [24]. In addition, China released the Program to Reduce AIDS (2019-2020) to ensure that the HIV epidemic was controlled at a low level, which encouraged the application of nPEP programs [25]. Considering an increasing body of evidence, China released the nPEP guideline in October 2020 [26]; however, little is known about the knowledge, attitude, and practice of nPEP in HIV medical care providers in China. It is necessary to understand the nPEP perception among HIV medical care providers and barriers associated with prescribing nPEP to provide targeted interventions.
Objectives
We sought to understand nPEP perceptions and practice among HIV medical care providers and factors correlated with nPEP prescription under the current efforts of scale-up of nPEP services.
Study Design and Participant Enrollment
We conducted a nationwide online survey among HIV medical care providers during May and June 2019. After a presurvey to adjust the questionnaire items, a survey invitation was sent to 937 HIV medical care providers from two WeChat groups, "National clinicians group majors in HIV/AIDS" and "National physician platform for communicating of difficult cases in HIV/AIDS." These WeChat groups are currently the leading online WeChat-based communication platforms for HIV-related clinicians in China, with the largest number of registered HIV-related clinicians. The investigator released recruitment information via the WeChat groups, including the study aims, procedure, and requirements of the survey. Eligible participants completed an anonymous online survey by scanning the QR (quick response) code link of the online questionnaire. Inclusion criteria were being age 18 years or older, self-reported practicing in HIV-related medical institutions, having treated at least one person living with HIV over the past year, and providing online informed consent to the study content and protocol. Each individual was allowed to access the online survey once, and each internet protocol address was restricted to a single questionnaire. A 30-yuan honorarium (approximately US $4.50) was paid to each participant through WeChat accounts after completion of the 5 to 10 minute questionnaire survey. We used contact information only for releasing rewards and did not disclose it to others.
Data Collection
After providing informed consent, participants completed anonymous online questionnaires on sociodemographic characteristics (age, sex, ethnicity, and educational background), hospital types, technical titles, practice specialty, length of practice, nPEP-related knowledge, attitudes, and clinical prescription experiences (Multimedia Appendix 1). The 3 questions on nPEP-related knowledge (with possible answers yes, no, and I don't know) were as follows: Do you think China has issued national clinical guidelines on nPEP? Do you think unprotected anal intercourse (UAI) risk exceeds percutaneous occupational exposure risk? Do you think percutaneous occupational exposure risk exceeds unprotected vaginal intercourse (UVI) exposure risk?
Data on nPEP-related attitudes (with possible answers agree, neutral, and disagree) were also collected as follows: Do you agree that clinicians have enough time to prescribe nPEP? Do you agree that prescribing nPEP in clinical settings is feasible? Do you agree that prescribing nPEP will promote HIV drug resistance? Do you agree that prescribing nPEP will promote high-risk behaviors?
Additionally, we collected nPEP-related experiences, including the experience of encountering key populations seeking nPEP help and nPEP prescribing history. Before submission, participants could review all items of the questionnaire and make sure mandatory items were completed. To evaluate the impact of geographic HIV epidemic level on prescribing nPEP, we categorized regions into high, middle, and low epidemic levels according to the number of HIV/AIDS cases reported in 2017 (Multimedia Appendix 2). The top one-third of regions were classified as having a high epidemic level, while the bottom one-third were classified as having a low epidemic level. Further, to evaluate the impact of the nPEP pilot program recently conducted by China CDC, we divided the provinces into 2 categories, nPEP and non-nPEP pilot provinces. The study protocol was reviewed and approved by the institutional review board committee of the First Affiliated Hospital of China Medical University ([2019]2015-138-9). We have completed the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) for this study (Multimedia Appendix 3).
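As an illustration of the tercile classification of regions described above, here is a minimal pandas sketch; the region names and case counts are hypothetical, not the surveillance data used in the study.

```python
import pandas as pd

# Hypothetical counts of HIV/AIDS cases reported in 2017 per region.
cases = pd.Series(
    {"RegionA": 52000, "RegionB": 31000, "RegionC": 9000,
     "RegionD": 27000, "RegionE": 4500, "RegionF": 61000},
    name="cases_2017",
)

# Tercile split: top third = high epidemic level, bottom third = low.
level = pd.qcut(cases, q=3, labels=["low", "middle", "high"])
print(pd.concat([cases, level.rename("epidemic_level")], axis=1))
```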
Sample Size Calculation
We calculated the sample size based on the formula for a two-sided confidence interval for one proportion: $N = Z_{1-\alpha/2}^{2} \times P \times (1 - P) / D^{2}$. For a conservative estimate of the sample size, the proportion of nPEP prescription (P) was set to 0.5. At a 5% significance level (α) and a 5% margin of error (D), the smallest sample size was calculated as 384 observations.
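A minimal Python sketch of this computation follows; the 1.96 quantile and the rounding convention are assumptions on our part, and the paper itself reports the minimum as 384.

```python
import math

def min_sample_size(p: float = 0.5, d: float = 0.05, z: float = 1.96) -> float:
    """N = Z^2_(1-alpha/2) * P * (1 - P) / D^2, two-sided CI for one proportion."""
    return z ** 2 * p * (1 - p) / d ** 2

n = min_sample_size()             # 384.16 with the values used in the paper
print(round(n, 2), math.ceil(n))  # the paper reports the minimum as 384
```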
Data Analysis
Categorical variables were described by frequency and percentage, and continuous variables by mean and standard deviation or median and interquartile range (IQR). All core variables in the questionnaire were mandatory. For variables with a missing ratio of less than 5%, we imputed missing values with the mean for continuous variables and the mode for categorical variables during data processing. Variables with a missing ratio of more than 5% would have been deleted, but there were none in this study. For the needs of the analysis, we transformed some variables (eg, familiarity with nPEP) into binary form: yes (extremely familiar, very familiar) or no (generally familiar, not very familiar, not familiar at all). We used univariable logistic regression to calculate odds ratios (ORs) and their 95% confidence intervals for factors associated with prescribing nPEP among HIV medical care providers. Multivariable logistic regression was applied to estimate associations between predictors and nPEP prescribing history after adjustment for age, sex, ethnicity, and educational background. We used SPSS Statistics version 26.0 (IBM Corporation) for analysis. Variables with 2-tailed P<.05 were considered statistically significant.
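The analysis was run in SPSS; purely as an illustration of how the adjusted ORs and 95% CIs arise from a multivariable logistic model, here is a hedged Python/statsmodels sketch on synthetic data. All column names, levels, and values are hypothetical, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis frame; column names are illustrative, not the study's.
rng = np.random.default_rng(2)
n = 777
df = pd.DataFrame({
    "prescribed_npep": rng.integers(0, 2, n),
    "age": rng.integers(25, 65, n),
    "male": rng.integers(0, 2, n),
    "infectious_hospital": rng.integers(0, 2, n),
    "years_practice": rng.choice(["<=5", "6-10", ">=11"], n),
    "prescribed_opep": rng.integers(0, 2, n),
})

# Multivariable logistic regression; exponentiated coefficients are adjusted ORs.
model = smf.logit(
    "prescribed_npep ~ age + male + infectious_hospital "
    "+ C(years_practice, Treatment(reference='<=5')) + prescribed_opep",
    data=df,
).fit(disp=False)
or_table = pd.DataFrame({
    "aOR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(or_table.round(2))
```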
Knowledge, Experiences, and Attitudes
Overall, only 39.8% (309/777) of participants reported that they were familiar with nPEP, and just 6.8% (53/777) correctly answered all 3 nPEP knowledge-related questions (Table 1). Further, 59.3% (461/777) of participants had provided medical services to fewer than 50 persons living with HIV over the past month, 40.2% (312/777) reported that they had encountered key populations seeking nPEP prescriptions over the past 6 months, and 74.0% (575/777) reported that they had a written oPEP guideline in place.
Principal Findings and Significance
Our study showed that most HIV medical care providers in China were unfamiliar with nPEP, and only slightly more than half of the participants had previously prescribed nPEP. We also found that unfamiliarity with nPEP, self-reported lack of a written PEP-related guideline in place, and less HIV care experience were possibly important barriers to nPEP prescription among HIV medical care providers. This study addresses a gap in the research and shows the negative impact of insufficient knowledge, such as misunderstanding nPEP-related HIV drug resistance and side effects, on the scale-up of nPEP services and the consequent inadequate nPEP prescription by clinicians. It may help public health policymakers learn about HIV medical providers' perceptions of nPEP, thereby providing the opportunity to implement corresponding measures to counter nPEP-related obstacles. Our data also have great significance for further practice after the initiation of the national nPEP guideline, informing HIV medical care providers in their implementation of nPEP. Additionally, the results of this study are a reference for other countries with similar HIV contexts and insufficient uptake of nPEP services.
Comparison With Prior Work
We found that the proportion (60.2%) of HIV care providers unfamiliar with nPEP was higher than that reported in a previous study from the United States (51.5%) [19]. About 70% of participants incorrectly thought China had already issued a national clinical guideline on nPEP before this survey. This finding may indicate that a high proportion of HIV medical care providers confused the oPEP guidelines, released in 2004 [27], with nPEP guidelines, or thought the Chinese Guidelines for Diagnosis and Treatment of HIV/AIDS, updated in 2018 [28], were nPEP guidelines. A correspondingly high proportion of nPEP prescribing, however, was not found among these providers. This can be attributed to insufficient familiarity with nPEP owing to a lack of media advertisement and tailored training. Newly reported HIV cases in China still show an increasing trend [6], and there is strong acceptance of and great demand for nPEP among key populations [29]. This gap could hinder efforts to curb the spread of the HIV epidemic; therefore, intensified publicity through diverse channels and reinforced training or retraining should be offered to improve the knowledge of these providers.
In our study, the proportion of HIV care providers who had ever prescribed nPEP (53.3%) was lower than that reported in previous studies from the United States (67.1%) [23], France (58.0%) [30], and Spain (77.3%) [31], which may indicate a substantial gap between China and developed countries in the prevention of HIV spread. The gap between the proportion of HIV medical care providers prescribing nPEP and the demand of key populations for nPEP [29] implies that improving the level of nPEP prescription would likely have a remarkable effect on preventing HIV spread. Previous studies found that knowledge plays an indispensable role in PrEP prescription behavior [32,33]. Another study found that HIV-related training has a significant correlation with increased nPEP and PrEP knowledge and improved PrEP prescribing practice among HIV care providers [20], which suggests that increasing nPEP knowledge through training could likewise affect nPEP prescription. Furthermore, there are many nPEP-related challenges, including risk assessment and management of viral hepatitis, frequent transitions from nPEP to PrEP, and the management of low follow-up rates and poor medication adherence [34], which, if addressed improperly, can bring adverse effects and even harm from nPEP. These challenges will not be resolved in the near term without targeted nPEP training that integrates practical skills exercises into didactic sessions; without it, progress in HIV prevention will ultimately be delayed.
Factors Associated With nPEP Prescription
In addition, we identified independent factors positively correlated with nPEP prescription among HIV medical care providers. Compared with providers working in general hospitals, those in specialized infectious disease hospitals had a significantly higher proportion of prescribing nPEP, probably owing to greater awareness of HIV-related information. HIV-related stigma remains severe in China, however, and key populations are more inclined to visit general hospitals for HIV-related services to protect their privacy and avoid disclosure [35], which may limit access to nPEP services. Thus, for providers in general hospitals, reinforced targeted training is necessary to improve their perception and enhance their willingness to prescribe nPEP. We also found significantly higher proportions of nPEP prescription among HIV care professionals (vs non-HIV care professionals), chief physicians (vs general physicians), providers with more than 5 years of working experience (vs 5 or fewer years), and those who had provided HIV care to more than 50 persons living with HIV over the past month (vs fewer than 50). Providers with professional knowledge, high-ranking technical titles, and rich HIV care experience are usually skilled, which can attract more patients. They also have more opportunities to attend HIV-related international conferences and obtain information on nPEP from other countries. This finding suggests that nPEP-related training should also focus on young providers to enrich their nPEP knowledge and improve their practical skills, and such training could even be delivered during student medical training. Moreover, the establishment of specific support mechanisms via senior clinicians would help young clinicians overcome the obstacles to prescribing nPEP.
In contrast, we found that unfamiliarity with nPEP, incorrect beliefs that nPEP will promote HIV drug resistance or high-risk behaviors, self-reported lack of a written oPEP guideline in the working setting, and unfamiliarity with oPEP were all negatively correlated with nPEP prescription among HIV medical care providers. Although the nPEP guideline was released in October 2020 [26], further outreach efforts to clinicians in working settings are needed; otherwise, existing incorrect perceptions caused by insufficient nPEP knowledge will continue to impede the scale-up of nPEP services. The oPEP guideline was released about 15 years ago, so HIV medical care providers are more familiar with oPEP (62.4%) than nPEP (39.8%).
The associations between the practice of oPEP and nPEP, two methods targeted at different types of HIV exposure, have rarely been explored in previous studies. In our study, the level of prescribing nPEP was higher among HIV medical care providers who had previously prescribed oPEP than among those who had not. nPEP and oPEP share many features regarding the assessment of HIV exposure risk, the principles of treatment, and the types of antiviral drugs. HIV medical care providers who have mastered oPEP practice may therefore be relatively more familiar with prescribing nPEP. Training programs combining nPEP with oPEP could create a synergistic effect on both prescription behaviors of HIV care providers. Notably, despite the emphasis on simplifying prescribing practice in the updated WHO guideline for nPEP [13], regardless of HIV exposure type, different exposure types vary in the risk of acquiring HIV and in the subsequent laboratory test items [36]. Providers who confuse the standards of nPEP and oPEP practice may prescribe nPEP improperly to some individuals at low risk of HIV acquisition [37] or miss some items, such as pregnancy testing and the collection of forensic specimens [36]. This again underlines the necessity of providing targeted nPEP training or retraining based on the nPEP guideline.
Compared with North China (63.4%), we found surprisingly lower proportions of nPEP prescription in the Northwest (38.5%) and Northeast (46.3%) regions, where providers had insufficient nPEP familiarity (30.8% and 33.5%, respectively) and less HIV care practice (23.1% and 32.3%, respectively). In contrast, the HIV epidemic is highly prevalent in Xinjiang Province, located in Northwest China. Hence, more attention should be paid to these regions, especially the Northwest with its limited resources, in future national nPEP training efforts. Given the difficulty of organizing centralized training for HIV medical care providers from various regions, internet-based online training is critical for nPEP implementation. It has clear advantages for transmitting up-to-date knowledge and ideas, particularly for providers in Northwest regions with insufficient resources for nPEP implementation. Besides traditional didactic sessions, online simulation training in practical skills is also a promising method to offset regional resource gaps.
Finally, there was no significant association between a high HIV epidemic level and the nPEP prescription behavior of HIV medical care providers. This indicates that key populations in provinces with a high HIV epidemic level may miss the opportunity to obtain nPEP services even after exposure to HIV. Similarly, we did not find an effect of the nPEP pilot program on the nPEP prescription behavior of these providers, which may be explained by the relatively short implementation time and the limited number of cities involved. Therefore, it is necessary to further enhance the advertisement of nPEP at a national level to raise awareness among HIV medical care providers.
Strengths and Limitations
Our study has many strengths. First, this is a representative cross-sectional study of nPEP perception and prescribing practice among HIV medical care providers in all 31 provinces of China, whereas the sources of participants in previous studies have been limited. Second, the sample size of this study was larger than those of previous similar studies. Last, as this is the first study of nPEP perception and prescribing practice among HIV medical care providers in China, the results represent a vital reference that could contribute to overcoming the obstacles to nPEP prescription, popularizing the use of nPEP nationwide, and controlling HIV spread among key populations.
This study also has limitations. First, this study was conducted through two WeChat groups, and our results rely on self-reported data, which may introduce some sampling and reporting bias. Second, HIV care providers from Hong Kong, Macao, and Taiwan were not included in the WeChat groups, and the number of samples from western China (ie, Tibet) was insufficient; hence, the results may not well represent the characteristics of HIV medical care providers from these regions. Additionally, given the cross-sectional design, the causal relationships between prescribing nPEP and other factors are uncertain and will require further prospective studies to confirm.
Conclusions
This is the first cross-sectional survey of nPEP-related knowledge, attitudes, and prescribing experience among HIV medical care providers in a country without extensive use of nPEP services. Our results underline the insufficient nPEP knowledge and inadequate proportion of nPEP prescription among these providers. Implementing targeted nPEP training or retraining through the internet, particularly for young providers from general hospitals, should be a priority to eliminate obstacles to popularizing nPEP services and ultimately reduce HIV incidence among national key populations.
"year": 2021,
"sha1": "d5945a75f3dc76a7126593021dab8bddac1b74fd",
"oa_license": "CCBY",
"oa_url": "https://publichealth.jmir.org/2021/3/e24234/PDF",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7fab81a532db3281c263deb597436707a43ee131",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Closed Endotracheal Suctioning Impact on Ventilator-Related Parameters in Obstructive and Restrictive Respiratory Systems: A Bench Study
Featured Application: The presented paper evaluates disparities in ventilator-related parameters across patients' pulmonary mechanic characteristics (airway resistance, lung compliance) and ventilation characteristics (pressure-controlled, volume-controlled) with a closed suctioning system. These results can inform safe and effective mechanical ventilation during closed-system suctioning in clinical care. Abstract: A closed suctioning system (CSS) in patients with coronavirus disease 2019 (COVID-19) prevents spraying respiratory secretions into the environment during suction. However, it is not clear whether ventilation is maintained during the suction procedure, especially in patients with compromised pulmonary mechanics. This paper determines the effects of endotracheal tube (ETT) size, suction catheter size, and two lung mechanics (resistance and compliance) on ventilator-related parameters measured during suction. Suction was performed on an adult training lung, ventilated with either volume-controlled (VC-CMV) or pressure-controlled mandatory ventilation (PC-CMV), using ETT sizes of 6.5–8.0 mm paired with suction catheter sizes of 8–14 French (Fr). Peak inspiratory pressure (PIP) increased by 50% when the ETT's ventilation area was less than 25 mm², especially in patients with high airway resistance ventilated with VC-CMV. Positive end-expiratory pressure (PEEP) levels significantly decreased when using a 14 Fr SC during VC-CMV, with smaller effects during PC-CMV. The change in expiratory minute volume increased with larger suction catheter outer diameters and decreased with severely impaired lung compliance during PC-CMV. Changes in ventilator-related parameters should be closely monitored in patients with compromised pulmonary mechanics during closed-system endotracheal suctioning in clinical airway management.
Introduction
Closed suctioning was originally introduced for hygiene reasons and as a method of avoiding desaturation and reduction in lung volume during suctioning. In a closed suctioning system (CSS), the catheter is a part of the ventilator circuit and there is no need to disconnect the ventilator. Continuing connection to the ventilator helps prevent loss of both positive end-expiratory pressure (PEEP) and lung volume [1,2]. Thus, it may enable volume recruitment in the lung and avert a drop in oxygenation. CSS can thus reduce the risks of hypoxemia, atelectasis, and hemodynamic fluctuations [3].
During open endotracheal suctioning and disconnection of a ventilator, patients may be exposed to a sudden unintended withdrawal of PEEP, which may induce repeated lung derecruitment and hypoxia [4][5][6]. However, CSS can prevent alveolar derecruitment and maintain appropriate oxygenation through a steady functional residual capacity (FRC) [3,4]. Regional lung derecruitment after endotracheal suction has been measured by electrical impedance tomography. The results demonstrated that FRC decreased by 58 ± 24% of baseline at disconnection and 22 ± 10% further during open suctioning [7].
This study investigates an important question: whether ventilator-related parameters are maintained during a closed suctioning procedure. These procedures increase airway resistance (Raw) because insertion of a suction catheter (SC) into an endotracheal tube (ETT) narrows the effective ventilation area. During mechanical ventilation, the peak inspiratory pressure (PIP) rises to overcome the increased resistance. PIP is a predictor of pulmonary barotrauma in patients requiring mechanical ventilation [8,9]. Barotrauma is a well-recognized complication of mechanical ventilation and frequently occurs in patients with a wide range of underlying pulmonary conditions [10].
Diseases that impair pulmonary mechanics are broadly divided into obstructive and restrictive respiratory diseases. Mechanical ventilation support is often needed when pulmonary mechanics cannot function properly. Obstructive diseases can cause unstable airways that are prone to collapse, leading to difficulty in exhaling the full tidal volume before the next breath, known as dynamic hyperinflation, which occurs in conditions such as asthma and chronic obstructive pulmonary disease (COPD) [9,11]. Mechanical ventilation can alleviate this problem by reducing minute volume and extending expiratory time. In some cases, it is necessary to increase the external PEEP for ventilator synchronization [12][13][14]. However, little research has examined the use of CSS for endotracheal suctioning in patients with obstructive disease.
Furthermore, restrictive diseases impair compliance in pulmonary mechanics. Acute respiratory distress syndrome (ARDS) features severe inflammation that impairs lung compliance [15,16]. Mechanical ventilation is an essential life support for these patients; it maintains the openness of the lungs, minimizing lung injury caused by repetitive alveolar collapse and overdistention [13,17,18]. Clinical management aims to avoid a significant reduction in end-expiratory lung volume by using CSS, thereby preventing repetitive alveolar collapse and oxygen desaturation during suctioning [4]. However, guidelines are needed for clinical endotracheal suctioning practice in the ventilated critically ill, with or without ARDS [13,19,20].
Therefore, clinical trials have focused on those patients who can benefit clinically from CSS. However, the extent to which airway disease or lung tissue disease affects real-time ventilator-related parameters during the CSS suction procedure has yet to be defined. This study aimed to determine the effects of endotracheal tube size, suction catheter size, ventilation mode (VC-CMV and PC-CMV), and pulmonary mechanics (obstructive and restrictive) on ventilator-related parameters. Suction was performed on a simulated lung model under active ventilation.
Materials and Methods
The study examined the effects of a progressive decline in pulmonary mechanics on ventilator-related parameters during closed suction. Such patients can experience life-threatening events during closed suctioning with mechanical ventilation in clinical practice. For the sake of ethics and patient safety, the experiment was carried out using an adult training lung (TTL) model in a university laboratory, without any human subjects involved in the procedure. It was therefore not necessary to obtain ethical approval from an Institutional Review Board or Ethics Committee.
Preparing the Lung Model and Closed Suction System
The study adopted a TTL (model 5600i, Michigan Instruments Inc., Grand Rapids, MI, USA) fixed to an artificial plastic trachea with an inside diameter (ID) of 20 mm, which was intubated with endotracheal tubes (Mallinckrodt Taper Guard Oral/Nasal Tracheal Tube, Cuffed, Murphy Eye, Covidien). The TTL was ventilated with a Hamilton G5 ventilator (Hamilton Medical, Bonaduz, Switzerland). The study simulated the restrictive respiratory system in the TTL by adjusting lung compliance (Crs) on both sides (0.08, 0.05, 0.04, 0.03, 0.02, and 0.01 L·cm H2O−1). Increased airway resistance (Raw) (5, 10, 20, 30, and 40 cm H2O·L−1·s) simulated the obstructive respiratory system, using a restrictor placed before the artificial airway. Lung compliance was set to 0.05 L·cm H2O−1 in the simulated obstructive model, and the airway restrictor was set to 5 cm H2O·L−1·s in the simulated restrictive model.
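Although not reported as such in the paper, the expiratory time constant τ = Raw × Crs is a standard way to summarize these settings; the sketch below tabulates τ for the two simulated models as an illustration.

```python
# Expiratory time constant tau = Raw * Crs (a standard relation) for the two
# simulated models: obstructive (Crs fixed at 0.05 L/cmH2O) and restrictive
# (Raw fixed at 5 cmH2O/L/s). Near-complete exhalation takes roughly 3-5 tau.
for raw in [5, 10, 20, 30, 40]:                   # cm H2O.L-1.s
    print(f"obstructive: Raw={raw:>2} -> tau={raw * 0.05:.2f} s")
for crs in [0.08, 0.05, 0.04, 0.03, 0.02, 0.01]:  # L.cm H2O-1
    print(f"restrictive: Crs={crs:.2f} -> tau={5 * crs:.2f} s")
```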
To mimic clinical patient intubation, 6.5 to 8.0 mm endotracheal tubes (ETTs), paired with CSS catheters (Unimax Medical Systems, Inc., New Taipei City, Taiwan) of 8 Fr to 14 Fr, were employed during closed suctioning. Thus, the experiment employed four ETTs of various IDs, paired with four CSS catheters of various outer diameters (ODs): a 6.5 mm ETT paired with 8 Fr and 10 Fr SCs; a 7.0 mm ETT paired with 10 Fr and 12 Fr SCs; a 7.5 mm ETT paired with 10 Fr and 12 Fr SCs; and an 8.0 mm ETT paired with 12 Fr and 14 Fr SCs. The ratios of SC (OD) to ETT (ID) ranged from 41% to 58%. The Hamilton G5 ventilated at a frequency of 15 breaths/minute with a PEEP of 10 cm H2O [7]. In VC-CMV mode, tidal volume was set at 0.6 L with a constant flow of 60 L/min and a pause time of 0.5 s. Inspiratory pressure was set at 20 cm H2O with an inspiratory time of 1.0 s for PC-CMV, see Figure 1.
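As a quick check of these pairings, a minimal Python sketch (assuming the common conversion of 1 French = 1/3 mm outer diameter) computes the residual ventilation area and the SC OD/ETT ID percentage for each ETT-SC pair; the ratios reproduce the 41% to 58% range stated above.

```python
import math

FR_TO_MM = 1 / 3  # assumed conversion: 1 French = 1/3 mm outer diameter

def ventilation_area(ett_id_mm: float, sc_fr: int) -> float:
    """Residual cross-sectional area (mm^2): ETT lumen minus SC occlusion."""
    sc_od_mm = sc_fr * FR_TO_MM
    return math.pi / 4 * (ett_id_mm ** 2 - sc_od_mm ** 2)

pairs = [(6.5, 8), (6.5, 10), (7.0, 10), (7.0, 12),
         (7.5, 10), (7.5, 12), (8.0, 12), (8.0, 14)]
for ett, sc in pairs:
    ratio = sc * FR_TO_MM / ett * 100  # SC OD / ETT ID; 41%-58% in this study
    print(f"ETT {ett} mm - {sc:>2} Fr: VA = {ventilation_area(ett, sc):5.1f} mm^2, "
          f"OD/ID = {ratio:.0f}%")
```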
Experimental Protocol
Before the experiment, the flow sensor of the ventilator was calibrated and tested. The endotracheal tube cuff was inflated to 50 cm H2O to avoid leaks during ventilation. The CSS was a tight-fitting three-way device connecting the ventilator to the ETT and a suctioning apparatus (Pacific Hospital Supply Co., Ltd., Taipei, Taiwan) connected to a pressure gauge. The suction pressure was applied at a vacuum level of −150 ± 10 mmHg.
The ventilator circuits were set up as standard adult double circuits without a humidifier to prevent any condensation effect. The circuits' compression factor was 2.1 mL/cm H2O at a lung compliance of 0.05 L·cm H2O−1. The CSS had a manually operated suction flow switch and a plastic sheath around the catheter. During the suction process, the suction catheter was inserted to a point 2 cm below the endotracheal tube tip [21] before being withdrawn back into the plastic sheath.
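If the 2.1 mL/cm H2O figure is read as the circuit's effective compliance (an interpretation on our part), the gas volume compressed in the circuit each breath can be approximated with the standard relation Vcomp ≈ Ccircuit × (PIP − PEEP); the PIP value below is illustrative, not a measured value.

```python
# Standard approximation: gas compressed in the circuit scales with the
# pressure swing above PEEP and the circuit's effective compliance.
c_circuit = 2.1    # mL/cm H2O, as reported for these circuits
pip_cmh2o = 22.0   # illustrative PIP near the bench settings (assumed)
peep_cmh2o = 10.0  # set PEEP in this study
v_comp = c_circuit * (pip_cmh2o - peep_cmh2o)
print(f"compressed volume ~= {v_comp:.1f} mL of the 600 mL set tidal volume")
```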
The continuous suction procedure was implemented as follows. First, the suction catheter was inserted into the endotracheal tube without the application of suction; next, the catheter was withdrawn into the plastic sheath with suction applied. Each complete suction pass took 15 s and was repeated four times. Data on the suctioning procedures were collected; the baseline ventilation status was recorded with the suction catheter standing by in the plastic sheath without any suction flow. Ventilator-related parameters were recorded for each ventilation episode by the Hamilton Medical ventilator data logger (version 3.27.1), including minute volume, airway pressure, Raw, flow rate, time constant, and respiratory rate. After each suctioning pass, we verified that the ventilator had returned to within ±10% of baseline within one minute.
Statistical Analysis
Data were analyzed with IBM SPSS Statistics 24 (IBM Corp., Armonk, NY, USA) using descriptive (mean, frequency) and analytical (independent t-test, ANOVA) statistics. Linear trend analysis was carried out to test the trends of mean values across the levels of each ordinal factor. Multiple regression analysis was used to estimate the quantitative effects of factors affecting airway pressure and minute volume. The significance level was set at p < 0.05.
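As a hedged illustration of the multiple regression models reported later (Table 2), here is a Python/statsmodels sketch with the same predictor set on synthetic data; the column names, value ranges, and OLS implementation are assumptions, not the SPSS workflow used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical frame mirroring the Table 2 predictors; values are synthetic.
rng = np.random.default_rng(3)
n = 240
df = pd.DataFrame({
    "delta_pip": rng.normal(8, 3, n),           # cm H2O
    "ett_area": rng.uniform(33, 50, n),         # mm^2, ETT cross-section
    "sc_area": rng.uniform(5.6, 17.1, n),       # mm^2, SC cross-section
    "raw": rng.choice([5, 10, 20, 30, 40], n),  # cm H2O.L-1.s
    "mode_pc": rng.integers(0, 2, n),           # dummy: VC-CMV = 0, PC-CMV = 1
})

# Multiple linear regression, analogous to the obstructive-system model.
fit = smf.ols("delta_pip ~ ett_area + sc_area + raw + mode_pc", data=df).fit()
print(fit.summary().tables[1])  # unstandardized B, SE, t, and p per predictor
```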
Effect of Closed Suction on Respiratory Resistance in Both Respiratory Systems
The TTL was set up for mechanical ventilation simulating restrictive disease, with a gradual decline of Crs, or obstructive disease, with an increase of Raw. During endotracheal suctioning, insertion of the suction catheter into the endotracheal tube significantly increases respiratory resistance. The change in respiratory resistance was verified by the inspiratory resistance (RINSP) and expiratory resistance (REXP) levels. Results showed high RINSP and REXP levels when CSS was applied in both ventilation modes (VC-CMV and PC-CMV) and with a progressive decline in pulmonary mechanics (Table 1). The rise in RINSP and REXP levels was greater in the obstructive respiratory system (Table 1a) than in the restrictive respiratory system (Table 1b), and the increase in RINSP was greater than the increase in REXP.
Furthermore, the study examined how closed suction raised respiratory resistance and how this affected ventilator-related parameters for the different ID endotracheal tubes paired with the various OD suction catheters, during PC-CMV and VC-CMV, under the two types of pulmonary mechanics.
Varied Effects of ETT and SC on PIP in Both Respiratory Systems
Although VC-CMV enables the patient to receive a specific minute volume of ventilation, PIP varies with changes in the pulmonary mechanics of the respiratory system. The study examined the effects on PIP during VC-CMV of endotracheal tubes paired with suction catheters in the closed suctioning system under the two different pulmonary mechanics. Figure 2 shows an increase in PIP levels with a progressive decline in pulmonary mechanics during VC-CMV, with or without closed suctioning. The results exhibit significantly raised PIP levels with a decline in pulmonary mechanics and with smaller endotracheal tube IDs.

Table 1. Changes in mean respiratory resistance in the various pulmonary mechanics during the closed suctioning process: (a) change in airway resistance of pulmonary mechanics for the obstructive respiratory system; (b) change in lung compliance of pulmonary mechanics for the restrictive respiratory system, during two ventilation modes (VC-CMV or PC-CMV).

Figure 2. PIP levels during VC-CMV with and without closed suctioning: (a) airway resistance of pulmonary mechanics for the obstructive system; (b) lung compliance of pulmonary mechanics for the restrictive system. VC-CMV: volume control continuous mandatory ventilation, set to VT 0.6 L with peak flow 60 Lpm; PIP: peak inspiratory pressure; Raw: airway resistance; Crs: lung compliance; ETT: endotracheal tube; SC: suction catheter; 6.5 mm−08 Fr: 6.5 mm ETT paired with 08 Fr SC, and so on. The closed suctioning procedure was repeated three times. The error bars represent the standard deviation of 30 breaths in each of the pulmonary mechanic scenarios (five airway resistance levels and six lung compliance levels). The independent samples t-test was used to compare with the control group. * p < 0.05, ** p < 0.01.
In the obstructive respiratory system (Figure 2a), the PIP level for the pairing of a 6.5 mm endotracheal tube with a 10 Fr SC (6.5 mm−10 Fr) was higher than for the pairing of an 8.0 mm endotracheal tube with a 14 Fr SC (8.0 mm−14 Fr). The PIP level rose further with progressively worse Raw. While the distance between the two curves (with and without suction) for a 6.5 mm endotracheal tube was greater than for an 8.0 mm endotracheal tube, the distance diminished with progressively severe Raw. In restrictive respiratory systems, the PIP level rose with decreasing endotracheal tube IDs and compliance levels (Figure 2b). The results indicated that the PIP level was affected by the endotracheal tubes' IDs and the pulmonary mechanics; a severe decline in lung compliance curbed this effect.
Varied Effects of Ventilation Area on ΔPIP/PIP% in Both Respiratory Systems
In the case of obstructive respiratory systems (Figure 3a), the increase in ΔPIP/PIP% for endotracheal tubes with larger IDs was lower than for endotracheal tubes with smaller IDs. The rise in ΔPIP/PIP% exceeded 50% for 6.5 mm−10 Fr and 7.0 mm−12 Fr but was less than 40% for 7.5 mm−12 Fr and 8.0 mm−14 Fr at a Raw of 10 cm H2O·L−1·s. In addition, the change in ΔPIP/PIP% for suction catheters with larger ODs was more significant than for suction catheters with smaller ODs when paired with the same endotracheal tube. The change was less noticeable with worse Raw. Notably, with the use of 8.0 mm−12 Fr for closed suctioning, the ΔPIP/PIP% level dropped to zero or even below zero when Raw exceeded 20 cm H2O·L−1·s.
Figure 3. The effect of the ventilation area on ΔPIP/PIP% during VC-CMV: (a) airway resistance of pulmonary mechanics for the obstructive system; (b) lung compliance of pulmonary mechanics for the restrictive system. ΔPIP/PIP% decreased for larger ventilation areas and was significantly reduced in severe respiratory systems. ΔPIP: suctioning peak inspiratory pressure level minus baseline peak inspiratory pressure level; ΔPIP/PIP%: ΔPIP divided by the baseline PIP level, expressed as a percentage; VC-CMV: volume control continuous mandatory ventilation, set to VT 0.6 L with peak flow 60 Lpm; Raw: airway resistance; Crs: lung compliance; ETT: endotracheal tube; SC: suction catheter; VA: ventilation area (ETT cross-section area minus SC cross-section area); 6.5−08: 6.5 mm ETT paired with 08 Fr SC, and so on. The independent samples t-test was used to compare with the 0.08 L·cm H2O−1 Crs and 5 cm H2O·L−1·s Raw groups. * p < 0.05, ** p < 0.01.
For restrictive respiratory systems (Figure 3b), the experiment showed that the increase in ΔPIP/PIP% for endotracheal tubes with smaller IDs was higher than for endotracheal tubes with larger IDs. The ΔPIP/PIP% rose over 70% for 6.5 mm−10 Fr but only 20% for 8.0 mm−14 Fr at a Crs of 0.05 L·cm H2O−1. In addition, the change in ΔPIP/PIP% for suction catheters with larger ODs was less than for suction catheters with smaller ODs paired with the same endotracheal tube. Moreover, the change in ΔPIP/PIP% then dropped at a Crs of 0.03 L·cm H2O−1.
Thus, the results indicated that the rise in ΔPIP/PIP% was associated with the endotracheal tube's usable ventilation area and the pulmonary mechanics. The ΔPIP/PIP% level was more stable during PC-CMV than during VC-CMV. ΔPIP/PIP% increased with a reduction in ventilation area and a decline in pulmonary mechanics; however, this increase was restricted when Raw exceeded 20 cm H2O·L−1·s or Crs fell below 0.03 L·cm H2O−1.
Analysis of the Impact Factors for ΔPIP
Multiple regression analyses were conducted to examine the ΔPIP level against various potential predictors. The obstructive respiratory system model of the ΔPIP level with all four predictors produced R² = 0.63, p < 0.01 (Table 2a). The suction catheter area had a significant positive regression coefficient for the ΔPIP level, indicating that suction catheters with larger ODs produced higher ΔPIP levels after controlling for the other variables in the model. The endotracheal tube area and Raw had significant negative regression coefficients for the ΔPIP level, indicating that larger endotracheal tube areas and higher Raw were associated with lower ΔPIP levels after accounting for the suction catheter area. The ΔPIP levels were affected more in VC-CMV mode than in PC-CMV.

Notes to Table 2: Δ: variation (suctioning level minus baseline level) of PIP, PEEP, and Vexp; ΔPIP: delta peak inspiratory pressure; ΔVexp: delta expiratory minute volume; ΔPEEP: delta positive end-expiratory pressure; ETT area: endotracheal tube cross-section area; SC area: suction catheter cross-section area; Crs: lung compliance; Raw: airway resistance; Mode: VC-CMV (volume control continuous mandatory ventilation) or PC-CMV (pressure control continuous mandatory ventilation); B: unstandardized regression coefficient; SE: standard error; t: t statistic, which evaluates the predictor; R²: adjusted R-squared. The VC-CMV dummy variable is 0 and the PC-CMV dummy variable is 1. * p < 0.05, ** p < 0.01.

Moreover, the restrictive respiratory system model of the ΔPIP level with all four predictors produced R² = 0.72, p < 0.01. As can be seen in Table 2b, the suction catheter area had a significant positive regression coefficient for the ΔPIP level, indicating that suction catheters with larger ODs produced higher ΔPIP levels after controlling for the other variables in the model. The endotracheal tube area had a significant negative regression coefficient, indicating that after accounting for the suction catheter area, larger endotracheal tube areas were associated with lower ΔPIP levels. The Crs did not contribute to the multiple regression model.

Varied Effects of ETT and SC on PEEP in Both Respiratory Systems

The results showed that PEEP levels for 6.5 mm, 7.0 mm, and 7.5 mm endotracheal tubes with their paired suction catheters could be kept at the set level during closed suction in both ventilation modes (Figure 4). With the 8.0 mm endotracheal tube and a 12 Fr suction catheter, PEEP levels began to vary slightly under both pulmonary mechanics. However, with the 14 Fr suction catheter, the PEEP level demonstrated a rising trend (as intrinsic PEEP) during PC-CMV when Raw increased to 40 cm H2O·L−1·s (Figure 4a). These changes did not appear in the severely restrictive system (Figure 4b). In contrast, VC-CMV with the 14 Fr suction catheter showed a significant reduction in PEEP level during closed suctioning.

Figure 4. PEEP levels during closed suctioning: (a) airway resistance of pulmonary mechanics for the obstructive system; (b) lung compliance of pulmonary mechanics for the restrictive system. Raw: airway resistance; Crs: lung compliance; ETT: endotracheal tube; SC: suction catheter; 7.5 mm−10 Fr: 7.5 mm ETT paired with 10 Fr SC, and so on. The closed suctioning procedure was repeated three times. The error bars represent the standard deviation of 30 breaths in each of the pulmonary mechanic scenarios (five airway resistance levels and six lung compliance levels). The independent samples t-test was used to compare with the control group. * p < 0.05, ** p < 0.01.
Analysis of the Impact Factors for △PEEP
The obstructive respiratory system model of △PEEP with all four predictors produced R 2 = 0.13, p = 0.04. As can be seen in Table 2a, only the suction catheter area had a significant positive regression coefficient for the △PEEP level, indicating that suction catheters with larger ODs gave higher △PEEP levels. Endotracheal tube area, R aw , and mode did not contribute to the multiple regression model. The restrictive respiratory system model of the △PEEP level with all four predictors produced R 2 = 0.23, p < 0.01 (Table 2b). The suction catheter area had a significant positive regression coefficient for the △PEEP level, indicating that suction catheters with larger ODs gave higher △PEEP levels after controlling for the other variables in the model. C rs had a significant negative regression coefficient, indicating that higher C rs gave lower △PEEP levels after accounting for the suction catheter area. Endotracheal tube area and mode did not contribute to the multiple regression models.
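The Table 2 models discussed in these subsections all share one structure: an ordinary least squares regression of a △-response on ETT area, SC area, a pulmonary-mechanics term, and a ventilation-mode dummy. The sketch below reproduces that structure in Python; the data are synthetic placeholders (not the study's measurements), with coefficient signs seeded only to mimic the reported directions, and the column names follow the Table 2 abbreviations.

```python
# Minimal OLS sketch of a four-predictor Table 2-style model (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "ETT_area": rng.uniform(33.2, 50.3, n),               # mm^2, ~6.5-8.0 mm ID tubes
    "SC_area": rng.uniform(7.1, 17.3, n),                 # mm^2, ~08-14 Fr catheters
    "Raw": rng.choice([5.0, 10.0, 20.0, 30.0, 40.0], n),  # cm H2O.s/L
    "mode": rng.integers(0, 2, n),                        # 0 = VC-CMV, 1 = PC-CMV
})
# Synthetic response seeded with the reported coefficient signs
# (+ for SC area; - for ETT area, Raw, and the PC-CMV dummy).
df["dPIP"] = (0.9 * df["SC_area"] - 0.3 * df["ETT_area"]
              - 0.15 * df["Raw"] - 2.0 * df["mode"]
              + rng.normal(0.0, 1.5, n))

X = sm.add_constant(df[["ETT_area", "SC_area", "Raw", "mode"]])
fit = sm.OLS(df["dPIP"], X).fit()
print(fit.params)                                 # unstandardized coefficients (B)
print("adjusted R^2:", round(fit.rsquared_adj, 2))
```

The same fit, with the response swapped to △PEEP or △Vexp and R aw replaced by C rs for the restrictive scenarios, covers every model reported in Table 2.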
Varied Effects of ETT and SC on Vexp in Both Types of Respiratory Systems
Closed endotracheal suctioning significantly reduced lung volume loss when the suctioning procedure was controlled in terms of suctioning pressure and time. The study therefore examined the expiratory minute volume (Vexp) under this control of suction pressure and time. Figure 5 shows that the Vexp level decreased in both progressively declining respiratory systems during PC-CMV. The distance between the Vexp curves of VC-CMV and PC-CMV with a 6.5 mm endotracheal tube was less than with an 8.0 mm endotracheal tube. Moreover, the distance between the two curves was smaller for the restrictive respiratory system than for the obstructive respiratory system.
Figure 5. Comparison of the changes in Vexp effects in different respiratory systems and ventilation modes. Various ETTs paired with SCs for CSS conducted endotracheal suctioning in airway resistance of pulmonary mechanics for the obstructive system (a) and lung compliance of pulmonary mechanics for the restrictive system (b) with VC-CMV or PC-CMV. PC-CMV: pressure control continuous mandatory ventilation, with inspiratory pressure set at 20 cm H 2 O and inspiratory time 1 s; VC-CMV: volume control continuous mandatory ventilation, set to V T 0.6 L with peak flow 60 Lpm; C: control group (0.08 L·cm H 2 O −1 of C rs and 5 cm H 2 O·L −1 ·s of R aw ); Vexp: expiratory minute volume; R aw : airway resistance; C rs : lung compliance; ETT: endotracheal tube; SC: suction catheter; 6.5 mm−08 Fr: 6.5 mm ETT paired with 08 Fr SC, and so on. The closed suctioning procedure was repeated three times. The error bars represent the standard deviation of 30 breaths in each of the pulmonary mechanic scenarios (five airway resistance levels and six lung compliance levels). The independent samples t-test was used to compare with the control group. * p < 0.05, ** p < 0.01.
In the obstructive respiratory system (Figure 5a), the Vexp level at lower R aw was higher than at higher R aw during PC-CMV. The distance between the two curves (VC-CMV and PC-CMV) of Vexp had a more extensive range at lower R aw . However, the curve distance diminished when R aw exceeded 20 cm H 2 O·s·L −1 . In the restrictive respiratory system (Figure 5b), the Vexp level was higher at higher C rs than at lower C rs during PC-CMV. In addition, the distance between the two curves in Vexp increased in line with the rise of C rs , but the two curves began descending along with the progressive decline of C rs to 0.02 L·cm H 2 O −1 . When C rs decreased to 0.01 L·cm H 2 O −1 , the Vexp of PC-CMV was equal to that of VC-CMV.
Varied Effects of ETTs and SCs on △Vexp/Vexp% in Both Types of Respiratory Systems
This study examined the change of Vexp percentage (△Vexp/Vexp%) levels in different ventilation modes and pulmonary mechanics during closed suctioning (Figure 6). In the case of obstructive respiratory systems (Figure 6a), △Vexp/Vexp% levels for 8.0 mm−14 Fr were higher than for 6.5 mm−08 Fr (40% versus 20%). The levels were also higher for suction catheters with larger ODs than for those with smaller ODs with the same endotracheal tube. However, the △Vexp/Vexp% level showed no significant difference as R aw rose progressively, and the △Vexp/Vexp% curve distance showed no apparent distinction during CSS.
Figure 6. Comparison of △Vexp/Vexp% effects in different respiratory systems and ventilation modes. Various ETTs paired with SCs for CSS conducted endotracheal suctioning in airway resistance pulmonary mechanics for the obstructive system (a) and lung compliance pulmonary mechanics for the restrictive system (b) with VC-CMV or PC-CMV. (△)Vexp: (delta) expiratory minute volume (the suctioning Vexp level subtracted from the baseline Vexp level); △Vexp/Vexp%: △Vexp as a percentage of the baseline Vexp level; PC-CMV: pressure control continuous mandatory ventilation, with inspiratory pressure set at 20 cm H 2 O and inspiratory time 1 s; VC-CMV: volume control continuous mandatory ventilation, set to V T 0.6 L with peak flow 60 Lpm; R aw : airway resistance; C rs : lung compliance; ETT: endotracheal tube; SC: suction catheter; 6.5 mm−08 Fr: 6.5 mm ETT paired with 08 Fr SC, and so on. The independent samples t-test was used to compare with the 5 cm H 2 O·L −1 ·s R aw and 0.08 L·cm H 2 O −1 C rs group. * p < 0.05, ** p < 0.01.
Thus, for restrictive respiratory systems (Figure 6b), the △Vexp/Vexp% level exhibited a declining trend along with a progressive reduction of C rs . The △Vexp/Vexp% levels for 8.0 mm−14 Fr were higher than for 6.5 mm−08 Fr (40% versus 20%) during PC-CMV. The levels were also higher for suction catheters with larger ODs than for those with smaller ODs with the same endotracheal tubes. In contrast, there was a difference in the △Vexp/Vexp% curve distance only for 6.5 mm−08 Fr. Furthermore, no significant correlation was found between △Vexp/Vexp% levels and VC-CMV, except for 7.0 mm−10 Fr. The △Vexp/Vexp% levels fell considerably at a C rs of 0.01 L·cm H 2 O −1 during PC-CMV.
Analysis of the Factors Affecting △Vexp
The obstructive respiratory system model of △Vexp levels with all four predictors produced R 2 = 0.79, p < 0.01 (Table 2a). The suction catheter area had a significant positive regression coefficient for the △Vexp level, indicating that suction catheters with larger ODs gave higher △Vexp levels. The endotracheal tube area and R aw had significant negative regression coefficients for the △Vexp level: after accounting for the suction catheter area, larger endotracheal tube areas and higher R aw gave lower △Vexp levels. The △Vexp level was affected more in PC-CMV mode than in VC-CMV mode.
The restrictive respiratory system model of the △Vexp level with all four predictors produced R 2 = 0.68, p < 0.01. As can be seen in Table 2b, the suction catheter area and C rs had significant positive regression coefficients for the △Vexp level, indicating that larger suction catheter ODs and higher C rs gave higher △Vexp levels after controlling for the other variables in the model. The endotracheal tube area had a significant negative regression coefficient, indicating that, after accounting for the suction catheter area and C rs , larger endotracheal tube areas gave lower △Vexp levels. The △Vexp level was affected more in PC-CMV mode than in VC-CMV mode.
Discussion
This study examined the effects on ventilator-related parameters during closed endotracheal suctioning; even a CSS pass of 15 s of suctioning has advantages in clinical practice. The study found that closed endotracheal suction increased respiratory resistance (R INSP and R EXP ), mainly due to the effects of positive pressure ventilation, insertion of the suction catheter into the endotracheal tube, and the patient's pulmonary mechanics. VC-CMV offers a pre-set tidal volume and the safety of a guaranteed minute ventilation. It also requires a set inspiratory flow, flow waveform, and inspiratory time for delivering the tidal volume. However, airway pressure increases in response to reduced lung compliance or increased airway resistance during VC-CMV. In PC-CMV, the ventilator provides a high inspiratory flow rate to achieve the pre-set pressure and time during the inspiration phase. However, the tidal volume then depends on the pulmonary mechanics.
Additionally, R aw is the opposition to flow caused by frictional forces. It is usually defined as the ratio of driving pressure to airflow rate during mechanical ventilation, R aw = ∆P/Flow (∆P is the pressure applied to the airway above PEEP). Expiratory flow is normally passive and is determined by the alveolar pressure, R aw , the elapsed time since the initiation of exhalation, and the time constant. Lung units with higher resistance and/or compliance have a longer time constant and require more time to fill and empty. Expiratory flow may not reach zero by end-expiration if R aw is high and the expiratory time is insufficient, indicating the presence of air trapping (auto-PEEP).
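As a quick numerical illustration of these definitions, the snippet below computes R aw from driving pressure and flow, the expiratory time constant τ = R aw × C rs , and the fraction of tidal volume still in the lung after a given expiratory time under an idealized single-compartment, passive-exhalation assumption. The values are arbitrary but within the ranges used in this bench model; they are not taken from the study's data.

```python
# Airway resistance, expiratory time constant, and an auto-PEEP indicator.
import math

def airway_resistance(delta_p_cmH2O, flow_L_per_s):
    """Raw = dP / Flow, in cm H2O.s/L."""
    return delta_p_cmH2O / flow_L_per_s

raw = airway_resistance(delta_p_cmH2O=20.0, flow_L_per_s=1.0)  # 20 cm H2O.s/L
crs = 0.05                      # L/cm H2O
tau = raw * crs                 # expiratory time constant, s
t_exp = 2.0                     # available expiratory time, s

# For single-compartment passive exhalation, the fraction of the tidal
# volume still in the lung after t_exp seconds is exp(-t_exp / tau).
trapped = math.exp(-t_exp / tau)
print(f"tau = {tau:.2f} s; fraction not yet exhaled = {trapped:.1%}")
# A large remaining fraction at high Raw flags air trapping (auto-PEEP).
```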
The rise in PIP is a significant ventilator-related change during VC-CMV with closed suction, a phenomenon that is aggravated when the pulmonary mechanics of the respiratory system are compromised. PEEP changed with the 14 Fr SC in CSS, with a more significant reduction in VC-CMV than in PC-CMV. PEEP rose in severely obstructed (high R aw ) pulmonary mechanics during PC-CMV, whereas it fell in severely restricted (low C rs ) pulmonary mechanics. Additionally, minute volume was a critical ventilator-related parameter in PC-CMV. The change in Vexp was associated with larger suction catheter ODs and with the pulmonary mechanics, and the magnitude of the decrease in Vexp became smaller as C rs fell to severe levels.
The study shows that the change of PIP% during VC-CMV hinges on the ventilation area of the endotracheal tube. For instance, the change of PIP% for a 12 Fr suction catheter paired with a 7.0 mm endotracheal tube was greater than for an 8.0 mm endotracheal tube (50% versus 30%) at a C rs of 0.03 L·cm H 2 O −1 . These results were consistent with a previous cardiac surgery study, which showed that the airway pressure in volume-controlled mode peaks when the insertion of a 14 Fr SC into an endotracheal tube reaches an 8.0 mm depth position [21]. This earlier research pointed to changes in the radius of the tube, which had a significant effect on resistance [22]. Due to this higher resistance, a ventilator needs more energy (positive pressure) to expand the lungs [23][24][25].
A previous bench study put forth an alternative rule for pairing suction catheters with endotracheal tubes, suggesting an SC/ETT area (or volume) ratio of 50%, corresponding to an SC/ETT diameter ratio of 70% [26]. Clinical studies have associated the rise of R aw in CSS with practice-related factors and with the effect of the reduced area of the artificial airway tube [27,28]. When closed suction was underway for patients with a progressive decline in pulmonary mechanics under VC-CMV, PIP rose more significantly. In addition, a higher PIP resulting from a higher trans-airway pressure can cause barotrauma to the detriment of the lung, especially in patients with COPD [8,29]. A study on animals found histopathologic changes in the lung when ventilation with a tidal volume of 15 mL/kg, inducing a PIP of 40 cm H 2 O, was maintained for up to 32 h [30]. Unlike other research carried out in this area, we found a significant difference in PIP according to the endotracheal tube's ventilation area and the pulmonary mechanics during closed suction in the different modes. The choice of the optimal suction catheter OD can be extended to the ventilation area; this is critical for the efficacy and effectiveness of PIP management during closed endotracheal suctioning. Using the PC-CMV mode can avoid raising transpulmonary pressure during closed suction, and a maximum airway pressure alarm should be set in advance to guard against barotrauma from a higher PIP in patients with deteriorating pulmonary mechanics.
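The cited area-versus-diameter correspondence is easy to check numerically: a 50% cross-sectional area ratio implies a diameter ratio of sqrt(0.5) ≈ 0.71, i.e., about 70%. The sketch below applies that rule to pick the largest catheter from an assumed catalogue of French sizes, using the standard 1 Fr = 1/3 mm outer-diameter conversion; it illustrates the selection rule, not the exact procedure of this study.

```python
# Largest suction catheter satisfying the SC/ETT area ratio <= 50% rule.
import math

def max_catheter_fr(ett_id_mm, area_ratio_limit=0.5, sizes_fr=(8, 10, 12, 14, 16)):
    # (d_sc / d_ett)^2 <= limit  <=>  d_sc <= d_ett * sqrt(limit) (~0.71 * d_ett)
    max_od_mm = ett_id_mm * math.sqrt(area_ratio_limit)
    fitting = [fr for fr in sizes_fr if fr / 3.0 <= max_od_mm]  # 1 Fr = 1/3 mm OD
    return max(fitting) if fitting else None

for ett in (6.5, 7.0, 7.5, 8.0):
    print(f"ETT {ett} mm -> largest SC meeting the rule: {max_catheter_fr(ett)} Fr")
```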
A higher PEEP level than the set level (intrinsic PEEP) was observed in severe R aw cases during PC-CMV. PC-CMV is advantageous since the operator can control the inspiratory pressure directly. During suctioning, the inspiratory pressure decreased significantly with a 14 Fr suction catheter in CSS, and the ventilator has to provide extra flow to compensate for the loss of inspiratory pressure in PC-CMV. The PEEP level in the breathing circuit rose when the gas inflow exceeded the gas outflow during catheter suction. Our study demonstrates that in the obstructive respiratory system undergoing PC-CMV with closed suction, a rise of R aw may worsen the situation, resulting in an intrinsic PEEP effect. In the clinical setting, COPD patients with dynamic hyperinflation on mechanical ventilation must generate pleural pressure swings to a level higher than the intrinsic PEEP before the ventilator is triggered [31,32]. However, this triggering will be ineffective if the patient has respiratory muscle weakness or fatigue [33]. New monitoring waveform systems can be used with other inputs to help detection and improve the management of triggering synchrony. These results show that careful attention must be paid to the PEEP level when patients with an obstructive respiratory system undergo closed suction.
PEEP levels decreased significantly for the 14 Fr SC in VC-CMV, and a PEEP level lower than the set level was observed for the severely restrictive (low C rs ) pulmonary mechanics during PC-CMV. This PEEP effect was at odds with the results of previous studies. A prior review of CSS's effectiveness showed that performing suction without interrupting mechanical ventilation helps prevent lung collapse by maintaining PEEP [7]; in this way, the decline of PaO 2 in mechanically ventilated patients can be reduced [4,17,34,35]. A possible explanation for the result of this study is that the 14 Fr SC has a higher cross-section area, conducive to removing more airflow from the airway, especially in VC-CMV, which controls the tidal volume. It can thus be reasonably suggested that VC-CMV with a CSS catheter smaller than 14 Fr, or PC-CMV, be applied to patients on high PEEP to preserve more alveolar volume at the end of expiration during suction.
Approximately 5% of patients with COVID-19 eventually develop ARDS, septic shock, and/or multiple organ failure, and the mainstay of clinical treatment for patients with respiratory failure is mechanical ventilation [36]. CSS can decrease the clinical signs of hypoxemia by keeping the volume loss small and preserving PEEP. It can also limit the spread of Severe Acute Respiratory Syndrome Coronavirus 2 into the environment and the contamination of personnel during suction [37]. Therefore, there are significant advantages for the clinical practitioner in adopting CSS for critically ill patients with COVID-19.
This study found that the conserved ventilation volume was affected by the OD size of the suction catheter in PC-CMV. Vexp decreased with suction catheters with larger ODs, but this was less noticeable when the respiratory system was severely impaired. A previous clinical study has shown that a steady tidal volume in the lung is positively correlated with improved oxygenation during suctioning [38]. A randomized clinical study observed an immediate decrease in V T but found that the minute volume was maintained during closed endotracheal suctioning [39]. Additionally, a lung-injury animal model showed that closed suction protected against derecruitment only when a small catheter was used, especially in the non-dependent lung [40]. Electrical impedance tomography can be used to examine lung volume during CSS. The presence of a valve in the CSS should be considered essential in preserving lung volume and uninterrupted ventilation in mechanically ventilated patients [41]. However, a randomized study in post-cardiac surgery patients demonstrated that closed suctioning minimized lung volume loss during suctioning but, counterintuitively, resulted in a slower recovery of end-expiratory lung volume post suction than open suction [42]. Finally, performing a recruitment maneuver after either suction method has been recognized to be beneficial in restoring end-expiratory lung volume [6,42].
The advantage of VC-CMV lies in the ability to control tidal volume and minute ventilation. During endotracheal suctioning, two crucial factors can cause unexpected volume loss. The first is a potentially higher negative pressure if the suction flow exceeds the ventilation flow or if secretions line the inside surface of the tube [43]. The second is the compression ratio, the gas volume compressed in the breathing circuit per cm H 2 O of PIP. The compression ratio values ranged from 0.3 to 4.5 mL/cm H 2 O at the highest and lowest compliance settings (0.15 versus 0.01 L·cm H 2 O −1 ), respectively. A higher PIP leads to an increase in gas compression in the ventilator circuit [44], and the resulting reduction in the delivered tidal volume can reach up to 20% [45]. Hence, circuit compliance is a critical factor in the tidal volume measured by the ventilator, with consequent estimation errors for alveolar ventilation. Pressure-controlled ventilation is capable of providing a more consistent tidal volume.
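A hedged numerical sketch of this compressible-volume effect: the gas lost to circuit compression is approximately the circuit compliance times the pressure swing above PEEP, so the volume actually delivered is the set tidal volume minus that loss. The circuit compliance used below is an assumed value chosen within the 0.3-4.5 mL/cm H 2 O range quoted above, not a measured one.

```python
# Delivered tidal volume after subtracting circuit-compression loss (illustrative).
def delivered_vt(set_vt_ml, pip_cmH2O, peep_cmH2O, circuit_compliance_ml_per_cmH2O):
    # Gas compressed in the circuit scales with the pressure swing above PEEP.
    compressed_ml = circuit_compliance_ml_per_cmH2O * (pip_cmH2O - peep_cmH2O)
    return set_vt_ml - compressed_ml

set_vt = 600.0  # mL, matching the 0.6 L VC-CMV setting
vt = delivered_vt(set_vt, pip_cmH2O=40.0, peep_cmH2O=5.0,
                  circuit_compliance_ml_per_cmH2O=3.0)  # assumed compliance
print(f"delivered VT ~ {vt:.0f} mL ({(set_vt - vt) / set_vt:.0%} lost to compression)")
```

With these assumed numbers roughly 18% of the set volume goes into circuit compression, consistent with the "up to 20%" figure cited above.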
Limitations
This study used a lung simulator model for medical-ethics reasons. The TTL dual adult lung simulator reproduces normal and a variety of pathological pulmonary conditions accurately, and it is widely used in place of human subjects in research on invasive treatments. Our study used the TTL to mimic various pulmonary conditions and investigate the effects on ventilator-related parameters during mechanical ventilation with closed suction. It is plausible that some limitations could have influenced the results obtained. CSS uses endotracheal suctioning to remove secretions to prevent obstruction of the tube and the lower airways in mechanically ventilated patients; accumulation of secretions decreases the effective diameter of the endotracheal tube and is a factor in rising airway resistance. The ventilator-related parameters may therefore be underestimated during closed suctioning when secretions are present in the airway. In addition, the results of this experiment are relevant only to patients with obstructive or restrictive conditions using closed suction and cannot be applied to patients in whom obstructive and restrictive respiratory conditions coexist.
Conclusions
In this study, most of the ventilation-related parameter effects during closed suction appear to result from the size of the endotracheal tube paired with the suction catheter, the ventilation mode, and the patient's pulmonary mechanics. A reduced ventilation area produced an increase in PIP during VC-CMV, while the decrease in Vexp depended on the size of the suction catheter during PC-CMV. A progressive decline in pulmonary mechanics adds to these effects; however, in patients with severely impaired pulmonary mechanics, the effects gradually diminish.
Patients with an obstructive respiratory system should be given smaller-dimension catheters in CSS to avert an increase in intrathoracic pressure during VC-CMV and the development of an intrinsic PEEP during PC-CMV. Patients with restrictive diseases of the respiratory system will preserve more Vexp and maintain PEEP if the size of the suction catheter is reduced and PC-CMV is selected. The delivered alveolar volume can be overestimated during VC-CMV because the increased compression volume in the breathing circuit is neglected at higher PIP levels. When patients with different respiratory system conditions undergo closed endotracheal suctioning, we believe that selecting the optimal size of the suction catheter will produce a more stable and desirable change in ventilator-related parameters.
Data Availability Statement:
The authors confirm that the data supporting the findings of this study are available within the article. | 2021-06-10T13:16:36.202Z | 2021-06-06T00:00:00.000 | {
"year": 2021,
"sha1": "5296208f26a74723bdee3232d5de48d736ccf343",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/11/5266/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1bf3a54ebcf2a861f0bb4e869dbe4d13b560345f",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56452002 | pes2o/s2orc | v3-fos-license | New measurement of orbital and spin period evolution of the Accretion Disk Corona source 4U 1822-37
4U 1822-37 is a Low Mass X-ray Binary (LMXB) system with an Accretion Disk Corona. We have obtained 16 new mid-eclipse time measurements of this source during the last 13 years using X-ray observations made with the RXTE-PCA, RXTE-ASM, Swift-XRT, XMM-Newton and Chandra observatories. These, along with the earlier known mid-eclipse times, have been used to accurately determine the timescale for a change in the orbital period of 4U 1822-37. We have derived an orbital period P orb = 0.23210887(15) d, which is changing at the rate Ṗ orb = 1.3(3) × 10 −10 d d −1 (at T0 = MJD 45614). The timescale for a change in the orbital period, P orb /Ṗ orb , is 4.9(1.1) × 10 6 yr. We also report the detection of 0.59290132(11) s (at T0 = MJD 51975) X-ray pulsations from the source with a long term average Ṗ spin of -2.481(4) × 10 −12 s s −1 , i.e., a spin-up time scale (P spin /Ṗ spin ) of 7578(13) yr. In view of these results, we have discussed various mechanisms that could be responsible for the orbital evolution in this system. Assuming the extreme case of conservative mass transfer, we have found that the measured Ṗ orb requires a large mass transfer rate of (4.2−5.2) × 10 −8 M⊙ yr −1 , which together with the spin-up rate implies a magnetic field strength in the range of (1−3) × 10 8 G. Using the long term RXTE-ASM light curve, we have found that the X-ray intensity of the source has decreased over the last 13 years by ∼40% and that there are long term fluctuations on time scales of about a year. In addition to the long term intensity variation, we have also observed significant variation in the intensity during the eclipse.
INTRODUCTION
Low mass X-ray binaries (LMXBs) are stellar systems that consist of a compact object, such as a neutron star or a black hole, accreting matter from a companion star by Roche lobe overflow. Mass transfer in these systems can be conservative or non-conservative. In most systems, the mass of the binary system and the total angular momentum remain constant, which is the case of conservative mass transfer. However, in some systems a significant mass loss may occur, and in such systems the mass transfer is called non-conservative. Mass loss from the binary system may occur through various processes such as irradiative evaporation of the secondary star, jet emission from the compact star or emission of a wind from the accretion disk (Ruderman et al. 1989). Accretion may also be driven by the loss of orbital angular momentum through gravitational wave radiation or magnetic braking (Hurley et al. 2002; Rappaport, Verbunt & Joss 1983). The orbital period of X-ray binaries is expected to change due to redistribution of the angular momentum caused by interaction between the components of the binary system. Measurement of the rate of change of the orbital period (i.e., the orbital period derivative, Ṗ orb ) of the binary system is therefore necessary in order to understand the evolution of compact binary systems.
In some accretion powered X-ray binaries, repeated measurements of the orbital ephemeris have led to an accurate determination of the orbital period evolution. Pulse arrival time delay is one of the several techniques used to determine the evolution of the binary orbit. Using this technique, the orbital evolution timescales have been accurately determined in several X-ray binary systems (Her X-1: Deeter et al. 1991; Paul et al. 2004; Staubert, Klochkov & Wilms 2009; 4U 1538−52: Baykal et al. 2006; Mukherjee et al. 2006; Cen X-3: Kelley et al. 1983; Paul et al. 2007; LMC X-4: Levine et al. 2000; SMC X-1: Levine et al. 1993; Wojdowski et al. 1998). Pulse folding and χ 2 maximization with a varying orbital ephemeris have been successfully applied to LMC X-4 and SAX J1808.4−3658 (Jain et al. 2007) to determine the X-ray pulsations and the mid eclipse times. However, measurement of Ṗ orb for a non-pulsing LMXB requires a stable fiducial point in the X-ray light curve. Such a measurement is difficult because most of the LMXBs do not exhibit sharp orbital features and even the pulsating objects often show variable pulse profiles.
In the absence of pulses from the compact object, eclipsing binary systems provide a good fiducial timing marker for precise determination of the orbital evolution. Parmar et al. (1986) and Wolff et al. (2009) applied the X-ray eclipse timing technique to determine the orbital evolution in the LMXB EXO 0748−676. Eclipse timings were also used to establish orbital evolution timescales in 4U 1822−37 (Hellier et al. 1990;Parmar et al. 2000) and 4U 1700−37 (Rubin et al. 1996).
In the case of non-eclipsing X-ray binaries, such as Cyg X-3, the orbital evolution has been measured using its stable orbital modulation light curve (Singh et al. 2002). Van der Klis et al. (1993) also studied the orbital evolution of the LMXB 4U 1820−303 by analyzing the modulation of the 685 s orbital light curve.
Amongst the above mentioned LMXBs, the orbital separation has been found to be increasing in 4U 1822−37 (Parmar et al. 2000), X2127+119 (Homer & Charles 1998) and SAX J1808.4−3658 (Jain et al. 2007). The observed orbital evolution timescale of ∼10 6 yr in the case of 4U 1822−37 (Heinz & Nowak 2001) and X2127+119 (Homer & Charles 1998) is assumed to be due to short lived mass exchange episodes. An orbital evolution timescale of about 70 × 10 6 yr has been determined in the accretion powered millisecond X-ray pulsar SAX J1808.4−3658 (Jain et al. 2007; Burderi et al. 2009; Hartman et al. 2009). The timescale of orbital evolution was proposed to be due to a strong tidal interaction between the components of the binary. A decreasing orbital period has been detected in 4U 1820−303 (van der Klis et al. 1993) and Her X−1 (Deeter et al. 1991). However, the observed orbital period evolution is faster than that predicted by theoretical models of conservative mass transfer. In the case of EXO 0748−676, Wolff et al. (2009) found four distinct orbital period epochs in the last 20 years and attributed it to magnetic cycling in the companion star.
4U 1822−37 is an LMXB with an orbital period of 5.57 hr (Hellier et al. 1990; Parmar et al. 2000). It is one of the few LMXBs that harbors an accretion powered X-ray pulsar (Jonker & van der Klis 2001). The source is surrounded by an accretion disc corona formed by evaporation of matter from the inner accretion disc by radiation pressure of the neutron star (White & Holt 1982). The light curve exhibits a narrow and a broad dip in the intensity. The narrow dip in the X-ray light curve is attributed to the partial eclipse of the corona by the companion star (White et al. 1981; White & Holt 1982; Mason & Cordova 1982; Hellier & Mason 1989; Hellier et al. 1992), whereas the broad dip is interpreted as resulting from occultation of the corona by a bulge on the outer edge of the accretion disc (White & Holt 1982; Hellier & Mason 1989). Being an eclipsing system, the orbital inclination is also known with a small uncertainty (Heinz & Nowak 2001; Jonker et al. 2003). An accurate ephemeris of 4U 1822−37 is also known from the eclipse timing (Parmar et al. 2000) and the size of the binary orbit is known from the pulse timing (Jonker & van der Klis 2001). It is thus an ideal system for determination of the masses of the stellar components. From the spectroscopic measurements of the binary system and by assuming a mass of 1.4 M⊙ for the neutron star, Cowley et al. (2003) calculated the mass of the companion. A good estimate of the mass of the neutron star is also known from the K-correction of the radial velocity curves (Muñoz-Darias et al. 2005).
We report here on a detailed timing analysis of the eclipsing X-ray binary pulsar, 4U 1822−37. We have determined 16 new mid-eclipse time measurements of the source with data obtained from instruments onboard RXTE, Swift, XMM-Newton and Chandra observatories. We have also measured the spin period from several newer RXTE observations. Combining these new results with previously published measurements, we have derived new updated estimates of the orbital and spin parameters. Using these measurements, we have also determined the accretion luminosity and the magnetic field strength of the pulsar, which is crucial for this system. Even though the binary parameters have been reliably measured in the past, only rough estimates of the source intrinsic luminosity and the neutron star's magnetic field have been made by various authors, with inconsistent results. Assuming a distance of 2 kpc, Mason & Cordova (1982) had estimated an isotropic luminosity of ∼ 10 36 ergs s −1 . From the broad band spectrum analysis, Parmar et al. (2000) measured the 1−10 keV X-ray flux to be 5 × 10 −10 ergs cm −2 s −1 . This implies an X-ray luminosity of 5.6 × 10 34 (d/1kpc) 2 ergs s −1 . Assuming a magnetic field strength in the range (1−5) × 10 12 G, Jonker & van der Klis (2001) derived an intrinsic luminosity of (2−4) × 10 37 ergs s −1 from the spin-up measurements.
OBSERVATIONS AND ANALYSIS
In this work, we have analyzed observations made with the Proportional Counter Array (PCA) and the All Sky Monitor (ASM) on board the Rossi X-ray Timing Explorer (RXTE); the X-ray Telescope (XRT) on board Swift; the European Photon Imaging Camera (EPIC)-MOS instruments on board XMM-Newton; and the Advanced CCD Imaging Spectrometer (ACIS) on board the Chandra observatory. A log of the X-ray observations used for the present work is presented in Table 1. 4U 1822-37 was monitored regularly by the RXTE-ASM (Levine et al. 1996), which comprises three wide-field scanning shadow cameras (SSCs) mounted on a rotating boom. The SSCs are rotated in a sequence of "dwells", with an exposure typically of 90 s, so that most of the sky can be covered in one day. The dwell data are also averaged for each day to yield a daily average. The data used in the present analysis covered the time between MJD 50088 and MJD 54756. The 1.5−12 keV long term ASM light curve was corrected for the Earth's motion using the tool earth2sun of the HEAsoft analysis package, ftools ver 6.5.1. The light curve, binned with a binsize of 50 days, is shown in Figure 1. It shows a gradual decrease in the X-ray intensity over the last 13 years, along with long term fluctuations over timescales of about a year. There could be many reasons for the observed decrease in the X-ray flux, a decrease in the mass accretion rate and a change in the structure of the corona (Bayless et al. 2010) being two possibilities.
To check whether the observed decrease in the source intensity is an instrumental effect, we also studied the light curve of the well known Crab Nebula. As shown in Figure 1, the intensity of the Crab does not show a decreasing pattern over the same time period as 4U 1822-37. The RXTE-ASM measurements of long term intensity variations are quite reliable, and for some sources correlated variability has also been reported with different instruments (4U 1626-67: Jain et al. 2010).
We have also analyzed the data taken with the RXTE-PCA, which consists of five xenon/methane proportional counter units (PCUs) and is sensitive in the energy range of 2−60 keV with an effective area of 1300−6500 cm 2 , depending on the number of operating PCUs (Jahoda et al. 1996). Data for the eclipse timing analysis were chosen such that they covered the entire eclipse phase. The data were taken from the Standard-1 mode of the PCA, and the background count rate estimated using the runpcabackest tool was subtracted from the light curves. The photon arrival times of the background subtracted light curve were then corrected to the solar system barycenter using the ftool fxbary. XMM-Newton carries three X-ray mirrors and three focal plane instruments, the European Photon Imaging Cameras (EPIC)-pn, MOS1 and MOS2, each with a field of view of about 30 ′ × 30 ′ . All the cameras (Struder et al. 2001; Turner et al. 2001) were operated in full frame mode with the medium filter. The observation details are summarized in Table 1. The EPIC observation data files were processed using the XMM-Science Analysis System (SAS version 8.0.0). For the present analysis, we have used data taken with the EPIC-MOS2 in the energy range 0.2−15 keV. The X-ray events were extracted from a circular region of radius 10 ′′ centered on the position of the target in the EPIC-MOS image. The background X-ray events were extracted from a source-free circular region with a radius of 20 ′′ . The background subtracted light curve was barycenter corrected using the SAS tool barycen, with the JPL-DE405 ephemerides.
4U 1822-37 was observed with Chandra-ACIS (Weisskopf et al. 2000) on August 23, 2000 for an exposure time covering two full binary periods (Cottam et al. 2001). We used the Chandra Interactive Analysis of Observations (CIAO) software (ver. 4.0; CalDB ver. 3.4.2) and the standard Chandra analysis threads to reduce the data. No background flares were found, so all data were used for further analysis. For the present work, we used the energy range 0.3−12.0 keV. Light curves were extracted from a circular region with a radius of 5 ′′ . Background events were obtained from an annular region with an inner (outer) radius of 15 ′′ (30 ′′ ). The background subtracted light curve was barycenter corrected using the CIAO tool axbary. It should be remarked that pile-up is a phenomenon inherent to CCD detectors, especially for instruments on board Chandra. It is a major concern when measuring fluxes or spectra, especially when the source is bright, but it has no effect on the determination of the mid-eclipse time: pile-up can make the eclipse shallower but cannot change the mid-eclipse time.
We have also analyzed data from the Swift observatory (Gehrels et al. 2004). The scientific payload consists of a wide field instrument, the gamma-ray Burst Alert Telescope (BAT; Barthelmy et al. 2005), and two co-aligned narrow field instruments: the X-ray Telescope (XRT; Burrows et al. 2005), operating in the 0.2−10 keV energy band, and the Ultraviolet/Optical Telescope (UVOT; Roming et al. 2005). For the present work, the XRT data were processed with the XRTDAS software data pipeline package (XRT-PIPELINE v.0.12.0). Calibrated and cleaned level 2 files were produced with the xrtpipeline task. We have used an energy range of 0.2−10 keV, and all data were taken in the Windowed Timing mode, with a total exposure time of 17.3 ks. X-ray events from within a rectangular region of width 6 pixels and height 40 pixels were extracted for the timing analysis. Background data were extracted from a neighbouring source-free region of similar dimensions. We applied the earth2sun correction to the background subtracted light curve, which was produced with a timing resolution of 1 s.
Eclipse-timing analysis
The light curves obtained from the Chandra, XMM-Newton, Swift and RXTE-PCA observations were folded with the known orbital period of 5.5706 hr (Parmar et al. 2000). But since the RXTE-ASM dataset represents an average over years of data, we folded the RXTE-ASM light curves with the best measurement of the orbital period derived from the ASM data itself. We obtained an orbital period of 20054.27 s and 20054.24 s from the RXTE-ASM data spanning MJD 50088−52432 and MJD 52432−54756, respectively. The RXTE-ASM light curve was therefore folded in two segments (MJD 50088−52432 and MJD 52432−54756), while a single folded profile was obtained from each of the other observations.
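The folding step can be sketched generically as follows: time stamps are reduced modulo the trial orbital period and the count rates are averaged in phase bins. This is an illustration with synthetic data (arbitrary bin count and eclipse shape), not the actual pipeline used for the analysis.

```python
# Generic epoch-folding sketch: bin a light curve in orbital phase.
import numpy as np

def fold_light_curve(t_sec, rate, period_sec, t0_sec=0.0, nbins=32):
    phase = ((t_sec - t0_sec) / period_sec) % 1.0
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, edges) - 1
    profile = np.array([rate[idx == b].mean() for b in range(nbins)])
    return 0.5 * (edges[:-1] + edges[1:]), profile

# Synthetic demo: sinusoidal modulation plus a partial eclipse near phase 0.5.
t = np.arange(0.0, 5 * 20054.27, 16.0)          # ~5 orbits sampled every 16 s
ph = (t / 20054.27) % 1.0
rate = 10.0 + 2.0 * np.sin(2 * np.pi * ph) - 4.0 * np.exp(-((ph - 0.5) / 0.02) ** 2)
phase_centers, profile = fold_light_curve(t, rate, period_sec=20054.27)
```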
The folded light curves of the data obtained from the RXTE-ASM, Chandra, XMM-Newton and Swift missions are shown in Figure 2. The RXTE-PCA light curves are shown in Figure 3. It should be noticed that the eclipse morphology is changing: the eclipses vary in depth and shape. The variable eclipse depth shows that the projected geometry of the accretion disk and corona is changing. It is also possible that a part of the change in the eclipse morphology in Figure 2 is due to the slightly different energy bands used and to differences in instrument efficiency over those energy bands.
The light curves show clear signs of orbital modulation (i.e., a partial eclipse and a sinusoidal modulation) with an orbital period of 5.57 hr. A model consisting of a Gaussian and a constant was fit to the eclipse interval (0.45−0.55 orbital phase) in each folded light curve (as in Parmar et al. 2000). The reduced χ 2 of the fits was in the range 0.5−4 for 13 d.o.f. Figure 4 shows the eclipse interval (0.45−0.55 orbital phase) for one of the folded RXTE-PCA light curves (ObsId 70037-01-03-00). The solid line in the top panel of Figure 4 shows the best fit model and the bottom panel shows the residuals of the fit. The arrival time of the eclipse which occurred closest to the mid time of the observation was taken for further analysis. The new mid eclipse time measurements, along with 1σ uncertainties, are given in Table 2. The orbit number (cycle) is with respect to the first reported mid-eclipse time (Hellier & Smale 1994). The newly determined arrival times were combined with the earlier known values and fitted with a quadratic model. We obtained a χ 2 of 432 for 35 d.o.f. However, the uncertainties in the newly determined mid eclipse time measurements are too small to give a reliable estimate. Therefore, we rescaled the errors on the individual measurements (by multiplying them by the square root of the above mentioned reduced χ 2 ), in order to compare the results with the earlier known estimates of the orbital parameters. The best fit gives Ṗ orb = 1.3(3) × 10 −10 d d −1 , which implies an orbital evolution timescale of 4.9(1.1) × 10 6 yr. The resulting best fit orbital parameters are given in Table 3. We subtracted the best fit linear component from the ephemeris history and the residuals are plotted in Figure 5. The derived values of the rate of change in orbital period and the timescale of orbital evolution are consistent, within measurement errors, with those obtained from timing of the eclipses in the optical and UV data (Bayless et al. 2010).
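A minimal sketch of the quadratic-ephemeris fit described above, including the derivation of the period derivative from the quadratic coefficient: mid-eclipse times are modelled as JD = T 0 + N × P orb + c × N 2 with c = (1/2) P orb Ṗ orb . The cycle numbers, times and uncertainties below are placeholder arrays standing in for the measured values in Table 2, and the error rescaling by the square root of the reduced χ 2 can be applied to sigma_d before the fit.

```python
# Quadratic ephemeris fit: T_N = T0 + N*Porb + c*N^2, with c = 0.5*Porb*Pdot.
import numpy as np

def fit_quadratic_ephemeris(N, T_mjd, sigma_d):
    # Weighted least squares on the design matrix [1, N, N^2].
    A = np.vstack([np.ones_like(N), N, N**2]).T / sigma_d[:, None]
    b = T_mjd / sigma_d
    (T0, Porb, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    resid = T_mjd - (T0 + Porb * N + c * N**2)
    chi2_red = float(np.sum((resid / sigma_d) ** 2)) / (len(N) - 3)
    Pdot = 2.0 * c / Porb                       # dimensionless (d per d)
    timescale_yr = (Porb / abs(Pdot)) / 365.25  # Porb/Pdot, converted to years
    return T0, Porb, Pdot, chi2_red, timescale_yr

# Placeholder arrays standing in for the measured cycles and mid-eclipse times.
N = np.arange(0, 40000, 1000, dtype=float)
T = 45614.0 + N * 0.23210887 + 0.5 * 0.23210887 * 1.3e-10 * N**2
sig = np.full_like(N, 2e-4)                     # ~17 s uncertainties, assumed
print(fit_quadratic_ephemeris(N, T, sig))
```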
Pulse-timing analysis
We have performed a pulsation analysis to determine the spin period of the neutron star and the pulse period evolution. The light curves were corrected for the orbital motion using the long term orbital solution obtained from the eclipse timing technique described above. Figure 6 shows the spin period history of the neutron star. The pulse period was found to be continuously decreasing with time at an average rate (Ṗ spin ) of -2.481(4) × 10 −12 s s −1 , indicating a spin-up timescale of 7578(13) yr.
(Table 3 note: orbital evolution timescale P orb /Ṗ orb (10 6 yr): 3.1 ± 0.7, 3.6 and 4.9(1.1); †Quadratic ephemeris: JD = T 0 + N×P orb + N 2 × c.)
(Figure 5 caption: The curvature of the locus of the residuals is a measure of the orbital period derivative of the binary system. The square boxes are the values known from previous measurements. The newly determined mid-eclipse times are shown with "•". The two horizontal bars indicate the time span of the RXTE-ASM data.)
Table 4. Mid-eclipse times and the corresponding pulse period.
Mid-eclipse time (MJD)   Spin period (s)
51975.9968(1)            0.59290132(11)
52094.83802(7)           0.59286109(8)
52095.76593(6)           0.59286421(12)
52432.78961(5)           0.5927922(13)
52489.42192(5)           0.5927790(06)
52491.74614(7)           0.5927795(11)
52503.34773(5)           0.5927737(10)
52519.36724(6)           0.5927721(08)
52882.38623(6)           0.5926793(15)
52883.3151(1)            0.5926852(21)
A considerable variation was seen in the pulse profile. The pulse profile was non-sinusoidal at T0 = MJD 51975; it had a relatively broad maximum at T0 = MJD 52432, a sharper shape at around T0 = MJD 52491, and a sinusoidal variation around T0 = MJD 52519. However, it is difficult to quantify the observed variation in the pulse shape at this point. We also tried to obtain an independent measurement of the orbital evolution using the technique of pulse folding and χ 2 maximization (Jain et al. 2007). But it should be remarked that in this source the light travel time across the orbit (a x sin i) is only a factor of two larger than the spin period. Moreover, in the pulse timing technique the pulse profile is assumed to be invariant, and even a small orbital phase dependence of the pulse shape, for example caused by varying absorption, can lead to systematic errors in the measurement of the orbital parameters. Therefore, though this analysis has proved to be successful for other LMXBs, we could not obtain a very accurate measurement of the orbital parameters with this technique.
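The average Ṗ spin quoted above corresponds to a straight-line fit through the spin-period history, as in Figure 6. The sketch below performs such a weighted linear fit using the Table 4 values (uncertainties expanded by hand from the quoted last digits); small differences from the published value are expected, since Figure 6 also includes the two earlier Jonker & van der Klis (2001) points not listed here.

```python
# Weighted linear fit of the Table 4 spin periods: P(t) = P0 + Pdot*(t - t0).
import numpy as np

t_mjd = np.array([51975.9968, 52094.83802, 52095.76593, 52432.78961, 52489.42192,
                  52491.74614, 52503.34773, 52519.36724, 52882.38623, 52883.3151])
p_sec = np.array([0.59290132, 0.59286109, 0.59286421, 0.5927922, 0.5927790,
                  0.5927795, 0.5927737, 0.5927721, 0.5926793, 0.5926852])
sig_s = np.array([11e-8, 8e-8, 12e-8, 13e-7, 6e-7, 11e-7, 10e-7, 8e-7, 15e-7, 21e-7])

dt_sec = (t_mjd - t_mjd[0]) * 86400.0
pdot, p0 = np.polyfit(dt_sec, p_sec, 1, w=1.0 / sig_s)  # numpy expects w = 1/sigma
print(f"Pdot ~ {pdot:.3e} s/s; spin-up timescale P/|Pdot| ~ "
      f"{p0 / abs(pdot) / 3.156e7:.0f} yr")
```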
DISCUSSIONS
We have performed a detailed timing analysis of the low mass X-ray binary pulsar 4U 1822−37 using the archival X-ray data from several X-ray observatories. Observations used in the present work covered a time span of 13 years and more than 14,000 binary orbits. Using the 16 new accurately measured mid-eclipse times, we have obtained an orbital period of 0.23210887(15) d with a significant orbital period derivative of 1.3(3) × 10 −10 d d −1 (at T0 = MJD 45614). It indicates an orbital evolution timescale P orb /Ṗ orb = 4.9(1.1) Myr. The orbital and spin parameters were also measured by correcting the light curves for the binary motion of the pulsar and then optimizing the pulse detection. However, the results from the pulsation analysis did not improve the orbital evolution measurements.
(Figure 6 caption: The variation of the spin period (P spin ) of the neutron star with time. The first two points indicate the spin period determined by Jonker & van der Klis (2001). The straight line represents the best fitted linear curve, with a χ 2 of 13.7 for 12 d.o.f.)
Orbital Evolution
X-ray binaries can evolve by various mechanisms, such as mass transfer within the system due to Roche lobe overflow, tidal interaction between the components of the binary system, gravitational wave radiation, magnetic braking, and X-ray irradiated wind outflow (as mentioned in Section 1). Orbital evolution has been measured in some other low magnetic field LMXBs, such as 4U 1820−30 (van der Klis et al. 1993), EXO 0748−676 (Wolff et al. 2008) and SAX J1808.4−3658 (Jain et al. 2007), and several models have been proposed to explain the orbital period evolution in them. In the case of 4U 1822-37, the reason behind the high rate of orbital evolution is not known. The measured rate of change of the orbital period in 4U 1822−37 is much greater than that expected due to gravitational wave radiation (Verbunt 1993). We also note that the timescale of orbital evolution due to tidal interaction between the components of the binary system ranges from a few Myr in HMXBs to about 10 10 years in LMXBs (Applegate & Shaham 1994). Conservative mass transfer, mass loss from the binary system due to an X-ray irradiated wind outflow and magnetic cycling in one of the binary components are the other possibilities. We first examine the possibility and consequences of the case of a conservative mass transfer in this system, followed by the cases of an X-ray irradiated wind outflow and magnetic cycling in the companion star.
Conservative mass transfer
In the case of conservative mass transfer, the mass transfer rate from the companion star equals the accretion rate onto the neutron star. We have estimated the mass accretion rate using the best known estimates of the masses of the neutron star (M ns ) and the companion star (M c ) (Muñoz-Darias et al. 2005). In the case of compact binaries, the orbital angular momentum (J) is given by (King 1988):
J = M c M ns (Ga/M) 1/2
where M is the total mass of the binary system (M c + M ns ), G is the gravitational constant and a is the binary orbital separation.
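Differentiating this expression at constant J and M, together with Kepler's third law, gives the standard conservative-transfer relation Ṗ orb /P orb = 3Ṁ(M ns − M c )/(M ns M c ), which can be inverted for the mass transfer rate. The sketch below performs this inversion; the two component-mass pairs are assumptions spanning roughly the range implied by Muñoz-Darias et al. (2005), so the output is indicative only.

```python
# Invert Pdot/P = 3*Mdot*(Mns - Mc)/(Mns*Mc) for the conservative transfer rate.
SEC_PER_YR = 3.156e7
P_ORB_S = 0.23210887 * 86400.0   # orbital period, s
PDOT = 1.3e-10                   # d/d (dimensionless)

def mdot_conservative(m_ns, m_c, pdot_over_p_per_s):
    """Mass transfer rate in Msun/s for conservative transfer (constant J, M)."""
    return pdot_over_p_per_s * m_ns * m_c / (3.0 * (m_ns - m_c))

pdot_over_p = PDOT / P_ORB_S
for m_ns, m_c in [(1.61, 0.44), (2.32, 0.56)]:   # assumed mass pairs (Msun)
    mdot_yr = mdot_conservative(m_ns, m_c, pdot_over_p) * SEC_PER_YR
    print(f"Mns = {m_ns}, Mc = {m_c}: Mdot ~ {mdot_yr:.1e} Msun/yr")
```

With these assumed masses the inversion returns roughly (4−5) × 10 −8 M⊙ yr −1 , consistent with the range quoted in the abstract.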
For a neutron star with a magnetic moment µ (∼ Br 3 ), the spin frequency derivative ν̇ is related to the mass accretion rate Ṁ as (Frank et al. 2002):
ν̇ = Ṁ (GM ns r m ) 1/2 / (2πI)
where r m is the magnetospheric radius and I is the moment of inertia of the neutron star. Using data obtained from the RXTE-PCA, we have detected 0.59290132(11) s (at T0 = MJD 51975) X-ray pulsations with an average spin-up rate of -2.481 × 10 −12 s s −1 . These values imply a magnetic field strength of (1−3) × 10 8 G.
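For illustration, this torque relation can be inverted numerically: the measured ν̇ and an assumed Ṁ fix the magnetospheric radius via (GM ns r m ) 1/2 = 2πIν̇/Ṁ, and r m then fixes µ through the Alfvén-radius expression r m ≈ (µ 4 /(2GM ns Ṁ 2 )) 1/7 . The sketch below assumes canonical neutron star values (I = 10 45 g cm 2 , R = 10 6 cm, M ns = 1.6 M⊙), r m equal to the Alfvén radius, and a mid-range Ṁ from the conservative case; these choices are illustrative, not the authors' exact inputs.

```python
# Infer mu and B from the accretion-torque relation (CGS units, canonical NS).
import math

G, MSUN, YR = 6.674e-8, 1.989e33, 3.156e7
M = 1.6 * MSUN                 # assumed NS mass, g
I = 1.0e45                     # moment of inertia, g cm^2
R = 1.0e6                      # NS radius, cm
P, PDOT = 0.5929, -2.481e-12   # spin period (s) and derivative (s/s)
MDOT = 4.5e-8 * MSUN / YR      # g/s, mid-range conservative transfer rate

nu_dot = -PDOT / P**2                             # spin-up rate, Hz/s
l_spec = 2.0 * math.pi * I * nu_dot / MDOT        # = sqrt(G M r_m), cm^2/s
r_m = l_spec**2 / (G * M)                         # magnetospheric radius, cm
mu = r_m**1.75 * (2.0 * G * M * MDOT**2) ** 0.25  # from r_m = (mu^4/(2 G M Mdot^2))^(1/7)
B = mu / R**3                                     # using mu ~ B R^3
print(f"r_m ~ {r_m:.1e} cm, mu ~ {mu:.1e} G cm^3, B ~ {B:.1e} G")
```

With these inputs the result is r m of order 10 6 cm and B of order 10 8 G, matching the values discussed in the text.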
The following caveats apply to the case of conservative mass transfer: • It requires a large mass transfer rate, which corresponds to a luminosity near the Eddington rate. Considering the uncertainties in some of the parameters used above, a near-Eddington accretion luminosity is not inconceivable, especially because this source has a corona surrounding it (White & Holt 1982; Heinz & Nowak 2001; Bayless et al. 2010). A comparison of the Lx/Lopt ratio of this system with other LMXBs also suggests that the true X-ray luminosity of the central X-ray source is probably significantly higher; see Bayless et al. (2010) for more discussion on this. For example, in SS433 the mass transfer rate is believed to be much higher than the Eddington rate, which results in accumulation of material around the compact object that blocks its X-ray emission (Begelman, King, & Pringle 2006; Clark, Barnes, & Charles 2007).
• It requires the neutron star to have a low magnetic field strength of (1−3) × 10 8 G while from the high energy cutoff in the X-ray spectrum (Parmar et al. 2000), Jonker & van der Klis (2001) determined a magnetic field strength of ∼(1−5)×10 12 G. It should be noted that the coronal X-rays dominate the X-ray spectrum of this source and the pulsed X-rays contribute to only a few percent of the total X-ray emission. Thus the X-ray spectral shape is unlikely to be a reliable indicator of the magnetic field strength of the compact star.
• In accreting neutron stars, we expect pulsations only if the compact object possesses a magnetic field strong enough to disrupt the inner regions of the accretion disc and channel the accretion flow onto the polar caps. In other words, accretion onto the neutron star is controlled by the magnetic field if the magnetospheric radius is larger than the stellar radius. In the case of 4U 1822-37, the estimated values of the mass transfer rate and the magnetic moment indicate a magnetospheric radius of 2 × 10 6 cm, which is of the same order as the neutron star radius. Therefore, it is not certain whether conservative mass transfer is indeed responsible for the observed changes in the orbital period. However, several low mass X-ray binaries, such as SAX J1808.4-3658, do show pulsations at similar luminosity levels even though they have a low surface magnetic field of (1−5) × 10 8 G (Di Salvo & Burderi 2003).
• For the accretion luminosity, spin period and magnetic field strength mentioned above for a conservative mass transfer case, the neutron star is far from spin equilibrium. Detection of such a system is a novelty and unlikely, unless the current X-ray state is a long lived transient phase.
Here one may note that in recent years many LMXBs have been found that spend only a small fraction of the time in transient high states, for example, the millisecond accreting pulsars.
X-ray irradiated wind outflow
Mass loss from the binary system can occur in the form of an X-ray irradiated wind outflow. In the case of the LMXB 4U 1822-37, there is no signature of a wind outflow in the form of absorption lines in the high resolution X-ray spectrum (Cottam et al. 2001). Recently, Bayless et al. (2010) reported a broad C IV emission line in the UV spectrum of 4U 1822-37, indicating a strong disk wind outflow. They derived a wind outflow velocity of 4000 km s −1 , based on a measured width of 45 Å of the C IV emission line. However, these observations are insufficient to estimate the total mass outflow rate.
Magnetic cycling in the Companion star
Secular changes, such as magnetic cycles in the secondary star (Hellier et al. 1990), are also a possible mechanism responsible for the observed orbital evolution in 4U 1822-37. However, the spectral type and the evolutionary history of the companion star are unknown (Muñoz-Darias et al. 2005).
CONCLUSIONS
We have presented new and more accurate measurements of the orbital evolution of the LMXB 4U 1822-37 and a longer time base for the measurement of its spin evolution. Considering the possibility of a large intrinsic Lx and some evidence of mass outflow, we conclude that the orbital evolution in this system is complex, including the effects of a large mass transfer rate and an X-ray irradiated wind outflow. In this scenario, the magnetic field strength of the neutron star is probably intermediate between those typical of low mass X-ray binaries and of high mass X-ray binaries.
ACKNOWLEDGEMENT
We are grateful to the anonymous referee for some useful comments in improving the manuscript. This research has made use of the data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. In particular, we thank the RXTE -ASM teams at MIT and at the RXTE -SOF and GOF at NASA's GSFC for provision of the ASM data. | 2010-07-10T16:21:12.000Z | 2010-07-10T00:00:00.000 | {
"year": 2010,
"sha1": "0271af9dd93c706cd588dabf2c32eac744f6b715",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/409/2/755/18582456/mnras0409-0755.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "0271af9dd93c706cd588dabf2c32eac744f6b715",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
269902089 | pes2o/s2orc | v3-fos-license | Studies on Biochemical Contents of Stilesia Sp. (Cestoda: Anaplocephalidea) in Capra Hircus (L.) from Nashik Region
This study investigates the biochemical content of the parasite Stilesia sp. in the goat species Capra hircus from the Nashik region. The research reveals that the parasite has a higher glycogen percentage compared to protein and lipid. The study provides insights into the impact of the parasite on the host's nutritive value, contributing to our understanding of host-parasite interactions.
Introduction
Helminth parasites are a major concern in relation to animal health. Helminth parasites affect the nutrient status of the host by causing increased nutrient loss, decreased food intake, and reduced nutrient absorption (Edirishinghe & Tomkin, 1995). The metabolic processes of the host depend on the food, feeding habits, and the rich nourishment available in the gut of the host. These parasitic worms use this nourishment for their growth and development. The worms obtain nutrition from the host's gut through a highly specialized, metabolically active body surface (Smyth and McManus, 2007). Gastrointestinal cestodes are among the most pathogenic parasites of Capra hircus in tropical and subtropical areas. Parasitism, especially by helminth parasites, impairs health by causing inappetence, diarrhea, anemia, and, in severe cases, death (Kumar et al., 2015a). Helminth infections of the gastrointestinal tract of small ruminants not only cause direct adverse effects on health, leading to morbidity and mortality, but also have indirect economic effects involving the cost of treatment and control of parasites (Nwosu et al., 2007). Previous investigations made in different regions along the length of the strobila of tapeworms reveal regional differences in morphological and anatomical features (Andersen, 1975; Thompson et al., 1980), chemical composition (Roberts, 1961; Mettrick and Cannon, 1970; Rani et al., 1987a, b), nucleic acid levels (Bolla and Roberts, 1971; Mettrick and Cannon, 1970), and gene expression (Bo et al., 2012). The literature reveals that parasites are able to adapt themselves to the parasitic mode of life; protein contents usually constituting between 20 and 40% of the dry weight have been reported (Barrett, 1981). A higher content of lipid is found in older proglottids (von Brand, 1952). The present investigation deals with the biochemical study of protein, glycogen, and lipid content in the intestine and in cestode parasites such as Stilesia of Capra hircus from the Nashik region.
Materials and Methods
Goat intestines were brought to the laboratory and dissected carefully. The host intestine and cestodes were collected for powder preparation. The cestodes were placed on blotting paper to remove excess water, and the material was kept in an oven for drying at 58° to 60 °C for twenty-four hours. The powder for biochemical estimation was prepared with the help of a mortar and pestle.
Cestode parasites from the infected intestine were collected and observed under the microscope. Identical worms were sorted out; a few of these were fixed in 4% formalin for taxonomical study. These were later stained with Harris haematoxylin and identified to the genus Stilesia. The worms, infected host tissues, and normal intestinal tissue were blot-dried using blotting paper. After determining the weight, samples were placed in a hot air oven at 80 °C for 24 hours. Then the dried materials were ground to a fine powder using a mortar and pestle. The dried powder of each sample was used for the estimation of protein, carbohydrate, and lipid. The protein content of the cestode parasites was estimated by Lowry's method, and carbohydrate was estimated using the Anthrone reagent (Roe, 1955).
Result and Discussion
Biochemical estimations in the cestode parasite Stilesia, the infected host intestine, and the normal intestine are shown in Table 1. The protein content was highest in the normal intestine (25 mg/gm) compared to the infected intestine (18.20 mg/gm); in the Stilesia worm it was 16.30 mg/gm. The lipid content was highest in the normal intestine (16.00 mg/gm) compared to the infected intestine (12.60 mg/gm); in the Stilesia worm it was 14.10 mg/gm. The glycogen content was highest in the normal intestine (25.60 mg/100 gm) compared to the infected intestine (21.50 mg/100 gm); in the Stilesia worm it was 20.60 mg/100 gm.
From this biochemical study, we observe that the percentage of protein is high in Stilesia parasites as compared to lipid and glycogen. The protein content in the worm was 16.30 mg/gm of tissue, the lipid content was 14.10 mg/gm, while the glycogen content was 20.60 mg/100 gm (note that glycogen is expressed per 100 gm and is therefore not directly comparable with the protein and lipid values). From the above biochemical examinations, we conclude that the protein percentage is higher in the parasite as compared to lipid and glycogen. The same finding has been reported by Shinde (2002), Humbe (2011), and Sonune (2012) in Ovis bharal and Capra hircus, respectively. These worms absorb most of the nutrients from the host, fulfilling their regular growth needs, and are responsible for hindering the proper development of intestinal and body tissue (Jadhav et al., 2008).
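For readers who want the relative depletions at a glance, the short sketch below tabulates the reported values and computes the percent reduction in the infected intestine relative to the normal intestine; it simply restates the numbers above (glycogen is kept in its own mg/100 gm units).

```python
# Reported biochemical contents: (normal intestine, infected intestine, Stilesia worm)
values = {
    "protein (mg/gm)":     (25.00, 18.20, 16.30),
    "lipid (mg/gm)":       (16.00, 12.60, 14.10),
    "glycogen (mg/100gm)": (25.60, 21.50, 20.60),
}

for component, (normal, infected, worm) in values.items():
    reduction = 100.0 * (normal - infected) / normal  # % drop in infected host tissue
    print(f"{component}: normal={normal}, infected={infected}, "
          f"worm={worm}, reduction={reduction:.1f}%")
```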
Conclusion
The study reveals that the parasite Stilesia sp. in Capra hircus has a higher glycogen percentage compared to protein and lipid. This finding enhances our understanding of the biochemical interactions between parasites and their hosts, which could have significant implications for the treatment and management of parasitic infections. Further research is needed to explore these interactions in more detail and to investigate their impact on the health and well-being of the host species. | 2024-05-20T15:13:28.917Z | 2023-07-05T00:00:00.000 | {
"year": 2023,
"sha1": "3d22511b44d16f48a69b2734d50df50e735bc856",
"oa_license": null,
"oa_url": "https://doi.org/10.21275/sr23626111300",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0dc2c41ae6bf06ad4031b598e6029c20c3df24ed",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology"
],
"extfieldsofstudy": []
} |
270381851 | pes2o/s2orc | v3-fos-license | “Evaluating the efficiency of public expenditure in municipal waste collection: A comparative study of Portuguese municipalities”
Effective waste management is fundamental to sustainable development and the well-being of societies. This study focuses on the financial efficiency of urban waste collection in Portuguese municipalities, with the aim of analyzing the effects of the allocation of public resources in the waste management sector. The main objective is to analyze the relationship between public spending and waste collection over a five-year period. Through the application of the classic data envelopment analysis (DEA) model, the study seeks to observe the existence of benchmarking patterns, identify possible inefficiencies, and determine opportunities for improvement in urban waste management and collection practices. The results suggest substantial variations in waste collection efficiency between municipalities and a positive correlation between public spending and the volume of waste collected. The results emphasize the need for a strategic allocation of financial resources in order to promote sustainable waste management practices. The paper highlights the importance of municipalities reassessing their strategies for allocating financial resources to ensure a better balance between funding and efficiency in the use of resources. The conclusions offer valuable practical implications for defining strategies and managing municipal waste collection services in Portugal and other countries with similar contexts.
INTRODUCTION
The efficiency of municipal waste collection is an essential area of research due to its environmental, economic, and social implications. Waste management is one of the crucial aspects of society, since the growing amount of waste produced makes the adoption of sustainable management practices increasingly essential for reducing environmental effects.
Governments need to optimize the allocation of financial resources to the growing collection of urban waste. Defining a balance between environmental needs and budgetary constraints requires academia to have a greater understanding of the dynamics of how public resources are allocated. It is crucial to address the relationship between public spending and waste collection results. By elucidating this relationship, the results can contribute to the perception of waste collection efficiency levels and assist in the public decision-making process through the implementation of public policies that promote more efficient collection processes.
LITERATURE REVIEW AND HYPOTHESES
Efficiency in the collection of municipal urban waste is currently one of the most relevant areas of research for the development of the theoretical field of territorial management and public administration (Rodrigues, 2016). Observing efficiency involves assessing the processes, methods, and practices used by municipalities to collect municipal waste (Phillips & Thorne, 2013). This helps identify benchmarking practices and opportunities to optimize the financial resources allocated to public services (Simões & Marques, 2009).
For financial theory, one of the main metrics for analyzing the efficiency of public spending is the relationship between the financial resources allocated and the results obtained by public services (Aquino, 2011). Thus, analyzing the efficiency of public services, regardless of their substance and purpose, includes the costs associated with the remuneration of employees, the equipment assigned to the service, as well as the time and costs associated with carrying out operations (Rocha, 2020). In a broader analysis, it is also possible to consider the relationship between the quality of the public service provided and the level of public spending carried out for the purposes of pursuing the financial public interest (Inês, 2014).
Fonseca (2016) emphasizes the importance of comparing private and public management models and operational structuring processes in the municipal waste collection sector, particularly in terms of efficiency. The techniques adopted to measure efficiency levels alternate between parametric (stochastic frontier model) and non-parametric (DEA model) systems. However, according to Zhu (2001), due to the sensitivity of the results, the choice of technique can significantly affect the reliability of the efficiency assessment. Accordingly, the efficiency of municipalities in municipal waste collection should also be analyzed from the perspective of the super-efficiency model approach.
This super-efficiency approach allows the most efficient municipalities to be ranked among those already identified as efficient (Bruno & Erbetta, 2013).
Rogge and Jaeger (2012) and Afonso and Fernandes (2005) have likewise applied efficiency analysis to municipal waste collection and local public spending. This study analyzes the efficiency of urban waste collection in Portuguese municipalities between 2018 and 2022, looking at the level of correlation between public spending on collection and the amount of waste collected. The paper aims to assess the efficiency of Portuguese municipal waste management in order to estimate the effectiveness of strategies for allocating public financial resources to the urban services provided.
To this end, the following hypotheses were established:
H1: Municipal spending on municipal waste collection services is not related to the tons collected (eλ = 0).
H2: Municipal expenses associated with municipal waste collection services have some level of relationship with the tons collected (0 < eλ < 1).
H3: Municipal spending on municipal waste collection services has a perfect relationship with the tons collected (eλ = 1).
METHODS
This analysis is based on a case study. Statistical data from 308 Portuguese municipalities were analyzed, relating to public spending (on waste collection) and the quantities of waste collected (tons declared by municipal services) from 2018 to 2022. The expenditure and tonnage databases were obtained from the official websites of Banco de Portugal and PORDATA.
The statistical information from the municipalities was distributed in a matrix and arranged into decision-making units (DMUs), inputs (public spending), and outputs (undifferentiated, selective, and total collection). For each type of collection, the weight of the inputs (actual public expenditure) and their relationship with the outputs (tons of waste) were assessed.
The paper applies the non-parametric DEA methodology, in which a direct relationship between inputs and outputs is not defined.The efficiency of public expenditure (inputs) in relation to the total amount of municipal waste collected is assessed using an index (Paço & Pérez, 2013), in which the municipality is more efficient when it is able to collect the greatest number of tons with the lowest level of financial resources allocated to carry out the public service (Carvalho & Rizzo, 1994).
Waste collection efficiency varies between 0 (inefficient) and 1 (efficient) and is measured by the distance between the inefficient DMU and the most efficient DMU (Jahanshahloo & Afzalinejad, 2006). The methodology selected for analyzing the results was the input-oriented model, where efficiency is described by the following expression (Vincová, 2005):

$$E = \frac{\sum_{r=1}^{s} U_r Y_r}{\sum_{i=1}^{m} V_i X_i},$$

where $U_r$ = weight of tons of municipal waste collected; $Y_r$ = the level of tons of municipal waste; $V_i$ = weight of actual public expenditure associated with municipal urban waste collection; $X_i$ = the level of effective public expenditure; $s$ = number of sectors related to the tons of municipal waste collected; and $m$ = number of sectors related to actual public expenditure.
The procedure adopted assesses the efficiency of the allocation of public financial resources (actual public expenditure) in each of the Portuguese municipalities. The numerator ($\sum_{r=1}^{s} U_r Y_r$) measures the total amount of tons of municipal waste (outputs) generated by the municipalities, weighted by the weights associated with each sector of municipal waste collected. The denominator ($\sum_{i=1}^{m} V_i X_i$) evaluates the total amount of actual public spending by municipalities, weighted by the weights associated with each sector of public spending by municipalities.
The efficiency of the allocation of public resources in municipal waste collection is obtained by dividing the weighted sum of the tons of municipal waste collected by the weighted sum of actual public spending on municipal waste collection. The results obtained provide a practical perspective on the level of efficiency in the allocation of public spending on the collection of waste produced in each of the political and geographical areas, bearing in mind the respective limitations and theoretical reservations of the DEA model (Lins et al., 2007).
The efficiency levels of each municipality were measured using Excel software. The DEA model was selected as a tool for measuring municipal urban waste collection efficiency due to its high degree of flexibility in defining the input-output matrix. In addition, it allows the model's orientation to be selected, since the preference of the analysis is to maintain levels of effective public spending (inputs) while increasing the efficiency of municipal urban waste collection (outputs).
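To make the input-oriented computation concrete, the following sketch solves the standard CCR envelopment linear program with SciPy. The four-municipality dataset is purely hypothetical, and the study itself performed its calculations in Excel, so this is an illustrative re-implementation rather than the authors' procedure.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0):
    """Input-oriented CCR DEA efficiency of decision-making unit j0.
    X: (m, n) array of inputs; Y: (s, n) array of outputs for n DMUs.
    Returns theta in (0, 1]; theta = 1 means the DMU lies on the frontier."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                      # minimise the input contraction factor theta
    A_ub, b_ub = [], []
    for i in range(m):              # sum_j lambda_j * x_ij <= theta * x_i,j0
        A_ub.append(np.concatenate(([-X[i, j0]], X[i, :])))
        b_ub.append(0.0)
    for r in range(s):              # sum_j lambda_j * y_rj >= y_r,j0
        A_ub.append(np.concatenate(([0.0], -Y[r, :])))
        b_ub.append(-Y[r, j0])
    bounds = [(0, None)] * (n + 1)  # theta >= 0, lambda_j >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.fun

# Hypothetical data: 4 municipalities, 1 input (spending), 2 outputs (tons)
X = np.array([[100.0, 150.0, 120.0, 90.0]])            # public spending
Y = np.array([[500.0, 600.0, 550.0, 450.0],            # undifferentiated tons
              [50.0, 40.0, 70.0, 30.0]])               # selective tons
for j in range(4):
    print(f"DMU {j}: efficiency = {dea_efficiency(X, Y, j):.3f}")
```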
RESULTS
The results provide a new perspective on the levels of efficiency of the ratio between actual public expenditure and the amount of municipal waste collected each year in each of the 308 municipalities. Table 1 and Figure 1 show the municipalities that are the most and least efficient in allocating public spending on municipal waste collection, by district and autonomous region.
In relation to the autonomous regions of Madeira and the Açores, the most efficient municipalities in allocating public spending to waste collection in 2018 were Câmara de Lobos and Vila da Praia da Vitória, respectively.
With regard to the autonomous regions, Table 1 and Figure 1 show that the least efficient municipalities were Porto Moniz, with 21.2599% (Madeira), and Vila do Porto, with 9.7021% (Açores).
In the autonomous regions, the results suggest that the most efficient cities in the collection of selective and undifferentiated waste were Câmara de Lobos (Madeira) and Calheta (Açores).
As far as the autonomous regions are concerned, the results show that the least efficient cities in the allocation of public expenditure for the management of municipal waste collection were Porto Moniz, with 21.6058% (Madeira), and Vila do Porto, with 11.7833% (Açores).
As for the autonomous regions, the most efficient municipalities in allocating public funds to the waste collection service were Câmara de Lobos (Madeira) and Calheta (Açores).
In the autonomous regions, the estimated data show that the least efficient municipalities when it comes to allocating public resources to municipal waste collection were Porto Moniz, with 18.8914% (Madeira), and Vila do Porto, with 10.5876% (Açores).
In terms of the autonomous regions, the municipalities of Santana (Madeira) and Vila da Praia da Vitória (Açores) had the highest levels of efficiency in the ratio between public spending and municipal waste collection.
With regard to the autonomous regions, the data suggest that the least efficient municipalities in terms of the ratio between public spending and municipal waste collection were São Vicente, with 25.0122% (Madeira), and Lajes do Pico, with 9.5928% (Açores).
As for the autonomous regions in 2022, the results suggest that the most efficient cities in allocating public financial resources to waste collection were Ribeira Brava (Madeira) and Calheta (Açores).
As far as the autonomous regions are concerned, the results show that the least efficient municipalities in terms of the ratio between public spending and tons of waste collected were Porto Moniz, with 12.3370% (Madeira), and Lajes do Pico, with 4.6685% (Açores). Table 1 shows that the district-level results on the efficiency of municipal waste collection in Portugal do not uniformly support any single hypothesis: 1 (eλ = 0), 2 (0 < eλ < 1), or 3 (eλ = 1). The data suggest that there is some relationship between public spending and the quantities of municipal waste collected, i.e., the level of spending has contributed in some way to the efficiency of municipal waste collection (selective and undifferentiated) in Portuguese municipalities.
The results for the average efficiency of public spending on municipal waste collection are irregular (Table 1), meaning that the defined hypotheses are validated in some municipalities and rejected in others.
In 2018, hypothesis 1 was rejected, hypothesis 2 was confirmed in 251 municipalities, and hypothesis 3 was validated in 57 municipalities. In 2019, hypothesis 1 was rejected, hypothesis 2 was confirmed in 244 municipalities, and hypothesis 3 was accepted in 64 municipalities. For 2020, hypothesis 2 was accepted in 251 municipalities and hypothesis 3 in 57 municipalities; hypothesis 1 was rejected.
As for 2021, hypothesis 1 was rejected, hypothesis 2 was confirmed in 250 municipalities, and hypothesis 3 was confirmed in 58 municipalities. Finally, in 2022, hypothesis 1 was rejected, hypothesis 2 was confirmed in 244 municipalities, and hypothesis 3 was validated in 64 municipalities.
DISCUSSION
Analyzing the efficiency of public spending on municipal waste collection services is fundamental for understanding the efficiency levels of municipal collection services and, on the other hand, for defining the best strategies for allocating financial resources to ensure that municipal waste collection services are provided in an appropriate and sustainable manner (Ferreira et al., 2020). The efficiency of public spending plays a decisive role in the effectiveness and sustainability of municipal waste collection services. In Portugal, by prioritizing the adoption of more efficient and transparent financial management (Magalhães et al., 2023), municipalities can and should define operational strategies together with citizens in order to increase the quality of public services (Humphreys, 1998), promote more responsible environmental practices (Keles et al., 2023), and meet the needs and expectations of citizens (Meirinhos et al., 2022).
The results corroborate the need for municipalities to increase the efficiency of public expenditure allocated to waste collection and for public decision-makers to adopt high-performance management models with low levels of expenditure (Bevilacqua et al., 2010). The data are in line with the conclusions of the aforementioned studies.
The results stress the need for Portuguese municipalities to re-evaluate decisions on the allocation of public financial resources in terms of expenditure on municipal urban waste collection services. Due to inefficiencies and the emergence of a private sector, Nepal et al. (2022) state that municipalities have been forced to reform their waste management strategies. On the other hand, the results suggest a correlation between the efficiency of public spending and the quality and effectiveness of waste collection services. In other words, municipalities with higher levels of efficiency in municipal waste collection tend to have better levels of management of available resources (Volsuuri et al., 2023).
The study defends the relevance of measuring the various levels of performance of municipalities in the collection of urban waste, especially in terms of the expenditure allocated to the pursuit of the public interest. In line with the conclusions of Rogge and Jaeger (2012) and Afonso and Fernandes (2005), it is necessary to consider the levels of efficiency of municipalities in allocating public expenditure to waste collection and, consequently, the level of productivity of public resources. On the other hand, Camanho et al. (2024) note advantages in selecting and applying the DEA model to assess the efficiency of public service expenditure, especially in terms of operational activities. However, the results obtained do not support those conclusions, as the efficiency of Portuguese municipalities is clearly higher than that of Flemish municipalities. This is mainly due to the low levels of public revenue allocated to urban waste collection services in Belgium.
As shown in the literature (Hoang et al., 2024), the existence of high levels of efficiency in the collection of urban waste, in fact, contributes to reducing the costs related to waste treatment and relieving pressure on public resources. As for the hypotheses, the data mostly point to the validation of hypothesis 2, since the majority of municipalities do not have a technical efficiency ratio equal to 1, but rather a ratio between public spending and tons of municipal waste collected that is greater than 0 and less than 1.
In addition to analyzing the efficiency levels of municipal services, a pertinent question for future research is to study the effects of adopting artificial intelligence in defining possible routes, managing the various types of municipal waste, and promoting the circular economy.
CONCLUSION
The study analyzed the financial efficiency of municipal waste collection in Portugal over five years using data envelopment analysis. The aim was to provide evidence-based public policy guidance on municipal waste collection to improve the sustainability and effectiveness of waste management.
The results suggest that the level of expenditure made has contributed in some way to the efficiency of waste collection. There is a notable disparity in levels of technical efficiency between the 308 Portuguese municipalities. The data confirm a positive correlation between realized expenditure and the quantities of municipal waste collected.
The paper shows that municipalities with higher levels of public funding tend to exhibit higher levels of efficiency in waste collection.However, efficiency is not only determined by the amount of expenditure made; it is also affected by political rationality, which tends to override economic logic.
The results suggest that Portuguese municipalities need to re-evaluate and adjust their strategies for allocating financial resources to waste collection in order to ensure a better balance between the levels of funding required and the efficient use of available resources. The analysis highlights the importance of municipalities promoting social awareness and environmental education among citizens in order to encourage active participation in achieving efficiency gains in municipal waste collection.
The findings emphasize the importance of efficiency in the operational management of municipal waste, the need to draw up more effective environmental policies, the existence of adequate levels of funding, and the involvement of the community in the process of continuous improvement. However, this study has some limitations. It only covers a five-year period, which may not be enough time to observe all the variations and trends over time. It is solely focused on Portuguese municipalities, which may limit the application and generalization of the results to other geographical contexts. Finally, efficiency is based on the relationship between public spending and the amount of waste collected and disregards other factors that may also affect efficiency.
Regardless of the identified limitations, the study presents a perspective on the efficiency of the allocation of public resources to municipal urban waste collection services, highlighting the need to continue improving the performance of the public service.
Figure 1. The more (green) and less (red) efficient municipalities by districts and autonomous regions, 2018–2022.
Table 1 (cont.). Results by districts and autonomous regions, 2018–2022. | 2024-06-12T15:05:29.227Z | 2024-06-10T00:00:00.000 | {
"year": 2024,
"sha1": "810104967aebb47d14c80ee4d9bcc0cfb9a4f7f1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21511/ee.15(1).2024.15",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8ba2f10977a4c26cdaa6658e639cdfda3062f9bc",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"extfieldsofstudy": []
} |
245059218 | pes2o/s2orc | v3-fos-license | Adherence to inhalers and associated factors among adult asthma patients: an outpatient-based study in a tertiary hospital of Rajshahi, Bangladesh
Background: Adherence to inhaler medication is an important contributor to optimum asthma control, along with adequate pharmacotherapy. The objective of the present study was to assess self-reported adherence levels and to identify the potential factors associated with non-adherence to inhalers among asthma patients. Methods: This facility-based cross-sectional study was conducted in the medicine outpatient department of Rajshahi Medical College Hospital from November 2020 to January 2021. A total of 357 clinically confirmed adult asthma patients were interviewed. Inhaler adherence was measured using the 10-item Test of Adherence to Inhalers (TAI) scale. Both descriptive and inferential statistics were used to describe the socio-demographic characteristics of the patients and the predictors of poor adherence to inhalers. Results: A substantial number of participants were non-adherent (86%) to inhaler medication. Patients non-adherent to inhaler medication were more often younger (aOR 23.15, 95% CI 3.67–146.08), lived in rural areas (aOR 23.28, 95% CI 2.43–222.66), had fewer years of schooling (aOR 5.69, 95% CI 1.27–25.44), and belonged to the middle-income group (aOR 9.74, 95% CI 2.11–44.9) compared with those adherent to the inhaler. The presence of comorbidities (aOR 12.91, 95% CI 1.41–117.61), prolonged duration of inhaler intake (aOR 5.69, 95% CI 1.22–26.49), and consulting non-qualified practitioners (aOR 13.09, 95% CI 3.10–55.26) were significant contributors to non-adherence. Conclusion: Despite ongoing motivation and treatment, non-adherence to inhaled anti-asthmatics is high, and several contributing factors have been identified. Regular monitoring and a guided, patient-centered self-management approach might help address them in the long run.
Background
Asthma is a heterogeneous disease, usually characterized by chronic airway inflammation, reversible bronchial obstruction, and hyperresponsiveness to direct or indirect stimuli [1]. Every year, almost 495,000 deaths occur worldwide from this chronic respiratory disease [2]. The prevalence is increasing by 50% every decade, especially in the low- and middle-income countries of the South-East Asian region [3,4]. In Bangladesh, a lower-middle-income country of this region, more than eight million people suffer from asthma, constituting almost 5.2% of the total population [5].
Successful asthma management depends on several drug- and patient-related factors, such as age, smoking, environmental and occupational factors, asthma-related comorbidities, choice of drug and device, patients' adherence to the prescribed medications, and their inhaler handling techniques [6]. Inhalation therapy remains the mainstay of asthma management, mostly due to its rapid onset of action, high therapeutic efficacy, and lower systemic adverse effects [6][7][8]. Inhaled corticosteroids along with short- or long-acting beta-2 agonists and/or anticholinergic agents are most commonly prescribed as the first-line treatment for controlling asthma [8]. However, despite adequate pharmacotherapy, asthma control has often proven suboptimal, as prescribing appropriate medication alone is not sufficient for achieving optimum asthma control. The Global Strategy for Asthma Management and Prevention adopted by the Global Initiative for Asthma (GINA) recommends a patient-caregiver partnership and guided self-management, along with adequate drug therapy, for achieving long-term control and decreasing the frequency of asthma exacerbations [7]. In this patient-centered approach, increasing adherence to the prescribed inhalation therapy is strongly emphasized, as evidence shows that almost half of patients with chronic diseases fail to take their long-term medications as directed at least part of the time [7,9]. Non-adherence to prescribed medications is an important contributor to uncontrolled asthma, as well as to increased healthcare utilization and cost [7,[10][11][12]. The rate of adherence varies across countries and also between age and sex groups [13]. Almost 43% of asthma patients worldwide are non-adherent to their inhalation therapy [14]. However, some studies suggest that the rate may be as high as 87% among patients with severe asthma [15,16]. A number of personal and socioeconomic factors, including patients' perceptions of the disease and medications, fear of side effects, the quality of patient-provider communication, family and social support, as well as the cost and availability of drugs, may influence patients' adherence to their prescribed treatment [11,12,14].
Addressing non-adherence to the inhalation therapy should be a priority in the clinical assessment of asthma patients, especially those who have difficult-to-control asthma, and addressing non-adherence is likely to have greater benefits in this group than any novel treatment [17]. Despite this fact, there is hardly any evidence on adherence to asthma medication and its influencing factors among patients of Bangladesh. Moreover, using the non-validated or generalized tool for adherence assessment may invariably underestimate the incidence of non-adherence rates to the inhalers among asthma patients [9]. Hence, the present study aimed to assess self-reported adherence level and to identify the potential factors associated with non-adherence to the inhalers among asthma patients.
Study design and participants
This facility-based cross-sectional study was conducted in the medicine outpatient department of Rajshahi Medical College Hospital, a tertiary care referral hospital, from November 2020 to January 2021. All adult patients (aged ≥18 years) visiting the department with a diagnosis of asthma constituted the study population. The sample size was calculated from the following formula:

$$n = \frac{z^2\, p\,(1-p)}{d^2},$$

where z = 1.96 for a 95% confidence level, p = assumed prevalence of poor adherence to inhaler therapy, and d = allowable error of the assumed prevalence. Due to the lack of existing evidence, we assumed the prevalence of poor adherence to inhaler therapy to be 50% among the asthma patients of Bangladesh, and the calculated sample size was 384. Assuming a 5% non-response rate, we approached a total of 400 patients. Patients who were aged ≥18 years, had diagnosed asthma, and had been using at least one metered-dose inhaler (MDI) with or without a spacer and/or dry powder inhaler (DPI) for at least one year were included in the study. Patients having asthma-COPD overlap syndrome, other obstructive lung diseases, or chronic debilitating conditions (e.g., carcinoma), women with pregnancy, and those using inhalers for less than one year were excluded. Consecutive eligible patients during the study period were recruited until the targeted number of patients was reached. After excluding incomplete data, 357 patients were included in the final analysis.
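As a quick check of the sample-size calculation, the snippet below substitutes the stated z and p into the formula above; the allowable error d = 0.05 is an assumption on our part, since its numerical value is not stated in the text.

```python
import math

z = 1.96   # 95% confidence level
p = 0.50   # assumed prevalence of poor adherence
d = 0.05   # allowable error (assumed; not stated explicitly in the text)

n = z**2 * p * (1 - p) / d**2
print(f"required sample size: {n:.2f} -> {math.ceil(n)} (the paper reports 384)")
```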
Data collection
Face-to-face interviews were conducted by five trained physicians after the consultation, using a structured questionnaire and checklists to collect data from the patients and their medical records, respectively. The questionnaire had four parts: (i) socio-demographic characteristics of the patients, (ii) information on the inhalers they used and measurement of inhaler adherence, (iii) a demonstration session of their inhaler-use technique to identify any critical error, and (iv) asthma control status using the Asthma Control Test (ACT). The questionnaire was prepared in English and translated into Bangla. The back-translated version was compared with the original version to confirm equivalence across languages. A consortium was formed to check the consistency of the translation, and the questionnaire was pretested among 20 asthma patients before use.
The 10-item Test of Adherence to Inhalers (TAI) scale, based on a five-point Likert scale, which was developed and validated by Plaza et al. [18] and has been widely used in different countries [19,20], was used to assess the inhaler adherence of the asthma patients. However, the scale had not previously been used among Bangladeshi patients, and hence it had not been validated in this population. Patients are considered good, intermediate, and poor adherents if they score 50, 46–49, and ≤45, respectively [18,19]. In our study, poor adherence (TAI score ≤45) was considered non-adherence.
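A minimal sketch of the TAI cut-offs as used in the study (a total score of 50 indicates good adherence, 46–49 intermediate, and ≤45 poor, with poor adherence treated as non-adherence):

```python
def classify_tai(score: int) -> str:
    """Classify a 10-item TAI total score (range 10-50) per Plaza et al."""
    if not 10 <= score <= 50:
        raise ValueError("TAI total score must lie between 10 and 50")
    if score == 50:
        return "good adherence"
    if 46 <= score <= 49:
        return "intermediate adherence"
    return "poor adherence (non-adherent)"

for s in (50, 47, 36):  # 36 is near the reported mean TAI score of 36.5
    print(s, "->", classify_tai(s))
```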
Patients were requested to demonstrate their inhaler-use technique and were scored on a checklist adapted from a previous study, based on the recommendations of the American Thoracic Society, according to the steps completed by the patients, in order to identify any critical error [2].
Outcome and independent variables
Non-adherence to inhalers among asthma patients (TAI score ≤45) was the outcome variable of the present study. Independent variables were the socio-demographic characteristics of the patients (age, sex, residence, educational status, family income, etc.), disease profile (smoking history, comorbidity, health-seeking behavior), and inhaler-related information (type, number, and duration of inhaler use; perceived difficulty and critical errors of inhaler-use technique; and self-reported efficacy of the inhaler).
Statistical analyses
All statistical analyses were performed using STATA version 16.0. Both univariable and multivariable logistic regression models, adjusted for socio-demographic and inhaler-related factors, were used to determine the predictors of poor adherence to inhalers among the asthma patients. The variance inflation factor (VIF) was used to detect any evidence of multicollinearity among the independent variables. Statistical significance was set at p < 0.05 with 95% confidence intervals (CI).
Ethical consideration
Ethical approval was obtained from the ethical review committee of Rajshahi Medical College to conduct the study; Memo no: RMC-IRB-2020/178. Informed written consent was also obtained from each respondent after explaining the purpose of the study.
Characteristics of the participants
A total of 357 asthma patients were included in the study. Their mean (SD) age was 34.5 (10.2) years. Almost two-thirds of the participants were female (65%) and hailed from rural areas (62%). Almost half of them had attended up to the secondary level of education and were from low-income families. The MDI was the most commonly used inhaler device (75% without a spacer and 13% with a spacer), followed by the DPI (12%). In terms of duration of inhaler use, almost 20% had been using an inhaler for less than one year, 47% for 2 to 5 years, and 33% for more than 5 years. Almost half of them preferred non-qualified practitioners for their regular respiratory problems (Table 1).
Adherence to inhaler
The Cronbach's Alpha of the TAI scale was 0.87. The mean (SD) TAI score of the asthma patients was 36.5 (7.9). The majority of the patients (86%) reported poor adherence to their inhalation therapy (TAI score ≤ 45). Almost 8% of them reported good adherence (TAI score 50) and 6% showed moderate adherence (TAI score 46-49) ( Table 1). Responses to each question on the TAI scale by the asthma patients are demonstrated in Table 2.
Multivariable logistic regression models demonstrated that younger people had a higher chance of being non-adherent to their inhaler therapy (aOR 23.15, 95% CI 3.67–146.08). Rural residence (aOR 23.28, 95% CI 2.43–222.66), fewer years of schooling (aOR 5.69, 95% CI 1.27–25.44), middle income (aOR 9.74, 95% CI 2.11–44.9), presence of comorbidities (aOR 12.91, 95% CI 1.41–117.61), prolonged duration of inhaler intake (aOR 5.69, 95% CI 1.22–26.49), and consulting non-qualified practitioners (aOR 13.09, 95% CI 3.10–55.26) were also significant predictors of non-adherence.
Discussion
Adherence to the inhaler and the correct technique for using the device are crucial for asthma control. Our study provides a bird's-eye view of non-adherence to inhaler medication among adult asthma patients of Bangladesh, which exceeds 86%. Existing evidence from this country is scarce for comparison. However, some recent studies from neighboring India reported rates of poor adherence to inhalation therapy of 71% among adults and 55% among pediatric asthma patients [21,22]. Other studies from developing countries in Africa, such as Ethiopia and Egypt, reported that almost half of asthma patients were non-adherent to their medication [9,23]. Though these adherence rates were also suboptimal, the situation was considerably better than ours. However, we used the self-reported Test of Adherence to Inhalers scale, which is a subjective assessment and might overestimate non-adherence. The TAI test has yielded high rates of poor adherence even in developed countries. For example, the ASCONA study conducted among asthma patients in Europe reported that almost 60% of patients were poorly adherent to their prescribed therapy [24]. A recent study reported an almost 58% poor inhaler adherence rate using the TAI scale, while the rate was 29% using pharmacy refill records, a more objective measure [25]. However, another study from Denmark suggested that self-reported measurements overestimate the adherence rate and might not be a reliable indicator [26].
A number of personal and socioeconomic factors have been reported to influence inhaler adherence among asthma patients. In our study, younger people were more likely to be non-adherent irrespective of their gender. A similar phenomenon was observed in a recent meta-analysis, which reported that female and younger patients are more likely to be non-adherent to their inhaler therapy [14]. In our study, rural patients who belonged to middle-income families and those who had been using their inhalers for a longer period had comparatively lower adherence rates than urban patients. In contrast to our findings, a large-scale multi-country study of European asthma patients reported no such association of these factors with inhaler adherence [13]. Though some studies suggested that patients using DPI devices had better adherence to their inhalers [27,28], our findings did not support this. However, only a very small number of patients in our study were using a DPI, too few to draw conclusions. Besides, patients who visited non-qualified practitioners for their regular respiratory problems were more likely to be non-adherent to their therapies. A similar finding was reported by a study among inhaler-using COPD patients, which found that patients who received primary care from non-qualified care providers were less sustained in medication adherence [29].
Our study suggests that asthma patients with comorbidities have a higher chance of being non-adherent. Such patients showed lower adherence in some previous studies too [30,31]. Having comorbidities like diabetes, hypertension, and coronary artery disease often increases the pill burden and cost of treatment, which can lead patients to neglect their prescriptions, especially in resource-poor socioeconomic settings. Besides these, patients' beliefs and perceptions about the disease and its severity, self-care ability, family and social support, the quality of communication with healthcare providers, as well as the perceived efficacy of the therapy have been reported as influencing factors for inhaler adherence in several studies [11,12,14,[32][33][34][35]. These factors were not explored extensively in our study. However, patients who reported the inhaler-use technique as difficult had a higher chance of non-adherence. Further qualitative studies addressing patients' behavioral factors and perceived barriers to inhaler adherence are necessary for better understanding. A multidisciplinary approach that supports the patient both mentally and physically, with shared decision-making between providers and patients based on possible risks and benefits, could improve inhaler adherence. The ASCONA study conducted among a large European asthma cohort reported that patients with good adherence to their inhaler therapies had better asthma control irrespective of age, sex, comorbidity, and treatment modality [24]. Another large-scale cohort study reported that asthma patients maintaining high adherence to their inhalers over time had better control of asthma [36]. Moreover, a recent review of published articles on this topic reported that, in high-quality studies, good adherence to inhalers decreased the number and frequency of severe asthma exacerbations [37]. Based on this evidence, it may be concluded that adherence to inhalers is a major contributor to asthma control.
Limitations
Our study had several limitations. First, it was conducted among patients with asthma who visited the hospital outpatient department for exacerbations or other issues, so the findings may not generalize to the overall patient population in the community. Moreover, we used a self-reported adherence measuring tool, which could potentially underestimate the non-adherence rate to inhalers, as social desirability bias could not be ruled out. Some of our variables showed extreme odds ratios with very wide confidence intervals in the logistic regression model, which need cautious interpretation. Heterogeneity in the patient sample might explain these findings. More careful inclusion criteria should be adopted in future studies. Finally, the perceived barriers of the patients were not explored in detail.
Conclusions
Although adherence is an important contributing factor to asthma control, the adherence rate to inhalers was poor among our patients. Regular assessment of patients' adherence to prescribed inhalers is necessary for patients with uncontrolled asthma. Adequate patient education and counseling about the nature of the disease and the importance of regular inhaler use, as well as encouraging patients to seek treatment from qualified physicians, are suggested to improve inhaler adherence. | 2021-12-12T16:09:24.504Z | 2021-12-10T00:00:00.000 | {
"year": 2022,
"sha1": "bf636d78b53b08337a7e5c281fa67e3660f2e60a",
"oa_license": "CCBY",
"oa_url": "https://asthmarp.biomedcentral.com/track/pdf/10.1186/s40733-022-00083-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4b407608dccdfe75155dec131d363ed8c0aa3017",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
245892218 | pes2o/s2orc | v3-fos-license | Multi-scale photonic emissivity engineering for relativistic lightsail thermal regulation
The Breakthrough Starshot Initiative aims to send a gram-scale probe to Proxima Centauri b using a laser-accelerated lightsail traveling at relativistic speeds. Thermal management is a key lightsail design objective because of the intense laser powers required, but it has generally been considered secondary to accelerative performance. Here, we demonstrate nanophotonic photonic crystal slab reflectors composed of 2H-phase molybdenum disulfide and crystalline silicon nitride, highlight the inverse relationship between the thermal band extinction coefficient and the lightsail's maximum temperature, and examine the trade-off between the acceleration distance and setting realistic sail thermal limits, ultimately realizing a thermally endurable acceleration minimum distance of 16.3 Gm. We additionally demonstrate multi-scale photonic structures featuring thermal-wavelength-scale Mie resonant geometries, and characterize their broadband Mie resonance-driven emissivity enhancement and acceleration distance reduction. Our results highlight new possibilities in simultaneously controlling optical and thermal response over broad wavelength ranges in ultralight nanophotonic structures.
Such ambitious goals have only recently been put within reach through advances in nanofabrication, [20][21][22] radiative cooling, 23,24 and photonics. [25][26][27][28] As the sail accelerates, incident laser light will become redshifted in the sail's frame of reference. This restricts usable sail film materials to those that have little or no measurable absorption over the entire redshifted laser band.
The sail must furthermore be highly reflective over the laser band in order to accelerate to its ultimate speed in as short a distance as possible and thereby limit the laser-on time. In addition to reflectivity, the sail must also exhibit sufficient mechanical robustness to survive the extreme acceleration-induced forces, as well as its interaction with the interstellar medium. [29][30][31] While accelerating, the sail must be shaped 32 or patterned properly so as to stably ride the laser beam when faced with non-ideal beam shapes and alignments. 33 Finally, the sail must possess sufficient emissivity to effectively radiate heat generated due to any residual absorption of the sail material.
Motivated by the Breakthrough Starshot Initiative's goals, pioneering work on this topic proposed and analyzed a range of lightsail designs that minimize laser requirements, 19 suggested the use of highly thermally emissive material layers for radiative cooling, examined how the sail material's laser band absorptivity can affect sail temperatures, pointed to the need for realistic thermal limits, and demonstrated an accelerative trade-off between sail reflectivity and mass through a newly defined figure of merit. 34 This was followed by demonstrations of methods to passively stabilize beam-riding surfaces using spherical, parabolic, and conical sail shapes. 35 Passive beam-riding stability of curved lightsails was then adopted to design and analyze flat metasurface and diffractive beam stable structures and the optomechanical considerations of these sails, showing that their designs were able to meet a material melting point-based thermal sail limit. 36,37 Recent work has proposed designs that achieve even lower acceleration distances using generalized gradient descent and topological optimization methods for nanophotonic sail design. 38 While these works have demonstrated key advances in sail engineering, thus far they have only explored the use of Si, SiO 2 , 34,36,37 and Si 3 N 4 38,39 as sail materials. Additionally, the literature up to this point has not bench-marked photonic sail designs for acceleration distance under realistic thermal constraints, or determined furthermore how photonic designs might simultaneously enhance emissivity over infrared wavelengths while also minimizing the sail mass and maximizing its laser reflection.
More broadly, given an increasingly large range of possible sail designs and materials, it is important to develop a methodology by which one can select sail designs constrained by their thermal performance and material degradation limits. With this in mind, a central challenge in determining the survivability of the sail lies in mapping the trade-off between the sail being minimally absorptive across the Doppler-shifted laser wavelengths and having high emissivity at longer thermal wavelengths, given the need to maximize reflectivity in order to minimize the acceleration distance. Motivated by this, we developed a multilayer 2D photonic crystal slab-based geometry that features molybdenum disulfide (MoS2) and silicon nitride (Si3N4) as its key constituent materials. 40 In addition, large-area monolayer samples have been fabricated successfully, a significant step toward future lightsail-scale films. [41][42][43] Si3N4 has also been recently investigated for lightsails 38,39,44 and remains a well-qualified lightsail candidate material due to its mature fabricability, low density, and high decomposition temperature. 45,46 In addition to these considerations, our primary motivation for using this material is its desirable thermal emissivity at wavelengths longer than the Doppler-shifted laser band, which will be discussed in detail below. 47 As used in our sail design, Si3N4 acts as the primary radiative cooler, preventing thermal failure during acceleration.
We emphasize that in our design we do not assume the presence of an additional layer or layers that provide non-zero emissivity. Rather, the combined sail design is meant to be holistic and comprehensive.
Due to the stringent mass constraints of Starshot, our design involves a 2D photonic crystal with a close-packed array of large-radius holes with respect to the lattice constant of the structure. In addition to the mass reduction benefits, the photonic design provides reflective enhancement through coupling to broadband guided modes, building on conventional 2D photonic crystal slab theory. 26 However, the extreme performance required of the lightsail necessitates designs typically not employed by conventional photonic crystal reflectors. Specifically, to our knowledge, previous conventional photonic crystal reflectors in the literature have not been severely mass constrained. Structurally, an important advantage of our proposed design is that it is fully connected and requires no additional substrate to function as a standalone sail. 38 In practice however, we expect it will be beneficial to use a large corrugated support backbone to allow the sail to withstand the extreme forces it will undergo as it accelerates. 48 This structure would provide macroscale sail curvature to increase stability and mechanical robustness 32 while additionally limiting crack propagation in our proposed designs due to patterned hole-induced stress concentrations.
To merit sail designs, we assumed typical values for the Starshot Initiative: a uniform I = 10 GW/m² laser irradiance on the sail, a laser output wavelength of λ = 1.2 µm, a 10 m² sail area, and a ∼1 g chip payload mass unless otherwise stated. The acceleration distance figure of merit, as defined by Jin et al., 38 is

$$D = \frac{c^3}{2I}\int_0^{\beta_f} \frac{\rho\,\gamma^3\,\beta\,(1+\beta)}{(1-\beta)\,R(\lambda(\beta))}\,d\beta, \qquad (1)$$

where ρ is the areal density in kg/m² (discussed more below), β is the unitless velocity relative to the speed of light, γ = (1 − β²)^(−1/2), and R(λ(β)) is the spectral reflectance over the Doppler-shifted laser band.
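As an illustration, the sketch below numerically evaluates equation (1), in the form reconstructed above, for a sail with a flat reflectance; the areal density, reflectance, and target velocity are placeholder values rather than the optimized design parameters.

```python
import numpy as np

c = 2.998e8      # speed of light [m/s]
I = 1.0e10       # laser irradiance [W/m^2]
lam0 = 1.2e-6    # laser output wavelength [m]

def acceleration_distance(rho, reflectance, beta_f, n=20000):
    """Numerically evaluate equation (1) by the trapezoidal rule.
    rho: areal density [kg/m^2]; reflectance: callable R(lambda);
    beta_f: final velocity as a fraction of c."""
    beta = np.linspace(1e-6, beta_f, n)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    lam = lam0 * np.sqrt((1.0 + beta) / (1.0 - beta))   # redshifted wavelength
    f = rho * gamma**3 * beta * (1.0 + beta) / ((1.0 - beta) * reflectance(lam))
    return (c**3 / (2.0 * I)) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(beta))

# Toy case: 0.4 g/m^2 total areal density (sail plus a 1 g payload over 10 m^2)
# and a flat 70% reflectance over the band, accelerating to 0.2c.
D = acceleration_distance(rho=4e-4, reflectance=lambda lam: 0.7, beta_f=0.2)
print(f"D = {D / 1e9:.1f} Gm")
```

For these placeholder inputs the integral gives a distance of roughly 18 Gm, the same order as the optimized designs discussed in the text, which reach lower values through higher band-averaged reflectance and reduced mass.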
Optimizing the period and hole diameter of the patterned holes allows for high transmission and reflection bands of varying spectral bandwidth. 26 Figure 2 illustrates the dependence of acceleration distance on key design parameters in our sail design space. Each color map represents a two-dimensional slice of the five-dimensional design space composed of the period/lattice constant, the hole diameter-to-period ratio, and the thicknesses of each of the three layers. The tile colors represent the minimum acceleration distance design possible for the parameter values specified on the axes (this minimum is achieved by changing the unseen parameters to their optimal values for the given tile). Our optimal acceleration-distance-merited design has a period of 1.16 µm, a hole diameter-to-period ratio of 90%, 5 nm thick emissive Si3N4 layers, and a 90 nm thick MoS2 reflective core, placing it in a regime of very low thickness relative to the lattice constant, < 0.1a. Thinning of the high-index core maintains access to broadband Fabry-Perot-like reflection modes at normal incidence with the added effect of minimizing the overall sail mass. This demonstrates the broad range of possible acceleration distance values that our design space encompasses.
While the actual mass of the payload chip has not been determined yet, it is important to understand the relative effect of the payload mass on our optimal reflective design. Note that payload mass can be converted to the areal density value shown in (1) easily by dividing its mass by sail area: ρ payload = m payload /A sail . Figure 2c plots the laser band reflection spectra of the lowest acceleration distance design for three given payload weights, demonstrating that as the payload mass increases, reducing the sail's mass is rewarded less than increasing its integrated reflectance. This means that sail mass becomes a stronger consideration when the payload mass is small. A further analysis showing the minimum acceleration distance vs.
payload mass can be found in the Supporting Information, which is corroborated by results in Jin et al. 38 The Si 3 N 4 layers have primarily been introduced to enhance thermal emissivity; however, these outer layers can also have the effect of shifting the peak of the sail reflection spectra to lower wavelengths compared to that of single-layer MoS 2 -only designs, as shown in Figure 2c. This shift fortuitously results in an improvement in the acceleration distance figure of merit. Note that since the designs shown in Figure 2c have identical masses; this effect is attributable to the change in the refractive index profile alone.
The optimization procedure described thus far yields an optimal reflective design in this sample set with an acceleration distance of 10.6 Gm for a 1 g payload, comparable in performance to the best reported numbers in the literature. 38 Importantly, this design does not require a connecting support structure, and all mass required for acceleration and cooling is accounted for in this figure of merit value. Additional structural stability may be provided by a 1 g mechanical backbone structure, giving a value of 15.2 Gm. Furthermore, the topology of this design is not computationally optimized, and optimization could yield still lower acceleration distances.
While this design is competitive with others shown previously 36,38 on acceleration distance metrics alone, lightsails with realistic additional thermal considerations require 54% larger acceleration distances, as we show next. Maintaining the sail's integrity as it accelerates is a fundamental design consideration, more important than any acceleration distance or reflection-driven performance metric. Though the sail will exhibit extremely low absorbance, its temperature will nevertheless increase due to the high incident laser photon flux, and its interaction with the interstellar medium at relativistic speeds could cause further heating. 29 Unfortunately, the sail's component materials will likely become more absorptive as their temperature rises, causing thermal runaway effects and increasing the probability of sail mechanical failure due to material degradation. The radiative cooling characteristics of the sail are therefore extremely important. As a thermal limit, we have adopted the ultra-high vacuum (UHV) sublimation temperature T_sublimation of the sail materials, which is the point at which the sail would begin to spontaneously evaporate and/or decompose. Note that, since the UHV sublimation temperature is less than the melting temperature, which has been selected as the thermal limit in other recent lightsail studies, this represents a relatively conservative design decision. In our case, we adopted T_limit = T_sublimation,MoS2 ≈ 1000 K, the lower of the UHV sublimation points of the two materials used (see Supporting Information).
To proceed, we implicitly calculated the maximum temperature T_max reached by each sail in our space of over 3 × 10⁵ designs using the following equation:

$$\alpha P_{\mathrm{laser}} = 2 A_{\mathrm{sail}} \int_0^{\infty} \varepsilon(\lambda, T_{\max})\, I_{\mathrm{BB}}(\lambda, T_{\max})\, d\lambda, \qquad (2)$$

where P_laser is the output laser power (100 GW), α is the assumed normalized sail absorption, A_sail is the area of a single side of the sail, ε(λ, T_max) is the sail's spectral hemispherical emissivity, I_BB(λ, T_max) is the blackbody spectral exitance, and the factor of two accounts for the presence of the sail's two emitting sides. Note that the arbitrary choice of total sail absorption in (2) is due to the present lack of sufficiently sensitive material extinction coefficient data in the Doppler-shifted laser wavelength band. This highlights the need for ultra-high sensitivity measurements of material absorption characteristics using techniques such as photothermal deflection spectroscopy 49 or photocurrent spectroscopy 50 in order to qualify lightsail materials. The values we used in Figure 3a demonstrate that material absorption must be minuscule in order for sails to perform comparably to others in the literature.
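A minimal sketch of the implicit temperature solve in equation (2), substituting a gray-body (wavelength-averaged) emissivity for the full spectral-hemispherical emissivity that the paper computes from the photonic structure; the absorption and emissivity values are illustrative assumptions.

```python
from scipy.optimize import brentq

SIGMA = 5.670e-8      # Stefan-Boltzmann constant [W m^-2 K^-4]
P_LASER = 1.0e11      # output laser power [W] (100 GW)
A_SAIL = 10.0         # single-side sail area [m^2]

def t_max(alpha, eps_bar):
    """Solve alpha*P_laser = 2*A_sail*eps_bar*sigma*T^4 for T, a gray-body
    approximation of the spectral integral in equation (2)."""
    balance = lambda T: alpha * P_LASER - 2.0 * A_SAIL * eps_bar * SIGMA * T**4
    return brentq(balance, 1.0, 1.0e5)

# Illustrative: total absorption of 1e-5 % and a band-averaged emissivity of 0.01
print(f"T_max = {t_max(alpha=1e-7, eps_bar=0.01):.0f} K")
```

With these assumed numbers the balance lands near 970 K, i.e., just under the ~1000 K sublimation limit, illustrating how tightly the absorption and emissivity together constrain the design.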
We now define a new and final composite figure of merit for our sail design space, which we call the thermally endurable acceleration minimum (TEAM) distance value. The TEAM distance value for a design space is that for which the acceleration distance D is minimized, among the alternatives for which T max < T limit . Likewise, the TEAM sail design is the sail configuration that results in the TEAM distance value. Minimizing TEAM distance is desirable, but we emphasize that this is not a sail metric per se; rather, it is a single summary value that can be easily reported to compare design approaches and sail datasets, as opposed to individual sails. For a constant set of laser parameters and a given set of sail architectures, this final result is dependent on two quantities: the previously assumed maximum allowable sail temperature set by the UHV sublimation limit, and the previously assumed total absorptance of the sail.
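Given per-design values of D and T_max, the TEAM distance reduces to a constrained minimum. The sketch below illustrates this over a hypothetical design sweep; the distances echo the paper's 10.6 and 16.3 Gm values, but the temperatures are invented for illustration.

```python
import numpy as np

def team_distance(D, T_max, T_limit=1000.0):
    """Thermally endurable acceleration minimum: minimum D among designs
    whose maximum temperature stays below the thermal (sublimation) limit."""
    survivors = T_max < T_limit
    if not survivors.any():
        raise ValueError("no design survives the thermal limit")
    return D[survivors].min()

# Hypothetical design sweep: the unconstrained optimum (10.6 Gm) runs too hot
D = np.array([10.6, 12.4, 16.3, 18.0])        # acceleration distances [Gm]
T = np.array([1350.0, 1120.0, 980.0, 910.0])  # corresponding T_max [K]
print(f"TEAM distance = {team_distance(D, T):.1f} Gm")  # -> 16.3 Gm
```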
While conventional photonic crystal slabs have desirable reflective properties from an accelerative standpoint, their thermal radiant exitance is highly dependent on the intrinsic spectral emissivity of their component materials. In the case of lightsails, a strong trade-off exists between having sufficient emissivity for heat dissipation and minimizing the acceleration distance. In particular, for a given sail diameter, acceleration distance is generally penalized by mass increases, while thermal emissivity generally benefits from increasing the amount of emissive material per unit area. In analyzing our sail designs, we assume total sail absorption values ranging from 10⁻⁴% to 10⁻⁶% and show in Figure 3a the relationship between acceleration distance and operating temperature for three of these absorption values.
Using our analysis framework, we demonstrate a TEAM distance value of 16.3 Gm, a 5.7 Gm accelerative penalty for preventing decomposition due to sublimation of sulfur out of the MoS₂ in the sail by limiting the sail's temperature to $T_{\max}$ = 1000 K. Addition of a 1 g mechanical backbone results in a larger TEAM value of 21.3 Gm. The reduced maximum temperature of the design that achieves the TEAM value is due to its smaller hole radius and thicker emissive Si₃N₄ layers, which imply that more material is present to radiate away excess energy relative to the 10.6 Gm acceleration distance design. This can be seen in the comparison of the spectral hemispherical emissivity values of the TEAM design vs. the baseline acceleration-distance-merited design in Figure 3b. The TEAM design has an approximately 12× higher peak emissivity value due to the presence of more Si₃N₄ in the photonic crystal design. Alternatively, one can analyze the TEAM value as a function of overall sail absorption in the laser bandwidth. As can be seen in Figure 3c, as α increases, the TEAM distance value also increases, placing firm bounds on the material absorption of the incident laser light needed to achieve a given acceleration distance.
Previously developed photonic designs for laser lightsails have employed only single-scale periodic structures; here, we additionally introduce a multiscale Mie-resonant structure to enhance mid-infrared emissivity. Because the areal density of Si₃N₄ is nearly the same between the compared designs, the longer-wavelength thermal emissive features are maintained between the two designs, meaning the additional emissivity features in the critical band from 2-6 µm are key to enabling lower overall temperatures. The structure can be connected by a series of thin scaffolds while maintaining the presence of resonant modes. If further mechanical robustness is desired, a mechanical backbone could be added (another approach to further structural stability is also investigated in the Supporting Information).
The spatial profiles of four resonant modes supported by the multiscale Mie-resonant structure are shown in Figure 4b, corresponding to four modes in the 2-6 µm band shown in Figure 4c. This wavelength band is critical for sail heat management because the blackbody peak lies within it at temperatures from 500-1000 K, as determined by Wien's law. Increases to sail emissivity in this band therefore reduce overall sail temperature more strongly than emissivity increases at longer wavelengths. The enhancement of in-band hemispherical exitance at a given temperature is demonstrated in Figure 4d, showing that at the previously suggested thermal limit of 1000 K, the islanded design has over 2.75× greater hemispherical exitance, rising to as much as 3.6× at 1500 K. This showcases the utility of the multiscale Mie-resonant structures in sail thermal regulation.
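To make the band argument concrete, the sketch below evaluates the Wien peaks and a Planck-weighted in-band (2-6 µm) exitance ratio for two invented emissivity spectra. Only the procedure, not the spectra or the resulting numbers, reflects the paper's data.

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_exitance(lam, T):
    """Blackbody spectral exitance in W m^-2 m^-1."""
    return 2 * np.pi * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

def in_band_exitance(T, emissivity_fn, lam_lo=2e-6, lam_hi=6e-6, n=2000):
    """Hemispherical exitance in the 2-6 um band for a given emissivity spectrum."""
    lam = np.linspace(lam_lo, lam_hi, n)
    return np.trapz(emissivity_fn(lam) * planck_exitance(lam, T), lam)

# Wien's law: the blackbody peak sweeps through 2.9-5.8 um as T goes 1000 -> 500 K
for T in (500, 1000):
    print(f"lambda_peak at {T} K: {2.898e-3 / T * 1e6:.2f} um")

# Placeholder emissivity shapes: a flat baseline vs. an 'islanded' design with
# an added resonant feature in the 2-6 um band (not the paper's simulated spectra)
baseline = lambda lam: np.full_like(lam, 0.02)
islanded = lambda lam: 0.02 + 0.15 * np.exp(-((lam - 4e-6) / 0.8e-6) ** 2)
ratio = in_band_exitance(1000, islanded) / in_band_exitance(1000, baseline)
print(f"In-band exitance enhancement at 1000 K: {ratio:.2f}x")
```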
In conclusion, we have demonstrated holistically viable multilayer 2D photonic reflector designs for laser-driven lightsails that are able to accelerate to one fifth the speed of light over distances comparable to, and in some cases even exceeding, designs reported previously. We emphasize that our designs represent the entire sail structure and do not require additional backing material for emissivity enhancement, allowing for accurate modeling of payload-driven performance. To analyze such relativistic lightsail designs, we further proposed an analysis framework that judges sail designs according to both their acceleration distance and peak temperature. We then proposed the thermally endurable acceleration minimum (TEAM) distance value as a summary statistic to determine the fastest-accelerating thermally stable sail design of a design set. This value is easily reportable and will allow straightforward comparison across future sail design studies.
Methods
We performed reflective simulations using the S4 RCWA solver 53 and the Lumerical FDTD solver. Simulations were performed over a Doppler-shifted wavelength band from 1.2-1.47 µm for reflection. We performed absorptivity simulations in S4 from 1.55-14 µm, as detailed in the Supporting Information. Data were analyzed and merited using MATLAB R2020a and | 2021-06-08T01:16:31.808Z | 2021-06-04T00:00:00.000 | {
"year": 2021,
"sha1": "7cfa4ad2cfb7f4ff6b9bcdfcfad9067244cc57f3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7cfa4ad2cfb7f4ff6b9bcdfcfad9067244cc57f3",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
113837355 | pes2o/s2orc | v3-fos-license | Seismic Isolation of Offshore Pipeline Block Valve Stations
As shown by the analysis of actual offshore pipeline damage caused by strong earthquakes, the greatest damage occurs at complex assemblies such as branch connections, bends, and various joints. In the seismic resistance analysis of the offshore pipeline, a fixed junction of an offshore pipeline section with a block valve station is modeled. To assess the connection assembly for seismic resistance, a number of accelerograms describing magnitude 8 and 9 earthquakes in the Black Sea region were considered. The calculations have revealed significant pipe wall stresses in the offshore pipeline/block valve station junction. The resulting conclusion is that offshore pipeline block valve stations need to be equipped with dampers.
Introduction
Researchers M.J. O'Rourke and Sh.G. Napetvaridze note that the failure rate of trunk pipelines increases at the points of connection to equipment. Seismic resistance of the connections/block valve stations of buried onshore pipelines is reviewed in the studies of T.R. Rashidov and G.Kh. Khozhmetov. Calculations of the seismic loads on the linear sections of the pipeline are described in the regulatory documents of STO Gazprom [12], and for the offshore pipeline in the Maritime Register Guidance NDN2-020301-003 [8].
A characteristic feature of offshore pipelines is that they have only two block valve stations. To control the impact of deformations caused by process loads when designing offshore pipelines, pipe bends are used.
Vibration protection issues are addressed in the Manual (RD). It is required that the "maximum allowable vibration amplitude of the process pipelines shall be 0.2 mm at a maximum vibration frequency of 40 Hz" [9]. The analysis shows that a detailed seismic resistance study of offshore pipeline fittings (block valve stations/shutoff valves) still needs to be conducted.
Landfall section of pipeline
The paper deals with vibration protection of block valve stations/shutoff valves of the offshore pipeline and includes a trunk pipeline design scheme that accounts for damper/elastic supports. Vibration isolation of the offshore pipeline block valve station is also analyzed.
Fig. 1. Landfall section of the pipeline: 1 - gas pipelines laid on the seabed come ashore at the landfall, where the shutoff valves are installed; 2 - ball valves to isolate the gas flow.
Calculation of the connection between the offshore pipeline section and block valve station
In the seismic resistance analysis of the offshore pipeline [7], a fixed connection between an offshore pipeline section and a block valve station is modeled. A number of accelerograms describing magnitude 7 and 9 earthquakes in the Black Sea and Caspian Sea are considered. The calculations have revealed significant pipe wall stresses at the connection point between the offshore pipeline section and the block valve station. The resulting conclusion is that offshore pipeline block valve stations need to be equipped with dampers.
Seismic resistance analysis of the offshore pipelines
In the course of seismic resistance analysis of offshore pipelines, the level of stresses and the allowable deformations of the pipe walls are evaluated. Pipe bends are installed to accommodate the deformations caused by workloads in the offshore pipelines. Each seismic wave imparts shocks to structures. Destructive earthquakes with magnitudes of 8 or 9 are characterized by pulse propagation; for example, in the northern part of the Izu Peninsula (Japan), a seismic pulse of 0.015 to 0.12 m/s repeated 18 times within 55 seconds.
In this paper, vibration isolation of the offshore pipeline block valve station is analyzed. Vibrations of an object subjected to external dynamic loads can be changed by different methods: vibration scattering and redistribution of vibration energy. The first case refers to inertial dynamic dampers, which are mainly used to suppress monoharmonic or narrow-band random vibrations. In the event of wide-band vibrations, it is preferable to connect additional damping elements, such as absorbers, to the object.
Equation of the protected object motion
The differential equation of motion of the protected object is written in the standard linear form. More accurate solutions are obtained if the behavior of the dampers is taken into account; this leads to a nonlinear problem, and the differential equation of motion of the object must then include the damper forces. A constant-section beam with various end fixings (one end fixed and the other end semi-fixed with respect to angular movement and fixed with respect to transverse movement) is taken as the design model for the offshore pipeline with a block valve station. The effects of shear loads and the rotary inertia of the cross section are not taken into consideration.
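The original equations did not survive extraction, but the description implies a single-degree-of-freedom starting point of the form m·ÿ + c·ẏ + k·y = F(t) before damper nonlinearities are added. A minimal numerical sketch under that assumption, with invented parameter values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed linear single-degree-of-freedom form of the protected-object equation:
#   m*y'' + c*y' + k*y = F(t)
# (the paper's exact equations, including nonlinear damper terms, were not
#  recoverable from the source; this is the standard linear starting point)
m, c, k = 1.0e3, 2.0e3, 4.0e5            # mass [kg], damping [N s/m], stiffness [N/m]
F = lambda t: 1.0e4 * np.sin(15.0 * t)   # harmonic excitation [N]

def rhs(t, state):
    y, v = state
    return [v, (F(t) - c * v - k * y) / m]

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], max_step=1e-3)
print(f"Peak displacement: {np.max(np.abs(sol.y[0])) * 1e3:.2f} mm")
```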
The differential equation of natural bending vibrations of this beam is

$$EI \frac{\partial^4 y}{\partial x^4} + m \frac{\partial^2 y}{\partial t^2} = 0, \qquad (3)$$

where EI is the bending stiffness and m is the mass per unit length. Taking the dimensionless coordinate $\xi = x/l$ as the independent variable, where l is the beam length, equation (3) reduces to $f^{\mathrm{IV}}(\xi) - \alpha^4 f(\xi) = 0$, whose solutions are linear combinations of the circular and hyperbolic functions introduced by A.N. Krylov [6].
We assume the design model of the linear section of the pipeline with a block valve station to be a beam with one end fixed and the other end elastically restrained against angular movement and fixed against transverse movement (Fig. 3).

Fig. 3. A beam with one end fixed and the other end elastically restrained against angular movement and fixed against transverse movement.
Boundary conditions are written as follows: at x = 0, the deflection and the slope vanish, f(0) = 0 and f′(0) = 0. Applying these boundary conditions to the general solution gives A = 0 and B = 0.
Applying the boundary conditions to the function f(ξ) and its derivatives f′(ξ) and f″(ξ) at ξ = 1, we obtain a system of homogeneous equations (6). We determine the relation between the root α of the transcendental frequency equation and the relative rigidity of the anchorage h̄ based on the recommendations of [1,6].
The frequency of beam vibration, with due account for α, is determined from the formula

$$p = \frac{\alpha^2}{l^2} \sqrt{\frac{EI}{m}},$$

where m is the mass per unit length of the beam. We suggest that vibrations of the linear section of the pipeline equipped with a block valve station can be damped with the help of an element (vibration isolation) containing elastic members.
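A small sketch of evaluating this formula with the two roots α quoted below (7.0685 for the first harmonic and 3.9266 for the fundamental tone). The bending stiffness EI is an assumed placeholder, while the 45 m span and 0.148 t/m linear weight are taken from the calculation example later in the text.

```python
import numpy as np

def beam_frequency(alpha, length_m, EI, mass_per_length):
    """Natural cyclic frequency p = (alpha^2 / l^2) * sqrt(EI / m)
    for a beam whose boundary conditions are encoded in the root alpha."""
    return (alpha ** 2 / length_m ** 2) * np.sqrt(EI / mass_per_length)

alpha_fundamental, alpha_first_harmonic = 3.9266, 7.0685

EI = 5.0e7   # bending stiffness, N m^2 (assumed placeholder)
mu = 148.0   # mass per unit length, kg/m (0.148 t/m from the text)
L = 45.0     # design span length, m (from the text)

for name, a in (("fundamental", alpha_fundamental),
                ("first harmonic", alpha_first_harmonic)):
    p = beam_frequency(a, L, EI, mu)
    print(f"{name}: p = {p:.2f} rad/s ({p / (2 * np.pi):.2f} Hz)")
```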
For vibration isolation of the block valve station, we use vibration isolators made from rubber Grade 3311 with a dynamic modulus of elasticity E_D = 250 N/cm². We assume the height of the deformed part of the vibration isolator to be H_p = 3 cm and the cross section of the deformed part of the vibration isolator to be S_i = 14.4 cm² (Fig. 1).
The rigidity of the rubber vibration isolators is calculated from the formula in [8]; the isolator rigidity is K = 11213 N/cm². The relative rigidity of the elastic anchorage is h̄ = hl/(E_D I) = 52.8 MPa, where E_D is the dynamic rubber modulus and I is the moment of inertia of the isolator. From the curves in Fig. 4 [3] and the value of h̄, we determine the root α of the transcendental frequency equation: for the first harmonic, α = 7.0685 (Fig. 4a), and for the fundamental tone, α = 3.9266 (Fig. 4b).
Fig. 4. Relation between the root α of the transcendental frequency equation and the relative rigidity of the anchorage
In practice, the effectiveness of the dampers is precisely characterized by equivalent viscous friction. The natural frequency of the offshore pipeline is determined using the formulas of [6,7]. The damping properties of the offshore pipeline are characterized by the damping coefficient γ. The force acting on the foundation and the block valve station is determined by the deformation of the vibration isolator located between them.
According to [21], the following damping coefficients are adopted: the damping coefficient of the pipeline design without concrete coating is γ_st = 0.005; the damping coefficient of the soil foundation in the vertical direction is γ_V = 0.026; the rigidity coefficient of the soil foundation is c₁ = 230.925 t/m³ [7]. The non-elastic resistance coefficient of the rubber vibration isolator is Q = 0.038.
Vibration isolator
The vibration isolator is intended to reduce the force R(t) generated by the vibration isolator and transmitted to the fixed foundation.
In accordance with [4], the design seismic force F_{E,u} is calculated taking into account the seismic hazard of the pipeline construction area, the rate of ground motion during earthquakes, and the design foundation response spectral ordinates.
Deformation rate of the vibration isolator
Under force action, the force R(t) is determined by the deformation and the deformation rate of the vibration isolator, i.e., by the coordinates y and ẏ. The equation of motion of the mass m may be written as

$$m\ddot{y} = F(t) - R(t). \qquad (10)$$

Considering that all variables of equation (10) vary according to the harmonic law, F(t) = F₀e^{jωt}, R(t) = R₀e^{jωt}, y(t) = Y₀e^{jωt}, we obtain a relation for the amplitude Y₀ of the forced oscillations of the mass m, where a is the relative frequency of the "pipeline + block valve station" system (the linear section of the pipeline with the block valve station), ω₀ is the natural frequency of the pipeline, c₁ is the rigidity coefficient of the soil foundation of the block valve station, c_gr = c₁ N/m³, and the damping coefficient of the "pipeline + block valve station" system is γ = γ_st + Q + γ_gr [7].
Force R₀ acting on the foundation
We adopt the seismic vibration model described by V.V. Bolotin. Following V.V. Bolotin, we approximate the non-stationary random function of seismic acceleration as a product of the nonrandom envelope A₀e^{−γt} and a stationary random function X₀(t). Within this system, oscillations are induced by seismic vibrations of the foundation that vary according to the harmonic law x(t) = X₀e^{jωt}.
The coordinate of object vibration y varies according to the law y(t) = Y₀e^{jωt}. Irrespective of the excitation mode and damping value, vibration protection is assessed through an expression for the amplitude R₀ of the force acting on the foundation (a standard form of this transmissibility relation is sketched after the list below). We calculate the movement amplitude P based on the allowable amplitude of foundation displacement Y_D; the amplitude of movement is P = 2.85. Analysis of various damper types shows that:
- a disadvantage of the dry friction damper is wear of its mating surfaces, which results in potential misalignment and jamming and causes damper failures;
- inconsistent viscous properties of oil due to temperature change, causing damper detuning, can be considered a problem of many viscous friction dampers.
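The paper's exact expression for R₀ did not survive extraction, so the sketch below uses the classical single-degree-of-freedom force transmissibility |R₀/F₀| as a stand-in, treating the combined coefficient γ = γ_st + Q + γ_gr assembled above as an equivalent viscous damping term.

```python
import numpy as np

def force_transmissibility(a, gamma):
    """Classical force transmissibility |R0/F0| for a damped isolator:
    a = omega/omega_0 is the relative frequency, gamma an equivalent
    viscous damping term (assumption standing in for the paper's formula)."""
    num = np.sqrt(1.0 + (gamma * a) ** 2)
    den = np.sqrt((1.0 - a ** 2) ** 2 + (gamma * a) ** 2)
    return num / den

# Combined damping of the "pipeline + block valve station" system per the paper:
gamma = 0.005 + 0.038 + 0.026   # gamma_st + Q + gamma_gr = 0.069

for a in (0.5, 1.0, np.sqrt(2.0), 3.0):
    print(f"a = {a:.2f}: T = {force_transmissibility(a, gamma):.2f}")
```

As expected, the transmissibility peaks near a = 1 and drops below unity only for a > √2, which is why added damping alone cannot reconcile resonance control with high-frequency isolation.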
Calculations of the rubber vibration isolator's impact on the vibration amplitude of the block valve station have been made for a pipeline of the following size: diameter 350 mm, weight 0.148 t/m, coefficient of subgrade reaction c_gr = 230.925 t/m³. The design span length of the pipeline section is 45 m [7].
Amplitude of forced vibration
The natural vibration frequency of the linear section of the pipeline is presented in Table 1.
Table 1. Frequency of the linear section of the pipeline.
Conclusions
The calculated R and Y values show insufficient vibration isolation when using only rubber elements: the vibration amplitude of the "pipeline + block valve station" system exceeds the required value specified in [10]. Vibration isolation is ensured by a combination of mechanical and hydraulic elements. This solution requires the installation of a hydraulic mount (Fig. 5).
Here p is the cyclic frequency of natural vibration and f(x) is the shape function of the bending vibrations of the beam. The general solution of equation (3) is written as a linear combination of the Krylov functions, where A, B, C and D are arbitrary constants determined from the corresponding boundary conditions.
Frequency equation of the bending vibrations of the beam: by equating the determinant of system (6) to zero, we obtain the frequency equation of the bending vibrations of the beam; its roots correspond to the first harmonic (a) and the fundamental tone (b) in Fig. 4.
Fig. 5. Schematic diagram of the hydraulic mount elastic element. Vibration isolation of the block valve station of the offshore pipeline can be ensured by installation of a hydraulic mount. Vibration isolation is based on the interaction of mechanical and hydraulic sub-systems: the mechanical part interacts with the hydraulic part through the equivalent resistance C_r, and the hydraulic part interacts with the mechanical part through the equivalent resistance C_M. | 2018-12-21T20:40:53.699Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "27b082ed37198ca57287cc81e47fe94be0608b21",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2016/36/matecconf_tpacee2016_01013.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "27b082ed37198ca57287cc81e47fe94be0608b21",
"s2fieldsofstudy": [
"Geology",
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
49472816 | pes2o/s2orc | v3-fos-license | Induction of innate immune memory via microRNA targeting of chromatin remodeling factors
Prolonged exposure to microbial products, e.g. lipopolysaccharide (LPS), can induce a form of innate immune memory that blunts subsequent responses to unrelated pathogens ("LPS tolerance"). Sepsis, which continues to have a high mortality rate, is a dysregulated, systemic immune response to disseminated infection. In some patients, this results in a period of immunosuppression ("immunoparalysis")1 with reduced inflammatory cytokine output2, increased secondary infection3, and increased risk of organ failure and mortality4. LPS tolerance recapitulates several key features of sepsis-associated immunosuppression5. Although various epigenetic changes have been observed in tolerized macrophages6–8, the molecular basis for tolerance, immunoparalysis, and other forms of innate immune memory has remained unclear. Here, we performed a screen for tolerance-associated microRNAs (miRNAs) and identified miR-221/222 as regulators of the functional reprogramming of macrophages during LPS tolerization. Prolonged stimulation with LPS in mice leads to increased expression of miR-221/222, which regulates brahma-related gene 1 (Brg1), causing transcriptional silencing of a subset of inflammatory genes that depend on SWI/SNF- (SWItch/Sucrose Non-Fermentable) and STAT- (signal transducer and activator of transcription) mediated chromatin remodeling, and promotes tolerance. In sepsis patients, increased miR-221/222 expression correlates with immunoparalysis and increased organ damage. Hence our results show that specific microRNAs can regulate macrophage tolerization and may serve as biomarkers of immunoparalysis and poor prognosis in sepsis patients.
LPS tolerance is an immunosuppressive form of innate immune memory that can be modeled in vitro by prolonged treatment of bone-marrow derived macrophages (BMDMs) with LPS (Extended Data Fig. 1a). As a result of this functional reprogramming of macrophages, a majority of LPS-induced genes are transcriptionally silenced, i.e. tolerized, and fail to be expressed upon re-stimulation 7,9 (Extended Data Fig. 1b). Using this in vitro model (Extended Data Fig. 1c-e) we identified miRNAs with expression patterns correlating with tolerance (Fig. 1a). We validated these findings using qPCR (Extended Data Fig. 1f-g) and found that several miRNAs are differentially expressed during tolerance but not during an acute LPS response. Levels of miR-222, in particular, increased late during the LPS response (Extended Data Fig. 1g), and correlated with tolerance induction (Fig. 1b). miR-222 was also upregulated to a lesser extent with prolonged tumor necrosis factor (TNF) or interleukin-1β (IL-1β) stimulation (Extended Data Fig. 1h), which have been shown to weakly induce innate immune tolerance 10,11 . Pre-treatment of BMDMs with interferon gamma (IFNγ), which inhibits LPS tolerance 8 , prevented LPS-induced upregulation of miR-222 (Extended Data Fig. 1i). Although miR-221 is processed from the same primary transcript as miR-222 12 , mature levels of miR-221 and of miR-222 do not always correlate (Extended Data Fig. 2a-c). Given that miR-221 is not responsive to LPS in BMDMs, we focused on miR-222; transfection of a miR-222 mimic suppressed LPS-induced inflammatory gene expression (Extended Data Fig. 2g). Conversely, antagonization of miR-222 resulted in increased inflammatory gene expression, even during a naïve LPS response. This effect was relatively mild early after stimulation (data not shown), likely due to low basal miR-222 expression, but increased in magnitude at later time points (Fig. 1d). To test the effect of miR-222 on tolerance, BMDMs were transduced with a miR-222 antagonist and tolerized in vitro.
Antagonization of miR-222 reduced the duration and magnitude of suppression of LPS-response genes (Fig. 1e). In some cases, tolerized cells with antagonized miR-222 produced as much IL-6 or IL-12p40 in response to LPS as non-tolerized cells (Fig. 1f).
In contrast to other genes, Tnf was suppressed at the mRNA, but not primary transcript level (Extended Data Fig. 2f-g), suggesting miR-222 regulates Tnf through a mechanism distinct from other tolerized genes. Indeed, the Tnf UTR has a predicted binding site for miR-222 (Extended Data Fig. 3a), and luciferase reporter and CRISPR-Cas9 experiments confirmed that Tnf is a direct miR-222 target (Extended Data Fig. 3b-g). However, post-transcriptional effects of miR-222 on TNF expression do not contribute to the effects of miR-222 on other genes, as TNF neutralization did not recapitulate the effects of miR-222 overexpression (Extended Data Fig. 3h-i).
Intact Tnf transcription suggested that miR-222 does not alter Toll-like receptor 4 (TLR4) signaling. Indeed, miR-222 overexpression did not affect LPS-induced IκBα degradation (Extended Data Fig. 4a-c). We therefore filtered computational predictions for miR-222 targets that were expressed in macrophages, did not affect TLR4 signaling, and decreased in expression late in the LPS response (between 8-24 hours of LPS stimulation; Extended Data Table 1). This approach identified Brg1 (Smarca4) as the most likely target affected by miR-222 during LPS tolerance. BRG1, a catalytic subunit of the SWI/SNF (BAF) complex, evicts Polycomb repressive complexes in an ATP-dependent manner, promoting chromatin accessibility and allowing for transcription factor recruitment to specific binding sites 13 . Notably, BRG1 is recruited to the promoters of late LPS response genes, which require SWI/SNF activity for their transcription 14 .
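The three-criterion filter described above is straightforward to express as a table operation. A sketch with a made-up candidate table follows; the column names and values are hypothetical, not the paper's data.

```python
import pandas as pd

# Hypothetical candidate table; in the paper, predictions came from PITA/MicroCosm
# and expression values from LPS time-course arrays (all fields here are invented)
candidates = pd.DataFrame({
    "gene":             ["Brg1", "GeneX", "GeneY", "GeneZ"],
    "expressed_in_mac": [True,   True,    False,   True],
    "affects_tlr4":     [False,  True,    False,   False],
    "expr_8h":          [100.0,  50.0,    80.0,    30.0],
    "expr_24h":         [40.0,   55.0,    20.0,    35.0],
})

# The three filters from the text: macrophage expression, no effect on
# TLR4 signaling, and decreasing expression between 8 and 24 h of LPS
hits = candidates[
    candidates["expressed_in_mac"]
    & ~candidates["affects_tlr4"]
    & (candidates["expr_24h"] < candidates["expr_8h"])
]
print(hits["gene"].tolist())  # ['Brg1']
```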
The predicted miR-222:Brg1 binding site is evolutionarily conserved (Extended Data Fig. 4d), and RNA levels of Brg1 and miR-222 during the LPS response were inversely correlated (Extended Data Fig. 4e). Artificial modulation of miR-222 caused an inverse effect on Brg1 mRNA and protein levels (Extended Data Fig. 4f-h). To confirm that this was due to direct targeting, the Brg1 UTR was cloned into a luciferase reporter. miR-222 dose-dependently suppressed luciferase activity resulting from co-transfection, but only if the miR-222 binding site in the Brg1 UTR was intact (Extended Data Fig. 4i). We next compared the effect of miR-222 overexpression on genes previously identified as SWI/SNF-dependent in macrophages 15 . Overexpression of miR-222 preferentially suppressed expression of SWI/SNF-dependent genes (Fig. 2a and Extended Data Fig. 4j). Furthermore, BRG1 recruitment to inflammatory gene promoters was reduced after miR-222 overexpression (Fig. 2b). Histone H3 acetylation, which occurs downstream 14 of BRG1 activity, was also reduced (Extended Data Fig. 4k). In contrast, histone H4 acetylation at these promoters, which occurs prior to BRG1 recruitment 16,17 , was unaffected (Extended Data Fig. 4l). Finally, CRISPR-Cas9 disruption of the miR-222 binding site in the Brg1 UTR in RAW cells (Extended Data Fig. 4m) prevented miR-222-mediated suppression of some SWI/SNF-dependent genes (Extended Data Fig. 4n).
To characterize the biological role of miR-222, we generated an animal knockout model. However, miR-221 and miR-222 are encoded in the same transcript; are induced by LPS in certain cell types (Extended Data Fig. 2b-c); have similar seed sequences (Extended Data Fig. 5a); have substantial overlap in predicted mRNA targets (Extended Data Fig. 5b); and are both predicted to bind to the same target site in the Brg1 UTR (Extended Data Fig. 5c). Furthermore, like miR-222, overexpression of miR-221 downregulates Brg1 levels (Extended Data Fig. 5d) and has downstream effects on inflammatory gene expression (Extended Data Fig. 5e). Therefore, we targeted both miRNAs for deletion 18 (Extended Data Fig. 5f-h). We then used qPCR and RNA-sequencing to characterize the LPS response in miR-221/222 knockout macrophages (Fig. 2c). Although the increase in Brg1 expression in peritoneal macrophages from knockout mice was modest compared to in vitro experiments, miR-221/222 knockout cells expressed higher levels of many Brg1-dependent genes, as well as Tnf (Extended Data Fig. 5i-j). Interestingly, some Brg1-dependent genes were more affected by miR-221/222 knockout than others (for instance, comparing Il6 and Nos2 in Extended Data Fig. 5j), suggesting differential sensitivity to changes in BRG1 levels.
To better understand the mechanisms of altered gene expression in cells lacking miR-221/222 (Extended Data Fig. 5k), we analyzed the promoters of affected genes to identify common regulatory features. Although we obtained similar results in multiple analyses of affected gene subsets (Extended Data Fig. 6a-f), we limited our main analysis to those LPS genes that are most suppressed in tolerized wildtype cells (358 genes/1036 genes responsive to LPS; Fig. 2d). Roughly half of these genes were expressed at higher levels in tolerized knockout cells compared to tolerized wildtype cells ("de-repressed" genes, Fig. 2e), and roughly half were unaffected ("unaffected" genes, Fig. 2f). The promoters of de-repressed genes were enriched for IRF and STAT1/STAT2 binding motifs (Fig. 2e), whereas those of unaffected genes were enriched for E2F and EGR family motifs (Fig. 2f). An analysis of predicted downstream functions of the de-repressed gene subset found an enrichment for IFN-response genes (Fig. 2e), and LPS-induced expression of many of these genes is reduced in Ifnar knockout cells 19 . This implies that many of these genes are a part of the late LPS response, transcribed as a result of STAT activation by autocrine/paracrine signaling by IFN generated from the initial LPS stimulation.
To determine whether the predicted binding motifs were utilized during the LPS response, we analyzed transcription factor occupancy using published ChIP-seq data [20][21][22][23] . Interferon regulatory factor 1 (IRF1) and IRF8 were found to be selectively pre-associated with de-repressed gene promoters (Fig. 2g and Extended Data Fig. 6g). However, STAT1 and STAT2 were recruited specifically to the promoters of de-repressed genes only after LPS stimulation (Fig. 2g). Other transcription factors, such as NF-κB, were not differentially recruited (Extended Data Fig. 6h). Furthermore, in cells with deletion or mutation of Irf1 or Irf8, respectively 24 , cytokine-induced H3K27 (histone H3, lysine 27) acetylation, a marker of active transcription, was diminished at the promoters of de-repressed genes, whereas deletion of Stat1 25 almost completely abolished cytokine-induced H3K27 acetylation at these genes (Fig. 2h). Consistent with this analysis, STAT2 recruitment was significantly higher at the promoters of de-repressed genes in tolerized miR-221/222 knockout cells after restimulation (Fig. 2i). Furthermore, Stat1 mRNA levels are higher in miR-221/222 knockout cells and in cells in which Brg1 is overexpressed (Extended Data Fig. 7i-j).
Therefore, miR-221/222 perturbs SWI/SNF promoter recruitment, leading to repression of STAT activity at inflammatory gene promoters. As BRG1 and STAT transcription factors work cooperatively only at certain gene promoters to allow IFN- and cytokine-induced gene transcription 26,27 , miR-221/222 may limit expression of specific genes (Fig. 2i).
We next examined miR-221/222 activity utilizing a model of sterile inflammatory shock induced by high-dose LPS injection. In this system, changes that decrease inflammation increase survival: therefore, we used this model mainly to determine whether the anti-inflammatory effects of miR-221/222 we observe in vitro also occur in vivo. After LPS injection, levels of miR-221 and miR-222 in circulating immune cells were elevated (Fig. 3a). To determine whether this is physiologically relevant, LPS tolerance was induced in wildtype and miR-221/222 knockout littermates by administering two sublethal doses of LPS prior to a lethal LPS dose: this regimen induces sufficient tolerance to prevent lethality in wildtype mice (Extended Data Fig. 7a-b). Although miR-221/222 knockout mice were also protected from lethality, the miR-221/222 knockout mice exhibited more symptoms of septic shock (Extended Data Fig. 7c), indicating decreased anti-inflammatory effects in the knockouts. To test whether miR-221/222 contributes to survival under more extreme conditions, we utilized a model of septic shock in which tolerance is only partially protective against lethality (Extended Data Fig. 7d-e). In this model, absence of miR-221/222 decreased the median survival time (from 36.5 to 20.5 hours) and the likelihood of surviving septic shock over a 72-hour period (Fig. 3b).
Although LPS-induced septic shock is used to study acute inflammation in vivo, this model does not recapitulate sepsis in patients, or necessarily predict the effect of inflammatory regulators on patient outcome. Therefore, to study the role of miR-221/222 in a model that better reflects the systemic innate response to pathogen challenge, we utilized a Salmonella enterica Typhimurium (S. Typhimurium) infection model. First, we performed in vitro assays using green fluorescent protein (GFP)-expressing S. Typhimurium infection of BMDMs. BMDMs from miR-221/222 knockout mice exhibited increased GFP per cell early after infection (Extended Data Fig. 7f-h). At later time points, this difference was not observed (Extended Data Fig. 7h), suggesting that despite increased phagocytosis, miR-221/222 knockout cells are more efficient at suppressing intracellular replication and/or survival. We confirmed this finding by lysing BMDMs and comparing bacterial colony-forming unit (CFU) recovery at early and late time points after infection (Extended Data Fig. 7i). To test miR-221/222 effects in vivo, wildtype and knockout mice were injected intraperitoneally with the same strain of S. Typhimurium. Two days post-infection, fewer bacterial CFUs were recovered from the liver and spleen of miR-221/222 knockout animals (Fig. 3c). In addition, miR-221/222 knockout animals exhibited increased survival time (Fig. 3d), suggesting that loss of miR-221/222 confers resistance to bacterial replication and/or dissemination. These findings suggest that miR-221/222 broadly suppress inflammation and innate immune function. During early stages of sepsis, miR-221/222 expression may be protective by limiting excessive inflammatory cytokine production that contributes to septic shock. Conversely, miR-221/222 appears to contribute to immunoparalysis, and increased miR-221/222 expression may enhance lethality at later stages of sepsis (Fig. 3e).
Because it is unclear which models most accurately resemble patient conditions, we next examined miR-221/222 expression in human disease. Consistent with results from murine cells, miR-221 and miR-222 are both upregulated in response to prolonged LPS stimulation of a human monocyte-like cell line, whereas only miR-222 is upregulated by LPS in this cell line after PMA-induced differentiation to a macrophage-like cell type (Extended Data Fig. 8a-b). Next we analyzed miR-221/222 expression in three patient cohorts. In the first cohort (Extended Data Fig. 8c), we quantified miR-221 and miR-222 levels in peripheral blood mononuclear cells (PBMCs) from 10 sequential intensive care unit (ICU) patients who met sepsis criteria 28 within 4 hours of ICU admission. Compared to PBMCs from healthy donors, miR-221 and miR-222, but not several other inflammation-associated miRNAs, were significantly higher in the ICU patient samples (Fig. 4a). Expression levels were then examined in a second patient cohort with acute decompensated liver disease and clinical suspicion of infection (Extended Data Fig. 8d). Patients with organ failure, defined by the chronic liver failure-sequential organ failure assessment (CLIF-SOFA), had significantly higher miR-222 levels than patients without (Fig. 4b). Levels of miR-221 correlated with miR-222 levels (Extended Data Fig. 8f), but were not increased to statistically significant levels (Fig. 4c). Levels of miR-222 in this cohort inversely correlated with BRG1 expression levels (Fig. 4d). In a set of matched PBMC and serum samples, miR-222 and TNF levels also inversely correlated (Fig. 4e). Finally, the inverse correlation between miR-222 and BRG1 was also observed in CD14+ monocytes sorted from the PBMC population of a third clinical cohort (Fig. 4f and Extended Data Fig. 8e), confirming changes in myeloid cell miR-222 and BRG1 levels.
Unlike generalized inflammatory markers, miR-222 elevation correlates specifically with severe sepsis. miR-222 levels do not correlate with inflammatory markers such as CRP or white blood cell count, but show a significant correlation with organ damage markers, including creatinine and the model for end-stage liver disease (MELD) score (Extended Data Fig. 8g-j). Hence, miR-222 expression may be a useful biomarker for discriminating patients who are undergoing septicemia-induced immunoparalysis and are, therefore, predisposed to organ failure and mortality.
In summary, the data presented in this report establish a model in which miR-221/222 restricts chromatin remodeling and silences transcription to enforce innate immune tolerance. Upon prolonged innate immune signaling, increased expression of miR-221/222 reduces BRG1 expression. The resulting changes in SWI/SNF complex levels, or composition, lead to selective expression of only those LPS-response genes with the most favorable chromatin states. The fact that significant changes in gene expression result from modest miR-221/222-dependent changes in BRG1 expression is consistent with previous reports that mutation or deletion of a single allele of a SWI/SNF subunit is sufficient to confer strong phenotypic effects 29,30 . Hence, by fine-tuning the levels of BRG1, miR-221/222 can prevent prolonged expression of STAT-dependent inflammatory genes in macrophages, thereby leading to tolerance or innate immunoparalysis (Extended Data Fig. 9). In contrast, robust activation of STAT1, for example by co-stimulation with IFNγ, can block 8 or even reverse 31,32 LPS tolerance and innate immunoparalysis. Consistent with such a role for STAT1, treatment with IFNγ has been shown to improve outcomes in sepsis 33 .
Although LPS tolerance promotes survival in murine models of sterile shock, sepsis patients likely succumb to primary or secondary 1 infections due to immunosuppression as a result of functional reprogramming of myeloid cells. Thus, paradoxically, the same innate immunoparalysis that is protective in the murine LPS-shock model would be responsible for organ damage and mortality in human sepsis patients. We identify miR-221/222 as a mediator of tolerance and show that miR-221/222 expression may distinguish organ failure patients at high risk of mortality from those with infection alone. Thus, monitoring of miR-221/222 or related biomarkers may help clinicians to stratify sepsis patients into groups who would benefit from pro-inflammatory immunotherapies versus those who might be helped by classical anti-inflammatory treatments.
Methods
Cell culture

RAW 264.7 cells (ATCC TIB-71) were cultured in DMEM supplemented with 10% fetal bovine serum. 293FT cells (Invitrogen R7007) and L-929 cells (ATCC CCL-1) were cultured in DMEM supplemented with 10% fetal bovine serum. Cells were purchased from the vendor and tested for mycoplasma contamination prior to use (no further authentication of line identity was performed). L-cell conditioned medium (LCM) was generated by filter-sterilizing the supernatant of L-929 cells that were allowed to grow for one week in culture. Primary BMDMs were generated by isolation and culture of mouse bone marrow in complete RPMI supplemented with 20% LCM for up to 12 days. Immortalization of BMDMs was performed as described 34 by inoculation with the J2 retrovirus. For cell stimulations, 10 ng/ml LPS (Sigma L8274), 10 ng/ml recombinant human TNF (R&D Systems 210-TA), 100 ng/ml recombinant mouse IL-1β (R&D Systems 401-ML-005), 100 ng/ml recombinant mouse IFNγ (BD Pharmingen 554587), 10 pg/ml recombinant mouse IL-10 (eBioScience 88-7104-ST), 10 μM dexamethasone (Sigma D402), and 0.01 μM estrogen (Sigma E2758) were used unless otherwise indicated. For tolerization experiments, BMDMs were stimulated with 10 ng/ml LPS for 15 hours (or as indicated), washed 5 times with 1× PBS, then allowed to rest for 2 hours in LPS-free complete medium supplemented with 20% LCM. BMDMs were then stimulated with 1 μg/ml LPS for 4 hours (for qPCR) or 12 hours (for ELISA), or as indicated.
miRNA microarray
Samples were treated as described, rinsed with 1× PBS, lysed in TRIzol, and sent to a commercial microRNA array profiling service (Exiqon). As part of the service, samples were labeled using the miRCURY Hy3/Hy5 Power labeling kit and hybridized on the miRCURY LNA Array (v.11.0 hsa, mmu and rno). All capture probes for the control spike-in oligonucleotides produced signals in the expected range. The quantified signals (background corrected) were normalized using the global Lowess (LOcally WEighted Scatterplot Smoothing) regression algorithm, and a list of differentially expressed miRNAs was returned.
Production of virus and BMDM transduction
Plasmids for miRNA overexpression (GeneCopoeia CmiR0001-MR01, MmiR3289-MR01, or MmiR3434-MR01) or antagonization (GeneCopoeia CmiR-AN0001-AM03 or HmiR-AN0399-AM03) were transfected into 293FT cells with the Lenti-Pac HIV Expression Packaging Kit (GeneCopoeia HPK-LVTR-20) or Lenti-Pac FIV Expression Packaging Kit (GeneCopoeia FPK-LVTR-20) to generate viral particles. BMDMs were inoculated by spin infection in 6-well plates in the presence of 6 μg/ml polybrene (Sigma H9268). Following spin inoculation, viral supernatant was immediately replaced with complete RPMI supplemented with 20% LCM. Cells were allowed to recover overnight. For primary BMDMs, plating for inoculation was generally performed on day 5 of differentiation. The first spin infection was performed on day 6, second spin infection (if necessary) was performed on day 7, and plating for experiments was performed on day 8.
RNA extraction, RT, and qPCR
Total RNA was extracted from samples using TRIzol reagent (Invitrogen 15596018). For reverse transcription and detection of miRNAs, the Universal cDNA Synthesis Kit (Exiqon 203301) and locked nucleic acid primers (Exiqon) were used. For other genes, approximately 1 μg of RNA was reverse transcribed with SuperScript III (Invitrogen 18080085). qPCR was then performed with VeriQuest Fast SYBR (Affymetrix 75675). The amplified transcripts were quantified using the comparative Ct method.
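The comparative Ct method mentioned here reduces to the standard 2^-ΔΔCt calculation; a minimal sketch with hypothetical Ct values:

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Comparative Ct (2^-ddCt) quantification: fold change of a target
    transcript relative to a reference gene and a control condition."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_treated - dct_control)

# Hypothetical Ct values (target gene vs. a housekeeping reference):
print(ddct_fold_change(22.0, 18.0, 26.0, 18.0))  # 16.0-fold induction
```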
Computational prediction of miRNA binding sites
miR-222 binding sites were predicted using the PITA algorithm 35 (http://genie.weizmann.ac.il/pubs/mir07/mir07_prediction.html) or the MicroCosm Targets program (which utilizes the miRanda algorithm; http://www.ebi.ac.uk/enright-srv/microcosm/htdocs/targets/v5/), as indicated in the text. MicroCosm Targets Version 5 was used to search for targets of mmu-miR-222 36 . UTRs and miRNA sequences were manually input to the PITA algorithm, and default search settings were utilized. All predictions were re-verified with their respective programs on Dec 5, 2013.
Construction of reporter vectors and luciferase reporter assays
The Brg1 UTR was amplified from IMAGE clone 30533489 (Open Biosystems MMM1013-9498346) and cloned into the pMIR-Report (Ambion AM5795) multiple cloning site using HindIII and SpeI restriction sites. The Tnf UTR was amplified from cDNA generated from BMDMs stimulated with LPS for 1 hour, and inserted into the pMIR-Report vector as performed for the Brg1 UTR. Reporter plasmids were transfected into 293FT cells along with a Renilla luciferase reporter (used to normalize for transfection efficiency). After 24 hours, Firefly and Renilla luciferase activity was quantified using the Dual-Luciferase Reporter Assay (Promega E1980).
CRISPR
The CRISPR design tool (crispr.mit.edu) was used to design guide RNAs for cloning into the PX458 (Addgene 48138) and PX459 (Addgene 48139) Cas9/sgRNA expression plasmids 37 to generate plasmids targeting the identified miR-222 binding sites for deletion. Cells were transiently transfected with empty vector or targeting vectors. After 24 hours, transfected cells were selected by 48 hours of puromycin treatment (PX459) or by sorting for GFP-positive (PX458) cells. Limiting dilution was performed to isolate clonal cell lines. Clones were screened for appropriate deletion by PCR. Deletion of targeted regions was confirmed by sequencing when necessary. Gene expression was compared between lines with successful deletion, lines with unsuccessful deletion, and lines generated by transfection with expression plasmids that lacked a Cas9 targeting sequence.
For deletion of the miR-222 binding site in the Tnf UTR, the following guide sequences were used:
Intracellular staining for flow cytometry
Cells were rinsed and fixed for 15-30 minutes at room temperature in 4% paraformaldehyde. Cells were rinsed and permeabilized by resuspension in 5% saponin for 10-20 minutes at room temperature. Either anti-IκBα (L35A5, Cell Signaling 4814), anti-Brg1 (H88, Santa Cruz sc-10768), or Rabbit mAb IgG Isotype Control (Cell Signaling 3900) was added, and cells were incubated for an additional 20 minutes at room temperature. Cells were rinsed and re-suspended in saponin with a 1:300 dilution of fluorochrome-conjugated secondary antibody (Alexa Fluor 488 Donkey Anti-Rabbit IgG, Invitrogen A21206; Alexa Fluor 546 Goat Anti-Rabbit IgG, Invitrogen A11010; or Alexa Fluor 546 Donkey Anti-Mouse IgG, Invitrogen A10036). After incubation at room temperature for 20 minutes, cells were rinsed, resuspended in PBS, and analyzed on a BD LSRII flow cytometer.
Chromatin immunoprecipitation
Cells from a 15 cm plate were fixed by incubation in 1% formaldehyde for 5 minutes, rinsed, and lysed by incubation for 5 minutes on ice in buffer L1 (50 mM Tris at pH 9, 2 mM EDTA, 0.1% NP-40, 10% glycerol, with protease inhibitors). Nuclei were spun down and resuspended in 500 μl buffer L2 (50 mM Tris at pH 8, 0.1% sodium dodecyl sulfate, and 5 mM EDTA). Sonication was performed in a Bioruptor, using 10 cycles of 30 seconds each. Immunoprecipitation was performed using 20 μl magnetic protein A beads and 5 μg anti-acetyl-histone H4 (Lys5; Millipore 07-327), 2 μg anti-Brg1 (H-88; Santa Cruz sc-10768), or 5 μg anti-acetyl-histone H3 (Millipore 06-599) per 50 μl of chromatin in a 500 μl volume. After overnight rotation at 4 °C, the supernatant was isolated. DNA was recovered from the supernatant by adding 20 μl of 5 M NaCl, 50 μl of 10% SDS, and 5 μl of proteinase K, and shaking for 2 hours at 60 °C (unbound fraction). Beads were washed 3× in high-salt buffer (20 mM Tris at pH 8.0, 0.1% SDS, 1% NP-40, 2 mM EDTA, and 0.5 M NaCl), and 3× in TE. DNA was eluted from beads by re-suspending beads in 100 μl elution buffer and shaking for 2 hours at 60 °C (bound fraction). Bound and unbound fractions were heated to 95 °C for 10 minutes. DNA was purified from fractions using the Qiagen PCR Purification Kit (28104). To check for promoter binding, qPCR was performed using DNA from the bound and unbound fractions. Bound/unbound ratios were normalized to alpha-crystallin ratios, as this locus should represent a silent gene.
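The bound/unbound normalization described here can be written compactly. The quantities below are hypothetical, with alpha-crystallin serving as the silent-locus control as in the text:

```python
def normalized_chip_ratio(bound, unbound, bound_ctrl, unbound_ctrl):
    """Bound/unbound qPCR signal at a promoter of interest, normalized to the
    same ratio at a silent control locus (alpha-crystallin in the paper)."""
    return (bound / unbound) / (bound_ctrl / unbound_ctrl)

# Hypothetical qPCR quantities (arbitrary units) for a target promoter
# and the alpha-crystallin control region:
enrichment = normalized_chip_ratio(bound=8.0, unbound=2.0,
                                   bound_ctrl=0.5, unbound_ctrl=2.0)
print(f"{enrichment:.1f}-fold enrichment over the silent-gene baseline")  # 16.0
```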
Amaxa nucleofection
BMDMs were nucleofected with 2 μg of plasmid DNA using the Amaxa Mouse Macrophage Nucleofector Kit (VPA-1009), in conjunction with the Amaxa Nucleofector II Device, according to the manufacturer-optimized protocol.
Salmonella enterica serovar Typhimurium infection
For these experiments, a GFP-expressing Salmonella enterica serovar Typhimurium strain (SL1344) was used. S. Typhimurium cultures were grown in LB supplemented with 100 μg/ml carbenicillin and 30 μg/ml streptomycin. Overnight cultures were diluted and allowed to grow for an additional hour before use to ensure that bacteria were in log growth phase. OD600 readings were correlated to previously determined CFU values and used to quantify the number of bacteria present in culture. BMDMs were infected by inoculation of DMEM growth medium (containing only streptomycin) with bacteria at a multiplicity of infection of 50. Plates were spun at 800 rcf for 5 minutes at 4 °C. BMDMs were incubated for 30 minutes at 37 °C. Cells were washed 3 times, then incubated in medium containing gentamycin (100 μg/ml for incubations of 2 hours or less, 12 μg/ml for longer incubations). BMDMs were subsequently analyzed for GFP content by flow cytometry, or lysed in water to allow for plating of lysate dilutions on LB agar plates containing carbenicillin to determine bacterial CFU counts.
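Converting an OD600 reading into an inoculum volume for a target MOI is a one-line calculation; the calibration constant below is a hypothetical, strain-specific value, not the paper's:

```python
def inoculum_volume_ml(target_moi, n_cells, od600, cfu_per_ml_per_od):
    """Volume of culture needed to infect n_cells at the target MOI,
    given an OD600 reading and a previously determined OD-to-CFU calibration."""
    cfu_per_ml = od600 * cfu_per_ml_per_od
    return target_moi * n_cells / cfu_per_ml

# Hypothetical calibration: OD600 of 1.0 corresponds to 8e8 CFU/ml
vol = inoculum_volume_ml(target_moi=50, n_cells=5e5, od600=0.4,
                         cfu_per_ml_per_od=8e8)
print(f"Add {vol * 1000:.1f} ul of culture")  # ~78.1 ul
```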
Mice
For BMDM generation, female C57Bl/6J mice, 7-10 weeks of age, were used unless otherwise noted. For tolerance and septic shock experiments, male C57Bl/6J mice, 6-10 weeks of age, were used. LPS (E. coli O55:B5; Sigma L2880) and D-(+)-Galactosamine hydrochloride (Sigma G0500) were re-suspended in sterile PBS and filter sterilized prior to intraperitoneal injection. For in vivo infection experiments, mice were given intraperitoneal injections of 1×10^7 CFU/kg of a GFP-expressing Salmonella enterica serovar Typhimurium strain (SL1344) suspended in PBS. Mice were maintained under specific pathogen-free conditions in animal facilities at Columbia University Medical Center. All animal experiments were carried out with the approval of the Columbia University Institutional Animal Care and Use Committee, and in compliance with regulations and guidelines set forth by Columbia University.
Generation of knockout mice
miR-221/222 knockout mice were generated at the Columbia University Transgenic Mouse facility. In brief, KV1 (129B6 hybrid) ES cells were electroporated with the linearized targeting construct discussed in Extended Data Fig. 6. After positive and negative selection, clonal cell lines were screened by PCR for proper integration of the construct. Positive lines were expanded, blastocyst injection was performed, and germline transmission was confirmed. miR-221/222 knockout mice were backcrossed to the C57Bl/6 background 5-8 times prior to experimental use.
Peritoneal macrophage isolation
5 ml of cold PBS was injected into the peritoneal cavity of euthanized mice. The peritoneum was gently massaged, the fluid was collected, and the process was repeated. The cell suspension was spun down, and cells were plated at 500,000 cells per well in 12-well plates. Macrophages were allowed to adhere overnight, and non-adherent cells were rinsed off with PBS washes.
Thioglycollate elicitation of peritoneal macrophages
3% thioglycollate was sterilized and aged for at least 2 months. 1 ml of the thioglycollate preparation was injected into the peritoneal cavity of each mouse 5 days prior to the isolation of macrophages (as described above).
Monocyte isolation
Bones were isolated from wildtype C57Bl/6J mice. Marrow was retrieved by crushing. Monocytes were purified using the EasySep Mouse Monocyte Isolation Kit.
RNA-sequencing
RNA-sequencing was performed by the JP Sulzberger Columbia Genome Center. Poly-A pull-down was used to enrich mRNAs from total RNA samples (200 ng-1 μg per sample, RIN > 8 required). Libraries were prepared using the Illumina TruSeq RNA prep kit and sequenced on an Illumina HiSeq 2000. Multiplexed and pooled samples were sequenced to a depth of 24-34×10⁶ reads per sample as 100 bp single-end reads. RTA (Illumina) was used for base calling, and bcl2fastq (version 1.8.4) was used for converting BCL to fastq format, coupled with adaptor trimming. Reads were mapped to a reference genome (mouse: UCSC/mm9) using TopHat (version 2.1.0) with 4 mismatches (--read-mismatches 4) and 10 maximum multiple hits (--max-multihits 10). To handle reads spanning exon-exon junctions, TopHat infers novel exon-exon junctions ab initio and combines them with junctions from known mRNA sequences (refgenes) as the reference annotation. The relative abundance (i.e., expression level) of genes and splice isoforms was estimated using Cufflinks (version 2.0.2) with default settings.
ChIP-sequencing analysis
Track data of genes of interest were loaded into Galaxy 38 (usegalaxy.org) using the UCSC table browser and mouse mm10 genome. Using Galaxy, previously published ChIP-seq data was then aligned to the mouse mm10 genome using the HISAT program (Galaxy Version 2.03) with default settings. BamCoverage (Galaxy Version 2.3.6.0) was then used to generate a coverage bigwig file, using default settings to scale to the size of the mm9 mouse genome. ComputeMatrix (Galaxy Version 2.3.6.0) and plotHeatmap (Galaxy Version 2.3.6.0) were then used to compare TF occupancy at gene promoters, using the TSS as the reference point.
Patient sample selection and processing (Fig. 4a)
We selected 10 consecutive patients newly admitted to a medical or surgical ICU who had the systemic inflammatory response syndrome (SIRS) and a known or suspected infection 39 . Patients were excluded from the study if they had an ICU admission or bacteremia within the previous 30 days. After obtaining informed consent from the patient or a surrogate, whole blood was drawn within 4 hours of ICU admission. PBMCs were isolated from whole blood of healthy human volunteers or from buffy coat isolates of ICU patients meeting sepsis criteria by centrifugation on a Ficoll cushion. RNA was isolated with the miRNeasy micro kit (Qiagen 217084) and reverse transcribed as described above. Experiments were performed with approval of the Institutional Review Board at Columbia University and in accordance with regulations and guidelines set forth by the university.

Patient sample selection and processing (Fig. 4b-f)

Additional patient cohorts were obtained from hospitalized patients with acute decompensation of chronic liver disease and suspected bacterial infection. Baseline characteristics and outcomes of patients with decompensated liver disease in the absence or presence of multiple organ failure syndrome (according to the EASL CLIF-C criteria for Acute-on-chronic Liver Failure 40 ) are given in Extended Data Fig. 8. Clinical scores such as model for end-stage liver disease (MELD) scores, bacterial culture counts, protein analysis, blood counts, and serum levels of C-reactive protein (CRP) and creatinine were obtained from routine laboratory analysis. The determination of serum concentration of TNF was performed by ELISA.
Patient sample selection and processing
The isolation and characterization of human immune cells and the use of clinical data was approved by the internal review board (Ethics committee of the Jena University Hospital, no. 3683-02/3). The study conformed to the ethical guidelines of the 1975 Declaration of Helsinki, and patients granted written informed consent prior to inclusion.
Statistics and sample collection
Student's t-tests were performed using the T.TEST function in Microsoft Excel. All other statistical tests were performed using Prism software. Unless otherwise stated, two-sided tests were performed. For samples using cell lines and cells isolated from inbred mice, the Student's t-test was often used. The distributional requirements of the test are assumptions: under the assumption of normally distributed residuals, the t-test is exact; given a non-normal distribution of cell line data, the test is no longer exact but approximate. Variation generally appears similar between groups being compared. All experiments were replicated in the laboratory at least 2 times. Unless otherwise indicated, in experiments utilizing primary cells, n represents the number of experiments performed with separate cell isolations; in experiments utilizing immortalized cells or cell lines, n represents the number of experiments performed using separate cell populations. Systematic randomization and blinding were not performed. Samples were excluded from the analysis if they were identified as outliers using the Grubbs' test, also called the ESD method (extreme studentized deviate).
For animal LPS shock studies, appropriate sample size was estimated based on an outcome variable of survival time, measured in hours. An estimate was based on using a one-tailed Student's t-test to determine statistical significance. Control animals were expected to succumb within 62 hours. Knockout animals were expected to become moribund 52 hours after LPS injection at the latest. Therefore, the minimal effect size was estimated to be 10 hours. Based on literature and experiments previously performed by our lab, we anticipated a standard deviation of 10 hours. Taking into account a power of 80% and alpha of 0.05, we calculated a sample size of 10 mice per genotype.
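The sample-size logic described here can be reproduced with a standard power calculation. Note that statsmodels' two-sample, one-tailed calculation under these exact inputs (effect size d = 1.0, alpha = 0.05, power = 0.80) returns roughly 13 per group, so the figure of 10 presumably reflects a different approximation; treat the sketch as illustrative of the procedure only.

```python
from statsmodels.stats.power import TTestIndPower

# Inputs from the text: minimal effect of 10 hours, SD of 10 hours
# (standardized effect size d = 10/10 = 1.0), one-tailed test,
# alpha = 0.05, power = 0.80, equal group sizes.
analysis = TTestIndPower()
n = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.80,
                         alternative="larger")
print(f"Required mice per genotype: {n:.1f}")  # ~13.1 under this approximation
```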
Data accessibility
RNA-sequencing data that support the findings of this study have been deposited in GEO with the accession code GSE89918 (https://www.ncbi.nlm.nih.gov/geo/).
Extended Data
Extended Data Figure 1. In vitro modeling of tolerance and miR-222 induction upon prolonged LPS stimulation a, Schematic of experiments performed in (b). b, Expression of LPS-response genes in control BMDMs that have undergone the given treatments. Four major expression patterns of LPS response genes in response to tolerization were noted (n=5 biologically independent samples). c, Schematic of experiments performed in (d). d, Cytokine production, measured by ELISA, by BMDMs re-stimulated with LPS overnight after pre-treatment with LPS for the given periods of time. Time points chosen for miRNA microarray analysis are highlighted in gray (n=3 biologically independent samples). e, Schematic of strategy for experiments performed in Fig. 1. f, Comparison of microarray (x-axis) and qPCR (y-axis) measurements of LPS-induced upregulation of miRNAs. Linear regression showing correlation between the two methods is plotted (n=16 miRNAs tested). g, qPCR verification of LPS-induced change in expression of 9 miRNAs (n=3 biologically independent samples). h, Expression of miR-222 after stimulation of BMDMs by anti-inflammatory and tolerance-inducing factors for the given lengths of time (n=5 biologically independent samples; Dex, Dexamethasone). i, Expression of miR-222 in response to LPS alone, or LPS after pre-treatment of BMDMs with IFNγ (n=4 biologically independent samples). For all bar and line graphs, mean +/− SEM is plotted. ** p < 0.01, * p < 0.05, + p < 0.1 as determined by 2-sided Student's t-test for paired values.
Extended Data Figure 2. Differential regulation of miR-222 and miR-221 and association of miR-222 with in vitro tolerance
a-c, Expression of miR-221 and miR-222 in response to LPS stimulation of BMDMs (a, n=4 biologically independent samples), peritoneal macrophages (b, n=3 biologically independent samples for miR-222 and n=4 biologically independent samples for miR-221), or monocytes isolated from the bone marrow (c, n=3 biologically independent samples), as determined by qPCR. d, LPS-induced miR-221 and miR-222 expression in BMDMs with or without IFNγ pre-treatment, as determined by qPCR (n=2 biologically independent samples). e, Schematic of experiments performed in (f-g) and Fig. 1c. f-g, LPS-induced gene expression at the mRNA (f) or primary transcript (g) level after miR-222 mimic transfection (n=5 biologically independent samples). For all bar and line graphs, mean +/− SEM is plotted. ** p < 0.01, * p < 0.05, + p < 0.1 as determined by two-sided Student's t-test for paired values.

Extended Data Figure 3. Tnf is a direct target of miR-222, but suppression of Tnf does not account for miR-222-mediated transcriptional silencing of late LPS response genes
a, Sequence and prediction scores of a miR-222 binding site in the Tnf UTR. b, Activity of a luciferase reporter construct in which the luciferase coding sequence is followed by either the complete Tnf UTR, or a UTR in which the predicted miR-222 binding site has been mutated to the sequence shown in (a) (n=6 independent experiments). c, CRISPR-Cas9 targeting strategy to delete predicted binding sites. d, RAW clones were screened for successful deletion of the miR-222 binding site by PCR across the targeted region of the UTR, using genomic DNA from the given clonal line as a template. Screening for Tnf UTR deletion is shown. Experiment was repeated twice with similar results. e, Successful deletion of the miR-222 binding site in RAW cell clones was confirmed by sequencing genomic DNA of the given cell line. miR-222 binding site in the TNF UTR is highlighted in yellow. f, LPS-induced Tnf expression in control and CRISPR-Cas9 targeted RAW cells (n=4 independent experiments). g, Average effect of miR-222 mimic transfection on LPS-induced Tnf mRNA levels in either control MEFs or MEFs which have undergone CRISPR targeting and clonal selection for deletion of the miR-222 binding site. Average of the effects from the 3 clonal lines (n=3 independent experiments). h, Wildtype BMDMs were transfected with a control or miR-222 mimic oligonucleotide. 24 hours later, cells were pre-treated with an isotype control (IgG) or TNF neutralizing (α-TNF) antibody for two hours, and stimulated with 10 ng/ml LPS. Expression of the given genes was measured by qPCR (n=4 biologically independent samples). i, Efficacy of TNF neutralization was confirmed by treating cells with IgG or α-TNF as above, followed by stimulation with 100 ng/ml recombinant mouse TNF (n=3 biologically independent samples). Gene upregulation was not detected (ND) in 2/3 samples treated with α-TNF. For all bar graphs, mean +/− SEM is plotted. ** p < 0.01, * p < 0.05, + p < 0.1 as determined by two-sided Student's t-test for paired values.
Extended Data Figure 4. Evidence of miR-222 targeting of Brg1
a, Example of gating that was used to exclude dead cells from flow cytometry analyses in (c), (g), and Extended Data Fig. 6i. b, Example of gating used to distinguish cells with high vs. low levels of IκBα, as analyzed in (c). c, Effect of miRNA overexpression (by viral transduction) on LPS-induced IκBα degradation in iBMDMs, measured by flow cytometry (n=4 independent experiments). d, Sequence and prediction scores of a miR-222 binding site in the Brg1 UTR. e, miR-222 and Brg1 mRNA levels in LPS-stimulated BMDMs (n=3 biologically independent samples). f, Brg1 mRNA levels in resting BMDMs 24 hours after transfection (n=4 biologically independent samples). g, Effect of miRNA overexpression or antagonization (by viral transduction) on BRG1 levels in iBMDMs, observed by flow cytometry. Representative of 4 independent experiments with similar results, quantified in (h). h, Flow cytometry analysis of BRG1 protein levels in transduced iBMDMs (n=4 independent experiments). i, Activity of a luciferase reporter construct in which the luciferase coding sequence is followed by either the complete Brg1 UTR, or a UTR in which the predicted miR-222 binding site has been mutated to the sequence shown in (d) (n=3 independent experiments). j, Quantification of the average effect of miR-222 mimic transfection on Brg1-dependent and -independent LPS-response genes (n=3 biologically independent samples). A two-sided Student's t-test for heteroscedastic values was used to compare ratios (miR-222 overexpression/control) at peak LPS-induced expression times.

...production in BMDMs transfected with given miRNA mimics, as measured by ELISA (n=5 biologically independent samples). f, Schematic of the miR-221/222 locus after targeting with a construct designed to generate both complete and conditional miR-221/222 knockout mice. g, Schematic of the miR-221/222 locus after breeding targeted mice (f) with EIIa-Cre mice, which results in complete deletion of miR-221/222. h, miRNA expression in BMDMs from littermates with a wildtype or miR-222 knockout allele (n=5 biologically independent samples). i, LPS-induced gene expression in naïve or tolerized peritoneal macrophages isolated from wildtype or miR-222 knockout littermates (n=7 biologically independent samples). j, Heatmap comparing the effect of Brg1/Brm knockdown 15 and miR-222 knockout on gene expression. Colors represent values of the given ratios; red indicates increased expression, white indicates no change, and blue indicates decreased expression. k, Heatmap of LPS-induced gene expression in wildtype and miR-222 knockout macrophages. For all bar graphs, mean ± SEM is plotted. ** p < 0.01, * p < 0.05, + p < 0.1 as determined by two-sided Student's t-test for paired (d-e) or heteroscedastic (i) values.
Extended Data Figure 6. Gene ontology and ChIP-seq analysis shows that genes affected by miR-221/222 knockout have differential gene functions and transcription factor binding at promoters
a-f, Enriched gene ontology terms (a-c) and transcription factor binding at promoters (d-f) of genes that are expressed at higher (2-fold or higher) or lower (0.5-fold or lower) levels in miR-221/222 KO macrophages after no stimulation (a, d; n=647 genes higher, 565 genes lower), LPS stimulation (b, e; n=143 genes higher; 121 genes lower), or LPS tolerization followed by restimulation (c, f; n=123 genes higher; 48 genes lower). PANTHER was used to identify GO terms. The top 4 for each category are shown; GO terms that are unique to either the higher or lower expression gene subsets are highlighted. g-h, IRF and NF-κB subunit occupancy at gene promoters; the gene subsets analyzed are described in Fig. 2h. For transcription factor analyses, previously published ChIP-seq data were utilized. i, RNA levels of genes in wildtype or miR-221/222 knockout peritoneal macrophages, quantified by a single RNA-sequencing experiment. j, qPCR for gene expression in WT BMDMs after Amaxa-based nucleofection of the given overexpression construct (n=3 biologically independent samples). For all bar graphs, the center value represents the mean and error bars (where applicable) represent the SEM.

Data in table (d) corresponds to PBMC analyses (Fig. 4b-d). Median with interquartiles or frequencies and percentages are shown. P values from Mann-Whitney U test or Fisher's exact test as appropriate (2-sided). *comparing any infection versus no infection. ** 4/30 (13%) and 1/10 (10%) patients were lost to follow-up within 30 days. Data in table (e) corresponds to monocyte analyses (Fig. 4f). Median with interquartiles or frequencies and percentages are shown. P values from Mann-Whitney U test or Fisher's exact test as appropriate (2-sided). *comparing any infection versus no infection. ** 1/10 (10%) patients were lost to follow-up within 30 days. f, Correlation between miR-221 and miR-222 levels in patients characterized in (d; n=30 patients). Bivariate nonparametric correlation analysis (Spearman's rho) was used to identify correlations between variables and p-values. g-j, Linear correlation of miR-222 expression and CRP (g), WBC count (h), creatinine levels (i), or MELD score (j) in samples from the patient cohort described in (d; n=30 patients). Bivariate nonparametric correlation analysis (Spearman's rho) was used to identify correlations between variables and p-values. For line graphs, mean ± SEM is plotted.

g, Lack of available BRG1 prevents chromatin remodeling at many gene promoters, and prevents downstream transcription factor recruitment. This prevents gene transcription from occurring in most cells.

...were considered (using microarray data generated for a prior study 42). Results were then sorted by p-value (generated by the MicroCosm program).
Brg1 (Smarca4) is highlighted in bold red font. (Note: multiple listings for a target indicate that more than one site prediction for that gene was made by the MicroCosm program.)
Biomass-based biomimetic-oriented Janus nanoarchitecture for efficient heavy-metal enrichment and interfacial solar water sanitation
Interfacial solar steam generation (ISSG), the use of solar energy to evaporate water at the water-to-vapor interface, offers prospects for the desalination and purification of water owing to its high energy-conversion efficiency and low-cost freshwater generation. Herein, inspired by the aligned nanostructures that plants use to transport nutrient ions efficiently, we design and construct a biomass-based Janus architecture evaporator with an oriented nanostructure for ISSG, using the ice-template method followed by biomimetic mineralization, with the resource-abundant and low-cost biomasses carboxymethyl cellulose and sodium alginate as the raw materials. Taking advantage of the oriented nanostructure, which allows efficient transport of water, and the coordination capacity of sodium alginate, which enables effective enrichment of heavy-metal ions, the biomass-based Janus architecture shows much lower thermal conductivity and an ultrahigh steam regeneration rate of 2.3 kg m−2 h−1, considerably surpassing those of previously reported oriented biomass-based evaporators. Moreover, the use of biomass precursor materials for this Janus evaporator guarantees minimal impact on the water ecology and environment during the regeneration of clean drinking water. This study presents an efficient, green, and sustainable pathway for ISSG to effectively achieve heavy-metal-free drinking water.
1 | INTRODUCTION
Nowadays, the freshwater crisis has become a pervasive issue affecting more than 100 countries worldwide. [1] Since the main difference between freshwater resources and sewage lies in the solutes (or impurities) in the water, the complete separation of water from its solutes is deemed the ultimate goal of water regeneration. It is widely believed that seawater desalination can mitigate the water shortage problem. [2][3][4][5] Although numerous strategies, such as continuous microfiltration and reverse-osmosis technologies, have been used for seawater desalination, desalination is not a universal remedy, especially in non-coastal or energy-scarce areas. [6][7][8][9][10] Learning from nature, elaborately designed interfacial solar steam generation (ISSG) systems with black materials have been developed for oil/microbe water separation and for seawater and sewage treatment, and show excellent potential for future industrialization. [11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26] Recently, it has been reported that heavy-metal wastewater can be purified to meet the demand for clean water by ISSG technology using a hydrogel. [27,28] However, there is still a long way to go toward simultaneously achieving efficient heavy-metal enrichment and clean drinking water production, because the solar-thermal conversion efficiency and steam generation need to be boosted further. Previous studies have mainly focused on the use of various components rather than on nanostructure engineering, which hinders the improvement of photothermal efficiency. Therefore, the construction of an ISSG evaporator with a rationally designed nanostructure is important for improving solar-thermal conversion efficiency while also efficiently removing heavy-metal ions and producing clean drinking water.
In natural plants, water and nutritional ingredients are transported from the soil to the roots and leaves through oriented nanostructures in the stem during photosynthesis; meanwhile, some water is converted into steam through phase transformation by the consumption of solar energy. [29] Herein, inspired by the function of oriented nanostructures in enhancing the transport of water and metal ions in natural plants (Figure 1), we develop a biomass-based Janus architecture with bionic-oriented pore nanostructures as an ISSG evaporator for the production of clean drinking water; the lower part is composed of a calcium-solidified carboxymethyl cellulose/sodium alginate (CCA) nanocomposite and the upper part is composed of CCA blended with a polypyrrole layer (CCAP). The CCA nanocomposite is prepared using an ice-template method, followed by biomimetic mineralization, with all-biomass components, including carboxymethyl cellulose and sodium alginate, in which the oriented pores and the hydrophilic polyhydroxy functional groups not only facilitate water transport and salt resistance but also promote the exchange and enrichment of heavy-metal ions in the bulk evaporator. Under the synergistic action of the high absorbance of CCAP and the efficient convection promoted by the oriented pores in this unique biomimetic architecture, the evaporator achieves a remarkable solar absorption of 98% and a low interfacial water vaporization enthalpy of 1475.7 J g−1, resulting in a high solar-thermal conversion efficiency of 93% and rapid steam generation of 2.3 kg m−2 h−1. As a whole, this study demonstrates that a biomass-based Janus architecture with bionic-oriented pore nanostructures can serve as an evaporator for the enrichment of heavy-metal ions during ISSG by mimicking the pore structures of natural trees. This technology can pave the way toward heavy-metal enrichment during sewage regeneration.
2 | EXPERIMENTAL SECTION

2.1 | Preparation of the CAP-CA biomass-based Janus architecture

Liquid nitrogen was poured into an insulated open container. The lower part of a copper block was immersed in the liquid nitrogen, while the upper part was exposed to room-temperature air. A polydimethylsiloxane mold was placed on the surface of a room-temperature copper plate; half of the mold was then filled with the carboxymethyl cellulose/sodium alginate (CA) solution and placed on the low-temperature copper block for oriented freezing. After the CA solution was completely frozen, the CA/polypyrrole (CAP) solution was poured in to fill the other half of the mold. After the CAP solution was completely frozen, the CAP-CA Janus nanoarchitecture was obtained by freeze-drying over 4 days.
2.2 | Preparation of the CCAP-CCA biomass-based Janus architecture
First, 4.44 g of CaCl2 powder was dissolved in 200 ml of absolute ethanol. Then, the CAP-CA Janus artificial nanoarchitectures were immersed in the CaCl2 ethanol solution for 2 days. Finally, the CCAP-CCA Janus nanoarchitectures were collected by drying at 80°C overnight.
2.3 | Evaporation performance evaluation
The rate of solar steam generation was recorded using an electronic analytical balance (MTL-MS204, accuracy 0.1 mg) and communicated in real time to a computer. The surface temperature, the steam temperature, and the temperature distribution of the evaporation system were determined using a thermal imaging camera. The solar steam generation performance was assessed using a Xenon Light Source (PLS-SXE300D/300DUV; PerfectLight) outputting a simulated solar flux of 1 sun at 1000 W m−2. The solar flux was monitored using a thermopile connected to a power meter (PL-MW2000; PerfectLight). All samples were placed in a hole of 3.80 cm2 in the middle of a closed-cell foam, and the upper area of the remaining part was fully covered with a metal foil to reflect the solar irradiation. A long glass tube was placed to restrict air convection. The evaporation rate of each sample was determined on the basis of the mass change over 1 h after quiescence in a dark room for 30 min, with the ambient temperature and relative humidity maintained at 25°C and 30%, respectively. The mass of water evaporated in a dim room was subtracted from the total mass change when evaluating the energy efficiency. [30]
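To make this bookkeeping concrete, the minimal sketch below converts balance readings into an areal evaporation rate and applies the dark-evaporation subtraction described above. The 3.80 cm2 opening and the 1 h averaging window are taken from the text; the mass readings themselves are hypothetical placeholders chosen only to reproduce rates of the reported magnitude.

```python
# Sketch of the evaporation-rate evaluation; mass readings are hypothetical.
AREA_M2 = 3.80e-4   # evaporation area: 3.80 cm^2 converted to m^2 (from the text)

def evaporation_rate(mass_start_g, mass_end_g, hours, area_m2=AREA_M2):
    """Mean areal evaporation rate in kg m^-2 h^-1 from two balance readings."""
    dm_kg = (mass_start_g - mass_end_g) / 1000.0
    return dm_kg / (area_m2 * hours)

# Hypothetical 1 h mass losses under 1 sun illumination and in the dark:
lit = evaporation_rate(20.000, 19.126, 1.0)    # ~2.3 kg m^-2 h^-1
dark = evaporation_rate(20.000, 19.924, 1.0)   # ~0.2 kg m^-2 h^-1
print(f"gross: {lit:.2f}, dark-corrected: {lit - dark:.2f} kg m^-2 h^-1")
```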
3 | RESULTS AND DISCUSSION

3.1 | Material design and fabrication strategy
The biomass-based Janus architecture with a biomimeticoriented nanostructure is constructed using the orientated ice template method, followed by calcification ( Figure 2). First, a mixed solution of carboxymethyl cellulose and sodium alginate (Supporting Information: Figure S1A) was freeze-cast upon flat copper to prefabricate the framework of this biomimetic architecture with parallel channel structures. Then, the upper part of this Janus architecture was prepared by in situ coating polypyrrole on the top surface of the as-prepared biomimetic architecture framework, while the lower part was fabricated using an ion-exchange method to form a cross-linking network by soaking the prepared framework in an alcohol solution containing calcium ions. Later, the prepared sample was freeze-dried in a cold trap. Using the above methods, a well-defined Janus architecture with a biomimetic-oriented nanostructure was optimally constructed.
For the upper part of this biomimetic architecture, after polymerization of pyrrole using ammonium persulfate as the oxidant, the cellulose/sodium alginate/polypyrrole composite appears black in color (Supporting Information: Figure S1A,B). Observation of the cross-section of this sample, in terms of the morphology of the interface between CA and CAP, shows that the upper part was coated with polypyrrole, which grew along the orientation of the biomimetic architecture framework (Supporting Information: Figure S1C-E). For the lower part of this biomimetic architecture, to construct a molecular cross-linking network and enhance the stability in water, an ion-exchange strategy was applied, in which calcium ions in the alcohol solution were exchanged with sodium ions in the sodium alginate, thus forming a cage-like complex. Owing to the parallel porous structure of this biomimetic architecture framework (Supporting Information: Figure S2), the ion exchange can proceed completely through the inner surfaces of the pores.
X-ray diffraction (XRD) patterns (Figure 3A) further reveal the synthesis process of the biomimetic Janus architecture through the changes in phase composition. In addition to the typical broad peak of sodium alginate seen in CA, a typical peak of Na2SO4 appears in CAP; this crystalline salt consists of sulfate ions from the reduction product of ammonium persulfate and sodium ions from the sodium alginate. Meanwhile, CCA contains NaCl, indicating that a substitution reaction occurs between the sodium ions of sodium alginate and calcium ions during the calcification process. Besides, some calcium sulfate and Na2Ca5(SO4)6 crystals are formed during the preparation of CCAP owing to their low solubility.
The components of the biomimetic Janus architecture strongly influence its resistance to dissolution. As shown in Supporting Information: Figure S3, the biomimetic Janus architecture with sodium alginate and carboxymethyl cellulose at a ratio of 1:1 is relatively stable when immersed in an aqueous solution. It is presumed that decreasing the content of sodium alginate leads to a low degree of cross-linking of the calcium alginate network in the as-prepared architecture, while decreasing the content of carboxymethyl cellulose weakens the structural strength and stability, resulting in the collapse of the porous structure in aqueous solution. Owing to this well-designed composition with a bionic-oriented structure (Supporting Information: Figure S4), the prepared biomimetic Janus architecture retains a monolithic structure after immersion in an aqueous solution for periods from 5 h to over 10 days, suggesting that this method significantly improves the stability of the composites (Figure 2B and Supporting Information: Figure S5).
3.2 | Structural characterizations and analysis
The functional groups of the as-prepared biomimetic Janus architecture were investigated by Fourier transform infrared spectroscopy (FTIR). In the FTIR spectra (Figure 3B), the peaks at 1726 and 1543 cm−1 are assigned to the C-N stretching peak and the ring stretching peak of PPy, revealing the existence of PPy particles in the biomimetic Janus architecture. X-ray photoelectron spectroscopy further confirms the chemical composition and molecular structural features of the architecture (Supporting Information: Figure S6A). The characteristic peak located at 347.7 eV is attributed to Ca 2p (Supporting Information: Figure S6B), indicating the presence of calcium in CCA and CCAP and the successful calcification of CA and CAP. The C1s peak at 286 eV in the scan spectra of the biomimetic Janus architecture (Figure 3C) was fitted and divided into several component peaks; their positions, area ratios, and the corresponding bonds are listed in Supporting Information: Table S1, with the fitted components including C-C-C (C1S3). The peaks that appeared at ∼289.2 eV are assigned to polypyrrole-C in CAP and CCAP, indicating the existence of the PPy component in the biomimetic Janus architecture (Figure 2B), which is further confirmed by the nitrogen peaks that appeared in CAP and CCAP (Supporting Information: Figure S6C).
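As a sketch of the kind of C1s deconvolution described here, the snippet below fits a synthetic envelope as a sum of Gaussian components. The component centers (284.8, 286.0, and 289.2 eV, the last matching the polypyrrole-C position quoted above) and the spectrum itself are illustrative assumptions, not the measured data.

```python
# Illustrative C1s peak deconvolution; synthetic data, assumed peak centers.
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def envelope(x, a1, a2, a3):
    # Component centers (eV) fixed at assumed C-C, C-O, and PPy-C positions.
    return (gauss(x, a1, 284.8, 0.6) + gauss(x, a2, 286.0, 0.6)
            + gauss(x, a3, 289.2, 0.6))

x = np.linspace(282, 292, 400)
rng = np.random.default_rng(0)
y = envelope(x, 1.0, 0.7, 0.3) + 0.01 * rng.normal(size=x.size)  # fake spectrum

amps, _ = curve_fit(envelope, x, y, p0=[1.0, 1.0, 1.0])
# With equal widths, relative amplitudes equal relative peak areas.
print("relative areas (C-C, C-O, PPy-C):", np.round(amps / amps.sum(), 2))
```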
To further demonstrate the features of the as-prepared biomimetic Janus architecture, mechanical compression strength tests were conducted. The tangential and radial compressive stress-strain curves (Figure 3D and Supporting Information: Figure S7) show that the structural strength of the architecture improved markedly, by about 10-fold, after calcification, which is attributed to the cross-linking network of calcium alginate formed in CCA and CCAP. Although CA shows slightly decreased compressive strength after PPy doping and calcification, a stable structure is still maintained.
Thermogravimetric analysis (TGA) in air flow was performed to investigate the thermal stability. The TGA curves (Figure 3E) show that the process involves three zones: the hygroscopicity part (removal of free and intermediate water), marked in blue; the thermostability part (removal of crystal water), marked in white; and the decomposition part (removal of hydroxyl groups), marked in pink. Comparison of the curves in the hygroscopicity zone shows that the free and intermediate water content is ranked, from high to low, CCA, CCAP, CA, and CAP, which suggests that the calcified architecture has much higher moisture absorption, while PPy doping only slightly affects hygroscopicity. The curves in the thermostability zone demonstrate that the calcifying and PPy-doping treatments significantly improved the thermal stability of CA, from 140°C to 168°C and 230°C, respectively. The nitrogen physisorption isotherms (Supporting Information: Figure S8A) suggest that the Brunauer-Emmett-Teller surface areas of CA, CAP, CCA, and CCAP differ only slightly. The pore size distribution (Supporting Information: Figure S8B) also shows many mesopores and macropores, which are useful for mass transport during application.
3.3 | Performance of interfacial solar-thermal conversion steam generation
The biomimetic Janus architecture has the same parallel cross-profile structures as natural wood, which can capture sunlight by increasing the optical path and reducing reflection. After doping with pyrrole, CAP and CCAP show high broadband absorption, with an absorptance of 95%-98% over the wide wavelength range of 250-2500 nm (Figure 4A and Supporting Information: Figure S9). Compared to natural wood, the as-prepared biomimetic Janus architecture, such as CCAP, has a lower thermal conductivity of 0.0328 W m−1 K−1, which is useful for improving the solar-thermal conversion efficiency (Figure 4B). [31] The equilibrium temperature curves (Figure 4C) of CAP and CCAP show excellent solar-thermal performance: the temperature of their black upper surface quickly increases to 60°C under 1 sun illumination, which is 22°C higher than that of CA and CCA (38°C; Supporting Information: Figure S10A). It is noteworthy that we performed careful experiments and characterizations to measure the thermal conductivity of the CCA gel, the CCAP gel, and the CCAP aerogel after immersion in water. As shown in Supporting Information: Figure S10B, the thermal conductivities of the CCA gel (606 mW m−1 K−1) and the CCAP gel (619 mW m−1 K−1) differ only slightly from that of pure water (683 mW m−1 K−1), whereas the thermal conductivity of the CCAP aerogel (617 mW m−1 K−1) decreased to 550 mW m−1 K−1 after immersion in water, which confirms that the CCAP aerogel provides better thermal regulation in water than the same materials without a bionic-oriented structure. Furthermore, both CCA and CCAP can absorb water droplets in a very short time (Figure 4D) and have a very high water saturation capacity (about 22.2 and 19.4 times their own weight, respectively). This suggests that they are superhydrophilic, with excellent water absorption (Supporting Information: Figure S11), owing to the increased content of saturated water enabled by the interpenetrating network in the biomimetic Janus architecture. [32] To estimate the equivalent vaporization enthalpy of water in the bio-inspired Janus architecture, the water evaporation rates in the dark were further tested (Supporting Information: Figure S12). The test results (Supporting Information: Figure S13A) indicate that the water evaporation rate in CCAP with various compositions is much higher than that in CCA with only the hydrophilic component. Since the energy inputs in the dark room, consisting mainly of thermal convection and conduction at the gas-liquid interface, are equivalent, the equivalent vaporization enthalpy is given by the energy balance E_e m_e = E_w m_w, i.e., E_e = E_w m_w / m_e, where E_e and E_w are the equivalent vaporization enthalpy and the vaporization enthalpy of bulk water, and m_w and m_e are the mass changes of bulk water and of water in the presence of the bio-inspired architecture, respectively. Considering the vaporization enthalpy of water to be 2444 J g−1, the calculated E_e values of water in CCA and CCAP are 1696 and 1475 J g−1 (Supporting Information: Figure S13B), respectively, suggesting that the hydrophilic networks of the bio-inspired Janus architecture facilitate the evaporation of water. To investigate the effect of the bio-inspired Janus architecture on the phase-change behavior of water, we studied its melting behavior using differential scanning calorimetry (DSC) analysis.
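Before turning to the DSC results, a quick numerical check of the enthalpy relation reconstructed above (E_e = E_w m_w / m_e, from the dark-room energy balance): the snippet below back-calculates the dark mass-change ratios implied by the reported E_e values. The 2444 J g−1 bulk enthalpy and the 1696/1475 J g−1 results are from the text; the ratios are derived figures, not measurements.

```python
# Worked check of E_e = E_w * m_w / m_e using values quoted in the text.
E_W = 2444.0  # vaporization enthalpy of bulk water, J/g (from the text)

def equivalent_enthalpy(m_w, m_e, e_w=E_W):
    """E_e from dark mass changes of bulk water (m_w) and evaporator water (m_e)."""
    return e_w * m_w / m_e

# Implied ratios m_e/m_w for the reported equivalent enthalpies:
for name, e_e in [("CCA", 1696.0), ("CCAP", 1475.0)]:
    print(f"{name}: implied m_e/m_w = {E_W / e_e:.2f}")  # CCA ~1.44, CCAP ~1.66
```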
The DSC curves (Supporting Information: Figure S14) show that the endothermic peaks of CCAP and CCA each consist of two parts, representing the desorption of free water and the melting of intermediate water. The fraction of intermediate water (ω_i) is estimated from the ratio of the measured melting enthalpies, ω_i = ΔH_w / ΔH_f (Supporting Information: Table S3), in which ΔH_w and ΔH_f are the measured melting enthalpy values of the biomass-based Janus architecture and of subcooled pure water, respectively. The weight fraction of free water (ω_f) then follows as ω_f = ω_w − ω_i, where ω_w is the total water fraction. All of the measured and estimated results (Supporting Information: Table S2) indicate that the melting enthalpy values of intermediate water in CCA (−190.5 J g−1) and CCAP (−164.3 J g−1) decrease significantly, and that doping CCA with PPy (i.e., CCAP) strongly influences the phase-change process of water.
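A numeric sketch of this water-state bookkeeping, under the reconstruction above (ω_i estimated as ΔH_w/ΔH_f and ω_f = ω_w − ω_i), is given below. The melting enthalpies are the values quoted in the text; the pure-water ΔH_f and the total water fraction ω_w are assumed placeholders, since they are only tabulated in the Supporting Information.

```python
# Water-fraction bookkeeping; DH_F and omega_w are assumed placeholders.
DH_F = -333.5   # approx. melting enthalpy of (sub)cooled pure water, J/g (assumed)

def water_fractions(dh_w, omega_w):
    omega_i = dh_w / DH_F            # intermediate-water fraction, as reconstructed
    return omega_i, omega_w - omega_i

for name, dh_w in [("CCA", -190.5), ("CCAP", -164.3)]:   # values from the text
    oi, of = water_fractions(dh_w, omega_w=0.95)          # omega_w assumed
    print(f"{name}: intermediate {oi:.2f}, free {of:.2f}")
```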
The surface temperatures of samples in water under 1 sun illumination were recorded using a thermal infrared camera. The equilibrium temperature of CCAP is 10°C higher than the ambient temperature (Supporting Information: Figure S15), while the surface temperatures of bulk water and CCA increase by only 1.5°C and 2.7°C, respectively. Owing to its excellent solar-thermal conversion performance, water absorption, thermal management, and low evaporation enthalpy, the biomimetic Janus architecture (CCAP) achieves a high water evaporation rate of up to 2.3 kg m−2 h−1 under 1 sun irradiation (Figure 4E), 4.4 times the evaporation rate of bulk water (0.52 kg m−2 h−1) and 2.0 times that of water in CCA (1.17 kg m−2 h−1). The energy efficiency (η) is further calculated as η = ṁ E_e / (C P_solar), in which ṁ is the mass change within 1 h (all experimental data were calibrated with the dark evaporation data), E_e is the equivalent vaporization enthalpy, P_solar is the solar irradiation power of 1 sun, and C is the optical concentration (1 sun) on the evaporator surface. The biomimetic Janus architecture (CCAP) shows a high energy efficiency of 85.8%, much larger than that of bulk water (27.3%) and CCA (46.6%). A long-term ISSG test under 1 sun irradiation demonstrates that CCAP shows good cyclic stability (Supporting Information: Figure S16).
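As a consistency check on the efficiency formula as reconstructed above, the sketch below evaluates η with the reported E_e, P_solar, and C. Note that the gross rate of 2.3 kg m−2 h−1 would give roughly 94%; the reported 85.8% is consistent with the stated calibration against dark evaporation, which implies a net rate of about 2.1 kg m−2 h−1 (a derived figure, not reported in the text).

```python
# Consistency check of eta = m_dot * E_e / (C * P_solar) with quoted values.
P_SOLAR = 1000.0   # W m^-2 (1 sun, from the text)
C_OPT = 1.0        # optical concentration (from the text)

def efficiency(m_dot_kg_m2_h, e_e_J_per_g):
    evap_power = m_dot_kg_m2_h * 1000.0 / 3600.0 * e_e_J_per_g  # W m^-2
    return evap_power / (C_OPT * P_SOLAR)

print(f"eta at gross 2.3 kg m^-2 h^-1: {efficiency(2.3, 1475.0):.1%}")
net = 0.858 * C_OPT * P_SOLAR * 3600.0 / (1000.0 * 1475.0)
print(f"net rate implied by eta = 85.8%: {net:.2f} kg m^-2 h^-1")
```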
These results indicate that the outstanding ISSG performance of CCAP relies mainly on its high solar absorption, low thermal conductivity, and small vaporization enthalpy, which originate from the well-designed cross-linked networks in the biomimetic Janus architecture. Compared with previously reported materials, this biomimetic Janus architecture offers significant advantages in both thermal management and water evaporation rate (Figure 4F and Supporting Information: Table S3), further demonstrating its potential for practical ISSG.
3.4 | Performance of solar desalination and metal-ion enrichment
To demonstrate the practical solar desalination performance of the biomimetic Janus architectures, we conducted ISSG tests under 1 sun irradiation. The water evaporation rate of CCAP-CCA in various solutions is maintained above 2.0 kg m−2 h−1 (Figure 5A), similar to that in pure water, indicating the structural stability of the biomimetic Janus architectures during wastewater purification. In addition, we find that a CCAP-CCA sample without the oriented structure does not show a significantly different evaporation rate, mainly because the orientation makes no obvious difference to water transport, heat management, or solar absorption under these conditions. When this biomimetic Janus architecture is used for ISSG desalination of seawater, lake water, and sewage containing various metal ions, such as Na+, K+, Mg2+, and Ca2+, the metal-ion concentrations are significantly reduced, by 2-4 orders of magnitude (Figure 5B), which fulfills the drinking water standards defined by the US Environmental Protection Agency and the World Health Organization. There are abundant hydroxyl/carboxyl groups in the biomimetic Janus architectures, especially in CCA, which can be used to enrich toxic metal ions, such as Pb2+, Cu2+, and Cd2+, from wastewater. Because of the strong coordination between metal ions and hydroxyl/carboxyl groups (Figure 5C), these toxic metal ions are enriched in the biomimetic Janus architectures, forming a more stable network structure for long-term desalination. To further explore their feasibility for practical metal-ion enrichment, the ISSG process was carried out for wastewater containing mixtures of more than two kinds of metal ions. A neutral solution, marked Metal 1, containing Cu2+/Cr3+/Cd2+/Pb2+/Ni2+/Co2+/Mn2+, and an acid solution, marked Metal 2, containing Ba2+/Fe3+/Zn2+/Mg2+/Al3+/Hg2+ (Supporting Information: Figure S17), were prepared for testing the wastewater treatment performance. When the biomimetic Janus architectures are soaked in the wastewater used for solar desalination, most of these metal ions are absorbed and enriched in them, resulting in lighter colors of the solutions. After the ISSG desalination treatment, the biomimetic Janus architectures show the colors of these metal ions (Figure 5C) and the water collected in the beaker becomes transparent. Inductively coupled plasma-atomic emission spectrometry (ICP-AES) results (Figure 5D) show that the concentrations of all 14 typical metal ions in the collected water are considerably lower than the drinking water standards defined by the World Health Organization (2008, indicated by the red lines), which reveals that healthy drinking water is regenerated by the biomimetic Janus architectures and further indicates that they can be used for the regeneration of pure water. Moreover, scanning electron microscopy-energy-dispersive X-ray (SEM-EDX) results for the metal-ion-enriched biomimetic Janus architectures (Supporting Information: Figure S18) show that all metal ions are detected on their skeletons. XRD patterns (Supporting Information: Figure S19) demonstrate that all enriched metals in the Janus architecture are amorphous; it can be speculated that the metal ions exist in the form of complexes with alginate (Supporting Information: Figure S20).
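A small sketch of the rejection arithmetic behind the "2-4 orders of magnitude" claim follows. The feed and condensate concentrations and the threshold values are hypothetical placeholders; only the comparison logic mirrors the ICP-AES analysis described above.

```python
# Log-reduction bookkeeping for ion rejection; all numbers are illustrative.
import math

feed = {"Na+": 10800.0, "Mg2+": 1290.0, "Pb2+": 5.0}       # assumed feed, mg/L
condensate = {"Na+": 2.1, "Mg2+": 0.4, "Pb2+": 0.003}       # assumed after ISSG
limit = {"Na+": 200.0, "Mg2+": 150.0, "Pb2+": 0.01}         # illustrative thresholds

for ion in feed:
    orders = math.log10(feed[ion] / condensate[ion])        # orders of magnitude removed
    ok = condensate[ion] < limit[ion]
    print(f"{ion}: {orders:.1f} orders removed, below threshold: {ok}")
```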
The above results show that, in the ISSG process, the biomass-based Janus architecture works on a principle similar to that of water transportation in plants during photosynthesis. On the one hand, water carrying salt and metal ions is transported to the evaporation interface, where the phase transformation is driven by solar energy. On the other hand, metal ions in the water are enriched in the CCA layer of the biomass-based Janus architecture.
4 | CONCLUSION
Inspired by natural plants, whose oriented pore structures transport nutrient ions from the soil during photosynthesis, we develop a biomimetic Janus architecture from all-biomass precursors to obtain clean drinking water while simultaneously removing heavy-metal ions with high efficiency. With high light absorbance (98%), outstanding thermal management (λ = 32.8 mW m−1 K−1), and low vaporization enthalpy (1475 J g−1), the prepared biomimetic Janus architecture of CCAP-CCA delivers a high steam regeneration rate of 2.3 kg m−2 h−1 under 1 sun illumination. Furthermore, because the superhydrophilicity and macroporous skeleton enable transport of sufficient water and salt, CCAP-CCA shows long-term evaporation stability and a high water evaporation rate (2.0 kg m−2 h−1) in metal-ion solutions. This study provides a pathway for heavy-metal-ion enrichment during the construction of high-performance solar desalination evaporators. This method is important for various practical applications, including sewage reclamation and heavy-metal wastewater reduction.
"year": 2022,
"sha1": "6bd95a85a6b819304e98c7bf1d1a835553f535e2",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/idm2.12057",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "0b3200b540b0222f7953205789bb7fa103f0f733",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
The Twelfth East Asia-Pacific Conference on Structural Engineering and Construction

Research and Practice of Response Control for Tall Buildings in Mainland China
In the past two decades, a large number of tall buildings have been constructed in Mainland China, offering good opportunities for research and practice to Chinese researchers and practitioners in the field of structural engineering. Some research and practice achievements in response control for tall buildings in Mainland China over the past five years are introduced here, focusing on: performance-based seismic analysis and design of code-exceeding tall buildings, aiming to design structures with predictable seismic performance in future earthquakes; shaking table model tests on complex tall buildings to evaluate the seismic performance of structures and accordingly revise the structural design; and the application of structural control technologies to better protect tall buildings from winds and earthquakes. Some typical examples of practical application are presented, such as the active tuned mass dampers (ATMDs) in the Shanghai World Financial Center Tower and the deformation-related dampers, providing both damping and stiffness, in the Zhengda Himalaya Hotel.
Introduction
With rapid economic growth and urbanization, a high-rise construction boom started in the 1990s in Mainland China. Owing to the wide variety of social requirements for commercial or aesthetic purposes, the limited availability of land, and the preference for centralized services, tall buildings have grown taller, and their configurations and structural systems have become more complex in recent years, which brings more difficulties in structural analysis and design. The seismic safety of tall buildings has attracted extensive attention from local governments and researchers, because most of the tall buildings that have been completed or are under construction are located in earthquake-prone areas. Furthermore, a large number of super-tall buildings are concentrated in the east coastal areas, where the effects of wind on structures are equally important. The most effective way to better protect structures, along with their occupants and contents, from earthquakes and winds is through the development of code provisions and design methods more reliable than those currently available, and through the implementation of advanced technology in construction. Research on response control for tall buildings under earthquakes and winds has been highlighted, and some progress has been made in this field (Lu et al. 2007a; Lu et al. 2009). Based on these research achievements, the structural design methods and control techniques for tall buildings have been improved, and some structural control technologies have been applied in engineering practice. Some research and practice work on response control for tall buildings in Mainland China over the past five years is introduced here.
Performance-based seismic analysis and design of code-exceeding tall buildings
Chinese regulations provide highly prescriptive provisions, such as height limits for various structural systems in each seismic intensity zone, plan and vertical regularity requirements, and various response limits. Tall buildings satisfying the regulations can be designed according to them. Building codes provide the minimum requirements for the design of structures to ensure the safety of life and property. Tall buildings not satisfying the regulations are called "code-exceeding" and are required to undergo an expert panel review, called the "Review on Seismic Fortification of Code-exceeding Tall Buildings," at the end of the preliminary design phase. In Mainland China in recent years, the performance-based seismic analysis and design approach has been highly recommended, especially for code-exceeding tall buildings, in order to efficiently control seismic damage and economic losses, to promote the implementation of advanced technology in construction, and to meet the diverse needs and objectives of owners, users, and society. In Shanghai, the Seismic Design Guidelines for Tall Buildings beyond the Scope of Design Codes, issued by the Shanghai Urban Construction and Communications Commission (Lu 2009), are at present the only design guidelines in Mainland China concerning code-exceeding tall buildings. The performance-based seismic design approach specified in the guidelines is introduced as follows.
Categories of code-exceeding tall buildings
Non-prescriptive or code-exceeding tall buildings fall into one of the following categories:
(1) Tall buildings with heights exceeding the applicable limits for the respective structure type as specified in the guidelines.
(2) Tall buildings with three or more plan or vertical irregularities. The irregularities involve drastic changes in geometry, interruptions in load paths, discontinuities in both strength and stiffness, disruption of critical regions by openings, large eccentricity between the rigidity centre and the mass centre, etc. The allowable limits are specified in the guidelines.
(3) Tall buildings with one or more of the severe plan or vertical irregularities listed in the guidelines. Severe irregularities include severe violations of the above limits, a transfer floor at a high level, multiple complex structures, etc.
(4) Other tall buildings. These are tall buildings which have new or undefined structural systems that are not addressed in current codes, or which have long spans and high occupancies, such as train stations, stadiums, department stores, exhibition halls, and airports.
Performance objectives
Three earthquake design levels, i.e., the frequent earthquake (63% probability of exceedance in 50 years, or a 50-year return period), the basic earthquake (10% in 50 years, or a 475-year return period), and the rare earthquake (2% to 3% in 50 years, or a 2475-year return period), are considered in Mainland China. The performance objectives should be enhanced for code-exceeding tall buildings. The relationships between the performance levels and the earthquake design levels are summarized in Table 1. The seismic fortification category of buildings is classified into four grades according to the importance of the building and the consequences of earthquake disasters. Type A is the highest grade. For tall buildings, the lowest grade, Type D, is excluded.
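As a quick check that the three quoted probability/return-period pairs are mutually consistent, the snippet below applies the standard Poisson-process relation T = −t / ln(1 − p); the relation itself is a textbook assumption, not something taken from the guidelines.

```python
# Verify return periods implied by exceedance probabilities over 50 years.
import math

def return_period(p, t_years=50.0):
    """Return period T (years) for exceedance probability p in t_years."""
    return -t_years / math.log(1.0 - p)

for label, p in [("frequent", 0.63), ("basic", 0.10), ("rare", 0.02)]:
    print(f"{label}: p = {p:.0%} in 50 yr -> T = {return_period(p):.0f} years")
# -> roughly 50, 475, and 2475 years, matching the values quoted above.
```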
Design criteria and procedures
The design criteria are established corresponding to the desired performance objectives. These minimum acceptance criteria ensure that the performance objectives are accomplished. The criteria are set in terms of limit values of the axial load ratio (specified for RC columns and shear walls), stresses, interstory drift ratios, etc. The design philosophy of weak beam-strong column, weak flexural strength-strong shear strength, and weak member-strong joint is commonly employed to adjust the strength and then the reinforcement. In addition, constructional measures, such as a minimum reinforcement ratio, a minimum material strength grade, and reinforcement detailing, are required to reduce structural damage.
The seismic design procedures consist of two phases. In the first phase, the seismic performance objectives are selected and elastic analysis under the frequent earthquake is performed to determine the dimensions and reinforcement of structural members by modal response spectrum analysis using elastic design spectra. In the second phase, the seismic performance of the target building is evaluated by numerical analysis, ranging from simple frame procedures to elaborate finite element analysis. Nonlinear analysis should be properly substantiated with respect to the seismic input, the constitutive models used, the method of interpreting the results of the analysis, and the requirements to be met. Nonlinear dynamic analysis should be performed for buildings higher than 200 m. Buildings higher than 300 m are required to be analyzed using two or more different computer programs to validate the results. The earthquake responses, plastic mechanism, distribution of damage, etc., are checked against the preset allowable limits. If necessary, structural testing, including joint, member, and integral structural model tests, should be conducted to study the structural behavior and check the seismic performance directly. If the pre-defined seismic objectives cannot be satisfied, the design should be iterated until they are. Figure 1 shows the flowchart of the overall performance-based seismic design procedures.
Shaking table model tests on complex tall buildings
Structural model testing is often used to help structural engineers directly acquire knowledge about the prototype structure, especially in the case of complex tall buildings for which numerical simulations are considered somewhat unreliable. The shaking table model test has been considered an economical and practical way to evaluate the seismic performance of structures. Many reduced-scale structural models of complex tall buildings have been tested in the authors' laboratory. The model design and construction methods, testing and analytic procedures, and measurement techniques have been well developed. Through shaking table tests, the earthquake responses and dynamic characteristics are obtained; the failure process and mechanism, as well as structural weak points, are discovered; and the overall seismic performance of the prototype structure is evaluated accordingly. Advice and suggestions are proposed as references for structural design to improve the seismic performance of structures. Some test results were also verified by in-situ testing on completed real buildings (Lu et al 2007b). Some test results were also verified by numerical analysis (Lu et al 2007c).

Figure 1: Flowchart of overall performance-based seismic design procedures.
Shanghai International Financial Center Tower
The total height of the 53-story building is 250 m above ground. The main structural system consists of a steel-reinforced concrete frame and an RC core wall. It is a vertically irregular structure, owing to a number of stiffened stories and a high-position transfer story that accommodates the substantial decrease in column spacing above the 39th Floor level.
A 1/30-scale model, as shown in Figure 2, was constructed and tested on the shaking table, subjected to a series of one- and two-dimensional base excitations with gradually increasing acceleration amplitudes at four intensity levels, representing the frequent, basic, and rare earthquakes of Chinese intensity 7, and the rare earthquake of Chinese intensity 8, respectively. The El Centro wave (1940), the Pasadena wave (1952), and the Shanghai artificial wave were used as input motions, with PGA scaled according to the similitude relationship. No visible cracks occurred in the tested model under the ground motions of the first two intensity levels. The model cracked slightly under the ground motions of the third intensity level. Under the ground motions of the largest intensity level, most of the damage was concentrated in the structural members located in the transfer and stiffened stories (from the 37th to 39th Floor level). The connection zones between the peripheral mega-columns and the closely spaced columns were seriously damaged. The longitudinal bars buckled, and the cover concrete crushed and spalled in the closely spaced columns. Horizontal cracks occurred at the tops of some mega-columns, as shown in Figure 3(a). Some steel members in the outrigger truss buckled, as shown in Figure 3(b). It is suggested that the arrangement of the closely spaced columns be adjusted and their ductility increased accordingly. The design of the structural members and joints in the transfer story is also suggested to be improved. The maximum interstory drifts and the overall seismic behavior meet the requirements of the Chinese seismic design code.
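The PGA and time scaling mentioned above follows from dynamic similitude. The sketch below illustrates the common rule S_t = sqrt(S_l / S_a) for a 1/30 length scale; the acceleration scale factor used here is a placeholder, since the actual value depends on the model material and artificial mass, which the paper does not give.

```python
# Similitude scaling of an input motion for a reduced-scale shaking table model.
import math

S_L = 1.0 / 30.0   # length scale (model/prototype), from the text
S_A = 3.0          # acceleration scale factor (assumed placeholder)

S_T = math.sqrt(S_L / S_A)   # time scale under dynamic similitude
print(f"time scale {S_T:.4f}: a 20 s record becomes {20.0 * S_T:.2f} s")
print(f"PGA multiplied by {S_A:.1f}; frequencies scaled up by {1.0 / S_T:.1f}x")
```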
National Hall of China Pavilion for Expo 2010 Shanghai
The National Hall of the China Pavilion for Expo 2010 Shanghai was designed with a peculiar style and a special structural system. The main structure is composed of four RC tubes with steel-concrete composite floors. The four cores, each with plan dimensions of 18.6 m × 18.6 m, were designed as the primary lateral-resisting system. At a height of 33.3 m, 20 inclined columns consisting of concrete-filled rectangular steel tubes are placed on the perimeter of the cores as vertical supports for the long-span steel beams in the floors. The fundamental vibration mode of this structure is torsional, resulting in the period ratio between the first torsional mode and the first translational mode exceeding the limit value stipulated in the Chinese design code. In addition, there is an atrium with dimensions of 32.7 m × 32.7 m, and the floors between the heights of 38.7 m and 46.8 m are staggered, which results in the plan irregularity defined by the Chinese code.
A 1/27-scale model, as shown in Figure 4, was constructed and tested on the shaking table subjected to a series of one- and three-dimensional base excitations. The test procedures and the input motions are similar to those of the above tests. No visible cracks occurred in the tested model under the ground motions of the first intensity level. After the input of the ground motions of the second intensity level, several vertical cracks were detected at the ends of the coupling beams located between the 4th and 10th Floors. One inclined column twisted slightly, which may be attributed to its instability. Under the ground motions of the third intensity level, both ends of most coupling beams below the 10th Floor showed vertical cracks (see Figure 5), and those with sectional dimensions of 800 mm × 4500 mm at the 10th Floor showed diagonal cracks (see Figure 6). However, the previously observed twist of the inclined columns did not develop further. Under the ground motions of the last intensity level, almost all the coupling beams within the core walls showed vertical cracks at both ends. Horizontal cracks and crushing of concrete occurred at the bottoms of the core walls. Although the first mode is torsional, the actual torsional responses are not significant. The inter-story drift and the overall seismic behavior meet the requirements of the Chinese code. To improve its seismic performance, it is suggested to reduce the sectional dimensions of the deep coupling beams at the level of 33.3 m and to strengthen the transverse connection of the inclined columns.

Figure 4: Tested model. Figure 5: Cracks in coupling beams. Figure 6: Cracks in coupling beams and walls.
Shanghai Jiali Center
Shanghai Jiali Center has 58 stories above ground and 4 stories below, with a total building height of 260 m. A steel-reinforced concrete frame and RC core structural system, with a stiffened story at mid-height, was designed to resist the lateral and vertical loads. There are two setbacks, at the 21st and 31st Floors respectively, and two sets of inclined columns, tilting from the 16th to the 21st Floor and from the 21st to the 22nd Floor. The structure is vertically irregular. A 1/35-scale model, as shown in Figure 7, was constructed and tested on the shaking table subjected to a series of one- and two-dimensional base excitations. The test procedures and the input motions are similar to those of the above tests. No visible cracks were found in the model after the first two phases of the test. After the third phase, several cracks were observed in the columns at the 2nd Floor. After the fourth phase, significant damage occurred. In the stories adjacent to the setback at the 31st Floor, cracks appeared in some columns, and the concrete at the bottoms of some columns crushed. Diagonal cracks ran through the whole core walls in the setback story. The columns above the inclined columns of the 22nd Floor cracked, owing to the complex load transfer path from the inclined columns to the vertical ones. Figures 8 and 9 show the typical cracks in the columns and walls at the 31st Floor. No damage was observed in the bottom shear walls or in the truss members of the stiffened story. The maximum inter-story drifts and the overall seismic behavior meet the requirements of the Chinese code. It is suggested that the ductility of the structural members at the setback of the 31st Floor be improved to reduce the adverse impact of the abrupt stiffness alteration.
Structural control study with application
Structural control technology has been applied extensively to ensure the safety and serviceability of buildings against natural disasters. Generally, structural control technologies for alleviating the wind or earthquake response of buildings can be categorized into three broad areas: base isolation systems, passive energy dissipation systems, and active control systems. Of these, active control systems have still not been applied widely, owing to excessive cost and extremely large power requirements. The other two have been considered relatively mature technologies and have already been applied to a large number of buildings throughout the world. Two typical examples of the application of structural control technology in tall buildings, one to resist winds and the other to resist earthquakes, are introduced as follows.
Application of ATMD in Shanghai World Financial Center Tower
The 101-story Shanghai World Financial Center Tower (SHWFC), 492 m above ground, is the tallest completed building in Mainland China. A perspective view of SHWFC is shown in Figure 10. The structure is diagonally symmetrical, as shown in Figure 11. Three parallel structural systems, the mega-frame structure consisting of the mega-columns, mega-diagonals, and belt trusses; the reinforced concrete and braced-steel services core; and the outrigger trusses, which create interaction between the services core and the mega-structure columns, are combined to resist vertical and lateral loads. Perimeter concrete walls are located at the lower levels, from the 1st to the 5th Floor, and mega-columns are positioned at the corners of the building from the 6th Floor upward. Several stiffened and transfer stories are regularly spaced throughout the height of the building. One-story-high belt trusses and core transfer trusses are placed at 12-story intervals, whereas three 3-story-high outrigger trusses spanning between the mega-columns and the corners of the services core are distributed evenly along the height. Both the total height and the irregularity exceed the code limits.
To mitigate wind-induced vibration, a set of two identical active tuned mass dampers (ATMDs) is installed at the 90th Floor (see Figure 12): under wind loading, the active control feature is enabled, while under earthquake conditions the active control feature is disabled and the damping devices function as passive tuned mass dampers (Mitsubishi Heavy Industries 2007). Figure 13 gives the working principle of the ATMDs. The damping devices are installed along the y axis. The vibration body, whose natural period is adjusted to the fundamental period of the building (y direction), is hoisted by multi-sectional steel cables. The damping devices consist of two parts: the multi-section vibration body and the drive device. The control force on the vibration body is obtained by feedback of the motion state variables. These state variables include the acceleration of the floor on which the damping devices are set up, as well as the displacement and velocity of the vibration body. The designed travel stroke of the damping devices is 140 cm, and the control stroke is 110 cm. In addition, to avoid excessively large displacement of the damping devices in seismic events, the devices are locked by locking devices on the drive screw when the vibration amplitude of the vibration body exceeds 110 cm in the passive control state. The analytical results of the vibration control are as follows: in the active control state, under the wind load of a one-year return period, the maximum acceleration response of the 90th Floor decreases to 60% and the root mean square of the acceleration response decreases to 55%; in the passive control state, under the wind load of a 10-year return period, the maximum acceleration response of the building decreases to 72%-79% and the root mean square of the acceleration response decreases to 72%-74%; in the locked state, under the wind of a 100- or 200-year return period, no significant effect on the maximum acceleration response is obtained. When the active control features are disabled, the damping devices work as typical passive TMDs. The seismic performance of the structure with the TMDs was estimated: the TMDs can reduce the vibration of the fundamental mode only to a very small degree. Since the fundamental mode of the structure does not dominate the response, vibration control directed toward this mode under seismic action would indeed be fruitless. The TMD thus has little effect on the seismic performance of the structure.
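A minimal sketch of the control logic described above is given below, assuming a simple linear feedback on the measured floor acceleration and on the vibration body's displacement and velocity, together with the 110 cm stroke lock-out. All gain values are hypothetical placeholders; the actual controller of the installed devices is not disclosed in the text.

```python
# Sketch of the ATMD feedback with stroke lock-out; gains are hypothetical.
CONTROL_STROKE_M = 1.10                        # 110 cm control stroke (from the text)
K_ACC, K_DISP, K_VEL = 2.0e4, 5.0e3, 8.0e3     # assumed feedback gains

def atmd_force(floor_acc, mass_disp, mass_vel, locked=False):
    """Actuator command (N); returns (force, locked) so callers can latch the state."""
    if locked or abs(mass_disp) > CONTROL_STROKE_M:
        return 0.0, True   # lock the vibration body: no active force is applied
    u = -(K_ACC * floor_acc + K_DISP * mass_disp + K_VEL * mass_vel)
    return u, False

force, locked = atmd_force(floor_acc=0.05, mass_disp=0.30, mass_vel=-0.10)
print(f"command: {force:.0f} N, locked: {locked}")
```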
To verify the analysis results, site measurements under ambient and forced-vibration conditions were performed. The natural frequencies and damping ratios estimated in the analytical and experimental studies are almost identical, with 0.3% error. The acceleration time history of the 90th Floor in the X direction, with and without active vibration control, for the two types of forced-vibration tests at an amplitude of 5 gal is shown in Figure 14. With the active tuned mass damper, the structural vibration is mitigated significantly. The damping ratio for the first mode (in the Y direction) is 0.422% without active vibration control and increases to 3.404% with it. The damping ratio for the second mode (along the X direction) is 0.459% without active vibration control and increases to 3.865% with it. The damping ratio with the vibration control devices thus increases roughly eightfold.
Application of Deformation-related Dampers in Zhengda Himalaya Hotel
The Shanghai Zhengda Himalaya Art Center, as shown in Figure 15, consists of an office building (on the left), a multi-function hall (in the middle), and a five-star hotel (on the right). Only the earthquake response control of the hotel is introduced here. The total height of the 22-story hotel is 98.7 m. The main structural system consists of an RC frame and shear walls. The plan is a square with a side length of about 60 m. The structural plan layout of the 11th to 16th Floors is shown in Figure 16. There is a large opening in the center of the plan, leading to plan irregularity. A transfer story is located at the 6th Floor, and the section of the RC tube from the ground to the 6th Floor is irregular, resulting in vertical irregularity. The ratio of the maximum floor displacement to the average one, which reflects the torsional response, is up to 1.31 under the earthquake.
To improve the seismic performance of the building, 36 sets of deformation-related dampers, providing both damping and stiffness and distributed from the 10th to the 17th Floor, half in the X direction and half in the Y direction, are installed in the structure. As a metallic damper classified as a displacement-dependent device, the damper consists of a series of low-yield-strength steel plates with holes, wherein the bottoms of the plates are attached to the top of a chevron bracing arrangement and the tops of the plates are attached to the floor level above the bracing. As the floor level above deforms laterally with respect to the chevron bracing, the steel plates are subjected to shear force. The shear force induces bending moment over the height of each plate, with bending occurring about the weak axis of the plate cross-section. The energy dissipated in the damper is the result of inelastic behavior; thus, the damper will be damaged during an earthquake and needs to be replaced afterward. A 1/20-scale model of the structure with the dampers (see Figure 17) was constructed and tested on the shaking table. A scaled model of the damper was first tested under cyclic loading, as shown in Figure 18. The dimensions of the damper model are shown in Figure 19. The damper has good energy-dissipation capacity, as demonstrated by the force-displacement curve obtained in the test (see Figure 20). The shaking table tests verify that the structure with the dampers has good seismic performance and can meet the code requirements.
Conclusions
Some research and practice achievements in response control of tall buildings under wind and earthquake loads in Mainland China are presented here. A general performance-based seismic analysis and design approach for code-exceeding tall buildings is introduced. By incorporating performance-based seismic design into the current seismic design code, it becomes much more feasible for designers to intentionally keep the damage levels of structures within an acceptable range during earthquakes of different intensities. In Mainland China, shaking table tests on scaled models have been extensively employed to evaluate the overall seismic performance of complex tall buildings and, accordingly, to revise the structural design to meet the performance objectives. In addition, structures can be protected from earthquakes and winds with the aid of structural control technologies. There has been steady progress in the research and development of structural control techniques in Mainland China, and this technology is still evolving with the aid of other technologies. The research work is generally combined with engineering application and can be translated into the actual needs of engineering practice; most of the research results have been applied successfully in engineering practice. | 2021-05-29T16:27:13.406Z | 2011-01-01T00:00:00.000 | {
"year": 2011,
"sha1": "c5499c00a435029ae970217d34cf37d7d2c61383",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.proeng.2011.07.008",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c5499c00a435029ae970217d34cf37d7d2c61383",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
235467114 | pes2o/s2orc | v3-fos-license | COVID-19 outbreak and air quality of Lahore, Pakistan: evidence from asymmetric causality analysis
This paper examines the impact of COVID-19 restrictions on the air quality of Lahore city, Pakistan, for the period 26th February 2020 to 31st August 2020. The study employs asymmetric Granger causality tests to analyze the effects of COVID-19 cases and deaths on particulate matter (PM2.5) emissions in the city. The results show that positive shocks in COVID-19 cases and deaths improve the air quality of the city, implying that the pandemic has lowered environmental pressure in one of the most polluted cities in the world. Further, the hazardous air pollution in Lahore is man-made, caused mainly by everyday human activities; when these activities were restricted owing to the rise in COVID-19 cases and deaths, air pollution in the city fell accordingly. This study therefore recommends controlling unnecessary production and consumption activities that degrade the environment, so that air pollution in the city remains manageable after COVID-19.
Introduction
Air pollution has always been a matter of concern all over the world. Lahore has suffered from a significantly high level of air pollution since early 2017. The renowned Swiss air quality company IQAir Visual has ranked the city as one of the most polluted cities globally, and recently declared Lahore the second most polluted city in the world, after Delhi. According to the World Health Organization, air pollution is principally proxied by the concentration of PM2.5 particles in the atmosphere, as they impose greater health hazards than any other atmospheric pollutant. These emissions mostly cause respiratory diseases as they are rich in sulphate, nitrates, ammonia, black carbon and sodium chloride (Khan et al. 2017).
Air pollution in Lahore is caused by numerous factors. Emissions from vehicles and industries are the most common cause of air pollution in the city. Smoke from brick kilns, residue from crop burning and the failure to recycle general waste are also major causes. Human activities and anthropogenic air pollution are therefore highly interlinked.
With the incidence of the COVID-19 pandemic in Pakistan, the authorities imposed several restrictions to control the spread of the virus. The first confirmed case of COVID-19 in Pakistan was identified on February 26, 2020. The country then experienced a large-scale outbreak of the pandemic in mid-March and currently has the third-highest number of confirmed cases in South Asia, after India and Bangladesh. Lahore, the second-largest city in Pakistan, has recorded the highest number of COVID-19 cases and deaths in the country; half of the total cases of the Punjab province are reported from Lahore. The city has therefore been the foremost infection hotspot in the country. Under these adverse conditions, the authorities immediately imposed a strict, wide-ranging lockdown, aimed mainly at preventing the further spread of the virus. The restrictions consisted primarily of banning public transport and closing businesses, offices, institutions and industries (Bherwani et al. 2020). As a result, human as well as economic activities were put on hold, producing several socio-economic disturbances. These disturbances also have a direct or indirect effect on the environment since, according to Wang et al. (2020), socioeconomic factors are primarily responsible for environmental performance.
A growing body of research on COVID-19 and the environment has pointed to both positive and negative impacts of the pandemic on air quality. For instance, a study by Gautam (2020) reported that air quality improved due to the COVID-19 lockdown: upon imposition of the restrictions, transportation activities dropped drastically, reducing oil demand and energy consumption and consequently lowering pollution. On the other hand, Zambrano-Monserrate et al. (2020) reported an increase in environmental pollution due to the lockdown: as mobility was restricted, recycling activities declined, since people confined to their homes became reluctant to properly dispose of and recycle their waste, and staying at home also increased domestic waste, raising pressure on the environment. Thus, the pandemic has had both favourable and adverse effects on environmental quality.
As the impact of COVID-19 on the environment has been a focus of attention among researchers since the onset of the pandemic, increasing research has analyzed how COVID-19 affects environmental quality. Most studies in this emerging domain are country-specific, such as Gautam (2020) on Wuhan city of China and Xu et al. (2020), among others. This study therefore contributes to the existing literature by analyzing the impact of COVID-19 on the air quality of Lahore city, Pakistan. To the best of our knowledge, no study in this domain has been conducted so far specifically on Lahore, or indeed within Pakistan. The analysis of Lahore is particularly significant as there are currently 48,971 total confirmed COVID-19 cases in the city, and the number of cases and deaths is reported to be rising compared with other districts of the Punjab province. Lahore has also been reported as one of the most polluted cities in the country for the last few years and as the second most polluted city in the world (IQAir 2019). Under these circumstances there is a dire need to investigate the impact of COVID-19 cases and deaths on the environmental quality of Lahore.
Most importantly, this study investigates the nonlinear impact of COVID-19 cases and deaths on the air quality of the city. The existing studies have used several linear econometric approaches to examine the effects of the pandemic on the environment. For instance, Sharma et al. (2020) employed WRF-AERMOD modelling, Mahato et al. (2020) used spatial mapping, Li et al. (2020) applied the WRF-CAMx modelling system, Ropkins and Tate (2021) used a breakpoint testing technique, Xing et al. (2020) employed a response-based inversion model, Jephcote et al. (2021) used a business-as-usual modelling method, Mor et al. (2021) applied principal component analysis, and Donzelli et al. (2021b) conducted a normality analysis.
However, assuming symmetry in the selected variables and analyzing the effect through symmetric modelling techniques can give misleading results. In reality, COVID-19 exhibits only positive shocks; no negative shocks occurred because, in the selected period, no cure for the virus existed. PM2.5 emissions, by contrast, exhibit both positive and negative shocks. Investigating the effects of both shocks in the aggregate therefore ignores hidden causal associations among the variables. Thus, this study explores the possible causal relationships by segregating the variables into positive and negative shocks. Only one existing study assumes nonlinearity between COVID-19 and environmental pollution (Pata 2020), but that study was conducted on US cities, so its findings cannot be generalized to the rest of the world, particularly to a developing country like Pakistan, having a city (Lahore) with the second-highest number of COVID-19 cases and deaths and being one of the most polluted cities in the world. Most importantly, unlike Pata (2020), this study employs nonlinear asymmetric causality.
Therefore, this study aims to answer the following questions. First, do the numbers of cases and deaths caused by COVID-19 improve the air quality of Lahore? Second, what are the effects on positive and negative shocks of air quality when there is a positive shock in COVID-19 cases and deaths in the city? To answer these questions, this study measures the air quality of the city through PM2.5 emissions in two different localities (Met Station and Town Hall) of Lahore and analyses whether there exists a causal relationship from positive shocks of COVID-19 to positive and negative shocks in PM2.5 emissions. We investigate both symmetric and asymmetric relations through Granger causality tests, using daily data from 26th February 2020 to 31st August 2020. To the best of our knowledge, this study is the first of its kind for Lahore, which is currently a hotspot for both the pandemic and air pollution.
The rest of the study is structured as follows: Sect. 2 discusses the data and methodology used in the study; Sect. 3 presents the results and a detailed discussion of them; finally, Sect. 4 concludes the study and highlights important future policy implications.
Methodology
This study aims to find the asymmetric causality effects of COVID-19 on air pollution in Lahore. To achieve this objective, the following models are utilized to investigate the asymmetric association of COVID-19 cases and deaths with PM2.5 emissions:

$$\left\{\mathrm{PM}_{2.5,t}^{+},\,\mathrm{PM}_{2.5,t}^{-}\right\} = f\!\left(\text{COVID-19 cases}_t^{+}\right) \tag{1}$$

$$\left\{\mathrm{PM}_{2.5,t}^{+},\,\mathrm{PM}_{2.5,t}^{-}\right\} = f\!\left(\text{COVID-19 deaths}_t^{+}\right) \tag{2}$$

where COVID-19 cases⁺ is the positive shock in the number of cases due to the pandemic, COVID-19 deaths⁺ is the positive shock in deaths caused by the virus, PM2.5 emissions⁺ is the partial sum of positive changes in particulate matter emissions, and PM2.5 emissions⁻ is the partial sum of negative changes in particulate matter in the atmosphere. The study examines the effects in two localities of Lahore (Met Station and Town Hall) over the period February 26, 2020 to August 31, 2020; since data for other areas of Lahore are unavailable, only these two locations are included in the analysis. The data on COVID-19 cases and deaths are taken from Our World in Data (2021), whereas the data on PM2.5 emissions (μg/m³) for Lahore are collected from the website of the Environment Protection Department, Government of Punjab. All variables are converted to natural logarithms to stabilize the variance.
Econometric model
The study adopts the Shin et al. (2014) method of breaking down the selected variables into their positive and negative components. Using this method, the positive series COVID-19 cases⁺, COVID-19 deaths⁺ and PM2.5 emissions⁺ and the negative component PM2.5 emissions⁻ are written as

$$\text{COVID-19 cases}_t^{+} = \sum_{j=1}^{t}\max\!\left(\Delta\,\text{cases}_j,\,0\right) \tag{3}$$

$$\text{COVID-19 deaths}_t^{+} = \sum_{j=1}^{t}\max\!\left(\Delta\,\text{deaths}_j,\,0\right) \tag{4}$$

$$\mathrm{PM}_{2.5,t}^{+} = \sum_{j=1}^{t}\max\!\left(\Delta\,\mathrm{PM}_{2.5,j},\,0\right), \qquad \mathrm{PM}_{2.5,t}^{-} = \sum_{j=1}^{t}\min\!\left(\Delta\,\mathrm{PM}_{2.5,j},\,0\right) \tag{5, 6}$$

where COVID-19 cases⁺ₜ is the cumulative positive shock in COVID-19 cases and t is the time period; similarly, COVID-19 deaths⁺ₜ is the cumulative positive shock in the number of deaths caused by the virus, and the + and − superscripts on PM2.5 emissions represent the positive and negative shocks in the series, respectively.
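A minimal implementation of this partial-sum decomposition might look as follows; pandas is assumed, and the toy series stands in for the logged PM2.5 data (the construction mirrors Eqs. 5 and 6).

```python
import numpy as np
import pandas as pd

def partial_sums(series: pd.Series):
    """Cumulative positive and negative shocks of a series, following the
    partial-sum construction of Shin et al. (2014) used in Eqs. 3-6."""
    d = series.diff().fillna(0.0)
    pos = d.clip(lower=0.0).cumsum()    # sum of max(Delta y_j, 0)
    neg = d.clip(upper=0.0).cumsum()    # sum of min(Delta y_j, 0)
    return pos, neg

# Toy series standing in for ln(PM2.5); the real data come from the
# Environment Protection Department, Government of Punjab.
rng = np.random.default_rng(0)
ln_pm25 = pd.Series(3.1 + 0.3 * rng.standard_normal(188), name="ln_pm25")
pm25_pos, pm25_neg = partial_sums(ln_pm25)
print(pm25_pos.iloc[-1], pm25_neg.iloc[-1])
```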
Asymmetric causality inference
Initially, the idea of segregating data into cumulative positive and negative components was proposed by Granger and Yoon (2002), who transformed the variables to analyze hidden cointegration arising from positive and negative changes in the series. Hatemi-J (2012) and Hristu-Varsakelis and Kyrtsou (2013) extended the work of Granger and Yoon (2002) to causality analysis, referring to it as asymmetric causality testing, since positive and negative shocks may behave differently in causality estimates. Following Hatemi-J (2012), assume the integrated variables y₁ₜ and y₂ₜ follow random walk processes:

$$y_{1t} = y_{1,t-1} + \varepsilon_{1t} = y_{1,0} + \sum_{i=1}^{t}\varepsilon_{1i} \tag{7}$$

$$y_{2t} = y_{2,t-1} + \varepsilon_{2t} = y_{2,0} + \sum_{i=1}^{t}\varepsilon_{2i} \tag{8}$$

where t = 1, 2, ..., T, y₁,₀ and y₂,₀ are the initial values, and ε₁ᵢ and ε₂ᵢ are white-noise disturbance terms. These disturbances are transformed into positive shocks, $\varepsilon_{1i}^{+} = \max(\varepsilon_{1i}, 0)$ and $\varepsilon_{2i}^{+} = \max(\varepsilon_{2i}, 0)$, and negative shocks, $\varepsilon_{1i}^{-} = \min(\varepsilon_{1i}, 0)$ and $\varepsilon_{2i}^{-} = \min(\varepsilon_{2i}, 0)$, so that $\varepsilon_{1i} = \varepsilon_{1i}^{+} + \varepsilon_{1i}^{-}$ and $\varepsilon_{2i} = \varepsilon_{2i}^{+} + \varepsilon_{2i}^{-}$. Equations 7 and 8 can thus be presented as

$$y_{1t} = y_{1,0} + \sum_{i=1}^{t}\varepsilon_{1i}^{+} + \sum_{i=1}^{t}\varepsilon_{1i}^{-}, \qquad y_{2t} = y_{2,0} + \sum_{i=1}^{t}\varepsilon_{2i}^{+} + \sum_{i=1}^{t}\varepsilon_{2i}^{-} \tag{9, 10}$$

Lastly, the cumulative forms of the positive and negative shocks can be written as

$$y_{jt}^{+} = \sum_{i=1}^{t}\varepsilon_{ji}^{+}, \qquad y_{jt}^{-} = \sum_{i=1}^{t}\varepsilon_{ji}^{-}, \qquad j = 1, 2 \tag{11}$$

In the next step, the causal relationships between the transformed components are analyzed using the vector autoregressive framework introduced by Hatemi-J (2012). Assume the following VAR(p) process:

$$y_t = \nu + A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t \tag{12}$$

where $u_t = (u_{1t}, \ldots, u_{kt})'$ is a zero-mean error term with non-singular covariance matrix Σ and $E\|u_t\|^{2+\lambda} < \infty$ for some λ > 0. The null hypothesis of no Granger causality imposes zero restrictions on the elements of A₁, ..., Aₚ linking y₂ₜ to y₁ₜ; if these restrictions hold, y₂ₜ does not Granger-cause y₁ₜ, where the vector yₜ has sub-vectors y₁ₜ and y₂ₜ. Using matrix notation, the VAR with constant term can be written compactly as

$$Y = DZ + U \tag{13}$$

where D = (ν, A₁, ..., Aₚ) collects the parameters and Z stacks the regressors. Equation 13 is first estimated by the OLS method; the whole VAR model is then estimated through Zellner's Iterative Seemingly Unrelated Regression (ISUR) method, which estimates the parameters by maximum likelihood. Denoting the restricted and unrestricted residual covariance estimates by $S_R$ and $S_U$, the Rao F-test for Granger causality can be written as

$$F = \left(U^{1/s} - 1\right)\frac{\Delta s - r}{q} \tag{14}$$

where $U = \det S_R / \det S_U$, $r = q/2 - 1$, $q = Gm^{2}$, and $\Delta = T - k(kp + 1) - Gm + \tfrac{1}{2}\left[k(G - 1) - 1\right]$. Here G is the number of restrictions imposed under the null in Eq. 12 and m is the dimension of y₁ₜ. The term s is

$$s = \sqrt{\frac{q^{2} - 4}{k^{2}\left(G^{2} + 1\right) - 5}} \tag{15}$$

Under the null hypothesis the Rao statistic follows an F distribution with (q, Δs − r) degrees of freedom and reduces to the standard F-statistic when k = 1.
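For illustration, the sketch below applies the conventional Granger causality F-test from statsmodels to decomposed toy series. Note that this is the standard symmetric test run on the transformed components, not the bootstrapped Rao test of Hatemi-J (2012), and the series are synthetic stand-ins for the Lahore data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic stand-ins for the decomposed series (see the sketch above):
# cases_pos is nondecreasing by construction, like a positive partial sum.
rng = np.random.default_rng(1)
cases_pos = pd.Series(np.abs(rng.standard_normal(188))).cumsum()
pm25_neg = pd.Series(-np.abs(rng.standard_normal(188))).cumsum()

# Difference first, mirroring the I(1) finding of the unit-root tests, and
# test H0: lagged cases_pos do not help predict pm25_neg.
data = pd.DataFrame({"pm25_neg": pm25_neg.diff(),
                     "cases_pos": cases_pos.diff()}).dropna()
res = grangercausalitytests(data[["pm25_neg", "cases_pos"]],
                            maxlag=4, verbose=False)
for lag, (tests, _) in res.items():
    f_stat, p_val = tests["ssr_ftest"][0], tests["ssr_ftest"][1]
    print(f"lag {lag}: F = {f_stat:.3f}, p = {p_val:.3f}")
```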
Statistical analysis
The data on COVID-19 and PM2.5 emissions are subjected to descriptive statistics using EViews 10. Table 1 shows the concentration of PM2.5 emissions in the Met Station and Town Hall localities of Lahore and the descriptive details of the number of COVID-19 cases and deaths in the city. The statistics show the average, median, maximum and minimum values of the log-transformed series in the focused hotspots from 26-02-2020 to 31-08-2020, consistent with the transformation described above. The PM2.5 concentrations have means of 3.133 and 2.982 in the Met Station and Town Hall areas, respectively; the maximum values of the concentrations are 4.469 and 4.097, and the minimum values are 1.945 and 0.859. The same trend is observed in the median values of the concentrations in both localities. For the total numbers of COVID-19 cases and deaths, the average values are 7.904 and 7.658, respectively. Furthermore, the maximum number of cases reported in the selected period is 12, with a maximum of 8 deaths in a day, while the minimum number of COVID-19 cases is 6, with 1 death in a day, over the focused period.
Unit root analysis
In this initial phase of the investigation, the stationarity properties of the variables in models (1) and (2) are analyzed. The findings of the unit root tests obtained through the Phillips-Perron test are shown in Table 2. The purpose of studying stationarity is to establish the order of integration of the variables and to ensure the validity of the estimated coefficients. The Phillips-Perron test is a modified unit root test that also accounts for serial correlation and heteroskedasticity in the error terms. The variables in models (1) and (2) have a unit root process at level but become stationary at first difference. For instance, Table 2 illustrates that the PM2.5 concentrations in both localities of Lahore are non-stationary at level I(0); the positive components of COVID-19 cases and deaths likewise have a unit root (non-stationary) at I(0) and become stationary at I(1). To put it differently, the findings of the Phillips-Perron test indicate that no I(0) or I(2) variables enter the analysis, as all the series are integrated of order one, I(1), and hence stationary with no shift over time at first difference.
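As an aside, the Phillips-Perron test is available in the Python arch package; the sketch below applies it to a synthetic I(1) series standing in for the logged PM2.5 data, reporting the statistic at level and at first difference.

```python
import numpy as np
from arch.unitroot import PhillipsPerron

# Synthetic I(1) series standing in for ln(PM2.5) at one station.
rng = np.random.default_rng(2)
y = 3.1 + np.cumsum(0.05 * rng.standard_normal(188))

for label, series in (("level", y), ("first difference", np.diff(y))):
    pp = PhillipsPerron(series)
    print(f"{label}: PP statistic = {pp.stat:.3f}, p-value = {pp.pvalue:.3f}")
```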
Causality of COVID-19 cases on PM 2.5 emissions
To predict how COVID-19 is related to the air quality of Lahore, symmetric and asymmetric Granger causality tests have been employed. This investigation helps us discover how the positive component of COVID-19 cases affects positive and negative shocks in PM2.5 concentrations in the atmosphere of Lahore. To answer this question, the model in Eq. 1 is estimated; the results are reported in Table 3. As indicated by the model, the positive shocks in the number of COVID-19 cases affect the positive and negative components of PM2.5 emissions. The asymmetric causality analysis shows that positive shocks in COVID-19 cases significantly increase negative shocks in particulate matter emissions; the finding is the same in both the Met Station and Town Hall localities of Lahore. Studies by Pata (2020) and Mahato et al. (2020) reported similar results. Our estimates suggest that an increase in the number of cases Granger-causes negative shocks in the emissions (statistic = 8.500). This is because, as the number of cases increased, COVID-19 lockdown restrictions were imposed; the restrictions limited anthropogenic activities and thereby reinforced the trend of reduced particulate emissions (negative shocks) in the Lahore atmosphere. The economic crisis thus appears to have eased the problem of air pollution: as COVID-19 hit Pakistan's economy, air quality improved. Concerning the causality between positive shocks in cases and positive shocks in the emissions, the estimated statistics are insignificant, indicating no significant causal association between the two. This finding accords with Pata (2020), who also reported no association between positive shocks in COVID-19 cases and emissions in the USA. It is noted that in the pandemic era, industrial production declined and vehicle use decreased (Pata 2020; Ropkins and Tate 2021); energy consumption and oil demand also declined, imposing less environmental pressure on the atmosphere (Gautam 2020; Mahato et al. 2020; Li et al. 2021). In addition, social activities were curtailed during the pandemic, which consequently affected the environment, especially in highly populated countries (Pata 2020), although the increased use of technology has added to environmental pressure (Nakada and Urban 2020).
The causality estimates of the symmetric effect suggest that the number of COVID-19 cases Granger-causes PM2.5 emissions only in the Met Station locality of Lahore. Our analysis indicates that, within the overall causal effect of COVID-19 cases on air quality, only the negative component of PM2.5 emissions is affected. The comparison of symmetric and asymmetric causality suggests that incorrect and misleading conclusions can be drawn when asymmetries in the association between COVID-19 and air quality are not analyzed. A similar result was reported in an earlier study (2020), which also found positive asymmetric causality between COVID-19 deaths and air quality. The main sources of PM2.5 emissions are fossil fuel and biomass combustion, industrial production, motor vehicle usage and road dust (Song et al. 2007; Kim and Hopke 2008); the occurrence of increased deaths due to the pandemic triggered the lockdown response, which restricted all the main sources of atmospheric particulate matter and improved air quality. Lahore is one of the hubs of the country's industrial production and an economic center; the city also has the largest urban population, so the environmental problems of unplanned urbanization and haphazard economic production were alleviated by the measures adopted to control deaths due to the pandemic. The result implies that air pollution in the city is largely associated with economic activities (such as urban energy consumption, industrial production and motor vehicle usage for the supply of commodities) that have deteriorated both the quality of human life and the environment. It also indicates that the lockdown helped clean the air of Lahore by raising negative shocks in particulate matter; the restrictions that followed COVID-19 deaths have cleared the skies and offered comparatively cleaner breathable air to the inhabitants of Lahore.
The decline in energy consumption due to the reduction in industrial commodity supply also had positive effects on environmental quality (Jephcote et al. 2021). Further, the demobilization of combustion-engine vehicles during the pandemic reduced emissions of fine particulates, which in turn lowered PM2.5 emissions (Baldasano 2020; Xu et al. 2020; Jephcote et al. 2021; Mor et al. 2021). The temporary shutdown of factories producing non-necessities also alleviated environmental pressure (Rodríguez-Urrego and Rodríguez-Urrego 2020; Ropkins and Tate 2021).
Conclusions
This study presents the results of asymmetric Granger causality between the COVID-19 pandemic and the air quality of Lahore, Pakistan. To the best of our knowledge, this is the first study to analyze asymmetric causality from COVID-19 to positive and negative shocks in the atmospheric particulate matter of different localities (Met Station and Town Hall) of Lahore. Based on the findings, it is concluded that both the number of cases and the number of deaths caused by COVID-19 have positive causality to the negative shocks in PM2.5 emissions in the city. To put it differently, COVID-19 cases and deaths decreased the emissions in Lahore during the lockdown period, implying that the air quality of Lahore improved as a by-product of the lockdown restrictions triggered by positive shocks in COVID-19 cases and deaths. Further, the study concludes that no significant causality runs from COVID-19 to positive shocks in PM2.5 concentrations. In addition, positive symmetric causality is found when no decomposition of shocks in atmospheric particulate matter is considered. This implies that assuming symmetric causality may give incorrect and misleading results; the asymmetric analysis is therefore important and crucial when studying the effects of COVID-19 on air quality.
The COVID-19 pandemic period has taught us that clean atmospheric air in Lahore can be attained if the main sources of hazardous atmospheric particulate matter are controlled. This may be done by controlling air-polluting industrial, energy and transportation activities and substituting them with environmentally friendly means of achieving economic growth. The current pollution levels are simply man-made, the product of human pressure on the environment. This pandemic has made us realize that improvement in air quality is achievable while minimizing hazardous risks to human health. | 2021-06-18T13:51:31.322Z | 2021-06-18T00:00:00.000 | {
"year": 2021,
"sha1": "23e581a5892344b89df1bc186b93fd44e6ea3a15",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40808-021-01210-8.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "23e581a5892344b89df1bc186b93fd44e6ea3a15",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249050135 | pes2o/s2orc | v3-fos-license | Imaging findings in acute pediatric coronavirus disease 2019 (COVID-19) pneumonia and multisystem inflammatory syndrome in children (MIS-C)
The two primary manifestations of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in children are acute coronavirus disease 2019 (COVID-19) pneumonia and multisystem inflammatory syndrome (MIS-C). While most pediatric cases of acute COVID-19 disease are mild or asymptomatic, some children are at risk for developing severe pneumonia. In MIS-C, children present a few weeks after SARS-CoV-2 exposure with a febrile illness that can rapidly progress to shock and multiorgan dysfunction. In both diseases, the clinical and laboratory findings can be nonspecific and present a diagnostic challenge. Thoracic imaging is commonly obtained to assist with initial workup, assessment of disease progression, and guidance of therapy. This paper reviews the radiologic findings of acute COVID-19 pneumonia and MIS-C, highlights the key distinctions between the entities, and summarizes our understanding of the role of imaging in managing SARS-CoV-2-related illness in children.
Introduction
Coronavirus disease 2019 (COVID-19), the disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was initially reported in late 2019. It subsequently spread rapidly throughout the world in early 2020 and was officially declared a pandemic by the World Health Organization (WHO) on March 11, 2020 [1]. People at highest risk for serious complications and fatality from acute COVID-19 include older adults and those with underlying medical conditions including obesity, diabetes, hypertension, cardiovascular disease and chronic respiratory disease [2][3][4][5]. The vast majority of clinical and imaging data from the pandemic has been reported in adults because children are less susceptible to symptomatic infection with SARS-CoV-2 and account for only a small number of cases [6][7][8].
However, children might be a significant source of transmission [9]. The considerable variability of findings and outcomes reported in children with COVID-19 yields an unclear picture of the pandemic's effect on the pediatric population [10,11].
Multisystem inflammatory syndrome in children (MIS-C) is a syndrome associated with SARS-CoV-2 that is characterized by fever, inflammation and multiorgan dysfunction.
It was first reported several months following the original peak of the COVID-19 pandemic, with the earliest known cases in Italy, the United Kingdom and the United States in April and May of 2020 [21][22][23][24][25][26]. The syndrome initially presented as an unexplained systemic illness considered to be similar to Kawasaki disease and toxic shock syndrome [21][22][23][24][25][26]. As more cases arose around the globe, MIS-C was subsequently recognized as a distinct entity by the Centers for Disease Control and Prevention (CDC) and the WHO [27,28]. It has not been proven that SARS-CoV-2 causes MIS-C; however, strong evidence includes the temporal and geographic relationship to outbreaks of COVID-19 [29,30]. Children usually present a few weeks after a history of COVID-19-like symptoms, and they test positive for SARS-CoV-2 immunoglobulin M (IgM) and immunoglobulin G (IgG) antibodies [7,22,[29][30][31][32][33]. This association suggests that MIS-C is caused by post-infectious immune dysregulation that leads to an exaggerated inflammatory response, rather than by an acute viral infection, although a possible direct viral effect has not been excluded [7,[32][33][34].
Diagnostic criteria for MIS-C include pediatric age group (<21 years by CDC definition, <19 years by WHO definition), fever, involvement of two or more organ systems (common findings include mucocutaneous rash, gastrointestinal symptoms, hypotension, shock, cardiac dysfunction, acute kidney injury and coagulopathy), elevated inflammatory markers (e.g., erythrocyte sedimentation rate, C-reactive protein), evidence of current or prior SARS-CoV-2 infection, and exclusion of other microbial infections or alternative diagnoses [27,28]. Gastrointestinal symptoms (abdominal pain and diarrhea) and cardiovascular dysfunction (myocardial injury, depressed ejection fraction, hypotension) occur with high frequency, while respiratory symptoms are less common [21,30]. In contrast to COVID-19 pneumonia, most children with MIS-C develop critical illness within a few days, requiring resuscitation, inotropic support and intensive care admission, with occasional severe cases leading to mechanical ventilation or extracorporeal membrane oxygenation (ECMO) [30]. Unlike COVID-19 pneumonia, severe illness in MIS-C occurs in children who are otherwise healthy with no comorbidities [21,30]. Despite the initial severity of the disease, short-term outcomes are positive and most children recover within a few days, although a few deaths have been reported [30,35,36].
In both acute COVID-19 and MIS-C, the presenting symptoms and laboratory markers are often nonspecific and the diagnosis can be delayed. Radiologic studies are often requested during initial workup, and these studies must be synthesized with the clinical presentation to recognize the diagnosis [10]. Although the imaging features of COVID-19 and MIS-C can overlap with those of other infectious and non-infectious diseases, it is important for the radiologist to recognize the characteristic findings that would support one of these diagnoses, especially when a critically ill child presents with suspected COVID-19 infection or known COVID-19 exposure [12,14,37,38]. In this review, we describe the imaging findings of acute COVID-19 pneumonia in children and MIS-C, and highlight the features that distinguish the two conditions.
Imaging findings in acute coronavirus disease 2019
In pediatric acute COVID-19 infection, imaging is usually reserved for children who have risk factors for developing severe disease, or who are clinically deteriorating. According to current recommendations from the American College of Radiology (ACR), imaging should not routinely be performed for screening or first-line diagnostic testing [39,40]. In children with a complicated disease course requiring imaging to guide therapy, chest radiograph and chest CT are the mainstay modalities being used [41]. Of note, a handful of papers have also reported chest US findings of children with severe COVID-19 infection; however, only a small number of cases have been described, and this technique has not been widely adopted [42].
Classically, the chest radiograph is the primary diagnostic imaging tool used to assess for pneumonia in a child presenting with fever and cough. However, there is a paucity of reliable data regarding chest radiograph findings in children with COVID-19 pneumonia because of its low prevalence [39]. There is also high variability in the reported incidence of abnormal exams and in the patterns of opacities described in the literature, which further limits our understanding of the radiographic findings [12,[43][44][45][46][47][48][49][50][51][52][53][54]. This could reflect institutional differences in the frequency of imaging utilization, differences in severity of illness among the various pediatric study populations (disease severity is not ubiquitously well-documented), and radiologist intra- and interobserver variability (a known challenge in the interpretation of pediatric chest radiographs) [50,55,56].
The number of studies in the literature describing chest CT in acute COVID-19 pneumonia is significantly greater than for chest radiography. Unfortunately, because of the low incidence of disease in children, these studies focus on small populations, and as with chest radiography there is considerable variability in reported chest CT patterns. Therefore, the role of chest CT in pediatric COVID-19 has not been fully established. Just as with most other pulmonary processes, it is likely that in COVID-19 pneumonia, chest CT is more sensitive in detecting abnormalities that might not be visible on chest radiograph [46]. However, these findings can be subtle and nonspecific, and their presence does not necessarily affect patient management [20,46]. In adults, typical findings include bilateral multifocal peripheral ground-glass opacities, with or without consolidation, which can be rounded and can demonstrate "crazy paving" (ground-glass opacity with intralobular lines) [58]. "Halo sign" (central dense consolidation with surrounding ground-glass opacity) and "reverse halo sign," also known as "atoll sign" (central ground-glass opacity with surrounding dense consolidation), have also been reported [58][59][60][61][62]. The primary findings of ground-glass opacity and consolidation are also seen in children, but studies comparing pediatric and adult CT have shown that the opacities in children are less severe with regard to number, size and extent [63][64][65][66].
Because of the uncertainty of test availability and reliability during the early peak of the pandemic in adults, it was thought that imaging might be a useful tool in primary diagnosis of acute COVID-19. However, despite the presence of a positive RT-PCR test, imaging in children is often normal [20,75]. When abnormalities are present, there is great diversity and nonspecificity of findings [20,56,68,69,76,77]. Moreover, the majority of pediatric studies available in the literature do not adjust for the clinical severity of disease, nor do they take into account possible coexisting infections [20,56,76]. These issues limit the meaningfulness of radiographic interpretations in pediatric COVID-19 disease. In a few studies, some data support a link between the severity of pulmonary opacities on imaging and indicators of clinical severity, such as degree of respiratory distress, need for hospital admission and intensive care stay, presence of underlying conditions, and patient fatalities [45,47,50,51,53,71,72]. Thus, while imaging might not play a primary diagnostic role in acute pediatric COVID-19, it remains an integral part of patient care, mainly to assess disease progression or anticipate a change in management, especially in children with critical illness and chronic comorbidities [20,41,43,45,48].
Imaging findings in multisystem inflammatory syndrome in children
Imaging is not required for diagnosis of MIS-C because the criteria are based on clinical symptoms, laboratory values, history of SARS-CoV-2 infection and exclusion of other conditions [27,28]. However, radiologic studies are frequently obtained in children with MIS-C because of their rapid clinical deterioration, and imaging abnormalities are important to recognize because they are associated with fulminant illness including shock [29]. Chest radiographs are often obtained in children with MIS-C who are undergoing cardiac workup or being admitted to the intensive care unit. Additionally, because of the high prevalence of gastrointestinal symptoms in this syndrome, abdominal imaging including plain radiograph, US or CT is often obtained, even before the diagnosis of MIS-C is recognized. As the following sections demonstrate, a variety of organ systems can manifest with imaging abnormalities in MIS-C, which reflects the systemic inflammatory response that characterizes this disease.
Intrathoracic imaging findings
Pulmonary Primary pulmonary involvement is not a leading feature of MIS-C, and therefore at initial presentation, chest imaging might be normal [30,43]. Within the first few days as the illness evolves, the most common radiographic findings are bilateral symmetrical hazy airspace opacities with perihilar or basilar/lower lobe predominance, as well as increased interstitial markings and peribronchial cuffing/thickening, bilateral small pleural effusions and enlargement of the cardiac silhouette [14,37,38,43,[78][79][80] (Figs. 7 and 8). The underlying etiology of these findings is unclear; however, the appearance is reminiscent of interstitial pulmonary edema or acute respiratory distress syndrome (ARDS), indicating that it could originate from a cardiogenic process, systemic inflammatory process, or aggressive fluid resuscitation and third spacing [14,37,76,80]. Of note, chest radiographs in MIS-C can be abnormal even in children without respiratory symptoms, suggesting that the findings reflect cardiac dysfunction or fluid overload rather than pulmonary inflammation [38,[78][79][80].
Chest CT is rarely needed in children with MIS-C, although it is sometimes obtained as part of a sepsis workup pathway or if there is clinical concern for pulmonary embolism [37,38]. Compared to acute COVID-19 pneumonia, there is a paucity of descriptions of MIS-C on CT in the literature; however, existing reports generally mimic the chest radiograph findings. The most common abnormalities include bibasilar consolidation, ground-glass opacities, interstitial opacities including septal thickening and bronchial wall thickening, bilateral small pleural effusions, mild hilar lymphadenopathy, and cardiomegaly [38,78,81] (Figs. 9 and 10). Pulmonary nodules have been reported in a few instances and are of uncertain significance [38,78].
Cardiac MRI is also suggested for characterizing myocardial disease in children with MIS-C with significant left ventricular dysfunction (ejection fraction <50%) [82]. Limited data are available; however, most studies report a myocarditis-type picture, demonstrated as diffuse myocardial edema or a nonischemic gadolinium enhancement pattern, without evidence of necrosis or fibrosis [78,85,[90][91][92]. Fortunately, both the clinical and imaging findings of heart failure in MIS-C appear to be transient, with quick recovery of systolic function and normalization of myocardial signal on MRI [88,91,93].
Vascular
It is well established that adults with COVID-19 are vulnerable to vascular complications including multiorgan thromboembolic disease, and based on this observation, some expert multidisciplinary groups have recommended thromboprophylaxis in children with COVID-19 or MIS-C [94][95][96]. However, there is a lack of consensus on this topic, including whether imaging should be obtained in pursuit of deep venous thrombosis and pulmonary embolism. The greater pro-inflammatory cytokine response and higher plasma D-dimer levels seen in MIS-C suggest that these children are more vulnerable to thromboembolic phenomena compared to those with acute COVID-19 [74,97]. It appears from some reports that the incidence of deep venous thrombosis and pulmonary embolism in pediatric SARS-CoV-2-related illnesses is higher than baseline; however, at some institutions, no cases of pulmonary embolism have been documented [21,37,38,74,[98][99][100]. In MIS-C, the known cases of pulmonary embolism have been small and segmental in location, but few papers include information on embolism size or location [37,93]. There are data to suggest that the existing cases of pediatric SARS-CoV-2-associated thromboembolism are linked to underlying risk factors, such as indwelling central lines, malignancy and ECMO [74,94,96,98,99].
Abdominal
Gastrointestinal symptoms are among the most common presenting findings of MIS-C, often mimicking acute appendicitis, and leading to imaging of the abdomen before the diagnosis of MIS-C is considered [38,88,[101][102][103]. On US and CT, children with MIS-C frequently demonstrate nonspecific inflammatory changes in the right lower quadrant, including lymphadenopathy, mesenteric edema (hyperechogenicity, thickening, stranding) and bowel wall thickening, especially at the terminal ileum and cecum [37,38,[78][79][80][103][104][105] (Figs. 11, 12 and 13). Some researchers have suggested that the localized right lower quadrant findings in a mesenteric adenitis pattern are due to the abundant lymphoid tissue in the terminal ileum (Peyer patches), which is vulnerable to vasculitis and necrotizing lymphadenitis caused by the systemic hyperinflammatory illness [38,78,102,105]. The appendix can appear radiologically normal or abnormal, and imaging cannot always clearly distinguish between MIS-C and acute appendicitis [38,79,101,104,[106][107][108]. Therefore, these findings must be considered in light of multiorgan involvement and laboratory data that would support a diagnosis of MIS-C.

Other common abdominal imaging findings of MIS-C include small-volume simple ascites, gallbladder wall thickening and pericholecystic fluid, gallbladder sludge and urinary bladder wall thickening [37,79,104] (Figs. 12 and 14). Hepatosplenomegaly, periportal edema, hyperechogenic kidneys, and splenic hypoechoic lesions or infarcts have also been reported [37,38,79,104].

Fig. 7 Multisystem inflammatory syndrome in children (MIS-C) in an 11-year-old girl who presented with fever, abdominal pain and headache. a, b Posteroanterior (a) and lateral (b) chest radiographs demonstrate a mildly enlarged cardiac silhouette and interstitial edema with basilar-predominant hazy interstitial markings. There are small pleural effusions with blunting of the costophrenic angles and fluid tracking into the fissures (arrowheads)

Fig. 8 Multisystem inflammatory syndrome in children (MIS-C) in a 9-year-old girl who presented with fever and abdominal pain. Anteroposterior chest radiograph shows a mildly enlarged cardiac silhouette, interstitial edema with increased interstitial markings and hazy pulmonary opacity, worst at the bases, and bilateral small pleural effusions (arrowheads)

Fig. 9 Chest CT findings in a 10-year-old girl with multisystem inflammatory syndrome in children (MIS-C) who presented with fever and chest and abdominal pain. Axial contrast-enhanced chest CT image in soft-tissue window demonstrates cardiomegaly, small pericardial effusion (arrow) and bilateral small pleural effusions (arrowheads)
Miscellaneous
Neck pain has been reported in more than one-quarter of children with MIS-C, along with other otolaryngologic symptoms such as neck swelling, dysphagia, trismus, stridor and drooling [115][116][117]. Cervical imaging with US or contrast-enhanced CT might be requested to assess for signs of inflammation. The most common abnormalities are retropharyngeal edema and cervical lymphadenopathy [14,22,80,81,115,118] (Fig. 15).
Conclusion
While the vast majority of children with COVID-19 disease experience minimal to no symptoms, in rare cases a pediatric patient presents with severe illness after known SARS-CoV-2 exposure, and the differential diagnosis includes both severe COVID-19 pneumonia and MIS-C. Either of these conditions can present with fever and respiratory distress progressing to shock. Therefore, it is important for pediatricians and radiologists to understand the differences in their clinical and radiologic profiles so they can make a prompt diagnosis. The key distinctions between these entities are summarized in Table 1 [14,22,34,43,93,119,120]. With regard to thoracic imaging, children with MIS-C demonstrate a diffuse pattern of hazy pulmonary opacity with interstitial edema and small pleural effusions secondary to heart failure, whereas children with acute COVID-19 infection demonstrate heterogeneous patterns of ground-glass opacity, consolidation and nodules. Imaging findings of intra-abdominal inflammation are also distinct and highly prevalent in MIS-C. Recognition of these various features allows for early diagnosis and appropriately targeted management of SARS-CoV-2-associated critical illness in children. | 2022-05-26T13:54:39.895Z | 2022-05-26T00:00:00.000 | {
"year": 2022,
"sha1": "a3d578a460794fed7c4df9094814f2e7b7f5de56",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00247-022-05393-9.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3d578a460794fed7c4df9094814f2e7b7f5de56",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246019327 | pes2o/s2orc | v3-fos-license | Investigation of the Damping Capacity of CFRP Raft Frames
In this paper, the damping capacity of a carbon fiber reinforced plastics (CFRP) raft frame is studied based on composite laminated plate theory and a strain energy model. Using finite element analysis (FEA) and a damping ratio prediction model, the influences of different layups on the damping capacity of the raft frame and its components (top/bottom plate and I-support) are discussed. Comparing the FEA results with the test results shows that the CFRP laminate layup has a great influence on the damping ratio of the raft frame; the maximum errors of the first-order natural frequency and damping ratio of the top/bottom plate are 5.6% and 15.1%, respectively. The maximum error of the first-order natural frequency of the I-support between the FEA result and the test result is 7.5%, while the error of the damping ratio is relatively large because of stress concentration. As for the raft frame, the damping performance is affected by the I-support arrangement, and the simulation analysis is in good agreement with the experimental results. This study can provide a useful reference for improving the damping performance of CFRP raft frames.
Introduction
Mechanical equipment will inevitably produce vibrations while working, and the deleterious vibrations can be reduced by installing a raft frame vibration isolator. Raft frames have outstanding vibration isolation performance, and numerous studies have been carried out to improve the damping performance of the basic raft frame structure, such as changing the geometric size or using different materials [1][2][3].
Compared with conventional metal materials, carbon fiber reinforced plastics (CFRP) have many advantages, such as high specific modulus, high damping capability, high strength, and strong designability; their damping loss factor is 1-2 orders of magnitude higher than that of metal materials [4,5]. In recent years, CFRP has been applied to the design of raft frames and has been used in a wide range of fields, including satellite, spaceship, and submarine manufacture [6,7]. According to these studies [8,9], CFRP raft frames have already been adopted successfully to isolate vibrations.
Current research on damping of composite materials mainly focuses on the variation of the microstructure features of a single laminated plate, such as its fiber volume fraction, fiber orientation, elastic modulus and aspect ratio. Related studies indicate that these factors influence the longitudinal shear damping of composite materials [10][11][12]. Macroscopically, the fiber layering angle and layup influence the damping performance, and four layups with good damping performance have been studied [13]. The lower the fiber volume fraction and the greater the fiber laying angle, the better the damping performance of the resulting composite laminates [14][15][16][17].
However, there are few studies concerned with the influence of stiffness changes caused by different layups and structures on the damping performance. Most of the analysis that follows therefore rests on the strain energy method. The damping ratio of a structure is related to its damping loss factor by

$$\zeta = \frac{\eta}{2}, \qquad \eta = \frac{\Delta U}{U} \tag{1}$$

where ζ is the damping ratio, η is the damping loss factor, and ΔU and U represent the dissipated energy and the total strain energy stored in a vibration period, respectively. A composite laminate is anisotropic, and the damping loss factor of the structure can be expressed as follows:

$$\eta = \frac{\sum_{k}\sum_{i,j}\eta_{ij}\,U_{ij}^{k}}{\sum_{k}\sum_{i,j}U_{ij}^{k}} \tag{2}$$

where $U_{ij}^{k}$ is the strain energy of the k-th cell of the composite structure generated by the stress $\sigma_{ij}$ of the layer and $\eta_{ij}$ is the damping loss factor in the corresponding direction; 1 refers to the positive axis direction, 2 refers to the direction perpendicular to the fiber, and 3 refers to the thickness direction. Under the small deformation assumption and the linear elasticity assumption, the strain energy generated by each unit can be calculated using Equation (3):

$$U_{ij}^{k} = \frac{1}{2}\int_{V_k}\sigma_{ij}^{k}\,\varepsilon_{ij}^{k}\,\mathrm{d}V \tag{3}$$

where $\sigma_{ij}^{k}$ and $\varepsilon_{ij}^{k}$ (i, j = 1, 2, 3) represent the stress and strain components in the k-th unit of the composite laminate, respectively, and $V_k$ is the integral volume of unit k.
The raft frame is composed of many parts, and the proportion of strain energy loss differs among them, so the strain energy loss in different directions is defined for each part of the structure:

$$\Delta U_{\mathrm{total}} = \Delta U_{\mathrm{total}}^{1} + \Delta U_{\mathrm{total}}^{2} + \cdots + \Delta U_{\mathrm{total}}^{n}, \qquad SE_{ij}^{p} = \frac{\Delta U_{ij}^{p}}{\Delta U_{\mathrm{total}}} \tag{4}$$

where $\Delta U_{\mathrm{total}}^{p}$ represents the sum of strain energy loss of part p, $\Delta U_{ij}^{p}$ is the strain energy loss generated by the stress $\sigma_{ij}$ in part p, and $SE_{ij}^{p}$ is the proportion of strain energy loss generated by stress $\sigma_{ij}$ in part p of the structure. The component with the largest strain energy loss in the structure is

$$p_{\max} = \arg\max_{p}\;\Delta U_{\mathrm{total}}^{p} \tag{5}$$

For the whole laminate structure, combined with Equation (1), the total dissipated energy and total strain energy during a vibration period can be expressed as

$$\Delta U = \sum_{i,j}\eta_{ij}\,U_{ij}, \qquad U = \sum_{i,j}U_{ij} \tag{6, 7}$$

The damping loss factor of the structure is then

$$\eta = \frac{\Delta U}{U} \tag{8}$$

The unidirectional prepreg T700/YPH-42T consists of 68% T700 carbon fibers and 32% YPH-42T epoxy resin, and the thickness of one layer is 0.2 mm. The material properties are listed in Table 1 [20]. There are six damping loss factors in the six directions of the composite material, where direction 1 is the fiber direction and directions 2 and 3 indicate the transverse directions. For the CFRP laminated plate, only three damping loss factors are considered; the directions are shown in Figure 1. The damping loss factors of the laminated plate in the three directions are η₁₁ = 0.82%, η₂₂ = 2.98% and η₁₂ = 8.57% [21,22]. The damping loss factor of the structure is converted into the damping ratio [6]:

$$\zeta = \frac{\eta}{2} \tag{9}$$

where ζ is the structural damping ratio.
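In post-processing terms, Equations (2)-(9) amount to weighting the directional loss factors by the corresponding directional strain energies summed over all elements. A minimal Python sketch is given below; the per-element energies are placeholders for the values exported from ABAQUS, not results of this study.

```python
import numpy as np

# Directional damping loss factors of the laminate (values from the text)
eta = {"11": 0.0082, "22": 0.0298, "12": 0.0857}

# Placeholder per-element strain energies U_ij^k [J] as exported from ABAQUS;
# rows = elements k, columns = directions 11, 22, 12 (assumed numbers).
U = np.array([[2.1, 0.4, 0.9],
              [1.7, 0.6, 1.2],
              [0.8, 0.2, 0.5]])

U_dir = U.sum(axis=0)                       # U_11, U_22, U_12 (Eq. 7 terms)
dU = sum(eta[d] * u for d, u in zip(("11", "22", "12"), U_dir))  # Eq. 6
eta_struct = dU / U_dir.sum()               # Eq. 8 (equivalently Eq. 2)
zeta = eta_struct / 2.0                     # Eq. 9
print(f"loss factor = {eta_struct:.4f}, damping ratio = {zeta:.4f}")
```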
Simulation
Model
The software ABAQUS (Dassault SIMULIA, Johnston, RI, USA) is adopted for the FEA of the raft frame; continuum shell SC8R elements are applied, and the sweep meshing method is adopted because of the directivity of composite materials. In ABAQUS, the analysis models are simplified, and the connection between different parts is "Tie".
The components of the raft frame are shown in Figure 2. The simulation model and fiber orientation are shown in Figures 3 and 4, respectively. Seven layups are set between 0° and 90° at 15° intervals, denoted by C0, C15, C30, C45, C60, C75, and C90. Bending deformation appears while the raft frame is being excited. The regularized stiffness coefficients (D11*, D22* and D66*) can be calculated to describe the stiffness change of the laminates [23]; the data are shown in Table 2. Figure 5 shows the layer coordinate system on each component. Directions 1, 2 and 3 represent the main stiffness direction, the secondary stiffness direction and the thickness direction of the structure in the layer coordinate system, respectively. The stress and strain values of non-rigid body modes are exported using ABAQUS, and the damping ratios of the different modes can be calculated by a MATLAB program (MathWorks, Natick, MA, USA) [20].
Simulation Analysis
During the simulation, a free modal analysis is performed; the first six modes are rigid body modes, and the seventh is the first non-rigid mode. In this paper, only non-rigid modes are considered. Table 3 shows the first four mode shapes of the top/bottom plate. The natural frequency and damping ratio of the first-order torsional mode shape are shown in Table 4, and the proportion of strain energy loss in different directions is calculated according to Equation (4), as shown in Figure 6 (e11, e12 and e22 represent the three directions of the coordinate system).
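The strain-energy-loss proportions plotted in Figure 6 follow directly from Equation (4). A short sketch of this post-processing step is given below; the per-part directional energies are placeholders for the exported modal results, not the paper's actual values.

```python
import numpy as np

eta = np.array([0.0082, 0.0298, 0.0857])     # eta_11, eta_22, eta_12

# Placeholder directional strain energies per part [J], standing in for the
# exported modal FEA results (top plate, bottom plate, I-supports).
U_parts = {"top plate":    np.array([3.2, 0.5, 1.4]),
           "bottom plate": np.array([3.0, 0.4, 1.3]),
           "I-supports":   np.array([1.1, 0.9, 2.2])}

dU_total = sum(float((eta * U).sum()) for U in U_parts.values())  # Eq. (4)
for part, U in U_parts.items():
    SE = eta * U / dU_total                  # SE_ij^p: share of total loss
    report = ", ".join(f"{lbl}: {s:.1%}" for lbl, s in
                       zip(("e11", "e22", "e12"), SE))
    print(f"{part}: {report}")
```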
Simulation Analysis
During the simulation, the free modal is analyzed, the first six orders are rigid body modes, and the seventh order is non-rigid mode. In this paper, only non-rigid modes are considered. Table 3 shows the first four orders modal shape of the top/bottom plate. The natural frequency and damping ratio of first-order torsional modal shape are shown in Table 4, the proportion of strain energy loss in different directions are calculated according to Equation (4), as shown in Figure 6 (e11, e12 and e22 represent the three directions of the coordinate system). program (MathWorks, Natick, MA, USA.) [20].
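Equation (4) is not reproduced in this excerpt, so the sketch below assumes it follows the standard modal strain energy method: per-direction strain energies are summed from the exported element stresses and strains, directional loss factors weight them into dissipated energies, and the modal damping ratio is half the resulting modal loss factor. All field values and loss factors here are random or illustrative placeholders standing in for the ABAQUS export, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_elem = 1000
modulus = {"11": 140e9, "22": 10e9, "12": 5e9}    # illustrative ply moduli (Pa)
eta = {"11": 0.0014, "22": 0.0067, "12": 0.0112}  # illustrative directional loss factors
strain = {d: 1e-4 * rng.normal(size=n_elem) for d in modulus}
stress = {d: modulus[d] * strain[d] for d in modulus}
volume = rng.uniform(1e-9, 1e-8, size=n_elem)     # element volumes (m^3)

# Strain energy stored per direction: U_d = 1/2 * sum(sigma_d * eps_d * V_e)
U = {d: 0.5 * np.sum(stress[d] * strain[d] * volume) for d in modulus}
U_total = sum(U.values())

# Dissipated energy per direction and its proportion (the e11/e22/e12 of Figure 6)
loss = {d: eta[d] * U[d] for d in modulus}
loss_total = sum(loss.values())
for d in ("11", "22", "12"):
    print(f"e{d}: {loss[d] / loss_total:.1%} of the dissipated energy")

# Modal strain energy estimate of the damping ratio: zeta = eta_modal / 2
zeta = loss_total / (2.0 * U_total)
print(f"estimated modal damping ratio: {zeta:.4%}")
```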
Figure 6. Proportion of strain energy loss in each direction of the first-order torsional modal shape.

As shown in Table 4 and Figure 6, from C0 to C90 the proportion of strain energy loss in each direction of the first-order torsional modal shape first increases, reaching its peak at C45, and then decreases. From the derivation of the internal forces of a laminated plate, torsional deformation is a macroscopic phenomenon caused by in-plane shear stress; therefore, the natural frequency of the torsional modal shape is mainly affected by the torsional stiffness coefficient D66*, and it increases correspondingly as D66* increases gradually from C0 to C45. In layers C0 and C90, the deformation direction is at 45° to the fiber orientation (the X/Y direction), the shear deformation reaches its maximum, and the strain energy loss is therefore mainly concentrated in the 12 direction, as shown in Figure 7. In layer C45, the fiber orientation coincides with the deformation direction, and the strain energy loss reaches its maximum in the 11 direction, as shown in Figure 8. Since the damping loss factor is relatively small in the 11 direction, the damping ratio in this modal shape decreases gradually from layer C0 to layer C45.

The natural frequency and damping ratio of the first-order bent modal shape are shown in Table 5, and the strain energy loss is shown in Figure 9.

Table 5. Natural frequency and damping ratio of the first-order bent modal shape.

Figure 9. Proportion of strain energy loss in each direction of the first-order bent modal shape.
As shown in Table 5 and Figure 9, the trend of the natural frequency of the first-order bent modal shape is consistent with that of the damping ratio, which increases from C0 to C45 and gradually decreases from C45 to C90. The layup with the smaller bending stiffness coefficient (C45) bends first, and its natural frequency also increases. In layers C0 and C90, the direction of bending deformation is perpendicular to the fiber orientation (the X/Y direction); therefore, the strain energy loss is mainly concentrated in direction 22, as shown in Figure 10. In layer C45, the angle between the deformation direction and the fiber direction is 45°, and the shear deformation is at its maximum, so the strain energy loss in the 12 direction reaches its peak, as shown in Figure 11. Mandal et al. [24] calculated the damping loss factors of rectangular laminates by the half-power method, and their results show that the damping loss factor increases with rising flexural stiffness.

Figure 11. Stress distribution of the first-order bent modal shape in the 12 direction: (a) C0; (b) C45; (c) C90.
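For reference, here is a generic sketch of the half-power (-3 dB) bandwidth estimator of the kind Mandal et al. used; it is applied to a synthetic single-mode FRF, and is not their implementation or data.

```python
import numpy as np

def half_power_damping(freq, mag):
    """Damping ratio of the dominant FRF peak via the half-power (-3 dB)
    bandwidth: zeta ~ (f2 - f1) / (2 * fn)."""
    k = int(np.argmax(mag))
    fn, half = freq[k], mag[k] / np.sqrt(2.0)
    left = k
    while left > 0 and mag[left] > half:               # walk to the lower crossing
        left -= 1
    right = k
    while right < len(mag) - 1 and mag[right] > half:  # walk to the upper crossing
        right += 1
    return (freq[right] - freq[left]) / (2.0 * fn)

# Synthetic single-mode FRF of a lightly damped oscillator (placeholder data)
fn_true, zeta_true = 120.0, 0.01
f = np.linspace(80.0, 160.0, 4001)
r = f / fn_true
mag = 1.0 / np.sqrt((1.0 - r ** 2) ** 2 + (2.0 * zeta_true * r) ** 2)
print(f"true zeta = {zeta_true}, half-power estimate = {half_power_damping(f, mag):.4f}")
```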
3.2.2. Simulation Analysis of I-Support

Table 6 shows the first four modal shapes of the I-support. The natural frequency and damping ratio of the torsional modal shape of the web plate of the I-support are shown in Table 7, and Figure 12 indicates the strain energy loss in the different directions. FIN represents the flange plate and RIB represents the web plate of the I-support.

Table 6. The first four modal shapes of the I-support (layer codes C0–C90; first- to fourth-order shapes).

Table 7. Natural frequency and damping ratio of the torsional modal shape of the web plate.

Figure 12. Proportion of strain energy loss in the different directions of the torsional modal shape of the web plate.

As shown in Table 7 and Figure 12, in the torsional modal shape the natural frequency follows the variation trend of the torsional stiffness coefficient D66* in the layer coordinate system of the web plate as the fiber layering angle increases. The strain energy loss in the different directions of the RIB is consistent with that of the first-order torsional modal shape of the plates. The strain energy loss of the FIN increases markedly at layer C60, where the flange plates bend at the same time; there, the distribution of strain energy loss of the flange plate is consistent with that of the plates under the first-order bent modal shape. Compared with layer C30, the damping ratio of layer C60 is clearly improved.
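The FIN/RIB comparison above amounts to grouping element strain energies by component and by direction. A minimal sketch of that bookkeeping follows, with hypothetical placeholder energies standing in for the FE export.

```python
from collections import defaultdict

# Hypothetical export: (component, direction, strain_energy) per element group;
# the numbers are placeholders only, not the simulation results.
elements = [
    ("FIN", "12", 0.8), ("FIN", "22", 0.3), ("FIN", "11", 0.1),
    ("RIB", "12", 4.2), ("RIB", "22", 1.1), ("RIB", "11", 0.5),
]

# Aggregate strain energy per component and per (component, direction) pair
per_comp = defaultdict(float)
per_comp_dir = defaultdict(float)
for comp, direction, u in elements:
    per_comp[comp] += u
    per_comp_dir[(comp, direction)] += u

total = sum(per_comp.values())
for comp in per_comp:
    print(f"{comp}: {per_comp[comp] / total:.1%} of the stored strain energy")
    for direction in ("11", "22", "12"):
        u = per_comp_dir[(comp, direction)]
        print(f"  e{direction}: {u / per_comp[comp]:.1%} within {comp}")
```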
The natural frequency and damping ratio of the bent modal shape of the web plate of the I-support are shown in Table 8, and Figure 13 shows the strain energy loss in the different directions. Table 8 shows that the natural frequency decreases gradually as the fiber layering angle increases, because the bending stiffness coefficient D11* decreases in the layer coordinate system of the web plate. As shown in Figure 13, the web plates contribute most of the strain energy loss, and its distribution over the directions is consistent with that of the first-order bent modal shape of the top/bottom plate.

Table 9 shows the natural frequency and damping ratio of the bent modal shape of the flange plate of the I-support, and Figure 14 shows the strain energy loss in the different directions.

Table 9. Natural frequency and damping ratio of the bent modal shape of the flange plates of the I-support.

When the flange plates and web plates bend, the natural frequency diminishes with the decrease of the bending stiffness coefficient D11* in the flange and web layer coordinate systems. Figure 14 shows that the trends of the strain-energy-loss ratios of the flange and web plates are almost in accordance.
The natural frequency and damping ratio when the flange plates of the I-support undergo reversed bending are shown in Table 10, and the strain energy loss in the different directions is shown in Figure 15. The modal shapes of C0 and C15 differ from the others, so the flange plate is not included in the comparison for those layups. Table 10 and Figure 15 show that, in the bent modal shape of the flange plate, the natural frequency of the I-support diminishes with the decrease of the bending stiffness coefficient D11* in the layer coordinate system. The strain energy loss is contributed mainly by the flange plates, so the layups of the flange plate can be adjusted between C45 and C90 to obtain better damping capacity.

In order to investigate the influence of the flange layup on the web plate, the web plates are set as C45 and the flange plates, denoted N, are set as layers C0~C90; the layup of the I-support can then be described as N-C45. The strain energy loss in the different directions is shown in Figure 16. When the fiber layering angle of the flange plates increases, the proportion of strain energy loss of the support increases gradually and varies over the directions under the different layups, while the proportion of strain energy loss of the web plate decreases. However, the proportion of strain energy loss in each direction of the web plate remains constant under the different flange layups, which shows that the change of layup mainly affects how the strain energy loss is distributed. That is, the damping capacity of the laminates is determined by the fiber layering angle, so the fiber layering angle of the flange plate can be adjusted to dissipate more energy.

In the first four modal shapes, the flange plates and web plates bend, and the stiffness change caused by the change of layups has a great influence on the natural frequency. The strain-energy-loss distributions of the flanges and webs in the different directions are consistent with those of the independent laminates in the corresponding modes, and the fiber layering angle determines the damping capacity of the laminates under bending deformation. Because the stiffness of laminates with different layups affects the damping performance of the structure, the lamination can be adjusted to modify the strain-energy-loss ratio of specific laminates.
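The tuning recommendation above can be phrased as a one-dimensional sweep over the flange angle of an N-C45 support. In the hedged sketch below, `modal_damping_ratio` is a hypothetical stand-in for the FEA-plus-post-processing pipeline; its smooth toy curve only makes the sweep runnable and does not reproduce the paper's results.

```python
import math

def modal_damping_ratio(flange_angle_deg: float, web_angle_deg: float = 45.0) -> float:
    """Toy surrogate for illustration only: in a real study this would run the
    FE model and the strain-energy post-processing for the given layup."""
    return 0.002 + 0.004 * math.sin(math.radians(flange_angle_deg)) ** 2

# Sweep the flange layup N of an N-C45 I-support and keep the best angle
angles = range(0, 91, 15)
results = {a: modal_damping_ratio(a) for a in angles}
best = max(results, key=results.get)
print(f"best flange layup: C{best} (damping ratio {results[best]:.4f})")
```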
Simulation Analysis of CFRP Raft Frame
The layups C0, C45, and C90 are selected to represent the stiffness and damping distribution trends of the plates. For the I-support, the peak damping ratio appears at C60, and the maximum and minimum bending stiffness coefficients D11* occur at C0 and C90, respectively; therefore, the group C0, C60, and C90 can represent the trend of its stiffness and damping distributions.
The damping ratios are calculated for the nine groups of CFRP raft frame layups in Table 11; the layups of the top plate, I-support, and bottom plate are represented as CX-CX-CX, respectively.

Table 11. CFRP raft frame layup combinations.
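Reading the text as three plate layups crossed with three I-support layups (an assumption on my part; Table 11 itself is not reproduced in this excerpt), the nine CX-N-CX configurations can be enumerated as follows.

```python
from itertools import product

# Assumed structure of the nine Table 11 groups: plates in {C0, C45, C90}
# crossed with I-supports in {C0, C60, C90}; top and bottom plates share a layup.
plates = ["C0", "C45", "C90"]
supports = ["C0", "C60", "C90"]
combos = [f"{p}-{s}-{p}" for p, s in product(plates, supports)]
print(len(combos), "configurations:", ", ".join(combos))
```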
The natural frequency and damping ratio of the raft frame with layup C0-N-C0 are shown in Table 12, and Figure 17 shows the strain energy loss. PLATE1, PLATE2, FIN1, RIB1, FIN2, and RIB2 represent the top plate, bottom plate, flange plate, web plate, axial flange plate, and axial web plate, respectively. Table 13 shows the natural frequency and damping ratio of the raft frame with layup C45-N-C45, and Figure 18 shows the strain energy loss. Table 14 shows the natural frequency and damping ratio of the raft frame with layup C90-N-C90, and Figure 19 shows the strain energy loss. We can conclude that if the layups of the plates of the raft frame lead to unbalanced stiffness, the influence of the layups of the I-support on the natural frequency and damping ratio of the raft frame is determined mainly by the bending coefficient: the greater the bending coefficient of the I-support, the less the natural frequency and damping ratio of the structure are affected by layup changes. If the layups of the top/bottom plate balance the stiffness (i.e., D11* = D22*), the natural frequencies of the corresponding modes are generally higher. This indicates that the top/bottom plate itself is not prone to bending deformation, and the natural frequency and damping ratio of the raft frame are more sensitive to the change of I-support stiffness.
Structure of the CFRP Raft Frame

The plates of the CFRP raft frame used here have uneven stiffness (D11* ≠ D22*). Table 15 shows the layups and in-plane regularized stiffness parameters, and Figure 20 shows the I-support and top/bottom plates of the CFRP raft frame.
Modal Analysis

The modal analysis module of the B&K Connect software platform (Brüel & Kjaer, Copenhagen, Denmark) is applied to carry out the modal analysis experiments. The main instruments involved are accelerometers, an impact hammer, a data acquisition system, and a computer, as shown in Figure 21. In order to obtain the modal shapes and damping ratios of the different components, the 3D model is imported into the B&K computer, and the accelerometer and impact points are set at the same positions as on the physical model; the modal testing system is shown in Figure 22.

(1) Modal test of the top/bottom plates

The plate of the raft frame is suspended with rubber rope to simulate the free constraint state. There are 36 black knock points and two red accelerometer measuring points, as shown in Figure 23.

According to the layups of the plates of the raft frame designed in Table 15, the simulation results can be obtained through the FEA, and the test results are obtained with the B&K Connect software platform. Table 16 compares the natural frequencies and damping ratios of the test and simulation results of the plates. As shown in Table 16, the maximum error of the natural frequency between the last three simulation results and the test results is 5.6%, so the simulation is consistent with the test; the rubber-rope suspension used to simulate the free constraint produces a large error in the first-order value. The error of the damping ratio fluctuates around 10%, which means the simulation results are consistent with the experimental results within the margin of error.
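The error figures quoted above are plain relative errors between simulated and measured modal parameters. A minimal sketch of that comparison follows, with hypothetical placeholder values rather than the Table 16 data.

```python
import numpy as np

# Hypothetical test vs. simulation natural frequencies (Hz); placeholder values,
# not the Table 16 data.
f_test = np.array([45.2, 118.7, 190.3, 260.8])
f_sim = np.array([48.9, 121.1, 185.2, 266.4])

rel_err = np.abs(f_sim - f_test) / f_test
for order, e in enumerate(rel_err, start=1):
    print(f"order {order}: natural-frequency error = {e:.1%}")
print(f"maximum error over all orders = {rel_err.max():.1%}")
```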
(2) Modal test of I-support

The I-support is suspended with rubber rope to simulate the free constraint state. There are 21 black knock points and one red accelerometer measuring point, as shown in Figure 24. The DOF of the signal acquisition is parallel to the web plates; therefore, the natural frequency and damping ratio of the second order were not obtained, as the acceleration signal in the direction of the web plates is not collected. As shown in Table 17, taking the average value of the test results and comparing it with the simulation results, the simulated natural frequencies agree well with the test results, with a maximum error of 7.5%. The error between the simulation and test results for the damping ratio of the first and fourth orders is minor, while the error for the third order is distinct, summing to 28.9%. According to the strain-energy-loss diagram of the I-support, the ratio of strain energy loss between the flange plate and the web plate is 1:2 in this mode, while it is 1:10 for the other three orders; under this condition the stresses of the flange plates and the web plates have a great influence on each other, and the joint also causes more strain energy loss due to stress concentration.
(3) Modal analysis of CFRP raft frame

The CFRP raft frame is suspended with rubber rope to simulate the free constraint state. There are 64 black knock points and three red accelerometer measuring points, as shown in Figure 25. The installation direction of the I-supports is changed to explore the influence of the stiffness change on the damping capacity of the raft frame, as shown in Figure 26. As shown in Table 18, the modal shapes in the simulation results are consistent with the test results; the stiffness changes with the different arrangements of the I-supports, which indicates that the change of stiffness influences the inherent characteristics of the structure.
Test result in Y direction
Simulation result in X direction
Simulation result in Y direction
The stiffness distribution of the structure is altered by changing the arran I-supports. The maximum change of natural frequency and damping ratio are 1 43.6% in the test result, respectively. The test results show that the stiffness infl damping capacity of complex structure obviously, and the damping capacity ca imized by adjusting the stiffness distribution. Table 19 shows that the simulation values and test values of natural frequen damping ratios of the raft frame with arrangement in X/Y direction are significan ferent, the error of natural frequency ranges from 25% to 40%, as well as the da ratio. The main reason is that both stiffness and damping have nonlinear characte due to bolt connection, while in software ABAQUS, the constraint "Tie" is used to c the part, and there is no relative slip displacement and the stiffness is large, causin results of natural frequency and damping ratio calculation.
Mode of Vibration First Order Second Order Third Order Fourth Ord
Test result in X direction
Test result in Y direction
Simulation result in X direction
Simulation result in Y direction
The stiffness distribution of the structure is altered by changing the arrangem I-supports. The maximum change of natural frequency and damping ratio are 10.1 43.6% in the test result, respectively. The test results show that the stiffness influen damping capacity of complex structure obviously, and the damping capacity can b imized by adjusting the stiffness distribution. Table 19 shows that the simulation values and test values of natural frequency and damping ratios of the raft frame with arrangement in X/Y direction are significantly different, the error of natural frequency ranges from 25% to 40%, as well as the damping ratio. The main reason is that both stiffness and damping have nonlinear characteristics due to bolt connection, while in software ABAQUS, the constraint "Tie" is used to connect the part, and there is no relative slip displacement and the stiffness is large, causing large results of natural frequency and damping ratio calculation.
Mode of Vibration First Order Second Order Third Order Fourth Order
Test result in X direction
Test result in Y direction
Simulation result in X direction
Simulation result in Y direction
The stiffness distribution of the structure is altered by changing the arrangement of I-supports. The maximum change of natural frequency and damping ratio are 10.1% and 43.6% in the test result, respectively. The test results show that the stiffness influence the damping capacity of complex structure obviously, and the damping capacity can be maximized by adjusting the stiffness distribution. Table 19 shows that the simulation values and test values of natural frequency and damping ratios of the raft frame with arrangement in X/Y direction are significantly different, the error of natural frequency ranges from 25% to 40%, as well as the damping ratio. The main reason is that both stiffness and damping have nonlinear characteristics due to bolt connection, while in software ABAQUS, the constraint "Tie" is used to connect the part, and there is no relative slip displacement and the stiffness is large, causing large results of natural frequency and damping ratio calculation.
Mode of Vibration First Order Second Order Third Order Fourth Order
Test result in X direction
Test result in Y direction
Simulation result in X direction
Simulation result in Y direction
The stiffness distribution of the structure is altered by changing the arrangement of I-supports. The maximum change of natural frequency and damping ratio are 10.1% and 43.6% in the test result, respectively. The test results show that the stiffness influence the damping capacity of complex structure obviously, and the damping capacity can be maximized by adjusting the stiffness distribution. Table 19 shows that the simulation values and test values of natural frequ damping ratios of the raft frame with arrangement in X/Y direction are signific ferent, the error of natural frequency ranges from 25% to 40%, as well as the ratio. The main reason is that both stiffness and damping have nonlinear char due to bolt connection, while in software ABAQUS, the constraint "Tie" is used t the part, and there is no relative slip displacement and the stiffness is large, cau results of natural frequency and damping ratio calculation.
Test result in Y direction
Simulation result in X direction
Simulation result in Y direction
The stiffness distribution of the structure is altered by changing the arrang I-supports. The maximum change of natural frequency and damping ratio are 1 43.6% in the test result, respectively. The test results show that the stiffness infl damping capacity of complex structure obviously, and the damping capacity ca imized by adjusting the stiffness distribution. Table 19 shows that the simulation values and test values of natural frequenc damping ratios of the raft frame with arrangement in X/Y direction are significant ferent, the error of natural frequency ranges from 25% to 40%, as well as the dam ratio. The main reason is that both stiffness and damping have nonlinear characte due to bolt connection, while in software ABAQUS, the constraint "Tie" is used to co the part, and there is no relative slip displacement and the stiffness is large, causing results of natural frequency and damping ratio calculation.
Mode of Vibration First Order Second Order Third Order Fourth Orde
Test result in X direction
Test result in Y direction
Simulation result in X direction
Simulation result in Y direction
The stiffness distribution of the structure is altered by changing the arrangem I-supports. The maximum change of natural frequency and damping ratio are 10.1 43.6% in the test result, respectively. The test results show that the stiffness influen damping capacity of complex structure obviously, and the damping capacity can be imized by adjusting the stiffness distribution. Table 19 shows that the simulation values and test values of natural frequency and damping ratios of the raft frame with arrangement in X/Y direction are significantly different, the error of natural frequency ranges from 25% to 40%, as well as the damping ratio. The main reason is that both stiffness and damping have nonlinear characteristics due to bolt connection, while in software ABAQUS, the constraint "Tie" is used to connect the part, and there is no relative slip displacement and the stiffness is large, causing large results of natural frequency and damping ratio calculation.
Mode of Vibration First Order Second Order Third Order Fourth Order
Test result in X direction
Test result in Y direction
Simulation result in X direction
Simulation result in Y direction
The stiffness distribution of the structure is altered by changing the arrangement of I-supports. The maximum change of natural frequency and damping ratio are 10.1% and 43.6% in the test result, respectively. The test results show that the stiffness influence the damping capacity of complex structure obviously, and the damping capacity can be maximized by adjusting the stiffness distribution. Table 19 shows that the simulation values and test values of natural frequency and damping ratios of the raft frame with arrangement in X/Y direction are significantly different, the error of natural frequency ranges from 25% to 40%, as well as the damping ratio. The main reason is that both stiffness and damping have nonlinear characteristics due to bolt connection, while in software ABAQUS, the constraint "Tie" is used to connect the part, and there is no relative slip displacement and the stiffness is large, causing large results of natural frequency and damping ratio calculation.
Mode of Vibration First Order Second Order Third Order Fourth Order
Test result in X direction
Test result in Y direction
Simulation result in X direction
Simulation result in Y direction
The stiffness distribution of the structure is altered by changing the arrangement of I-supports. The maximum change of natural frequency and damping ratio are 10.1% and 43.6% in the test result, respectively. The test results show that the stiffness influence the damping capacity of complex structure obviously, and the damping capacity can be maximized by adjusting the stiffness distribution. Table 19 shows that the simulation values and test values of natural frequ damping ratios of the raft frame with arrangement in X/Y direction are signific ferent, the error of natural frequency ranges from 25% to 40%, as well as the ratio. The main reason is that both stiffness and damping have nonlinear char due to bolt connection, while in software ABAQUS, the constraint "Tie" is used t the part, and there is no relative slip displacement and the stiffness is large, cau results of natural frequency and damping ratio calculation.
Simulation result in Y direction
The stiffness distribution of the structure is altered by changing the arrang I-supports. The maximum change of natural frequency and damping ratio are 1 43.6% in the test result, respectively. The test results show that the stiffness infl damping capacity of complex structure obviously, and the damping capacity ca imized by adjusting the stiffness distribution. Table 19 shows that the simulation values and test values of natural frequenc damping ratios of the raft frame with arrangement in X/Y direction are significant ferent, the error of natural frequency ranges from 25% to 40%, as well as the da ratio. The main reason is that both stiffness and damping have nonlinear characte due to bolt connection, while in software ABAQUS, the constraint "Tie" is used to co the part, and there is no relative slip displacement and the stiffness is large, causing results of natural frequency and damping ratio calculation.
Mode of Vibration First Order Second Order Third Order Fourth Orde
Test result in X direction
Test result in Y direction
Simulation result in X direction
Simulation result in Y direction
The stiffness distribution of the structure is altered by changing the arrangem I-supports. The maximum change of natural frequency and damping ratio are 10.1 43.6% in the test result, respectively. The test results show that the stiffness influen damping capacity of complex structure obviously, and the damping capacity can be imized by adjusting the stiffness distribution. Table 19 shows that the simulation values and test values of natural frequency and damping ratios of the raft frame with arrangement in X/Y direction are significantly dif ferent, the error of natural frequency ranges from 25% to 40%, as well as the damping ratio. The main reason is that both stiffness and damping have nonlinear characteristic due to bolt connection, while in software ABAQUS, the constraint "Tie" is used to connec the part, and there is no relative slip displacement and the stiffness is large, causing large results of natural frequency and damping ratio calculation.
Mode of Vibration First Order Second Order Third Order Fourth Order
Test result in X direction
Test result in Y direction
Simulation result in X direction
Simulation result in Y direction
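To make the last point concrete, the following toy model (our illustration, not the paper's raft-frame model) shows why an idealized tie raises the computed natural frequency: two lumped masses are coupled through a joint of finite stiffness kj, and the "Tie" constraint corresponds to the rigid limit kj → ∞. All numerical values are assumed for illustration only.

```python
import numpy as np

# Toy two-DOF chain: ground --k-- m1 --kj-- m2.
# A bolted joint has a finite joint stiffness kj; the ABAQUS "Tie"
# constraint corresponds to the rigid limit kj -> infinity.
def first_natural_frequency(k, m, kj):
    K = np.array([[k + kj, -kj],
                  [-kj,     kj]])               # stiffness matrix
    M = np.diag([m, m])                          # mass matrix
    w2 = np.linalg.eigvals(np.linalg.inv(M) @ K) # squared circular frequencies
    return np.sqrt(w2.real.min()) / (2.0 * np.pi)  # lowest mode, Hz

k, m = 1.0e7, 10.0                    # assumed substructure stiffness and mass
for kj in (1e6, 1e7, 1e9, 1e12):      # progressively stiffer joint
    print(f"kj = {kj:.0e} N/m  ->  f1 = {first_natural_frequency(k, m, kj):.2f} Hz")
```

Running the sketch, the first natural frequency rises monotonically toward the tied-limit value as kj grows, which qualitatively mirrors the overestimation reported in Table 19.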
Conclusions
Based on classical laminate theory, the free vibration of a CFRP raft frame and the influence of different carbon-fiber prepreg layups on the damping capacity of the raft frame and its components were explored. According to the strain energy model of carbon-fiber composite laminates, the damping ratio of each component was calculated in MATLAB.
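For illustration, the sketch below reimplements the core strain-energy idea in Python (the paper's own computation was done in MATLAB and is not reproduced here). It evaluates a strain-energy-weighted loss factor for a laminate under an assumed uniform in-plane strain state; the ply elastic constants, directional loss factors, layups, and strain level are placeholder assumptions, not the paper's values.

```python
import numpy as np

# Hypothetical unidirectional CFRP ply properties (Pa); placeholder values.
E1, E2, G12, NU12 = 140e9, 9.0e9, 5.0e9, 0.30
# Assumed directional loss factors: fiber direction, transverse, in-plane shear.
ETA = np.array([0.0011, 0.0070, 0.0110])

def reduced_stiffness():
    """Plane-stress reduced stiffness Q of a ply in its material axes."""
    nu21 = NU12 * E2 / E1
    den = 1.0 - NU12 * nu21
    return np.array([[E1 / den,        NU12 * E2 / den, 0.0],
                     [NU12 * E2 / den, E2 / den,        0.0],
                     [0.0,             0.0,             G12]])

def strain_to_material(theta):
    """Engineering-strain transformation from laminate axes to ply axes."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c,      s * s,     s * c],
                     [s * s,      c * c,    -s * c],
                     [-2 * s * c, 2 * s * c, c * c - s * s]])

def laminate_loss_factor(angles_deg, strain_global):
    """Strain-energy-weighted loss factor of an equal-thickness-ply laminate
    under a uniform in-plane strain state (Adams-Bacon-style weighting)."""
    Q = reduced_stiffness()
    dissipated = stored = 0.0
    for a in np.deg2rad(angles_deg):
        eps = strain_to_material(a) @ strain_global  # ply-axis strain
        sig = Q @ eps                                # ply-axis stress
        u = 0.5 * sig * eps                          # energy split by component
        dissipated += float(ETA @ u)                 # loss-factor-weighted part
        stored += float(np.sum(u))                   # total stored energy
    return dissipated / stored

# Example: compare two assumed layups under uniaxial laminate strain.
eps_x = np.array([1e-4, 0.0, 0.0])
for layup in ([0, 90, 90, 0], [45, -45, -45, 45]):
    print(layup, f"eta = {laminate_loss_factor(layup, eps_x):.5f}")
```

Even this simplified model shows the layup-angle dependence of damping that the conclusions below describe: shear-dominated ±45° plies dissipate a larger fraction of the stored strain energy than fiber-dominated 0° plies.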
(1) The natural frequency and damping ratio of the plates of the raft frame are affected by the fiber orientation; the minimum stiffness coefficient can be increased by adjusting the fiber layering angle, which improves the damping capacity. However, the conclusion is the opposite for torsional modal shapes.
(2) The change of stiffness caused by the fiber layering angle has a significant influence on the natural frequency of the flange plate and web plate of the I-support. The damping ratio can be increased by adjusting the fiber layering angle of the layups.
(3) For the raft frame, if the layups lead to uneven stiffness of the plates, the damping capacity is strongly influenced by the fiber layering angle; if the stiffness is balanced and generally large, the angle has a greater influence on the damping of the raft frame.
(4) The different arrangements of the I-supports show that a change of stiffness has a great influence on the damping capacity and natural frequency, and the stiffness can be changed by adjusting the arrangement to optimize the damping capacity.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 2022-01-19T16:40:28.875Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "3b3011f28f7162c8367b6670a5a342499a072e92",
"oa_license": "CCBY",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8779907",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "a2a6d987556706528f1174e436ee275c1bb272be",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |